Jan Mendling (email: [email protected])
Source: https://inria.hal.science/hal-01474692/file/978-3-642-40919-6_6_Chapter.pdf
Managing Structural and Textual Quality of Business Process Models
Business process models are increasingly used for capturing the business operations of companies. Such models play an important role in the requirements elicitation phase of to-be-created information systems and in the as-is analysis of business efficiency. Many process modeling initiatives have grown so large that dozens of modelers with varying expertise create and maintain hundreds, sometimes thousands, of models. One of the roadblocks towards a more effective usage of these process models is the often insufficient provision of quality assurance. The aim of this paper is to give an overview of how empirical research informs structural and textual quality assurance of process models. We present selected findings and show how they can be utilized as a foundation for novel automatic analysis techniques.
Introduction
Nowadays, many companies document their business processes in terms of conceptual models. These models provide the basis for activities associated with the business process management lifecycle such as process analysis, process redesign, workflow implementation and process evaluation. Many process modeling initiatives have resulted in hundreds or thousands of process models created by process modelers of diverging expertise. One of the major roadblocks towards a more effective usage of these process models is the often insufficient provision of quality assurance. This observation establishes the background for the definition of automatic analysis techniques, which are able to support quality assurance.
In recent years, research into quality assurance of process models and corresponding analysis techniques has offered various new insights. The objective of this paper is to summarize some of the essential contributions in this area. To this end, we aim to integrate both technical contributions and empirical findings. The paper is structured accordingly. In Section 2 we describe the background of quality research distinguishing structural and textual quality. Section 3 discusses how quality factors can be analyzed in terms of their capability to predict aspects of quality. Section 4 discusses different techniques for automatically refactoring process models with the aim to improve their quality. Finally, Section 5 summarizes the discussion and concludes the paper.
Background
Research on conceptual modeling often distinguishes syntax, semantics and pragmatics of process models with a reference to semiotic theory [START_REF] Lindland | Understanding quality in conceptual modeling[END_REF][START_REF] Krogstie | Process Models Representing Knowledge for Action: a Revised Quality Framework[END_REF]. The idea behind this distinction is that a message, here codified as a conceptual model, first has to be understood in terms of its syntax by a model reader before the semantics can be interpreted. Comprehension on the semantic level then provides the foundation for taking appropriate action in a pragmatic way. This semiotic ladder has one major implication for process modeling as a specific area of conceptual modeling and one major research directive. The implication of a semiotic perspective on process modeling is that the comprehension of a process model by a model reader has to be regarded as the central foundation for discussing its quality. As appropriate pragmatics, which comes down to actions taken by a model reader, defines the successful progression on the semiotic ladder, research has to establish a thorough understanding of how quality on each step of this ladder can be achieved. Indeed, it has been shown empirically that none of the three steps of the semiotic ladder can be neglected, and that they appear to be of equal importance for conceptual modeling [START_REF] Moody | Evaluating the quality of process models: Empirical testing of a quality framework[END_REF]. Since much of the research on process modeling has advanced syntactic analysis of process models but has rather neglected semantic and pragmatic aspects, it is an important directive for future research to complement syntactic analyses with insights on semantics and pragmatics. In the following, we try to give a balanced account of research on process model quality on a syntactic and semantic level while leaving out pragmatics. Our focus in this context is on the structural and textual characteristics of a process model, illustrated by the example model of Figure 1. The activity nodes together with the gateways and arcs define the syntax or the formal structure of the process model. In this model, two types of gateways are used. The first one, an XOR-split, defines a decision point to progress either with the upper or the lower branch, but never with both. While there is a corresponding XOR-join in BPMN, it is not used in the example. Towards the right-hand side of the model, there is an AND-join. This element is used to synchronize concurrent branches. There is no corresponding AND-split in the model. The arcs define the flow relation between activity nodes and gateways.
Is this process model of good quality?
The quality of a process model like the one in the example can be discussed from the perspective of syntax and of semantics. The quality of the syntax of the model relates to the question whether its formal structure can be readily understood by a model reader. In this context, prior research has focused on the question whether the size and the complexity might be overwhelming. Furthermore, there are formal correctness criteria that can be automatically checked. For the example, we can see that it apparently includes a deadlock: the single branch activated by the XOR-split eventually activates the AND-join, which will then wait forever for the non-activated alternative branch to complete. The quality of the semantics of the model relates to the question whether its textual content can be readily understood by a model reader. Here, we observe that the activity labels follow different grammatical structures. The label Make decision starts with a verb and continues with a business object. This is usually considered to be the norm structure of an activity label [START_REF] Mendling | Activity Labeling in Process Modeling: Empirical Insights and Recommendations[END_REF][START_REF] Silver | BPMN Method and Style, with BPMN Implementer's Guide[END_REF][START_REF] Mendling | Seven Process Modeling Guidelines (7PMG)[END_REF][START_REF] Allweyer | BPMN 2.0 -Business Process Model and Notation[END_REF][START_REF] Leopold | On the refactoring of activity labels in business process models[END_REF]. The other three labels use a gerund or a noun to express the work content of the activity. Altogether, we can summarize that the example model has issues both with its syntax and with its semantics.
In practice a considerable percentage of process models has quality issues, with often 5% to 30% of the models having problems with soundness [START_REF] Mendling | Empirical Studies in Process Model Verification[END_REF]. The reason for at least some of these issues is the growth of many process modeling initiatives. This development causes problems at the stage of model creation and model maintenance. An increasing number of employees is becoming involved with modeling. Many of these casual modelers lack modeling experience and adequate training such that newly created models are not always of good quality [START_REF] Rosemann | Potential Pitfalls of Process Modeling: Part A[END_REF][START_REF] Rosemann | Potential pitfalls of process modeling: part b[END_REF]. Furthermore, the fact that many companies maintain several thousand models calls for automatic quality assurance, which is mostly missing in present tools [START_REF] Rosemann | Potential Pitfalls of Process Modeling: Part A[END_REF][START_REF] Rosemann | Potential pitfalls of process modeling: part b[END_REF]. A promising direction for increasing process model quality is automatic guideline checking and refactoring. The next section discusses the corresponding foundations.
Factors of Process Model Understanding
Various factors for process model understanding have been identified. Characteristics of the modeling notation have been investigated in several experiments [START_REF] Sarshar | Comparing the control-flow of epc and petri net from the end-user perspective[END_REF][START_REF] Hahn | Why are some diagrams easier to work with? effects of diagrammatic representation on the cognitive integration process of systems analysis and design[END_REF][START_REF] Agarwal | Comprehending object and process models: An empirical study[END_REF]. Two different factors have to be discussed in this context. First, ontological problems of the notation, e.g. when there are two options to represent the same meaning, might lead to misinterpretations of singular models [START_REF] Weber | Ontological Foundations of Information Systems. Coopers & Lybrand and the Accounting Association of Australia and[END_REF]. Survey research has found support for this argument [START_REF] Recker | Do ontological deficiencies in modeling grammars matter[END_REF]. Second, properties of the symbol set of a notation might cause problems, e.g. with remembering or distinguishing them [START_REF] Moody | The "physics" of notations: Toward a scientific basis for constructing visual notations in software engineering[END_REF]. Empirical support for this hypothesis is reported in [START_REF] Figl | The influence of notational deficiencies on process model comprehension[END_REF]. The secondary notation plays an important role as well. The concept of secondary notation covers all representational aspects of a model that do not relate to its formal structure. This might relate to the usage of color as a means of highlighting [START_REF] Rosa | Managing process model complexity via concrete syntax modifications[END_REF]. Corresponding support was found in an experiment in [START_REF] Reijers | Syntax highlighting in business process models[END_REF]. The visual layout of the model graph is also well known for its importance in facilitating good understanding [START_REF] Moher | Comparing the Comprehensibility of Textual and Graphical Programs: The Case of Petri Nets[END_REF][START_REF] Purchase | Which aesthetic has the greatest effect on human understanding? In: Graph Drawing[END_REF]. In this section, we focus on structural properties of the process model and properties of its text labels.
Structural Factors of Process Model Understanding
Structural properties of a process model are typically operationalized with the help of different metrics. Many of them are inspired by metrics from software engineering like lines of code, the cyclomatic number, or object metrics [START_REF] Mccabe | A complexity measure[END_REF][START_REF] Chidamber | A metrics suite for object oriented design[END_REF][START_REF] Fenton | Software Metrics. A Rigorous and Practical Approach[END_REF]. Early contributions in the field of process modeling focus on the definition of metrics [START_REF] Lee | An empirical study on the complexity metrics of petri nets[END_REF][START_REF] Nissen | Redesigning reengineering through measurement-driven inference[END_REF][START_REF] Morasca | Measuring attributes of concurrent software specifications in petri nets[END_REF]. More recent work puts a strong emphasis on the validation of metrics. In these works, different sets of metrics are used as input variables for conducting experiments to test their statistical connection with dependent variables that relate to quality. For instance, the control-flow complexity (CFC) [START_REF] Cardoso | Evaluating Workflows and Web Process Complexity[END_REF] is validated with respect to its correlation with the perceived complexity of models [START_REF] Cardoso | Process control-flow complexity metric: An empirical validation[END_REF]. Metrics including size, complexity and coupling are validated for their correlation with understanding and maintainability [START_REF] Canfora | A family of experiments to validate metrics for software process models[END_REF][START_REF] Aguilar | An exploratory experiment to validate measures for business process models[END_REF]. Further metrics aim to quantify cognitive complexity and modularity [START_REF] Vanderfeesten | On a Quest for Good Process Models: The Cross-Connectivity Metric[END_REF][START_REF] Vanhatalo | Faster and more focused control-flow analysis for business process models through sese decomposition[END_REF][START_REF] Aalst | Translating unstructured workflow processes to readable BPEL: Theory and implementation[END_REF][START_REF] Reijers | Human and automatic modularizations of process models to enhance their comprehension[END_REF]. Various metrics have been validated as predictors of error probability [START_REF] Mendling | Detection and Prediction of Errors in EPCs of the SAP Reference Model[END_REF], which is assumed to be a symptom of bad understanding by the modeler during the process of model creation. A summary of metrics is presented in [START_REF] Mendling | Metrics for Process Models: Empirical Foundations of Verification, Error Prediction, and Guidelines for Correctness[END_REF], an overview of experiments can be found in [START_REF] Reijers | A Study Into the Factors That Influence the Understandability of Business Process Models[END_REF][START_REF] Mendling | Factors of process model comprehension -findings from a series of experiments[END_REF]. In summary, it can be stated that an increase in size, an increase in complexity and a decrease in structuredness are related to greater issues with quality.
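To make the notion of structural metrics concrete, the following sketch computes a few of them for a toy process model; the graph representation and node types are hypothetical and only loosely mirror the example of Fig. 1, not any metric definition from the cited works.

```python
# Illustrative sketch: simple structural metrics for a process model
# represented as a directed graph of typed nodes.
from collections import Counter

nodes = {"start": "start", "A": "task", "x1": "xor", "B": "task",
         "C": "task", "a1": "and", "D": "task", "end": "end"}
arcs = [("start", "A"), ("A", "x1"), ("x1", "B"), ("x1", "C"),
        ("B", "a1"), ("C", "a1"), ("a1", "D"), ("D", "end")]

size = len(nodes)                                   # cf. guideline G7 (size)
in_deg = Counter(dst for _, dst in arcs)
out_deg = Counter(src for src, _ in arcs)
connector_degree = max(in_deg[n] + out_deg[n]
                       for n, t in nodes.items() if t in ("xor", "and"))   # cf. G2

splits = Counter(t for n, t in nodes.items() if t in ("xor", "and") and out_deg[n] > 1)
joins = Counter(t for n, t in nodes.items() if t in ("xor", "and") and in_deg[n] > 1)
mismatch = sum(abs(splits[t] - joins[t]) for t in ("xor", "and"))          # cf. G4.b

print(size, connector_degree, mismatch)             # 8, 3, 2 for this toy model
```

The non-zero mismatch reflects the XOR-split that is synchronized by an AND-join, i.e. exactly the kind of pattern that caused the deadlock in the example model.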
One of the major objectives of research into the factors of process model understanding is to establish a set of sound and precise guidelines for process modeling. Guidelines such as the Guidelines of Process Modeling [START_REF] Becker | Guidelines of Business Process Modeling[END_REF] have been available for a while, but they had hardly been tied to experimental findings. The Seven Process Modeling Guidelines (7PMG) might be regarded as a first attempt towards building guidelines based on empirical insight [START_REF] Mendling | Seven Process Modeling Guidelines (7PMG)[END_REF]. The challenge in this context is to adapt statistical methods in such a way that metrics can be related to threshold values. In its most basic form, this problem can be formulated as a classification problem: if we consider a particular metric like the number of nodes, to what extent is it capable of distinguishing, e.g., good and bad models?
A specific stream of research in this area investigates to what extent different process model metrics are capable of separating models with and without errors. The work reported in [START_REF] Mendling | Thresholds for error probability measures of business process models[END_REF] uses logistic regression and error probability as a dependent variable. Logistic regression is a statistical model for estimating the probability of binary choices (error or no error in this case) [START_REF] Hosmer | Applied Logistic Regression[END_REF]. The logistic regression estimates the odds of error versus no error based on the logit function.
This model can be adapted by using structural metrics such as the size or complexity of a process model as input variables and observations in terms of whether these models are sound or not. The relationship between the input and dependent variables follows the S-shaped logit curve, converging to 0 for -∞ and to 1 for +∞. The value 0.5 is used as a cut-off for predicting error or no error. Based on the coefficients of the input variables in the logit function, one can predict whether an error would be in the model or not.
The quality of such a function to classify process models correctly as having an error or not can be judged based on four different sets: the set of true positive (TP) classifications, the set of false positives (FP), the set of true negatives (TN) and the set of false negatives (FN). A perfect classification based on the logit function would imply that there are only true positives and true negatives. An optimal threshold of separation can then be determined using Receiver Operating Characteristic (ROC) curves [START_REF] Zweig | Receiver-operating characteristic (roc) plots: a fundamental evaluation tool in clinical medicine[END_REF]. These curves visualize the ability of a specific process metric to discriminate between error and no-error models. Each point on the ROC curve defines a pair of sensitivity and 1 - specificity values of a metric. The best threshold can then be found based on sensitivity and specificity values, with sensitivity = true positive (TP) rate = TP/P and specificity = 1 - false positive (FP) rate. Using this approach, several guidelines of the 7PMG could be refined in [START_REF] Mendling | Thresholds for error probability measures of business process models[END_REF]. Table 1 provides an overview of the results showing, among others, that process models with more than 30 nodes should be decomposed.
Table 1. Ten Process Modeling Rules
Rule  Associated measure  Explanation
G1    Nodes               Do not use more than 31 nodes.
G2    Conn. Degree        No more than 3 inputs or outputs per connector.
G3    Start and End       Use no more than 2 start and end events.
G4.a  Structuredness      Model as structured as possible.
G4.b  Mismatch            Use design patterns to avoid mismatch.
G5.a  OR-connectors       Avoid OR-joins and OR-splits.
G5.b  Heterogeneity       Minimize the heterogeneity of connector types.
G5.c  Token Split         Minimize the level of concurrency.
G6    Text                Use verb-object activity labels.
G7    Nodes               Decompose a model with more than 31 nodes.
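As a hedged illustration of the threshold analysis just described (a logit model plus a ROC-based cut-off), the following sketch uses entirely synthetic data with a planted threshold; it assumes NumPy and scikit-learn are available and does not reproduce the study's actual models or numbers.

```python
# Illustrative sketch on synthetic data: fit a logistic regression of
# "model has an error" on a size metric, then pick a metric threshold from
# the ROC curve by maximising Youden's J = sensitivity + specificity - 1.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
n_nodes = rng.integers(5, 80, size=500).astype(float)     # metric: number of nodes
p_error = 1.0 / (1.0 + np.exp(-0.15 * (n_nodes - 30)))    # planted error propensity
has_error = rng.random(500) < p_error                     # binary dependent variable

logit = LogisticRegression().fit(n_nodes.reshape(-1, 1), has_error)
print(logit.coef_[0][0], logit.intercept_[0])             # estimated logit coefficients

fpr, tpr, thresholds = roc_curve(has_error, n_nodes)      # ROC over the raw metric
best = int(np.argmax(tpr - fpr))                          # best sensitivity/specificity trade-off
print(thresholds[best])                                   # node-count cut-off near the planted 30
```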
Although there have been considerable advancements in this area, there are several challenges that persist. Thresholds have been identified based on error probability as a dependent variable, which can be easily expressed in a binary way. An important antecedent of quality is understanding. However, thresholds of understanding are much more difficult to establish as it is mostly measured using score values summed up over a set of comprehension tasks. In this case, good and bad models cannot be exactly discriminated. Furthermore, understanding can be associated with different types of comprehension questions, ranging from simple recall of a model, over understanding its semantics, to pragmatic problem-solving tasks. Up until now, it has not yet been studied to what extent the same or different metrics influence each of these comprehension tasks.
Labeling Style as a Factor of Process Model Understanding
Empirical research has found that process models from practice do not always follow naming conventions such as the verb-object style for activities. There are three general classes of activity labeling styles [START_REF] Mendling | Activity Labeling in Process Modeling: Empirical Insights and Recommendations[END_REF] (see Table 2). First, the verb-object style defines an activity label as a verb followed by the corresponding business object (Make decision). Second, there are different ways of defining activities as action-noun labels. For such a label, the action is not formulated as a verb, but rather as a gerund (Executing) or a substantivated verb (Execution, from to execute). There is also a third category of activity labels that do not refer to an action. An example is the label information system, which fails to mention an action, neither as a verb nor as a noun. With these categories defined, it has to be noted that labeling style is a factor with characteristics quite different from structural metrics. While metrics can be measured on a metric scale, labeling styles can only be distinguished in a nominal way. This means that in the simplest case the input variable can be defined in a binary way, distinguishing usage of the verb-object style versus usage of another style. In terms of defining quality preferences, this makes the task easier: while metrics require a threshold to distinguish good and bad, labeling styles can be directly compared to be better or worse. An experiment reported in [START_REF] Mendling | Activity Labeling in Process Modeling: Empirical Insights and Recommendations[END_REF] takes activity labels of different labeling styles as treatments in order to investigate their potential ambiguity and their usefulness in facilitating domain understanding. ANOVA tests demonstrate that verb-object labels are perceived to be significantly better in this regard, followed by action-noun labels. Labels of the rest category were judged to be most ambiguous.
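To show how the style categories can be operationalised, here is a deliberately naive guesser built on a tiny hand-made word list; it is only an illustration and is not the classification approach of the cited work, which relies on richer contextual information.

```python
# Naive labeling-style guesser (illustration only; the tiny lexicon is invented).
VERBS = {"make", "submit", "execute", "check", "create", "plan"}
NOUN_ACTION_SUFFIXES = ("ion", "ment", "al")

def labeling_style(label):
    tokens = label.lower().split()
    first, last = tokens[0], tokens[-1]
    if first in VERBS:              # note: zero-derivation ("plan") stays ambiguous
        return "verb-object"
    if first.endswith("ing") or "of" in tokens:
        return "action-noun"
    if last.endswith(NOUN_ACTION_SUFFIXES):
        return "action-noun"
    return "no-action"              # rest category, e.g. "information system"

for label in ["Make decision", "Submitting letter",
              "Submission of letter", "Information system"]:
    print(label, "->", labeling_style(label))
```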
While the usage of labeling style is covered well in the literature, there are still various challenges in dealing with terminology. From a quality perspective, terms should have a clear-cut meaning. This implies that synonyms (several words with the same meaning) and homonyms (several meanings of the same word) should be avoided in process modeling. This problem is acknowledged in various papers [START_REF] Dean | Technological support for group process modeling[END_REF][START_REF] Rosemann | Evaluation of workflow management systems-a meta model approach[END_REF][START_REF] Rolland | L'e-lyee: coupling l'ecritoire and lyeeall[END_REF]; however, a proper solution for automatic quality assurance is missing.
Automatic Refactoring
The empirical results reported in the previous section provide a basis for the development of automatic refactoring techniques. The general idea of refactoring was formulated for software and relates to "changing a software system in such a way that it does not alter the external behavior of the code, yet improves its internal structure" [START_REF] Fowler | Refactoring: improving the design of existing code[END_REF]. For process models, often the notion of trace equivalence [START_REF] Weber | Refactoring large process model repositories[END_REF] or one of the notions of bisimulation [START_REF] Polyvyanyy | Structuring acyclic process models[END_REF] is considered when refactoring models. In the following, we summarize work on refactoring the structure of a process model and its activity labels. Frameworks for categorizing refactorings have been proposed in [START_REF] Weber | Refactoring large process model repositories[END_REF][START_REF] Rosa | Managing process model complexity via concrete syntax modifications[END_REF][START_REF] Rosa | Managing process model complexity via abstract syntax modifications[END_REF].
Refactoring the Structure of Process Models
Insight into the factors of process model comprehension provides a solid basis for optimizing its structure. The challenge in this context is to define a transformation from an unstructured model towards a structured model. It is well known that a structured model can always be constructed for process models without concurrency, but that some concurrent behaviour is inherently unstructured [START_REF] Kiepuszewski | On structured workflow modelling[END_REF]. The research reported in [START_REF] Polyvyanyy | Structuring acyclic process models[END_REF] presents an approach based on the identification of ordering relations which leads to a maximally structured model under fully concurrent bisimulation.
Here, two cases have to be distinguished. There are process models for which making them structured comes at the price of increasing their size. Such a case is shown in Figure 2. This increase stems from the duplication of activities in unstructured paths. There are also cases where a process model can be structured without having to duplicate activities. In practice, making a model structured without duplication appears to be rather rare. An investigation with more than 500 models from practice has shown that structuring leads to an increase in size of about 50% on average [START_REF] Dumas | Understanding business process models: The costs and benefits of structuredness[END_REF]. It is also important to note that duplication might be more harmful than a usual increase in size. The user experiment reported in [START_REF] Dumas | Understanding business process models: The costs and benefits of structuredness[END_REF] points to a potential confusion by model readers who are asked about behavioural constraints that involve activities that are shown multiple times in the model.
The problem of duplicating activities is a key challenge in this area. It is an open research question how the beneficial effects of structuring can be best balanced with the harmful introduction of duplicate activities. Further experiments are needed for identifying a precise specification of the trade-off between structuredness and duplication. In this context, also the size of the model has to be taken into consideration.
Refactoring Text Labels of Process Models
Experiments and best practices from industry suggest a preference for the verb-object labeling style. The challenge in this context is to achieve an accurate parsing of the different labeling styles such that they can be transformed to the verb-object style. An accurate parsing is difficult in English for two reasons. First, many nouns in English are built from a verb using a zero-derivation mechanism. This means that the noun is morphologically equivalent to the verb. For a word like plan we do not directly know whether it refers to a verb or a noun (the plan versus to plan). Second, the activity labels of a process model usually do not cover a complete, grammatically correct sentence structure. Therefore, it has been found difficult to use standard natural language processing tools like the Stanford parser. The approach reported in [START_REF] Leopold | On the refactoring of activity labels in business process models[END_REF] uses different contextual information to map a label that, for instance, starts with the word plan to its correct labeling style. Once the labeling style is known, tools like WordNet can be used to find a verb that matches an action that was formulated as a noun (see Figure 3). It has been shown that this approach works accurately for three different modeling collections from practice including altogether more than 10,000 activity labels [START_REF] Leopold | On the refactoring of activity labels in business process models[END_REF]. It is a topic of ongoing research how these refactoring techniques can be defined in such a way that they do not depend upon the rich set of natural language processing tools available for English. An alternative could be to directly work with annotated corpora. Also, related to the terminology problem identified above, it is not yet clear how problems of synonyms and homonyms can be automatically resolved.
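The WordNet step can be sketched as follows; this is a minimal illustration assuming NLTK and its WordNet corpus are installed, the helper names are invented, and it omits the style-recognition step of the cited approach.

```python
# Minimal sketch: turn an action-noun label such as "Letter submission"
# into a verb-object label using WordNet's derivational links.
from nltk.corpus import wordnet as wn   # requires: nltk.download("wordnet")

def action_noun_to_verb(noun):
    """Return a verb derivationally related to the given action noun, if any."""
    for synset in wn.synsets(noun, pos=wn.NOUN):
        for lemma in synset.lemmas():
            for related in lemma.derivationally_related_forms():
                if related.synset().pos() == "v":
                    return related.name()
    return None

def refactor_label(label):
    """Rewrite 'Object Action-Noun' (e.g. 'Letter submission') as 'Verb Object'."""
    *objects, action = label.lower().split()
    verb = action_noun_to_verb(action) or action
    return " ".join([verb] + objects).capitalize()

print(refactor_label("Letter submission"))   # expected to yield something like "Submit letter"
```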
Conclusion
In this paper we have discussed the management of structural and textual quality of business process models. The growth of many process modeling initiatives towards involving dozens of modelers with varying expertise creating and maintaining thousands of models raises the question of how quality assurance can be defined and implemented in an automatic way. Insights into the factors of process model understanding provide the foundation for building such automatic techniques. On the structural side of process model quality, size and structuredness have been found to be major factors. Guidelines like 7PMG have been formulated based on empirical findings, pointing to the need for rework when certain thresholds are surpassed.
A topic of ongoing research is how refactoring techniques can be defined that balance different structural properties such as size and structuredness while minimizing the duplication of activities. On the side of activity labels, the usage of the verb-object style is recommended. Automatic techniques in this context have to provide an accurate parsing of the labels with a potential reformulation of actions that might be stated as nouns. In this area it is a topic of ongoing research to what extent such automatic techniques for style recognition can be defined without relying on tools like WordNet such that they can be adapted for languages different to English.
Fig. 1. Example of a process model with structural and textual issues
Fig. 2. Example of an unstructured and corresponding structured process model.
Fig. 3. Example of a label refactored from Action-Noun to Verb-Object style.
Table 2. Activity Labeling Styles
Labeling Style           Structure                      Example
Verb-Object              Action(Infinitive) + Object    Submit Letter
Action-Noun (np)         Object + Action(Noun)          Letter Submission
Action-Noun (of)         Action(Noun) + 'of' + Object   Submission of Letter
Action-Noun (gerund)     Action(Gerund) + Object        Submitting Letter
Action-Noun (irregular)  undefined                      Submission: Letter
Descriptive              Role + Action(3P) + Object     Student submits Letter
No Action                undefined                      Letter
Olivier Sobrie (email: [email protected]), Vincent Mousseau (email: [email protected]), Marc Pirlot (email: [email protected])
Source: https://hal.science/hal-01474712/file/2017-Sobrie-et-al-EMO.pdf
A Population-Based Algorithm for Learning a Majority Rule Sorting Model with Coalitional Veto
MR-Sort (Majority Rule Sorting) is a multiple criteria sorting method which assigns an alternative a to category C_h when a is better than the lower limit of C_h on a weighted majority of criteria, and this is not true with respect to the upper limit of C_h. We enrich the descriptive ability of MR-Sort by the addition of coalitional vetoes which operate in a symmetric way as compared to the MR-Sort rule w.r.t. category limits, using specific veto profiles and veto weights. We describe a heuristic algorithm to learn such an MR-Sort model enriched with coalitional veto from a set of assignment examples, and show how it performs on real datasets.
Introduction
Multiple Criteria Sorting Problems aim at assigning alternatives to one of the predefined ordered categories C_1, C_2, ..., C_p, C_1 and C_p being the worst and the best category, respectively. This type of assignment method contrasts with classical supervised classification methods in that the assignments have to be monotone with respect to the scores of the alternatives. In other words, an alternative which has better scores on all criteria than another cannot be assigned to a worse category.
Many multiple criteria sorting methods have been proposed in the literature (see e.g., [START_REF] Doumpos | Multicriteria Decision Aid Classification Methods[END_REF][START_REF] Zopounidis | Multicriteria classification and sorting methods: a literature review[END_REF]). MR-Sort (Majority Rule Sorting, see [START_REF] Leroy | Learning the parameters of a multiple criteria sorting method[END_REF]) is an outranking-based multiple criteria sorting method which corresponds to a simplified version of ELECTRE TRI where the discrimination and veto thresholds are omitted 1 . MR-Sort proved able to compete with state-of-the-art classification methods such as Choquistic regression [4] on real datasets.
In the pessimistic version of ELECTRE TRI, veto effects make it possible to worsen the category to which an alternative is assigned when this alternative has very bad performances on one/several criteria. We consider a variant of MR-Sort which introduces possible veto effects. While in ELECTRE TRI, a veto involves a single criterion, we consider a more general formulation of veto (see [START_REF] Sobrie | New veto relations for sorting models[END_REF]) which can involve a coalition of criteria (such a coalition can be reduced to a singleton).
The definition of such a "coalitional veto" exhibits a noteworthy symmetry between veto and concordance. To put it simply, in a two-category context (Bad/Good), an alternative is classified as Good when its performances are above the concordance profile on a sufficient majority of criteria, and when its performances are not below the veto profile for a sufficient majority of criteria. Hence, the veto condition can be viewed as the negation of a majority rule using a specific veto profile and specific veto weights.
Algorithms to learn the parameters of an MR-Sort model without veto (category limits and criteria weights) have been proposed, either using linear programming involving integer variables (see [START_REF] Leroy | Learning the parameters of a multiple criteria sorting method[END_REF]) or using a specific heuristic (see [START_REF] Sobrie | Learning the parameters of a non compensatory sorting model[END_REF]7]). When the size of the learning set exceeds 100, only heuristic algorithms are able to provide a solution in a reasonable computing time.
Olteanu and Meyer [START_REF] Olteanu | Inferring the parameters of a majority rule sorting model with vetoes on large datasets[END_REF] have developed a simulated annealing based algorithm to learn a MR-Sort model with classical veto (not coalitional ones).
In this paper, we propose a new heuristic algorithm to learn the parameters of a MR-Sort model with coalitional veto (called MR-Sort-CV) which makes use of the symmetry between the concordance and the coalitional veto conditions. Preliminary results obtained using an initial version of the algorithm can be found in [START_REF] Sobrie | Learning MR-sort rules with coalitional veto[END_REF]. The present work describes an improved version of the algorithm and the results of tests on real datasets involving two or more categories (while the preliminary version was only tested for classifying in two categories).
The paper is organized as follows. In Sect. 2, we recall MR-Sort and define its extension when considering monocriterion veto and coalitional veto. After a brief reminder of the heuristic algorithm to learn an MR-Sort model, Sect. 3 is devoted to the presentation of the algorithm to learn an MR-Sort model with coalitional veto. Section 4 presents experimentations of this algorithm and Sect. 5 groups conclusions and directions for further research.
Considering Vetoes in MR-Sort
MR-Sort Model
MR-Sort is a method for assigning objects to ordered categories. It is a simplified version of ELECTRE TRI, another MCDA method [START_REF] Yu | Aide multicritère à la décision dans le cadre de la problématique du tri: méthodes et applications[END_REF][START_REF] Roy | Aide multicritère à la décision: méthodes et cas[END_REF].
The MR-Sort rule works as follows. Formally, let X be a set of objects evaluated on n ordered attributes (or criteria), F = {1, ..., n}. We assume that X is the Cartesian product of the criteria scales, $X = \prod_{j=1}^{n} X_j$, each scale X_j being completely ordered by the relation ≥_j. An object a ∈ X is a vector (a_1, ..., a_j, ..., a_n), where a_j ∈ X_j for all j. The ordered categories which the objects are assigned to by the MR-Sort model are denoted by C_h, with h = 1, ..., p. Category C_h is delimited by its lower limit profile b^{h-1} and its upper limit profile b^h, which is also the lower limit profile of category C_{h+1} (provided 0 < h < p). The profile b^h is the vector of criterion values (b^h_1, ..., b^h_j, ..., b^h_n), with b^h_j ∈ X_j for all j. We denote by P = {1, ..., p} the list of category indices. By convention, the best category, C_p, is delimited by a fictive upper profile, b^p, and the worst one, C_1, by a fictive lower profile, b^0. It is assumed that the profiles dominate one another, i.e. b^h_j ≥_j b^{h-1}_j, for h = {1, ..., p} and j = {1, ..., n}. Using the MR-Sort procedure, an object is assigned to a category if its criterion values are at least as good as the category lower profile values on a weighted majority of criteria while this condition is not fulfilled when the object's criterion values are compared to the category upper profile values. In the former case, we say that the object is weakly preferred to the profile, while, in the latter, it is not. Formally, if an object a ∈ X is weakly preferred to a profile b^h, we denote this by a ⪰ b^h. Object a is preferred to profile b^h whenever the following condition is met:
$$a \succeq b^h \iff \sum_{j : a_j \geq_j b^h_j} w_j \geq \lambda, \qquad (1)$$
where w_j is the nonnegative weight associated with criterion j, for all j, and λ sets a majority level. The weights satisfy the normalization condition $\sum_{j \in F} w_j = 1$; λ is called the majority threshold.
The preference relation defined by (1) is called an outranking relation without veto or a concordance relation ([START_REF] Roy | Aide multicritère à la décision: méthodes et cas[END_REF]; see also [START_REF] Bouyssou | A characterization of concordance relations[END_REF][START_REF] Bouyssou | Further results on concordance relations[END_REF] for an axiomatic description of such relations). Consequently, the condition for an object a ∈ X to be assigned to category C_h reads:
$$\sum_{j : a_j \geq_j b^{h-1}_j} w_j \geq \lambda \quad \text{and} \quad \sum_{j : a_j \geq_j b^h_j} w_j < \lambda. \qquad (2)$$
The MR-Sort assignment rule described above involves pn + 1 parameters, i.e. n weights, (p-1)n profile evaluations and one majority threshold.
A learning set A is a subset of objects A ⊆ X for which an assignment is known. For h ∈ P, A_h denotes the subset of objects a ∈ A which are assigned to category C_h. The subsets A_h are disjoint; some of them may be empty.
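The following is a minimal sketch of the assignment rule (2); the criteria, profiles and parameter values are invented for illustration, and criteria are assumed to be maximised.

```python
# Minimal MR-Sort sketch (illustrative data only).
def outranks(a, profile, weights, lam):
    """Relation (1): a is weakly preferred to the profile."""
    return sum(w for a_j, b_j, w in zip(a, profile, weights) if a_j >= b_j) >= lam

def mr_sort(a, profiles, weights, lam):
    """Rule (2): climb the profiles b^1, ..., b^(p-1) as long as a outranks them."""
    category = 1
    for b_h in profiles:                 # profiles ordered from worst to best
        if outranks(a, b_h, weights, lam):
            category += 1
        else:
            break
    return category

weights = [0.3, 0.3, 0.2, 0.2]
profiles = [[10, 10, 10, 10], [15, 15, 15, 15]]   # b^1, b^2 for three categories
print(mr_sort([12, 16, 9, 14], profiles, weights, lam=0.6))   # -> 2
```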
MR-Sort-MV
In this section, we recall the traditional monocriterion veto rule as defined by [START_REF] Bouyssou | An axiomatic approach to noncompensatory sorting methods in MCDM, I: the case of two categories[END_REF][START_REF] Bouyssou | An axiomatic approach to noncompensatory sorting methods in MCDM, II: more than two categories[END_REF]. In a MR-Sort model with monocriterion veto, an alternative a is "at least as good as" a profile b^h if it has performances at least equal to or better than b^h on a weighted majority of criteria and if it is not strongly worse than the profile on any criterion. In the sequel, we call b^h a concordance profile and we define "strongly worse than the profile" b^h by means of a veto profile v^h = (v^h_1, v^h_2, ..., v^h_n), with v^h_j ≤_j b^h_j. It represents a vector of performances such that any alternative having a performance worse than or equal to this profile on any criterion would be excluded from category C_{h+1}. Formally, the assignment rule is described by the following condition: a ⪰ b^h ⟺ $\sum_{j : a_j \geq_j b^h_j} w_j \geq \lambda$ and not aVb^h, with aVb^h ⟺ ∃j ∈ F : a_j ≤_j v^h_j. Note that the non-veto condition is frequently presented in the literature using a veto threshold (see e.g. [START_REF] Roy | The outranking approach and the foundations of ELECTRE methods[END_REF]), i.e. a maximal difference w.r.t. the concordance profile in order to be assigned to the category above the profile. Using veto profiles instead of veto thresholds better suits the context of multicriteria sorting. We recall that a profile b^h delimits the category C_h from C_{h+1}, with C_{h+1} ≻ C_h; with monocriterion veto, the MR-Sort assignment rule reads as follows:
$$a \in C_h \iff \Bigl[\sum_{j : a_j \geq_j b^{h-1}_j} w_j \geq \lambda \ \text{ and } \ \nexists j \in F : a_j \leq_j v^{h-1}_j\Bigr] \ \text{ and } \ \Bigl[\sum_{j : a_j \geq_j b^h_j} w_j < \lambda \ \text{ or } \ \exists j \in F : a_j \leq_j v^h_j\Bigr] \qquad (3)$$
We remark that a MR-Sort model with more than 2 categories remains consistent only if veto profiles do not overlap, i.e., are chosen such that v^h_j ≥_j v^{h'}_j for all {h, h'} s.t. h > h'. Otherwise, an alternative might be, on the one hand, in veto against a profile b^h, which prevents it from being assigned to C_{h+1}, and, on the other hand, not in veto against b^{h+1}, which does not prevent it from being assigned to C_{h+2}.
MR-Sort-CV
We introduce here a new veto rule considering vetoes w.r.t. coalitions of criteria, which we call "coalitional veto". With this rule, the veto applies and forbids an alternative a to be assigned to category C_{h+1} when the performance of a is not better than v^h_j on a weighted majority of criteria. As for the monocriterion veto, the veto profiles are vectors of performances v^h = (v^h_1, v^h_2, ..., v^h_n), for all h = {1, ..., p}. Coalitional veto also involves a set of nonnegative veto weights denoted z_j, for all j ∈ F. Without loss of generality, the sum of z_j is set to 1. Furthermore, a veto cutting threshold Λ is also involved and determines whether a coalition of criteria is sufficient to impose a veto. Formally, we express the coalitional veto rule aVb^h as follows:
$$aVb^h \iff \sum_{j : a_j \leq_j v^h_j} z_j \geq \Lambda. \qquad (4)$$
Using coalitional veto, the outranking relation of MR-Sort is modified as follows:
$$a \succeq b^h \iff \sum_{j : a_j \geq_j b^h_j} w_j \geq \lambda \ \text{ and } \ \sum_{j : a_j \leq_j v^h_j} z_j < \Lambda. \qquad (5)$$
Using coalitional veto with MR-Sort modifies the assignment rule as follows:
$$a \in C_h \iff \Bigl[\sum_{j : a_j \geq_j b^{h-1}_j} w_j \geq \lambda \ \text{ and } \ \sum_{j : a_j \leq_j v^{h-1}_j} z_j < \Lambda\Bigr] \ \text{ and } \ \Bigl[\sum_{j : a_j \geq_j b^h_j} w_j < \lambda \ \text{ or } \ \sum_{j : a_j \leq_j v^h_j} z_j \geq \Lambda\Bigr] \qquad (6)$$
In MR-Sort, the coalitional veto can be interpreted as a combination of performances preventing the assignment of an alternative to a category. We call this new model, MR-Sort-CV.
The coalitional veto rule given in Eq. (5) is a generalization of the monocriterion rule. Indeed, if the veto cut threshold Λ is equal to 1/n (n being the number of criteria), and each veto weight z_j is set to 1/n, then the veto rule defined in Eq. (4) corresponds to a monocriterion veto for each criterion.
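Analogous to the MR-Sort sketch above, rule (6) can be written as follows; the numbers are again illustrative only.

```python
# Minimal MR-Sort-CV sketch: concordance (weights w, threshold lam) combined
# with a coalitional veto (veto weights z, threshold Lam).
def outranks_cv(a, b_h, v_h, w, lam, z, Lam):
    """Relation (5): weighted majority at least as good as b^h and no sufficient veto coalition."""
    concordance = sum(wj for aj, bj, wj in zip(a, b_h, w) if aj >= bj)
    veto = sum(zj for aj, vj, zj in zip(a, v_h, z) if aj <= vj)
    return concordance >= lam and veto < Lam

def mr_sort_cv(a, conc_profiles, veto_profiles, w, lam, z, Lam):
    """Assignment rule (6): climb the profiles as long as a outranks them."""
    category = 1
    for b_h, v_h in zip(conc_profiles, veto_profiles):
        if outranks_cv(a, b_h, v_h, w, lam, z, Lam):
            category += 1
        else:
            break
    return category

w, z = [0.25] * 4, [0.25] * 4
conc, veto = [[10, 10, 10, 10]], [[3, 3, 3, 3]]
# a meets the concordance profile on two criteria (enough for lam = 0.5) but is
# at or below the veto profile on two criteria, so the coalitional veto blocks C_2
print(mr_sort_cv([12, 14, 2, 3], conc, veto, w, lam=0.5, z=z, Lam=0.5))   # -> 1
```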
The Non Compensatory Sorting (NCS) Model
In this subsection, we recall the non compensatory sorting (NCS) rule as defined by [START_REF] Bouyssou | An axiomatic approach to noncompensatory sorting methods in MCDM, I: the case of two categories[END_REF][START_REF] Bouyssou | An axiomatic approach to noncompensatory sorting methods in MCDM, II: more than two categories[END_REF], which will be used in the experimental part (Sect. 4) for comparison purposes. These rules allow to model criteria interactions. MR-Sort is a particular case of these, in which criteria do not interact.
In order to take criteria interactions into account, it has been proposed to modify the definition of the global outranking relation, a ⪰ b^h, given in (1). We introduce the notion of capacity. A capacity is a function μ : 2^F → [0, 1] such that:
- μ(B) ≥ μ(A), for all A ⊆ B ⊆ F (monotonicity);
- μ(∅) = 0 and μ(F) = 1 (normalization).
The Möbius transform allows to express the capacity in another form:
$$\mu(A) = \sum_{B \subseteq A} m(B), \qquad (7)$$
for all A ⊆ F, with m(B) defined as:
$$m(B) = \sum_{C \subseteq B} (-1)^{|B|-|C|} \mu(C) \qquad (8)$$
The value m(B) can be interpreted as the weight that is exclusively allocated to B as a whole. A capacity can be defined directly by its Möbius transform, also called "interaction". An interaction m is a set function m : 2^F → [-1, 1] satisfying the following conditions:
$$\sum_{j \in K \subseteq J \cup \{j\}} m(K) \geq 0, \quad \forall j \in F,\ J \subseteq F \setminus \{j\} \qquad (9)$$
and
$$\sum_{K \subseteq F} m(K) = 1.$$
If m is an interaction, the set function defined by $\mu(A) = \sum_{B \subseteq A} m(B)$ is a capacity. Conditions (9) guarantee that μ is monotone [START_REF] Chateauneuf | Derivation of some results on monotone capacities by Mobius inversion[END_REF].
Using a capacity to express the weight of the coalition in favor of an object, we transform the outranking rule as follows:
$$a \succeq b^h \iff \mu(A) \geq \lambda \quad \text{with} \quad A = \{j : a_j \geq_j b^h_j\} \ \text{ and } \ \mu(A) = \sum_{B \subseteq A} m(B) \qquad (10)$$
Computing the value of μ(A) with the Möbius transform induces the evaluation of 2^{|A|} parameters. In a model composed of n criteria, it implies the elicitation of 2^n parameters, with μ(∅) = 0 and μ(F) = 1. To reduce the number of parameters to elicit, we use a 2-additive capacity in which all the interactions involving more than 2 criteria are equal to zero. Inferring a 2-additive capacity for a model having n criteria requires the determination of n(n+1)/2 - 1 parameters. Finally, the condition for an object a ∈ X to be assigned to category C_h can be expressed as follows:
$$\mu(F_{a,h-1}) \geq \lambda \quad \text{and} \quad \mu(F_{a,h}) < \lambda \qquad (11)$$
with F_{a,h-1} = {j : a_j ≥_j b^{h-1}_j} and F_{a,h} = {j : a_j ≥_j b^h_j}.
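As a small illustration of how a 2-additive capacity is evaluated from its Möbius masses, consider the following sketch; the masses are invented but satisfy the normalisation and monotonicity conditions (9).

```python
# Evaluating a 2-additive capacity mu(A) from Mobius masses (illustrative values).
from itertools import combinations

mobius = {                      # m({j}) and m({i, j}); all other masses are 0
    (1,): 0.3, (2,): 0.3, (3,): 0.3,
    (1, 2): -0.1, (2, 3): 0.2,
}

def capacity(criteria):
    """mu(A) = sum of Mobius masses of all subsets of A (here, of size <= 2), cf. (7)."""
    crit = set(criteria)
    singletons = sum(mobius.get((j,), 0.0) for j in crit)
    pairs = sum(mobius.get(tuple(sorted(p)), 0.0) for p in combinations(crit, 2))
    return singletons + pairs

print(capacity({1, 2}))     # 0.3 + 0.3 - 0.1 = 0.5
print(capacity({1, 2, 3}))  # 1.0 (normalisation)
# Outranking test of relation (10): a outranks b^h when capacity(A) >= lambda,
# where A is the set of criteria on which a is at least as good as b^h.
```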
Learning MR-Sort
Learning the parameters of MR-Sort and ELECTRE TRI models has been studied in several articles [START_REF] Leroy | Learning the parameters of a multiple criteria sorting method[END_REF][START_REF] Sobrie | Learning a majority rule model from large sets of assignment examples[END_REF][START_REF] Mousseau | Inferring an ELECTRE TRI model from assignment examples[END_REF][START_REF] Mousseau | Using assignment examples to infer weights for ELECTRE TRI method: some experimental results[END_REF][START_REF] The | Using assignment examples to infer category limits for the ELECTRE TRI method[END_REF][START_REF] Dias | An aggregation/disaggregation approach to obtain robust conclusions with ELECTRE TRI[END_REF][START_REF] Doumpos | An evolutionary approach to construction of outranking models for multicriteria classification: the case of the ELECTRE TRI method[END_REF][START_REF] Cailloux | Eliciting ELECTRE TRI category limits for a group of decision makers[END_REF][START_REF] Zheng | Learning criteria weights of an optimistic ELECTRE TRI sorting rule[END_REF]. In this section, we recall how to learn the parameters of a MR-Sort model using respectively an exact method [START_REF] Leroy | Learning the parameters of a multiple criteria sorting method[END_REF] and a heuristic algorithm [START_REF] Sobrie | Learning a majority rule model from large sets of assignment examples[END_REF]. We then extend the heuristic algorithm to MR-Sort-CV.
Learning a Simple MR-Sort Model
It is possible to learn a MR-Sort model from a learning set using Mixed Integer Programming (MIP), see [START_REF] Leroy | Learning the parameters of a multiple criteria sorting method[END_REF]. Such a MIP formulation is not suitable for large data sets because of the high computing time required to infer the MR-Sort parameters. In order to learn MR-Sort models in the context of large data sets, a heuristic algorithm has been proposed in [START_REF] Sobrie | Learning a majority rule model from large sets of assignment examples[END_REF]. As for the MIP, the heuristic algorithm takes as input a set of assignment examples and their vectors of performances. The algorithm returns the parameters of a MR-Sort model. The heuristic algorithm proposed in [START_REF] Sobrie | Learning a majority rule model from large sets of assignment examples[END_REF] works as follows. First, a population of N_mod MR-Sort models is initialized. Thereafter, the following two steps are repeated iteratively on each model in the population:
1. A linear program optimizes the weights and the majority threshold on the basis of assignment examples while keeping the profiles fixed.
2. Given the inferred weights and the majority threshold, a heuristic adjusts the profiles of the model on the basis of the assignment examples.
After applying these two steps to all the models in the population, the N_mod/2 models restoring the smallest number of examples are reinitialized. These steps are repeated until the heuristic finds a model that fully restores all the examples or after a number of iterations specified a priori. This approach can be viewed as a sort of evolutionary metaheuristic (without crossover) in which a population of models is evolved.
The linear program designed to learn the weights and the majority threshold is the following:
$$\begin{array}{ll}
\min \displaystyle\sum_{a \in A} (x'_a + y'_a) & \\
\text{s.t.} \quad \displaystyle\sum_{j : a_j \geq_j b^{h-1}_j} w_j - x_a + x'_a = \lambda & \forall a \in A_h,\ h = \{2, \ldots, p\} \\
\phantom{\text{s.t.} \quad} \displaystyle\sum_{j : a_j \geq_j b^h_j} w_j + y_a - y'_a = \lambda - \varepsilon & \forall a \in A_h,\ h = \{1, \ldots, p-1\} \\
\phantom{\text{s.t.} \quad} \displaystyle\sum_{j=1}^{n} w_j = 1, \quad w_j \in [0;1]\ \forall j \in F, \quad \lambda \in [0;1] & \\
\phantom{\text{s.t.} \quad} x_a, y_a, x'_a, y'_a \in \mathbb{R}^+_0, \quad \varepsilon \text{ a small positive number}
\end{array} \qquad (12)$$
It minimizes a sum of slack variables, x'_a and y'_a, that is equal to 0 when all the objects are correctly assigned, i.e. assigned to the category defined in the input data set. We remark that the objective function of the linear program does not explicitly minimize the 0/1 loss but a sum of slacks. This implies that compensatory effects might appear, with undesirable consequences on the 0/1 loss. However, in this heuristic, we consider that these effects are acceptable. The linear program does not involve binary variables. Therefore, the computing time remains reasonable when the size of the problem increases.
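For concreteness, here is a sketch of (12) restricted to two categories and a fixed profile; it assumes PuLP with its bundled CBC solver (any LP solver would do) and uses a synthetic learning set, so it is not the implementation used in the experiments.

```python
# Sketch of LP (12) for p = 2 categories using PuLP; learning set is synthetic.
import pulp

examples = [([12, 14, 9], 2), ([11, 13, 12], 2), ([8, 6, 11], 1), ([7, 9, 5], 1)]
profile = [10, 10, 10]          # fixed concordance profile b^1
n, eps = 3, 0.001

prob = pulp.LpProblem("mr_sort_weights", pulp.LpMinimize)
w = [pulp.LpVariable(f"w{j}", 0, 1) for j in range(n)]
lam = pulp.LpVariable("lam", 0, 1)
slack = {}
for i, (a, cat) in enumerate(examples):
    s = pulp.lpSum(w[j] for j in range(n) if a[j] >= profile[j])
    if cat == 2:        # must outrank b^1: sum - x + x' = lambda
        x, xp = pulp.LpVariable(f"x{i}", 0), pulp.LpVariable(f"xp{i}", 0)
        prob += s - x + xp == lam
        slack[i] = xp
    else:               # must not outrank b^1: sum + y - y' = lambda - eps
        y, yp = pulp.LpVariable(f"y{i}", 0), pulp.LpVariable(f"yp{i}", 0)
        prob += s + y - yp == lam - eps
        slack[i] = yp
prob += pulp.lpSum(w) == 1
prob += pulp.lpSum(slack.values())          # objective: total misclassification slack
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([round(v.value(), 2) for v in w], round(lam.value(), 2))
```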
The objective function of the heuristic varying the profiles maximizes the number of examples compatible with the model. To do so, it iterates over each profile b^h and each criterion j and identifies a set of candidate moves for the profile, which correspond to the performances of the examples on criterion j located between profiles b^{h-1} and b^{h+1}. Each candidate move is evaluated as a function of the probability to improve the classification accuracy of the model. To assess whether a candidate move is likely to improve the classification of one or several objects, the examples which have an evaluation on criterion j located between the current value of the profile, b^h_j, and the candidate move, b^h_j + δ (resp. b^h_j - δ), are grouped in different subsets:
- V^{+δ}_{h,j} (resp. V^{-δ}_{h,j}): the sets of objects misclassified in C_{h+1} instead of C_h (resp. C_h instead of C_{h+1}), for which moving the profile b^h by +δ (resp. -δ) on j results in a correct assignment.
- W^{+δ}_{h,j} (resp. W^{-δ}_{h,j}): the sets of objects misclassified in C_{h+1} instead of C_h (resp. C_h instead of C_{h+1}), for which moving the profile b^h by +δ (resp. -δ) on j strengthens the criteria coalition in favor of the correct classification but will not by itself result in a correct assignment.
- Q^{+δ}_{h,j} (resp. Q^{-δ}_{h,j}): the sets of objects correctly classified in C_{h+1} (resp. C_h) for which moving the profile b^h by +δ (resp. -δ) on j results in a misclassification.
- R^{+δ}_{h,j} (resp. R^{-δ}_{h,j}): the sets of objects misclassified in C_{h+1} instead of C_h (resp. C_h instead of C_{h+1}), for which moving the profile b^h by +δ (resp. -δ) on j weakens the criteria coalition in favor of the correct classification but does not induce misclassification by itself.
- T^{+δ}_{h,j} (resp. T^{-δ}_{h,j}): the sets of objects misclassified in a category higher than C_h (resp. in a category lower than C_{h+1}) for which the current profile evaluation weakens the criteria coalition in favor of the correct classification.
A formal definition of these sets can be found in [START_REF] Sobrie | Learning a majority rule model from large sets of assignment examples[END_REF]. The evaluation of the candidate moves is done by aggregating the number of elements in each subset. Finally, the choice to move or not the profile on the criterion is determined by comparing the candidate move evaluation to a random number drawn uniformly. These operations are repeated multiple times on each profile and each criterion.
Learning a MR-Sort-CV Model
As compared with MR-Sort, a MR-Sort-CV model requires the elicitation of additional parameters: a veto profile, veto weights and a veto threshold. In (2), the MR-Sort condition $\sum_{j : a_j \geq_j b^{h-1}_j} w_j \geq \lambda$ is a necessary condition for an alternative to be assigned to a category at least as good as C_h. Basically, the coalitional veto rule can be viewed as a dual version of the majority rule. It provides a sufficient condition for being assigned to a category worse than C_h. An alternative a is assigned to such a category as soon as $\sum_{j : a_j \leq_j v^{h-1}_j} z_j \geq \Lambda$. This condition has essentially the same form as the MR-Sort rule except that the sum is over the criteria on which the alternative's performance is at most as good as the profile (instead of "at least as good", for the MR-Sort rule).
In order to learn a MR-Sort-CV model, we modify the heuristic presented in the previous subsection as shown in Algorithm 1. The main differences with the MR-Sort heuristic are highlighted in grey.
In the rest of this section, we give some detail about the changes that have been brought to the heuristic.
Algorithm 1. Metaheuristic to learn the parameters of an MR-Sort-CV model.
Generate a population of N_model models with concordance profiles initialized with a heuristic
repeat
  for all models M of the set do
    Learn the concordance weights and majority threshold with a linear program, using the current concordance profiles
    Apply the heuristic N_it times to adjust the concordance profiles, using the current concordance weights and threshold
    Initialize a set of veto profiles, taking into account the concordance profiles (in the first step and in case the coalitional veto rule was discarded in the previous step)
    Learn the veto weights and majority threshold with a linear program, using the current profiles
    Adjust the veto profiles by applying the heuristic N_it times, using the current veto weights and threshold
    Discard the coalitional veto if it does not improve classification accuracy
  end for
  Reinitialize the N_model/2 models giving the worst CA
until Stopping criterion is met

Concordance weights optimization. The concordance profiles being given, the weights are optimized using the linear program (12). The sum of the error variables x'_a + y'_a was the objective to be minimized. In the linear program, x'_a is set to a positive value whenever it is not possible to satisfy the condition which assigns a to a category at least as good as C_h, while a actually belongs to C_h. Impeding the assignment of positive values to x'_a amounts to favoring false positive assignments. Hence, positive values of x'_a should be heavily penalized. In contrast, positive values of y'_a correspond to the case in which the conditions for assigning a to the categories above the profile are met while a belongs to the category below the profile. Positive values of y'_a need not be discouraged as much as those of x'_a and therefore we changed the objective function of the linear program into $\min \sum_{a \in A} (10\, x'_a + y'_a)$.

Adjustment of the concordance profiles. In order to select moves by a quantity ±δ applied to the profile level on a criterion, we compute a probability which takes into account the sizes of the sets listed at the end of Section 3.1. To ensure the consistency of the model, the candidate moves are located in the interval $[\max(b^{h-1}_j, v^h_j), b^{h+1}_j]$. In all cases, the movements which lower the profile (-δ) are more favorable to false positives than the opposite movements. Therefore, all other things being equal (i.e. the sizes of the sets), the probability of choosing a downward move -δ should be larger than that of an upward move +δ. The probability of an upward move is thus computed by the following formula
$$P(b^h_j + \delta) = \frac{2|V^{+\delta}_{h,j}| + 1|W^{+\delta}_{h,j}| + 0.1|T^{+\delta}_{h,j}|}{|V^{+\delta}_{h,j}| + |W^{+\delta}_{h,j}| + |T^{+\delta}_{h,j}| + 5|Q^{+\delta}_{h,j}| + |R^{+\delta}_{h,j}|}, \qquad (13)$$
while that of a downward move is
$$P(b^h_j - \delta) = \frac{4|V^{-\delta}_{h,j}| + 2|W^{-\delta}_{h,j}| + 0.1|T^{-\delta}_{h,j}|}{|V^{-\delta}_{h,j}| + |W^{-\delta}_{h,j}| + |T^{-\delta}_{h,j}| + 5|Q^{-\delta}_{h,j}| + |R^{-\delta}_{h,j}|}. \qquad (14)$$
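The evaluation of a candidate move can be expressed with a small helper that takes the set sizes and the coefficients as parameters, so that both (13) and (14) are instances of the same computation; the set sizes below are invented for illustration.

```python
# Sketch of evaluating a candidate profile move from the set sizes of Sect. 3.1.
import random

def move_probability(V, W, T, Q, R, coef_v, coef_w, coef_t=0.1, coef_q=5):
    numerator = coef_v * V + coef_w * W + coef_t * T
    denominator = V + W + T + coef_q * Q + R
    return numerator / denominator if denominator else 0.0

p_up = move_probability(V=2, W=3, T=1, Q=2, R=4, coef_v=2, coef_w=1)    # cf. (13)
p_down = move_probability(V=2, W=3, T=1, Q=2, R=4, coef_v=4, coef_w=2)  # cf. (14)
accept_up = random.random() < p_up   # the move is applied when a uniform draw falls below it
print(round(p_up, 2), round(p_down, 2), accept_up)   # downward moves get higher probability
```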
The values appearing in (13) and (14) were determined empirically.

Initialization of veto profiles. A randomly selected veto profile is associated with each concordance profile. The veto profiles are initialized in ascending order, i.e. from the profile delimiting the worst category to the one delimiting the best category. The generation of a veto profile is done by drawing a random number between v^{h-1}_j and b^h_j on each criterion j. For the profile delimiting the worst categories, v^{h-1}_j corresponds to the worst possible performance on criterion j. Veto rules that do not prove useful at the end of the improvement phase (i.e., that do not contribute to reduce misclassification) are discarded and new veto profiles are generated during the next loop.

Veto weights optimization. The same type of linear program is used as for the concordance weights. One difference lies in the objective function, in which we give the same importance to the positive and negative slack variables. Another difference lies in the constraints ensuring that an alternative a ∈ C_h should not outrank the profile b^{h+1}. In order not to outrank the profile b^{h+1}, an alternative should either fulfill the condition $\sum_{j : a_j \geq_j b^{h+1}_j} w_j < \lambda$ or the condition $\sum_{j : a_j \leq_j v^{h+1}_j} z_j \geq \Lambda$. Since this is a disjunction of conditions, it is not necessary to ensure both. Therefore we do not consider the alternatives a ∈ C_h that satisfy the first condition in the linear program.

Adjustment of the veto profiles. The same heuristic as for the concordance profiles is used. The coefficients in (13) and (14) are modified in order to treat upward and downward moves equally.
Experiments
Datasets
To assess the performance of the heuristic algorithm designed for learning the parameters of an MR-Sort-CV model, we use it to learn MR-Sort-CV models from several real datasets available at http://www.uni-marburg.de/fb12/kebi/research/repository/monodata, which serve as benchmarks for monotone classification algorithms [4]. They involve from 120 to 1728 instances, from 4 to 8 monotone attributes, and from 2 to 36 categories (see Table 1).
In our experiments, categories were initially binarized by thresholding at the median. We split each dataset into a 50/50 partition: a learning set and a test set. Models are learned on the first set and evaluated on the test set; this is repeated 100 times with learning sets drawn at random.
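The experimental protocol described above can be summarized by the following sketch. The learning routine `learn_mr_sort_cv` is a placeholder for the heuristic of Section 3 and is assumed to return an object with a `predict` method; it is not part of the code described in this paper.

```python
import numpy as np

def evaluate(X, y, learn_mr_sort_cv, n_repeats=100, seed=0):
    """Median binarization followed by repeated 50/50 learning/test splits."""
    rng = np.random.default_rng(seed)
    y_bin = (y >= np.median(y)).astype(int)      # binarize categories at the median
    accuracies = []
    for _ in range(n_repeats):
        idx = rng.permutation(len(X))
        half = len(X) // 2
        learn, test = idx[:half], idx[half:]
        model = learn_mr_sort_cv(X[learn], y_bin[learn])   # assumed learner
        accuracies.append(np.mean(model.predict(X[test]) == y_bin[test]))
    return float(np.mean(accuracies)), float(np.std(accuracies))
```

The averages and standard deviations reported in the tables below are computed over such repeated splits.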
Results Obtained with the Binarized Datasets
In a previous experimental study [START_REF] Sobrie | Learning the parameters of a non compensatory sorting model[END_REF] performed on the same datasets, we compared the classification accuracy on the test sets obtained with MR-Sort and NCS. These results are reproduced in columns 2 and 3 of Table 2. No significant improvement in classification accuracy was observed when comparing NCS to MR-Sort. We have added the results for the new MR-Sort-CV heuristic in the fourth column of this table. In four cases, no improvement is observed as compared with MR-Sort. A slight improvement (of the order of 1%) is obtained in three cases (CPU, ESL, LEV). In one case (BCC), the results are slightly worse (2%).
Results Obtained with the Original Datasets
We also checked the algorithm for assigning alternatives to the original classes, in case there are more than two classes. The datasets with more than two classes are CPU, MPG, ESL, ERA, LEV and CEV. We did not consider MPG and ESL, in which alternatives are respectively assigned to 36 and 9 categories: it is not reasonable to aim at learning 35 (resp. 9) concordance and veto profiles on the basis of the 392 (resp. 488) assignment examples in the dataset, half of them being reserved for testing purposes. The results obtained with the four remaining datasets are reported in Table 3. We observe improvements w.r.t. MR-Sort on all four datasets. The gain ranges from a little less than 1% for ERA to 4% for CPU.
Results Obtained Using Randomly Generated MR-Sort-CV models
In view of the slight improvements w.r.t. MR-Sort obtained on the benchmark datasets, one may wonder whether this result should be ascribed to the design of our heuristic algorithm. In order to check whether our algorithm is able to learn an MR-Sort-CV model when the objects have been assigned to categories according to a hidden MR-Sort-CV rule, we performed the following experiments.
For a number of criteria ranging from 4 to 7, we generate at random a MR-Sort-CV model with two categories. We generate a learning set composed of 1000 random vectors of alternatives. Then we assign the alternatives to the two categories using the MR-Sort-CV model. We use the algorithm to learn a MR-Sort-CV model that reproduces as accurately as possible the assignments of the alternatives in the learning set. Having generated 10000 additional alternatives at random and having assigned them to a category using the generated MR-Sort-CV model, we compare these assignments with those produced by the learned model. We repeat this 100 times for each number of criteria. The average classification accuracy for the learning and the test sets is displayed in Table 4.
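The data generation step of this experiment can be sketched as follows. The assignment rule shown here, sufficient weighted concordance and absence of a coalitional veto, paraphrases the two-category MR-Sort-CV rule introduced earlier in the paper; the way the random parameters are drawn (Dirichlet weights, uniform profiles) is an illustrative choice, not necessarily the one used in the reported experiments.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_mr_sort_cv_model(n_crit):
    w = rng.dirichlet(np.ones(n_crit))           # concordance weights (sum to 1)
    b = rng.uniform(0.3, 0.7, n_crit)            # concordance profile
    v = b - rng.uniform(0.1, 0.3, n_crit)        # veto profile, below b
    z = rng.dirichlet(np.ones(n_crit))           # veto weights (sum to 1)
    return dict(w=w, lam=rng.uniform(0.5, 1.0), b=b, v=v, z=z, Lam=rng.uniform(0.5, 1.0))

def assign(m, X):
    """1 = good category: enough supporting criteria and no coalitional veto."""
    concordance = (X >= m["b"]) @ m["w"] >= m["lam"]
    veto = (X <= m["v"]) @ m["z"] >= m["Lam"]
    return (concordance & ~veto).astype(int)

n_crit = 5
model = random_mr_sort_cv_model(n_crit)
X_learn = rng.uniform(0.0, 1.0, (1000, n_crit))
X_test = rng.uniform(0.0, 1.0, (10000, n_crit))
y_learn, y_test = assign(model, X_learn), assign(model, X_test)
# y_learn is handed to the learning heuristic; the learned model's assignments
# on X_test are then compared with y_test to obtain the test accuracy.
```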
We observe that, on average, the learned model correctly restores more than 98.5% of the assignments in the learning set and more than 97.5% of the assignments in the test set. This means that our heuristic algorithm is effective in learning an MR-Sort-CV model when the assignments are actually made according to such a model. An alternative explanation of the modest improvements yielded by using the MR-Sort-CV model is that the latter has limited additional descriptive power as compared to MR-Sort. We check this hypothesis by running the MR-Sort learning algorithm on the same artificial datasets generated by random MR-Sort-CV rules as in Table 4. The experimental results are reported in Table 5. By comparing Tables 4 and 5, we see that MR-Sort models are able to approximate MR-Sort-CV models very well. Indeed, the classification accuracy obtained with MR-Sort models is quite high on the test sets and only about 2% below that obtained with a learned MR-Sort-CV model.
Conclusion
We have presented MR-Sort-CV, a new original extension of the MR-Sort ordered classification model. This model introduces a new and more general form of veto condition which applies on coalitions of criteria rather than a single criterion. This coalitional veto condition can be expressed as a "reversed" MR-Sort rule. Such a symmetry enables us to design a heuristic algorithm to learn an MR-Sort-CV model, derived from an algorithm used to learn MR-Sort.
The experimental results obtained on benchmark datasets show that there is no significant improvement in classification accuracy as compared with MR-Sort in the case of two categories. The new model and the proposed learning algorithm lead to modest but definite gains in classification accuracy in the case of several categories. We checked that the learning algorithm was not responsible for the weak improvement of the assignment accuracy on the test sets. Therefore, the conclusion should be that the introduction of coalitional veto only adds limited descriptive ability to the MR-Sort model. This was also checked empirically.
The fact that coalitional veto (and this holds a fortiori for ordinary, single-criterion veto) adds little descriptive power to the MR-Sort model is an important piece of information in itself. In a learning context, the present study indicates that there is little hope of substantially improving classification accuracy by moving from an MR-Sort to an MR-Sort-CV model (as was also the case with the NCS model [START_REF] Sobrie | Learning the parameters of a non compensatory sorting model[END_REF]). Note that small improvements in classification accuracy may be valuable, for instance, in medical applications (see e.g. [START_REF] Sobrie | A new decision support model for preanesthetic evaluation[END_REF]). Therefore it may be justified to consider MR-Sort-CV in spite of the increased complexity of the algorithm and of the model interpretation as compared to MR-Sort. It should be emphasized that MR-Sort models lend themselves to easy interpretation in terms of rules [START_REF] Sobrie | A new decision support model for preanesthetic evaluation[END_REF]. The MR-Sort-CV model, although more complex, inherits this property since the coalitional veto condition is a reversed MR-Sort rule. Therefore, MR-Sort-CV models may be useful in specific applications.
Table 1. Data sets characteristics
Data set #instances #attributes #categories
DBS 120 8 2
CPU 209 6 4
BCC 286 7 2
MPG 392 7 36
ESL 488 4 9
MMG 961 5 2
ERA 1000 4 4
LEV 1000 4 5
CEV 1728 6 4
Table 2. Average and standard deviation of the classification accuracy on the test sets obtained with three different heuristics
Data set MR-Sort NCS MR-Sort-CV
DBS 0.8377 ± 0.0469 0.8312 ± 0.0502 0.8390 ± 0.0476
CPU 0.9325 ± 0.0237 0.9313 ± 0.0272 0.9429 ± 0.0244
BCC 0.7250 ± 0.0379 0.7328 ± 0.0345 0.7044 ± 0.0299
MPG 0.8219 ± 0.0237 0.8180 ± 0.0247 0.8240 ± 0.0391
ESL 0.8996 ± 0.0185 0.8970 ± 0.0173 0.9024 ± 0.0179
MMG 0.8268 ± 0.0151 0.8335 ± 0.0138 0.8267 ± 0.0119
ERA 0.7944 ± 0.0173 0.7944 ± 0.0156 0.7959 ± 0.0270
LEV 0.8408 ± 0.0122 0.8508 ± 0.0188 0.8551 ± 0.0171
CEV 0.8516 ± 0.0091 0.8662 ± 0.0095 0.8516 ± 0.0665
Table 3. Average and standard deviation of the classification accuracy in more than two categories obtained on the test sets
Data set MR-Sort MR-Sort-CV
CPU 0.8039 ± 0.0354 0.8469 ± 0.0426
ERA 0.5123 ± 0.0233 0.5230 ± 0.0198
LEV 0.5734 ± 0.0213
CEV 0.7664 ± 0.0193 0.7832 ± 0.0130
Table 4. Average and standard deviation of the classification accuracy of MR-Sort-CV models learned on data generated by random MR-Sort-CV models with 2 categories and 4 to 7 criteria. The learning set is composed of 1000 alternatives and the test set of 10000 alternatives.
#criteria Learning set Test set
4 0.9908 ± 0.01562 0.98517 ± 0.01869
5 0.9904 ± 0.01447 0.98328 ± 0.01677
6 0.9860 ± 0.01560 0.97547 ± 0.02001
7 0.9827 ± 0.01766 0.96958 ± 0.02116
Table 5. Average and standard deviation of the classification accuracy of MR-Sort models learned on data generated by random MR-Sort-CV models with 2 categories and 4 to 7 criteria. The learning set is composed of 1000 alternatives and the test set of 10000 alternatives.
#criteria Learning set Test set
4 0.9760 ± 0.0270 0.9700 ± 0.0309
5 0.9713 ± 0.0275 0.9627 ± 0.0318
6 0.9645 ± 0.0248 0.9525 ± 0.0307
7 0.9639 ± 0.0264 0.9518 ± 0.0301
It is worth noting that outranking methods used for sorting are not subject to Condorcet effects (cycles in the preference relation), since alternatives are not compared in pairwise manner but only to profiles limiting the categories.
"weak preference" means being "at least as good as". | 36,844 | [
"6636",
"4625"
] | [
"160918",
"11769",
"160918"
] |
01343404 | en | [
"phys"
] | 2024/03/04 23:41:46 | 2016 | https://univ-rennes.hal.science/hal-01343404/file/In-phase%20and%20antiphase%20self-intensity%20OL%20Alouini.pdf | Abdelkrim El Amili
Kevin Audo
Mehdi Alouini
email: [email protected]
In-phase and anti-phase self-intensity regulated dual frequency laser using two-photon absorption
Keywords: relaxations, relative intensity noise, and Two-photon absorption
A 25 dB reduction of resonant intensity noise spectra is experimentally demonstrated for both the anti-phase and in-phase relaxation oscillations of a dual-frequency solid-state laser operating at telecommunication wavelengths. Experimental results demonstrate that the incorporation of an intra-cavity two-photon absorber acting as a buffer reservoir efficiently reduces the in-phase noise contribution, while it is somewhat ineffective in lowering the anti-phase noise contributions. A slight spatial separation of the two modes in the nonlinear two-photon absorber reduces the anti-phase resonant intensity noise component. These experimental results provide a new approach to the design of ultra-low-noise dual-frequency solid-state lasers.
Dual-frequency solid-state lasers are of great interest for many practical applications, such as metrology [1,2], lidar-radar [3], wireless communications [4] and microwave photonics [5][6][7]. In particular, dual-frequency lasers whose two modes are cross-polarized offer tunability of the frequency difference with an inherently narrow linewidth of the beatnote. Moreover, solid-state lasers are also known for their intrinsically low phase noise and high optical power. Unfortunately, they suffer from resonant noise peaks at the relaxation oscillation (RO) frequency. In particular, dual-frequency solid-state lasers, which support two laser modes, exhibit resonant noise at two eigenfrequencies associated with the so-called anti-phase and in-phase noise spectra. The in-phase noise, corresponding to the common RO, results from the nonlinear interaction between the population inversion and the total intracavity photon population. The RO mechanism is inherently present in any laser in which the population inversion lifetime is longer than the cavity photon lifetime; such lasers are classified as class-B lasers [8,9]. The anti-phase noise dominates at lower frequencies and follows the coupling dynamics of the two laser modes through nonlinear gain saturation of the active gain medium [10,11]. Its dynamics leads to a damped resonant exchange of energy between the two laser modes (cf. Fig. 2). The presence of this anti-phase noise is undoubtedly a major drawback of dual-polarized solid-state lasers, since its suppression is much less straightforward than that of the RO noise. It therefore limits the utility of the orthogonally polarized lasing modes in applications where low intensity noise levels are required for the generated microwave beat note.
Therefore, different methods have been reported to reduce either the anti-phase or the in-phase noise spectra. The anti-phase noise can be reduced by designing a two-axis laser topology, in which the two modes are spatially separated in the active medium, leading to the cancellation of their coupling mechanism [12][13][14]. Another approach consists of an active medium with the appropriate crystal cut in conjunction with a proper orientation of the two eigen-polarizations [15]. Moreover, the in-phase intensity noise can be damped either electronically or optically using feedback loops [16,17]. An attractive alternative solution is based on a dramatic modification of the laser dynamics by establishing a class-A laser [18], in which the population inversion lifetime is shorter than the cavity photon lifetime. This approach is, however, viable only for high-finesse external-cavity semiconductor lasers [19].
Recently, a promising approach has been explored for solid-state lasers: incorporating a buffer reservoir (BR) into the laser cavity, in which the exclusive interaction between the population inversion and the photon population is restricted [20][21][22]. This approach has been experimentally validated in single-frequency lasers, but never in dual-frequency solid-state lasers. More precisely, one can explore whether the incorporation of such a BR in a dual-frequency laser can reduce the contributions of the in-phase and anti-phase noise dynamics. This letter presents the incorporation of the BR approach in a dual-frequency solid-state laser system. In addition, we also propose, for the first time, a new laser cavity design to get rid of both in-phase and anti-phase noise sources.
The laser system used for this investigation (shown in Fig. 1(a)) is based on a 4.9-cm-long planar-concave cavity. The active medium is a 1.5-mm-long phosphate glass co-doped with Erbium and Ytterbium (Er,Yb:glass). The first side of the Er,Yb:glass plate, which acts as the cavity input mirror, is coated for high reflectivity at 1550 nm (R>99.9 %) and high transmission (T=95 %) at the pump wavelength (975 nm). The cavity is closed by a 5-cm-radius-of-curvature mirror transmitting 0.5 % of the intensity at 1560 nm. The gain medium is pumped by a multimode fiber-coupled semiconductor laser diode operating at 975 nm. An AR-coated 200-µm-thick birefringent YVO4 crystal, cut at 45° with respect to its optical axis, is inserted into the laser cavity in order to slightly reduce the coupling strength between the two polarization modes and thus increase the robustness of dual-mode operation. This birefringent crystal basically introduces a spatial separation of 20 µm between the two orthogonal modes in the active medium. This separation is very small compared to the ~120 µm beam diameter, so the laser can be considered a single-axis dual-frequency laser and can consequently be pumped with a single pump spot. The two modes are continuously monitored with a Fabry-Perot cavity to check that the laser remains single mode without any mode hopping during data acquisition. The laser output is analyzed using a 3.7-MHz-bandwidth photodiode. A half-wave plate (HWP) followed by an optical isolator is inserted in front of the photodiode. By rotating the HWP, one can project the laser output on any linear polarization state, so that the two laser eigenpolarizations can be analyzed independently or combined. The noise spectra are finally recorded using an electrical spectrum analyzer (ESA) from Rohde & Schwarz (model FSV, 10 Hz - 3.6 GHz). To conduct our study, a silica etalon is added to the standard solid-state laser structure to induce single-longitudinal-mode oscillation of the two polarization states; this etalon is also used as the BR to control the RIN spectra in this study. Fig. 2 depicts typical recorded relative intensity noise (RIN) spectra, with the optical amplitudes of both modes carefully equalized. The RIN spectra labelled (1) and (2) correspond to the x- and y-polarized modes, respectively; they exhibit a strong peak at around 20 kHz corresponding to the anti-phase resonant noise and a second peak at around 45 kHz corresponding to the in-phase resonant noise. The presence of the strong anti-phase peak proves that the spatial separation between the two modes is so weak that the two modes remain highly coupled in the active medium. This is confirmed by rotating the HWP in front of the polarizing isolator by 22.5°. In this case, the optical amplitudes of the two modes are constructively added and the anti-phase peak cancels out, while the in-phase peak remains present.

Fig. 2. Typical RIN spectra of a dual-mode solid-state laser recorded using an electrical spectrum analyzer (ESA) with 100 Hz resolution bandwidth, without the nonlinear absorber in the laser.
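For readers who work with digitized photodiode traces rather than an ESA, the RIN spectra of Fig. 2 correspond to the power spectral density of the intensity fluctuations normalized by the squared mean detected power. The following sketch only illustrates this definition on synthetic data; the sampling rate, trace and library calls are illustrative assumptions and do not describe the acquisition chain actually used here.

```python
import numpy as np
from scipy.signal import welch

def rin_spectrum(power_trace, fs):
    """Return frequencies (Hz) and RIN (dB/Hz) of a detected power trace."""
    p = np.asarray(power_trace, dtype=float)
    f, psd = welch(p - p.mean(), fs=fs, nperseg=4096)   # one-sided PSD of fluctuations
    return f, 10.0 * np.log10(psd / p.mean() ** 2)

# Synthetic example: a DC level with white noise and a weak resonance near 45 kHz
fs = 1.0e6
t = np.arange(0, 0.1, 1.0 / fs)
trace = 1.0 + 1e-4 * np.random.randn(t.size) + 5e-5 * np.sin(2 * np.pi * 45e3 * t)
freqs, rin_db = rin_spectrum(trace, fs)
```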
The impact of the nonlinear absorber is studied in terms of the excess noise measured around the in-phase and anti-phase resonance frequencies. By incorporating a 100-µm-thick (100)-cut silicon (Si) plate, intensity-dependent losses are induced in the laser cavity due to the two-photon absorption (TPA) mechanism. Furthermore, this uncoated Si plate acts as an intra-cavity filter, which replaces the silica etalon.
We first consider the configuration shown in Fig. 1(a). In this first configuration, the nonlinear absorber is placed between the birefringent crystal and the output coupler, i.e., where the two laser modes are perfectly superimposed in the cavity. Fig. 3(a) shows the recorded RIN spectra: the in-phase peak is reduced by about 32 dB as compared to the laser without TPA (cf. the RIN spectra depicted in Fig. 2). However, the anti-phase peak remains, even though its amplitude is slightly reduced by a few dB compared to the case without the absorber. This behaviour can easily be understood by keeping in mind that the anti-phase noise occurs at constant total power. In other words, the anti-phase noise does not bring any intensity fluctuations provided that the total power is measured (see Fig. 2), whereas the nonlinear losses brought by the TPA mechanism depend on the total optical power.
Although this behavior is intuitively expected, the observed performance is not straightforward. Indeed, this behavior suggests that the TPA mechanism in Si is not sensitive to the polarization state of the interacting lightwaves but only to the total power. To go further and validate this intuitive prediction, the orientation of the crystallographic axes of the Si plate was checked with respect to the polarization states of the laser, to observe whether it has any impact on the anti-phase noise level. To this aim, the Si plate is placed on a rotating mount that enables rotation around the optical axis of the laser. Fig. 3(b) shows the evolution of the RIN level for the x-polarized mode at the anti-phase frequency versus the rotation angle of the Si plate. For each orientation, the intensities of the cross-polarized modes (x and y) have been carefully equalized by a fine adjustment of the nonlinear plate position, as reported in the inset of Fig. 3(b). These experimental results show that the orientation of the crystallographic axes of the Si plate does not play any significant role in the anti-phase fluctuations. Moreover, these results suggest that the two-photon absorption mechanism in Si is not able to act selectively on the intensity fluctuations of each polarization state. Accordingly, if one aims to reduce the anti-phase noise, the nonlinear absorber has to act on the two polarization modes independently. In essence, in the buffer reservoir description (see Ref. [18]), one has to introduce at least two reservoirs in order to dampen the intensity fluctuations of the two photon populations. To reach such an operating condition, we slightly lift the spatial degeneracy of the two modes in the Si plate, as depicted in the configuration of Fig. 1(b). In practice, the Si plate is simply inserted between the active medium and the birefringent displacer, so that the spatial separation occurs in both the Si plate and the active medium. Note that the mode coupling strength is kept constant in the active medium as compared to the configuration shown in Fig. 1(a), which potentially ensures the same level of anti-phase noise for the two configurations depicted in Fig. 1. Moreover, it must be noted that, the spatial separation being small (20 µm) with respect to the mode diameter (120 µm), the two modes share a large area of the nonlinear two-photon absorber, while only a small part of each mode interacts with independent areas of the nonlinear absorber. As a result, the Si plate is now expected to induce at the same time common losses and selective losses for the two modes, leading to a reduction of both in-phase and anti-phase noise peaks. With the laser realized in this configuration, a 25 dB noise level reduction is obtained on the two peaks, as depicted in Fig. 4, compared to the spectra depicted in Fig. 2 and Fig. 3. These results demonstrate the decisive role played by even a tiny spatial separation of the dual modes in the Si plate. In summary, we have investigated and reported for the first time the impact of an intra-cavity nonlinear absorber on the noise properties of a dual-frequency solid-state laser. This study is conducted in a dual-frequency Er,Yb:glass laser containing a silicon plate which induces intensity-dependent two-photon absorption losses. In particular, we have demonstrated a reduction in the RO noise similar to the case of a single-frequency solid-state laser, provided that this noise originates from in-phase fluctuations.
Conversely, the TPA mechanism fails to damp the noise originating from anti-phase fluctuations. By modifying the laser architecture so that a slight spatial separation between the two modes is created in the nonlinear absorber, we have been able to observe a simultaneous 25 dB reduction of both the in-phase and anti-phase noise spectra. In this latter configuration, in which each laser mode interacts with an independent region of the nonlinear two-photon absorber, the anti-phase fluctuations are also experienced by the nonlinear absorber and can thus be damped.
This work opens promising perspectives in several domains where dual-frequency lasers are employed, such as lidar-radar and the generation and optical distribution of high-purity microwave signals [5,7]. Indeed, while in-phase noise spectra can be damped using a large variety of techniques, the anti-phase noise contributions are extremely difficult to remove. To date, this is the first demonstration of a dual-frequency solid-state laser inherently free of both anti-phase and in-phase resonant RIN peaks. Finally, this work also opens the way to a better understanding of fundamental applications where two laser beams are required and where their intensity noise spectra play a central role, such as coherent population trapping employed in atomic clocks [23]. Future work is planned to extend the three-rate-equation formalism [21] to dual-frequency lasers in order to further reduce the remaining RIN peaks, ideally down to the shot-noise level.
Funding. Délégation Générale de l'Armement (DGA) ; Région Bretagne, France
Fig. 1. Schematic representation of the two studied laser architectures. The birefringent displacer is an YVO4 crystal cut at 45° from its optical axis that induces a 20 µm spatial separation between the two orthogonally polarized modes. A 100-µm-thick silicon crystal is used as the nonlinear absorber. (a) The two polarized modes are slightly separated only in the active medium. (b) The two polarized modes are slightly separated in both the active medium and the nonlinear absorber.
Fig. 3. (a) RIN spectra corresponding to the laser configuration shown in Fig. 1(a) (ESA resolution bandwidth of 100 Hz). (b) Evolution of the RIN level at the anti-phase frequency versus the relative rotation angle between the crystal axes of the Si plate and the polarization directions (dotted line). The evolution of the RIN level at the anti-phase frequency without the Si plate in the laser is depicted as a dashed line. The inset shows the evolution of the intensity ratio of the two orthogonal modes versus the relative rotation angle.
Fig. 4. RIN spectra corresponding to the laser configuration of Fig. 1(b) (ESA resolution bandwidth: 100 Hz).
Acknowledgment. The authors would like to acknowledge Afshin Daryoush (Drexel Univ.) for fruitful discussions and Cyril Hamel for technical support. | 15,373 | [
"945008",
"490"
] | [
"57111",
"57111"
] |
01474744 | en | [
"shs",
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01474744/file/978-3-642-41641-5_10_Chapter.pdf | Andreas Rogge-Solti
email: [email protected]
Ronny S Mans
email: [email protected]
Wil M P Van Der Aalst
Mathias Weske
email: [email protected]
Improving Documentation by Repairing Event Logs
Keywords: documentation quality, missing data, stochastic Petri nets, Bayesian networks
In enterprises, business process models are used for capturing as-is business processes. During process enactment correct documentation is important to ensure quality, to support compliance analysis, and to allow for correct accounting. Missing documentation of performed activities can be directly translated into lost income, if accounting is based on documentation. Still, many processes are manually documented in enterprises. As a result, activities might be missing from the documentation, even though they were performed. In this paper, we make use of process knowledge captured in process models, and provide a method to repair missing entries in the logs. The repaired logs can be used for direct feedback to ensure correct documentation, i.e., participants can be asked to check, whether they forgot to document activities that should have happened according to the process models. We realize the repair by combining stochastic Petri nets, alignments, and Bayesian networks. We evaluate the results using both synthetic data and real event data from a Dutch hospital.
Introduction
Enterprises invest a lot of time and money to create business process models in order to use them for various purposes: documentation and understanding, improvement, conformance checking, performance analysis, etc. The modeling goal is often to capture the as-is processes as accurately as possible. In many cases, process activities are performed and documented manually. We call the documentation of activities in a business process event logs. When event logs are subject to manual logging, data quality problems are common, resulting in incorrect or missing events in the event logs [START_REF]IEEE Task Force on Process Mining: Process Mining Manifesto[END_REF]. We focus on the latter and more frequent issue, as in our experience it is often the case that activities are performed, but their documentation is missing.
For an enterprise, it is crucial to avoid these data quality issues in the first place. Accounting requires activities to be documented, as otherwise, if documentation is missing, potential revenues are lost. In the healthcare domain, for example, we encountered the case that sometimes the activity preassessment of a patient is not documented, although it is done always before treatment. In this paper, we provide a technique to automatically repair an event log that contains missing entries. The idea is to use repaired event logs to alert process participants of potential documentation errors as soon as possible after process termination. We employ probabilistic models to derive the most likely timestamps of missing events, i.e., when the events should have occurred based on historical observations. This novel step assists process participants in correcting missing documentation directly, or to identify the responsible persons, who performed the activities in question.
State-of-the-art conformance checking methods [START_REF] Adriansyah | Conformance Checking using Cost-Based Fitness Analysis[END_REF] do not consider timing aspects. In contrast, we provide most likely timestamps of missing events. To achieve this, we use stochastically enriched process models, which we discover from event logs [START_REF] Rogge-Solti | Discovering Stochastic Petri Nets with Arbitrary Delay Distributions From Event Logs[END_REF]. As a first step, using path probabilities, it is determined which are the most likely missing events. Next, Bayesian networks [START_REF] Pearl | Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference[END_REF] capturing both initial beliefs of the as-is process and real observations are used to compute the most likely timestamp for each inserted event. Inserted events are marked as artificial, as long as they are not corrected by the process participants. An extended version of this paper is available as a technical report [START_REF] Rogge-Solti | Repairing Event Logs Using Stochastic Process Models[END_REF].
The remainder of this paper is organized as follows. First, we present background on missing data methods along other related works in Section 2. Afterwards, preliminaries are given in Section 3. Our approach for repairing individual traces in an event log is described in Section 4 followed by a presentation of the algorithmic details in Section 5. An evaluation of our approach using both synthetic and real-life event data is given in Section 6. Finally, conclusions are presented in Section 7.
Background and Related Work
Missing data has been investigated in statistics, but not in the context of conformance checking of business processes. There are different types of missing data: missing completely at random (MCAR), missing at random (MAR), and not missing at random (NMAR), cf. the overview by Schafer and Graham in [START_REF] Schafer | Missing Data: Our View of the State of the Art[END_REF]. These types refer to the independence assumptions between the fact that data is missing (missingness) and the data values of missing and observed data. MCAR is the strongest assumption, i.e., missingness is independent of both observed and missing data. MAR allows dependencies to observed data, and NMAR assumes no independence, i.e., captures cases where the missingness is influenced by the missing values, too. Dealing with NMAR data is problematic, as it requires a dedicated model for the dependency of missingness on the missing values, and is out of scope of this paper. We assume that data is MAR, i.e., whether data is missing does not depend on the value of the missing data, but may depend on observed data values.
Over the years, multiple methods have been proposed to deal with missing data, cf. [START_REF] Schafer | Missing Data: Our View of the State of the Art[END_REF]. However, these techniques are focusing on missing values in surveys and are not directly applicable to event logs, as they do not consider control flow relations in process models and usually assume a fixed number of observed variables.
Related work on missing data in process logs is scarce. Nevertheless, in a recent technical report, Bertoli et al. [START_REF] Bertoli | Reasoning-based Techniques for Dealing with Incomplete Business Process Execution Traces[END_REF] propose a technique to reconstruct missing events in process logs. The authors tackle the problem by mapping control flow constraints in BPMN models to logical formulae and use a SAT-solver to find candidates for missing events. In contrast, we use an alignment approach based on Petri nets, allowing us to deal with loops and probabilities of different paths. We also consider the time of the missing events, which allows performance analysis on a probabilistic basis. Some techniques developed in the field of process mining provide functionality that enables analysis of noisy or missing event data. In process mining, the quality of the event logs is crucial for the usefulness of the analysis results and low quality poses a significant challenge to the algorithms [START_REF]IEEE Task Force on Process Mining: Process Mining Manifesto[END_REF]. Therefore, discovery algorithms which can deal with noise, e.g., the fuzzy miner [START_REF] Günther | Fuzzy Mining: Adaptive Process Simplification Based on Multi-perspective Metrics[END_REF], and the heuristics miner [START_REF] Van Der Aalst | Process Mining: Discovery, Conformance and Enhancement of Business Processes[END_REF], have been developed. Their focus is on capturing the common and frequent behavior and abstract from any exceptional behavior. These discovery algorithms take the log as granted and do not try to repair missing events.
Another example is the alignment of traces in the context of conformance checking [START_REF] Adriansyah | Conformance Checking using Cost-Based Fitness Analysis[END_REF]. Here, the aim is to replay the event log within a given process model in order to quantify conformance by counting skipped and inserted model activities. We build upon this technique and extend it to capture path probabilities as gathered from historical observations. Note that the lion's share of work focuses on repairing models based on logs, rather than logs based on models. Examples are the work by Fahland and van der Aalst [START_REF] Fahland | Repairing Process Models to Reflect Reality[END_REF] that uses alignments to repair a process model to decrease inconcistency between model and log, and the work by Buijs et al. [START_REF] Buijs | Improving Business Process Models using Observed Behavior[END_REF], which uses genetic mining to find similar models to a given original model.
Preliminary Definitions and Used Methods
In this section, we give a formal description of the used concepts, to describe the approach to the repair of missing values in process logs. We start with event logs and Petri nets.
Definition 1 (Event logs). An event log over a set of activities A and time domain TD is defined as L A,TD = (E, C, α, γ, β, ≻), where:
-E is a finite set of events.
-C is a finite set of cases (process instances).
-α : E → A is a function relating each event to an activity.
-γ : E → TD is a function relating each event to a timestamp.
-β : E → C is a surjective function relating each event to a case.
-≻ ⊆ E × E is the succession relation, which imposes a total ordering on the events in E. We use e2 ≻ e1 as shorthand notation for (e2, e1) ∈ ≻. We call the ordered set of events belonging to one case a "trace".
Definition 2 (Petri Net). A Petri net is a tuple PN = (P, T, F, M 0 ) where:
-P is the set of places.
-T is the set of transitions.
-F ⊆ (P × T ) ∪ (T × P) is the set of connecting arcs representing flow relations.
-M 0 ∈ P → IN_0^+ is the initial marking.

There have been many extensions of Petri nets to capture time, both deterministic and stochastic. In [START_REF] Ciardo | A Characterization of the Stochastic Process Underlying a Stochastic Petri Net[END_REF], Ciardo et al. give an overview of different classes. In terms of this classification, we use stochastic Petri nets with generally distributed transition durations.
Definition 3 (GDT_SPN).
A stochastic Petri net with generally distributed transition durations is a seven-tuple: GDT_SPN = (P, T, P, W, F, M 0 , D), where (P, T, F, M 0 ) is the underlying Petri net. Additionally:
-The set of transitions T = T i ∪ T t is partitioned into immediate transitions T i and timed transitions T t .
-P : T → IN_0^+ is an assignment of priorities to transitions, where ∀t ∈ T i : P(t) ≥ 1 and ∀t ∈ T t : P(t) = 0.
-W : T i → IR + assigns probabilistic weights to the immediate transitions.
-D : T t → D(x) is an assignment of arbitrary probability distribution functions D(x) to timed transitions, capturing the random durations of the corresponding activities.
Although this definition of GDT_SPN models allows us to assign arbitrary duration distributions to timed transitions, in this work, we assume normally distributed durations. Note that normal distributions are defined also in the negative domain, which we need to avoid. Therefore, we assume that most of their probability mass is in the positive domain, such that errors introduced by correction of negative durations are negligible. An example GDT_SPN model is shown in Fig. 1 and has immediate transitions (bars), as well as timed transitions (boxes). In the figure, immediate transitions are annotated with their weights, e.g., the process will loop back with a probability of 0.25, and leave the loop with 0.75 probability. We omitted priorities and define priority 1 for all immediate transitions. The timed transitions are labeled from A to H and their durations are normally distributed with the parameters annotated underneath. In this example, activity A's duration is normally distributed with a mean of 20, and a standard deviation of 5. Note that the model is sound and free-choice, and contains parallelism, choices, and a loop.
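To make Definition 3 more tangible, a GDT_SPN can be held in a few plain data structures, as in the sketch below. Only the numbers explicitly stated in the text are used (the loop weights 0.75/0.25 and A ~ N(20, 5^2)); the place names, arcs and remaining parameters are placeholders rather than the actual net of Fig. 1.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class Transition:
    name: str
    timed: bool
    priority: int = 0                       # 0 for timed, >= 1 for immediate
    weight: Optional[float] = None          # only meaningful for immediate transitions
    duration: Optional[Tuple[float, float]] = None  # (mean, std) of a normal duration

@dataclass
class GDTSPN:
    places: List[str]
    transitions: Dict[str, Transition]
    arcs: List[Tuple[str, str]]             # the flow relation F
    initial_marking: Dict[str, int]

example = GDTSPN(
    places=["p0", "p1"],
    transitions={
        "A": Transition("A", timed=True, duration=(20.0, 5.0)),
        "loop_back": Transition("loop_back", timed=False, priority=1, weight=0.25),
        "leave_loop": Transition("leave_loop", timed=False, priority=1, weight=0.75),
    },
    arcs=[("p0", "A"), ("A", "p1"), ("p1", "loop_back"), ("p1", "leave_loop")],
    initial_marking={"p0": 1},
)
```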
Because we allow generally distributed durations in the model, we require an execution policy [START_REF] Marsan | The Effect of Execution Policies on the Semantics and Analysis of Stochastic Petri Nets[END_REF]. We use race semantics with enabling memory as described in [START_REF] Marsan | The Effect of Execution Policies on the Semantics and Analysis of Stochastic Petri Nets[END_REF]. This means that concurrently enabled transitions race for the right to fire, and transitions will only be reset, if they get disabled by another transition firing.
For our purposes, we reuse the existing work in the ProM framework that extracts performance information of activities from an event log and enriches plain Petri nets to GDT_SPN models [START_REF] Rogge-Solti | Discovering Stochastic Petri Nets with Arbitrary Delay Distributions From Event Logs[END_REF]. In [START_REF] Rogge-Solti | Discovering Stochastic Petri Nets with Arbitrary Delay Distributions From Event Logs[END_REF], we discuss the challenges for discovering GDT_SPN models with respect to selected execution semantics of the model. The discovery algorithm uses replaying techniques, cf. [START_REF] Van Der Aalst | Replaying History on Process Models for Conformance Checking and Performance Analysis[END_REF], to gather historical performance characteristics and enriches a given Petri net to a GDT_SPN model with that performance information.
Cost-Based Fitness Alignment
Consider the example log in Fig. 2a, consisting of two traces t 1 and t 2 . To check whether a trace fits the model, we need to align them. We reuse the technique described by Adriansyah et al. in [START_REF] Adriansyah | Conformance Checking using Cost-Based Fitness Analysis[END_REF], which results in a sequence of movements that replay the trace in the model. These movements are either synchronous moves, model moves, or log moves. A formal description of the alignment technique is provided in [START_REF] Adriansyah | Conformance Checking using Cost-Based Fitness Analysis[END_REF] and is out of the scope of this paper; we only give the intuition. For an alignment, the model and the log are replayed side by side to find the best mapping of events to activities in the model. Thereby, a synchronous move represents an event in the log that is allowed in the respective state in the model, such that both the model and the log progress one step together. However, if an activity in the model or an event in the log is observed with no counterpart, the model and log have to move asynchronously. A model move represents an activity in the model for which no event exists in the log at the current position; conversely, a log move is an event in the log that has no corresponding activity enabled in the model in the current state during replay. It is possible to assign costs to the different types of moves for each activity separately.
(a) example log:
    t1: A, C, D, B, E, F, G, H
    t2: E, G, H
(b) alignment for trace t1:
    log   | A C D B E F G H
    model | A C D B E F G H
(c) alignments for trace t2:
    (c.1) log   | ⊥ E ⊥ G H
          model | B E F G H
    (c.2) log   | ⊥ ⊥ E G H
          model | B F E G H
Fig. 2: Example log (a) and alignments (b, c) for the model in Fig. 1.
Fig. 2 shows some example alignments of the model in Fig. 1 and the log in Fig. 2a. In Fig. 2b, a perfect alignment is depicted for trace t 1 , i.e., the trace can be replayed completely by a sequence of synchronous moves. A closer look at trace t 2 and the model in Fig. 1 reveals that the two events B and F are missing from the trace, which might have been caused by a documentation error. Because activity F is parallel to E, there exist two candidate alignments for t 2 , as shown in Fig. 2c. The symbol ⊥ denotes an empty move, i.e., a step at which modeled and recorded behavior disagree. In this example, two model moves are necessary to align the trace t 2 to the model.
Summarizing, the alignment technique described in [START_REF] Adriansyah | Conformance Checking using Cost-Based Fitness Analysis[END_REF][START_REF] Van Der Aalst | Replaying History on Process Models for Conformance Checking and Performance Analysis[END_REF] can be used to find the cost-optimal matches between a trace in a log and a model. However, the approach only considers the structure of the model and the sequence of events encountered in the log without considering timestamps or probabilities. In this paper, we enhance the alignment technique to also take path probabilities into account.
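The alignments of Fig. 2 can be represented as plain sequences of moves, which is all that the repair procedure below needs. The sketch uses None to mark the empty side of a move and applies the cost setting described later (cheap synchronous and model moves, expensive log moves); it mirrors the description in the text, not the implementation in ProM.

```python
from typing import List, Optional, Tuple

Move = Tuple[Optional[str], Optional[str]]   # (log event, model activity)

# Alignment (c.1) of Fig. 2 for trace t2 = <E, G, H>:
alignment_c1: List[Move] = [
    (None, "B"),   # model move: B has no event in the trace
    ("E", "E"),    # synchronous move
    (None, "F"),   # model move: F has no event in the trace
    ("G", "G"),
    ("H", "H"),
]

def alignment_cost(alignment: List[Move],
                   log_move_cost: float = 1000.0,
                   model_move_cost: float = 0.0) -> float:
    cost = 0.0
    for log_event, model_activity in alignment:
        if log_event is None:            # model move
            cost += model_move_cost
        elif model_activity is None:     # log move
            cost += log_move_cost
        # synchronous moves are free
    return cost
```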
Bayesian Networks
GDT_SPN models capture probabilistic information about the durations of each activity in the process. We use Bayesian networks [START_REF] Pearl | Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference[END_REF][START_REF] Koller | Probabilistic Graphical Models: Principles and Techniques[END_REF] to capture the dependencies between the random durations given by the GDT_SPN model structure. Fig. 3 shows an example Bayesian network that captures the relations for a part of the process model in Fig. 1. The arcs between activities B, F, and G, and between B and E, are sequential dependencies. Note that there is no direct dependency between F and E, since they are executed in parallel, and we assume that the durations of these activities are independent. More generally, a Bayesian network is a directed acyclic graph and captures dependencies between random variables in a probabilistic model [START_REF] Koller | Probabilistic Graphical Models: Principles and Techniques[END_REF]. An arc from a parent node to a child node indicates that the child's probability distribution depends on the parent's values.
Fig. 3: Bayesian network for a fragment of Fig. 1 (nodes B, F, E, and G).
We use Bayesian networks to reason about our updated probabilistic beliefs, i.e., the posterior probability distributions in a model, once we assigned specific values to some of the random variables. Suppose that we observe trace t 2 in the log in Fig. 2a, with times γ(E) = 30, γ(G) = 35, and γ(H) = 40. Initially, the random variable of node B in the example has a duration distribution of N(16, 3 2 ), i.e., a normally distributed duration with mean 16, and standard deviation 3. However, after inserting the observed times of events E, and event G into the network in Fig. 3, we can calculate the resulting posterior probability distributions by performing inference in the Bayesian network. In this case, the posterior probability distribution of B is N(14.58, 1.83 2 ). Note that by inserting evidence, i.e., constraining the variables in a Bayesian network, the posterior probability distributions get more accurate. In this example, the standard deviation is reduced from 3 to 1.83. The intuition is that we narrow the possible values of the unobserved variables to be in accordance with the observations in the log. There exist algorithms for Bayesian networks automating this process [START_REF] Murphy | The Bayes Net Toolbox for Matlab[END_REF]. A complete explanation of Bayesian networks, however, is not the aim in this paper, and the interested reader is referred to the original work by Pearl [START_REF] Pearl | Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference[END_REF] and the more recent text book by Koller and Friedman [START_REF] Koller | Probabilistic Graphical Models: Principles and Techniques[END_REF].
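The update sketched above (the prior N(16, 3^2) of B tightening once E and G are observed) is ordinary conditioning in a multivariate Gaussian, since completion times are sums of normally distributed durations. The sketch below builds the joint distribution of (B, E, F, G) from hypothetical duration parameters; because the full parameters of Fig. 1 are not repeated here, the resulting numbers will not exactly reproduce N(14.58, 1.83^2), but the mechanics are the same.

```python
import numpy as np

# Independent durations d = (dB, dE, dF, dG) with assumed (mean, std) parameters.
mu_d = np.array([16.0, 13.0, 12.0, 7.0])
sd_d = np.array([3.0, 2.0, 2.0, 2.0])

# Completion times as linear combinations: B = dB, E = B + dE, F = B + dF, G = F + dG.
A = np.array([[1, 0, 0, 0],    # B
              [1, 1, 0, 0],    # E
              [1, 0, 1, 0],    # F
              [1, 0, 1, 1]])   # G
mu = A @ mu_d
Sigma = A @ np.diag(sd_d ** 2) @ A.T

def condition(mu, Sigma, obs_idx, obs_val):
    """Posterior mean/covariance of the hidden coordinates given exact observations."""
    hid = [i for i in range(len(mu)) if i not in obs_idx]
    K = Sigma[np.ix_(hid, obs_idx)] @ np.linalg.inv(Sigma[np.ix_(obs_idx, obs_idx)])
    mu_post = mu[hid] + K @ (np.array(obs_val) - mu[obs_idx])
    Sigma_post = Sigma[np.ix_(hid, hid)] - K @ Sigma[np.ix_(obs_idx, obs_idx)] @ K.T
    return mu_post, Sigma_post

# Observe E = 30 and G = 35 (indices 1 and 3) and query B and F (indices 0 and 2).
mu_post, Sigma_post = condition(mu, Sigma, obs_idx=[1, 3], obs_val=[30.0, 35.0])
print("posterior of B:", mu_post[0], np.sqrt(Sigma_post[0, 0]))
```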
Repairing Events in Timed Event Logs
In this paper, we propose a method to probabilistically restore events in logs which contain missing events. In particular, we are interested in knowing when things happened most likely. The problem that we try to solve is to identify the parts in the model that are missing from the trace (which) and also to estimate the times of the activities in those parts (when).
In theory, we need to compare the probabilities of all possible paths in the model that are conforming to the trace. Each path may allow for different assignments of events in the trace to the activities in the model. For example, for trace t 2 : E, G, H and the model in Fig. 1 two cost-minimal paths through the model are given by the alignments in Fig. 2.c. But, there might be further possibilities. It is possible that a whole iteration of the loop happened in reality, but was not documented. In that case, the path B, E, F, G, A, C, D, H would also be an option to repair trace t 2 . Furthermore, the second iteration could have taken another path in the model: B,E,F,G,B,F,E,G,H . In this case it is not clear to which iteration the events E and G belong. In general, there are infinitely many possible traces for a model that contains loops.
In order to compare the probabilities of these paths, we need to compute the probability distributions of the activities on the paths and compare which model path and which assignment explains the observed events' timestamps best. To reduce the complexity, we propose to decompose the problem into two separate problems, i) repair structure and ii) insert time, as sketched in Fig. 4. The method uses as input a log that should be repaired and a GDT_SPN model specifying the as-is process.

Fig. 4: We divide the problem into two subproblems: repairing the control flow, and repairing the timestamps.
Note that by choosing this approach, we accept the limitation that missing events on a path can only be detected, if at least one event in the log indicates that the path was chosen.
Realization of Repairing Logs
In this section, we explain a realization of the method described above. For this realization, we make the following assumptions:
-The supported models, i.e., the GDT_SPN models, are sound, cf. [START_REF] Van Der Aalst | Verification of Workflow Nets[END_REF], and free-choice, cf. [START_REF] Best | Structure Theory of Petri Nets: The Free Choice Hiatus[END_REF], but do not necessarily need to be (block-)structured. This class captures a large class of process models and does not impose unnecessary constraints.
-The GDT_SPN model is normative, i.e., it reflects the as-is processes in the structural, behavioral and time dimensions.
-Activity durations are independent and have normal probability distributions, containing most of their probability mass in the positive domain.
-The recorded timestamps in the event logs are correct.
-Each trace in the log has at least one event, and all events contain a timestamp.
-The activity durations of a case do not depend on other cases, i.e., we do not consider the resource perspective and there is no queuing.
-We assume that data is MAR, i.e., that the probability that an event is missing from the log does not depend on the time values of the missing events.

The algorithm is depicted in Fig. 5, and repairs an event log as follows.
For each trace, we start by repairing the structure. This becomes trivial, once we identified a path in the model that fits our observations in the trace best. The notion of cost-based alignments [START_REF] Adriansyah | Conformance Checking using Cost-Based Fitness Analysis[END_REF] that we introduced in Section 3, is used for this part. It tells us exactly:
a) when the model moves synchronously to the trace, i.e., where the events match,
b) when the model moves alone, i.e., an event is missing from the trace,
c) when the log moves alone, i.e., there is an observed event that does not fit into the model at the recorded position.

Fig. 5: The repair approach described in more detail.
We set the costs of synchronous and model moves to 0, and the cost of log moves to a high value, e.g., 1000. The alignment algorithm returns all paths through the model, where the events in the trace are mapped to a corresponding activity. This works well for acyclic models. For cyclic models, where infinite paths through a model exist, we need to assign some small costs to model moves, in order to limit the number of resulting alignments that we compare in the next step.
In the next step, cf. box Pick alignment in Fig. 5, we decide which of the returned cost-minimal alignments to pick for repair. The algorithm replays the path taken through the model and multiplies the probabilities of the decisions made along the path. This allows us to take probabilistic information into account when picking an alignment and enhances the alignment approach introduced in [START_REF] Adriansyah | Conformance Checking using Cost-Based Fitness Analysis[END_REF]. We also consider that, for one trace, paths with many forgotten activities are less likely than others. That is, we allow to specify the parameter of the missing data mechanism, i.e., the rate of missingness. We let the domain expert define the probability to forget an event. The domain expert can specify how to weigh these probabilities against each other, i.e., to give preference to paths with higher probability, i.e., determined by immediate transition weights, or to paths with less missing events that are required to be inserted into the trace. This novel post-processing step on the cost-optimal alignments allows to control the probability of paths in the model that are not reflected in a log by any event.
For example, consider a loop in a GDT_SPN model with n activities in the loop. By setting the chance of missing entries low, e.g., setting the missingness probability to 0.1 (a 10% chance that an event is lost), an additional iteration through the loop becomes unlikely, as its probability is multiplied by the factor 0.1^n. This factor is the probability that all n events of an iteration are missing. We select the alignment with the highest probability. Once we have decided on the structure of the repaired trace, we can continue and insert the times of the missing events in the trace, i.e., the identified model moves.
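The selection among cost-minimal alignments can be written compactly: each candidate is scored by the product of the weights of the immediate-transition decisions along its model path, multiplied by the missingness probability once per model move. The following sketch illustrates this scoring; the decision weights and p_missing would come from the GDT_SPN model and the domain expert, respectively, and the exact weighting scheme may be tuned differently in practice.

```python
from math import prod
from typing import List, Optional, Tuple

Move = Tuple[Optional[str], Optional[str]]     # (log event, model activity)

def alignment_score(alignment: List[Move],
                    decision_weights: List[float],
                    p_missing: float = 0.1) -> float:
    """Path probability times the probability that every skipped event was forgotten."""
    path_probability = prod(decision_weights) if decision_weights else 1.0
    n_model_moves = sum(1 for log_event, _ in alignment if log_event is None)
    return path_probability * (p_missing ** n_model_moves)

def pick_alignment(candidates):
    """candidates: iterable of (alignment, decision_weights); keep the best score."""
    return max(candidates, key=lambda c: alignment_score(*c))
```

With p_missing = 0.1, an extra loop iteration containing n activities is penalized by the factor 0.1^n, so it is only chosen if its path probability outweighs that penalty, exactly as argued above.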
To insert the timing information, it is not enough to look at the GDT_SPN model alone. We need to find a way to add the information that we have for each trace, i.e., the timestamps of the recorded events. Fortunately, as mentioned in Section 3, there exists a solution for this task: Inference in Bayesian networks. Therefore, we convert the GDT _SPN model into a Bayesian network to insert the evidence given by the observations to be able to perform the inference.
In the previous step, we identified a probable path through the GDT_SPN model. With the path given, we eliminate choices from the model by removing branches of the process model that were not taken. We unfold the net from the initial marking along the chosen path. Consider trace t 3 = A, D, C, C, D, H and assume we picked the following alignment:

    log   | A D C ⊥ C D H
    model | A D C A C D H
Then, the unfolded model looks like Fig. 6, where the black part marks the path taken in the model. The grey part is removed while unfolding. Note that the unfolded model still contains parallelism, but it is acyclic. Thus, we can convert it into a Bayesian network with a similar structure, where the random variables represent timed transitions. As, due to multiple iterations of loops, activities can happen multiple times, we differentiate them by adding an index of their occurrence, e.g., A1 and A2 correspond to the first and second occurrence of the transition A. The unfolding is done by traversing the model along the path dictated by the alignment and keeping track of the occurrences of the transitions. We transform the unfolded model into a Bayesian network with a similar structure. Most immediate transitions are not needed in the Bayesian network, as these do not take time and no choices need to be made in the unfolded process. Only immediate transitions joining parallel branches will be kept. Fig. 7 shows transformation patterns for sequences, parallel splits, and synchronizing joins. These are the only constructs remaining in the unfolded form of the GDT_SPN model. In the resulting Bayesian network, we use the sum and max relations to define the random variables given their parents. More concretely, if timed transition t i is followed by timed transition t j in a sequence, we can convert this fragment into a Bayesian network with variables X i and X j . From the GDT_SPN model, we use the transition duration distributions D(t i ) = D i (x) and D(t j ) = D j (x). Then, the parent variable X i has the unconditional probability distribution P(X i ≤ x) = D i (x) and the child variable the probability distribution P(X j ≤ x | X i ) = P(X j + X i ≤ x). For each value of the parent x i ∈ X i , the probability distribution is defined as P(X j ≤ x | X i = x i ) = D j (x - x i ), i.e., the distribution of X j is shifted by its parent's value to the right. A parallel split, cf. the lower left part of Fig. 7, is treated as two sequences sharing the same parent node.
The max relation that is required for joining branches at synchronization points, cf. the lower right pattern in Fig. 7, is defined as follows. Let X i and X j be the parents of X k , such that X k is the maximum of its parents. Then, P(X k ≤ x | X i , X j ) = P(max(X i , X j ) ≤ x) = P(X i ≤ x) • P(X j ≤ x) = D i (x) • D j (x), i.e., the probability distribution functions are multiplied. Note that the maximum of two normally distributed random variables is not normally distributed. Therefore, we use a linear approximation, as described in [START_REF] Zhang | Statistical Static Timing Analysis With Conditional Linear MAX/MIN Approximation and Extended Canonical Timing Model[END_REF]. This means that we express the maximum as a normal distribution, with its parameters depending linearly on the normal distributions of the joined branches. The approximation is good when the standard deviations of the joined distributions are similar, and degrades when they differ, cf. [START_REF] Zhang | Statistical Static Timing Analysis With Conditional Linear MAX/MIN Approximation and Extended Canonical Timing Model[END_REF]. The resulting Bayesian network model is a linear Gaussian model, which is a class of continuous-type Bayesian networks where inference is efficiently possible. More precisely, inference can be done in O(n^3), where n is the number of nodes [START_REF] Koller | Probabilistic Graphical Models: Principles and Techniques[END_REF]. Otherwise, inference in Bayesian networks is an NP-hard problem [START_REF] Cooper | The Computational Complexity of Probabilistic Inference Using Bayesian Belief Networks[END_REF].
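Approximating the maximum of two independent normal completion times by another normal distribution can be done in several ways. The sketch below uses Clark-style moment matching (computing the exact first two moments of the maximum and fitting a normal to them); this is in the spirit of, but not identical to, the linear approximation cited above.

```python
from math import sqrt
from scipy.stats import norm

def approx_max_of_normals(mu1, sd1, mu2, sd2):
    """Normal approximation of max(X1, X2) for independent X1 ~ N(mu1, sd1^2),
    X2 ~ N(mu2, sd2^2), via the exact first two moments of the maximum."""
    a = sqrt(sd1 ** 2 + sd2 ** 2)
    if a == 0.0:                               # both inputs are deterministic
        return max(mu1, mu2), 0.0
    alpha = (mu1 - mu2) / a
    phi, Phi = norm.pdf(alpha), norm.cdf(alpha)
    m1 = mu1 * Phi + mu2 * (1.0 - Phi) + a * phi
    m2 = ((mu1 ** 2 + sd1 ** 2) * Phi
          + (mu2 ** 2 + sd2 ** 2) * (1.0 - Phi)
          + (mu1 + mu2) * a * phi)
    return m1, sqrt(max(m2 - m1 ** 2, 0.0))

# Joining two parallel branches with similar spread, where the approximation is good:
mean_join, sd_join = approx_max_of_normals(20.0, 5.0, 18.0, 4.0)
```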
Once we constructed the Bayesian network, we set the values for the observed events for their corresponding random variables, i.e., we insert the evidence into the network. Then, we perform inference in the form of querying the posterior probability distributions of the unobserved variables. We use the Bayesian network toolkit for Matlab [START_REF] Murphy | The Bayes Net Toolbox for Matlab[END_REF], where these inference methods are implemented. This corresponds to the second step in the insert time part of Fig. 5.
The posterior probabilities of the queried variables reflect the probabilities, when the conditions are given according to the evidence. Our aim is to get the most likely time values for the missing events. These most likely times are good estimators for when the events occurred in reality, and thus can be used by process participants as clues during root cause analysis. For example, in order to find the responsible person for the task in question, an estimation of when it happened most likely can be helpful. Note that repaired values with most likely time values need to be treated with caution, as they do not capture the uncertainty in the values. Therefore, we mark repaired entries in the log as artificial.
Once we determined probable values for the timestamps of all missing events in a trace, we can proceed with the next trace, starting another iteration of the algorithm.

The real-life evaluation is based on a surgery process of a Dutch hospital that deals with both ambulant patients and ordered stationary patients. Each transition corresponds to a treatment step that a nurse records in a spreadsheet with timestamps. In the process, the patient arrives in the lock to be prepared for the surgery. Once the operating room (OR) is ready, the patient leaves the lock and enters the OR. In the OR, the anesthesia team starts the induction of the anesthesia. Afterwards, the patient optionally gets an antibiotica prophylaxis treatment. The surgery starts with the incision, i.e., the first cut with the scalpel, and finishes with the suture, i.e., the closure of the tissue with stitches. Next, the anesthesia team performs the emergence from the anesthesia, which ends when the patient has regained consciousness. Finally, the patient leaves the OR and is transported to the recovery. The other cases contain one or more missing events, which motivated our research. We use the 570 fitting cases to evaluate how well we can repair them after randomly removing events. Figure 11 shows the evaluation results of the hospital event log. Observe that the structure can be repaired better than in the artificial example in Fig. 9. This is due to the sequential nature of the model: it comprises twelve sequential and two optional activities. With an increasing number of missing events, the number of correctly repaired events (synchronous moves) approaches twelve. That is, only twelve activities are restored, because the algorithm is unable to repair single undetected optional events.
The mean absolute error of the restored events is higher than in the artificial example. This value depends on the variance of the activity durations. In the evaluated example, the variance of certain activity durations in the model is high due to outliers. These activities exhibit many short durations and a few extreme outliers, which could be captured better by distributions other than the normal distribution.
Obviously, the ability to repair a log depends on the information content of the observed events in the trace and the remaining variability in the model. For instance, a purely sequential model can always be repaired to a fitness of 1.0, even if only a single activity is observed. However, the chance of guessing the correct ordering in a model composed of n parallel activities with identically distributed times is only 1/n!. The presented approach is unable to restore optional branches without structural hints, i.e., at least one activity on an optional branch needs to be recorded. This affects single optional activities most, as their absence will not be repaired. Still, many real-life processes comprise only a sequence of activities and can be repaired correctly.
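As a quick sanity check of the 1/n! argument, the following snippet estimates the chance of recovering the correct ordering of n parallel activities when every ordering is equally likely. It merely illustrates the combinatorial argument and is not part of the repair algorithm.

```python
import math, random

def chance_correct_order(n, samples=100_000):
    """Empirical probability of guessing the true order of n parallel activities."""
    truth = list(range(n))
    hits = 0
    for _ in range(samples):
        guess = truth[:]
        random.shuffle(guess)  # all n! orderings equally likely
        hits += (guess == truth)
    return hits / samples

for n in (2, 3, 4):
    print(n, chance_correct_order(n), 1 / math.factorial(n))
```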
Conclusion
We introduced a method to repair event logs in order to assist the timely correction of documentation errors in enterprises. The method indicates which activities should have happened and when they most likely occurred according to a given stochastic model. It decomposes the problem into two sub-problems: i) repairing the structure, and ii) repairing the time.
Repairing the structure is done with a novel extension of the alignment approach [2] based on path probabilities, while repairing the time is achieved by inference in a Bayesian network that represents the structure of the individual trace in the model. The algorithm can deal with a large and representative class of process models (any sound, free-choice workflow net).
Our preliminary evaluations indicate that we can repair both structure and time if noise is limited. Models exhibiting a high degree of parallelism are less likely to be repaired in the correct order than models with more dependencies between activities. However, there are some limitations that we would like to address in subsequent research:
1. Separating structure from time during repair is a heuristic to reduce the computational complexity of the problem, as the timestamps of events also influence path probabilities.
2. The normal distribution, though having nice computational properties, is of limited suitability for modeling activity durations, since its support also covers the negative domain.
3. The independence assumptions between activity durations and between traces might be too strong, as resources play an important role in processes.
4. We assumed that the GDT_SPN model contains the truth and that deviations in the log are caused by documentation errors rather than by deviations from the process model. This assumption is only feasible for standardized processes with few deviations that are captured in the model.

Therefore, we advise to use this approach with care and to correct documentation errors using repaired logs as assistance. Future work also needs to address the question of how to model causalities between activities more directly. Missing events that are very likely to be documentation errors, e.g., a missing enter OR event when exit OR is documented, need to be treated separately from missing events of rather optional activities, e.g., a missing do antibiotica prophelaxe event, where it is not clear whether the absence of the event is caused by a documentation error. An integration with the technique proposed in [START_REF] Bertoli | Reasoning-based Techniques for Dealing with Incomplete Business Process Execution Traces[END_REF] seems promising to address this issue.
Fig. 1: Example unstructured free-choice GDT_SPN model.
Fig. 2: Example log and possible alignments for the traces.
Fig. 6: Unfolded model in Fig. 1 for path A, D, C, A, C, D, H.
Fig. 7: Transformation of GDT_SPN models to Bayesian networks.
Fig. 10: Real surgery model for a surgical procedure in a Dutch hospital.
Fig. 11: Evaluation results for model in Fig. 10.
We have implemented our approach in ProM (see the package RepairLog at http://www.promtools.org). To evaluate the quality of the algorithm, we follow the experimental setup depicted in Fig. 8. The problem is that, in reality, we do not know whether events did not happen or were merely not recorded. Therefore, we conduct a controlled experiment. In order to have actual values to compare our repaired results with, we first acquire traces that fit the model, either by selecting the fitting ones from the original cases or by simulation in artificial scenarios. In a second step, we randomly remove a percentage of the events from these fitting traces. We then pass the log with missing entries to the repair algorithm, along with the model according to which we perform the repair.
Fig. 8: Approach used to evaluate repair quality.
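The removal step of this setup can be sketched as follows: given fitting traces, a fixed percentage of all events is removed at random before the log is handed to the repair algorithm. The event representation (simple tuples of activity and timestamp) is our own simplification.

```python
import random

def remove_events(traces, percentage, seed=42):
    """Randomly remove `percentage` (0..1) of all events from a list of traces.
    Each trace is a list of (activity, timestamp) tuples."""
    rng = random.Random(seed)
    all_positions = [(i, j) for i, trace in enumerate(traces) for j in range(len(trace))]
    to_remove = set(rng.sample(all_positions, int(percentage * len(all_positions))))
    noisy = []
    for i, trace in enumerate(traces):
        noisy.append([event for j, event in enumerate(trace) if (i, j) not in to_remove])
    return noisy

# Example: remove 30% of the events from two fitting traces
fitting = [[("A", 0.0), ("B", 2.5), ("C", 4.1)],
           [("A", 0.0), ("C", 3.0), ("D", 5.5)]]
print(remove_events(fitting, 0.3))
```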
The repair algorithm's output is then evaluated against the original traces to see how well we could restore the missing events. We use two measures for assessing the quality of the repaired log. The cost-based fitness measure defined in [START_REF] Adriansyah | Conformance Checking using Cost-Based Fitness Analysis[END_REF] captures how well a model fits a log. Here, we compare the traces of the original and the repaired log: we convert each original trace into a sequential Petri net model and measure its fitness against the repaired trace.
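As a rough, self-contained stand-in for this comparison, one can align the original and the repaired activity sequence with a standard edit-distance alignment and normalize the result to a value between 0 and 1. This is a simplification of the cost-based fitness measure referenced above, which works on the Petri net representation; the normalization used here is our own choice.

```python
def trace_similarity(original, repaired):
    """Normalized edit-distance similarity between two activity sequences
    (1.0 = identical, 0.0 = completely different)."""
    n, m = len(original), len(repaired)
    dist = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dist[i][0] = i
    for j in range(m + 1):
        dist[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if original[i - 1] == repaired[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion (roughly a model move)
                             dist[i][j - 1] + 1,        # insertion (roughly a log move)
                             dist[i - 1][j - 1] + cost) # match / substitution
    return 1.0 - dist[n][m] / max(n, m, 1)

print(trace_similarity(["A", "B", "C", "D"], ["A", "C", "B", "D"]))  # swapped order -> 0.5
```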
Fitness deals with the structural quality, i.e., it is a good measure to check whether we repaired the right events in the right order. For measuring the quality of the repaired timestamps, we compare each real event's time with the repaired event's time. We use the mean absolute error (MAE) of the inserted events, i.e., the mean of the absolute differences between repaired and original event times.
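The MAE computation itself is straightforward; the following sketch assumes that the repaired and original timestamps of the inserted events have already been paired up.

```python
def mean_absolute_error(original_times, repaired_times):
    """MAE over the inserted events, given paired original and repaired timestamps."""
    assert len(original_times) == len(repaired_times) and original_times
    return sum(abs(o - r) for o, r in zip(original_times, repaired_times)) / len(original_times)

# Example: three inserted events, times in the relative units of the model
print(mean_absolute_error([2.0, 5.5, 9.0], [2.4, 5.0, 10.2]))  # 0.7
```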
Artificial Example
We first evaluate the repair algorithm on the artificial model introduced in Section 3 (Fig. 1). The experiment was done with a log of 1000 simulated traces. Figure 9 displays the resulting quality measures of the repaired traces. Each dot is based on the repair results for this log with a different percentage of randomly removed events. The left-hand side of the figure shows the performance values of the alignment. The solid line with squares shows the number of synchronous moves. The other two lines are the number of model moves (dotted line with circles) and the number of log moves (gray dashed line with triangles) necessary to align the two traces.
Because of the structural properties of the model in Fig. 1, i.e., a choice between two branches containing three (upper) and four (lower) activities, we can restore the correct activities at low noise levels (around 30%), but we cannot guarantee their ordering due to parallelism in the model. A change in the ordering of two events in the repaired trace results in a synchronous move for one event, and a log move plus a model move for the other (to remove it from one position and insert it at another). Note that at lower noise levels the numbers of log moves and model moves are equal, which indicates incorrect ordering of parallel activities. At higher noise levels the number of model moves increases further, because it becomes more likely that no single event of an iteration of the loop in Fig. 1 remains. The size of the gap between model moves and log moves shows how much the repair quality suffers from the fact that the presented algorithm, which repairs events with the most likely values, does not restore optional paths of which no event is recorded in the trace.
The right-hand side of Fig. 9 shows the mean absolute error in the relative time units specified in the model. The graph shows that the offset between the original and the repaired event times increases non-linearly with the amount of noise.
Repairing a real example log of a hospital
In this second part of the evaluation, we look at the results obtained from repairing a real hospital log. In contrast to the artificial setup, where we used the model to generate the example log, the log is now given and we estimate the model parameters from it. To avoid using a model that was learned from the very events we try to repair, we use 10-fold cross-validation: we divide the log into ten parts and use nine parts to learn the model parameters and the remaining part to perform the repair.
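The cross-validation scheme can be sketched as follows; learn_parameters and repair_traces are hypothetical placeholders for the parameter estimation of the GDT_SPN model and for the repair procedure itself.

```python
def cross_validated_repair(log, k=10):
    """Repair every trace using model parameters learned only from the other folds.
    `learn_parameters` and `repair_traces` are illustrative placeholders."""
    folds = [log[i::k] for i in range(k)]  # simple round-robin split into k folds
    repaired = []
    for i, test_fold in enumerate(folds):
        training = [trace for j, fold in enumerate(folds) if j != i for trace in fold]
        model = learn_parameters(training)               # hypothetical: fit GDT_SPN parameters
        repaired.extend(repair_traces(test_fold, model)) # hypothetical: repair held-out traces
    return repaired
```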
We use the log of a Dutch clinic for the ambulant surgery process described in [START_REF] Kirchner | Embedding Conformance Checking in a Process Intelligence System in Hospital Environments[END_REF]. The process is depicted as a GDT_SPN model in Fig. 10; it is the sequential process for ambulant and ordered stationary patients described above.
"1002445",
"1002446",
"871257",
"977589"
] | [
"65266",
"4629",
"4629",
"65266"
] |
01474745 | en | [
"shs",
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01474745/file/978-3-642-41641-5_13_Chapter.pdf | Jiri Kolar
email: [email protected]
Lubomir Dockal
Tomas Pitner
A Dynamic Approach to Process Design: A Pattern for Extending the Flexibility of Process Models
Keywords: BPM, process design pattern, Agile process design, process flexibility, ad-hoc processes, ad-hoc process pattern process discovery
This paper presents a specific approach to Business Process design by combining selected principles of Adaptive Case Management, traditional modeling of processes executable in Business Process Management Systems, and a constraint-based approach to process design. This combined approach is intended for business situations, where traditional process models with rigid structures can lead to limitations of business flexibility. We propose a process design pattern that is suitable for the modeling of ad-hoc processes within common BPMS-based systems. The pattern can be used to define a process structure in a declarative constraint-based manner. Further, we present an application of the approach in an actual project, which is an end-to-end BPM project from an insurance business. The project uncovered needs for an extended flexibility of process structures. This along with requirements based on ad-hoc processes led to advancement in the presented approach. This paper presents a versatile, generally applicable solution, which was later tailored for the purpose of the aforementioned project and led to the successful satisfaction of the requirements. The approach is part of a more comprehensive research effort -complex BPM adoption methodology BPM4SME designed primarily for Small and Medium Enterprises, which put emphasis on the agility of the BPM adoption process and consequent flexible implementations of BPMS-based systems.
Introduction
In the scope of traditional business process models, we usually define a set of work tasks, their performers, and an explicit order in which those tasks should be performed [START_REF] Jeston | Business Process Management: Practical Guidelines to Successful Implementations[END_REF]. Nowadays, an increasing amount of process models are defined in modeling languages such as Business Process Modeling Notation (BPMN), based on Petri-nets formalism [START_REF] Van Der Aalst | The application of Petri nets to workflow management[END_REF], [START_REF] Ramadan | Bpmn formalisation using coloured petri nets[END_REF]. BPMN 2.0 was recently recognized as an industry standard for process modeling and is widely accepted in practice [START_REF] Ko | Business process management (BPM) standards: a survey[END_REF]. By modeling a process in such a language, we define the allowed sequences in which process tasks can be performed by defining possible paths through the process graph. Such rigidly defined process models are well applicable to situations where the explicit definition of task order is known in design-time. This has several positive outcomes, such as the establishment of a uniformed workprocess, efficient Business Activity Monitoring [START_REF] Kolar | Business activity monitoring[END_REF] and other general outcomes of Business Process Management (BPM) [START_REF] Rudden | Making the case for bpm-a benefits checklist[END_REF], [START_REF] Mertens | How bpm impacts jobs: An exploratory field study[END_REF]. Such BPMN process models can also be consequently executed on a process execution engine -a core component of every Business Process Management System (BPMS) [START_REF] Jeston | Business Process Management: Practical Guidelines to Successful Implementations[END_REF].
In business, situations other than the ones outlined above may also arise. For certain type of work, it is optimal to decide about task ordering in run time, according to decisions carried out by task performers. For this kind of work, we will use the well-established term, knowledge work [START_REF] Magal | Essentials of Business Processes and Information Systems[END_REF], [START_REF] Swenson | Mastering the Unpredictable: How Adaptive Case Management Will Revolutionize the Way That Knowledge Workers Get Things Done[END_REF].
For knowledge work, the desired order of tasks can differ from case to case. In these situations, process models would have to cover all possible scenarios. This can result in very chaotic process models and model clarity -the important benefit of process modeling [START_REF] Mertens | How bpm impacts jobs: An exploratory field study[END_REF] is lost. Therefore, there are different needs in the context of supportive Information Systems for knowledge intensive work. Systems built on approaches such as Adaptive Case Management (ACM) provide palettes of tasks [START_REF] Swenson | Mastering the Unpredictable: How Adaptive Case Management Will Revolutionize the Way That Knowledge Workers Get Things Done[END_REF] which can be performed in any order. Such systems do not put hard restrictions on ordering. Instead, they help knowledge workers find similarities among cases and provide soft recommendations based on the orders of tasks previously performed in similar or related cases. In this way, another very important outcome of the BPM-based approach, the codification of business know-how, is preserved and the flexibility of knowledge work is not limited [START_REF] Van Der Aalst | Case handling: A new paradigm for business process support[END_REF], [START_REF] Schonenberg | Supporting flexible processes through recommendations based on history[END_REF], [START_REF] Swenson | Mastering the Unpredictable: How Adaptive Case Management Will Revolutionize the Way That Knowledge Workers Get Things Done[END_REF].
However, in business practice, it is very common to mix these two kinds of work [START_REF] Sadiq | Pockets of flexibility in workflow specifications[END_REF]. Management processes are often performed on the top level [START_REF] Pesic | Constraint-Based Workflow Management Systems: Shifting Control to Users[END_REF] where traditional rigid process definitions lead to better monitoring and process unification. Such management processes consequently instantiate sub-processes dealing with certain business activities and only these sub-processes are often knowledgeintensive. The schism of selection among these approaches during consequent Information System development often leads to a paralysis in decision-making, as it is not clear which paradigm to follow. This was recognized in practice as a strong showstopper of many BPMS-based system implementations.
One could clearly argue that BPMS systems should not be used in the business context where such ad-hoc processes appear. However, in practice, we often find situations where traditional BPM seems to be a perfect solution for major amounts of work and some minor cases involving ad-hoc processes are identified much later when BPM adoption is already in progress. In such cases, it often does not make sense to combine BPM with different paradigms such as ACM. This is because it would significantly raise costs, increase complexity, and confuse business users, which already have problems understanding the processbased BPM paradigm. This is also the case of the BPM project presented later in this paper.
Before we clearly define this problem and propose a solution, we should clarify the terminology of Dynamic characteristics of Process Models. We will follow the terminology of [START_REF] Sadiq | Pockets of flexibility in workflow specifications[END_REF] and define three terms related to business process dynamics:
-Dynamism is a characteristic related to the evolution of process model initiated either by changes in business environment or process re-engineering efforts. These changes are made in design-time and they involve a non-trivial problem -the migration of previously executed instances [START_REF] Sadiq | Pockets of flexibility in workflow specifications[END_REF]. Although this characteristic is not directly in scope with this paper, the proposed solution should decrease the number of situations where a change in the process model is needed. -Adaptability is the process ability to cope with exceptional circumstances and non-standard behavior. This can be partially solved in design-time by adjusting the process model structure. However, there are still many situations when we have to rely on BPMS-specific technological solutions and run-time work-arounds. We will partially touch on this problem later in the paper. -Flexibility is a characteristic of the process model related to loose or partially defined model structures specified in design-time. The full specification of the process is completed at run-time and it may differ for each process instance. Flexibility is the main focus of this paper, and its improvement is the main objective of our pattern-based approach.
Problem Description
Let us sum up three important facts for a clearer definition of the problem.
1. Traditional process models with an explicit ordering of tasks can limit workprocess flexibility in certain cases of knowledge-intensive work in which we are not able to determine the exact ordering of tasks before executing a particular process instance [START_REF] Deng | Enhancement of workflow flexibility by composing activities at run-time[END_REF]. On the other hand, traditional process models significantly help to codify the know-how in their structure [START_REF] Wyssusek | Business process modelling as an element of knowledge management -a model theory approach[END_REF], [16]. 2. To codify know-how of knowledge work, the previous individual decisions of knowledge workers have to be recorded and related to new cases with similar characteristics [START_REF] Swenson | Mastering the Unpredictable: How Adaptive Case Management Will Revolutionize the Way That Knowledge Workers Get Things Done[END_REF]. 3. In practice, major parts of the process models often correspond to traditional rigid structure and only relatively small parts demand a high level of flexibility [START_REF] Sadiq | Pockets of flexibility in workflow specifications[END_REF].
Our focus is restricted to situations where we generally want to have a traditional BPMS-based solution. This is due to the fact that nowadays, BPMSes are generally available technologies and we can find relatively mature BPM products [START_REF] Sinur | Magic quadrant for business process management[END_REF]. In opposition, ACM-based solutions are usually developed as a custom solution [START_REF] Swenson | Mastering the Unpredictable: How Adaptive Case Management Will Revolutionize the Way That Knowledge Workers Get Things Done[END_REF]. Therefore, our desire is to achieve the extended flexibility of process models to make them suitable for knowledge work and avoid mixing of ACM and BPM technologies. Obviously, limited flexibility is a common obstacle of BPM adoptions [START_REF] Schonenberg | Process flexibility: A survey of contemporary approaches[END_REF], [START_REF] Imanipour | Obstacles in business process management (bpm) implementation and adoption in smes[END_REF]. This was also confirmed during the practice project in which we participated.
According to the previously described circumstances, we are trying to find a process design pattern, which is applicable either during the initial process design, or even to redesign the existing process. The pattern should meet the following requirements, which were defined during the aforementioned project and later refined according to the related research [START_REF] Sadiq | Pockets of flexibility in workflow specifications[END_REF], [START_REF] Pesic | Constraint-Based Workflow Management Systems: Shifting Control to Users[END_REF]:
1. The pattern will be applicable to traditional rigid BPMN process models. 2. The pattern will isolate the ad-hoc process parts into sub-process without interfering with the rigid structure of the parent process. 3. The pattern will provide a mechanism to influence such isolated subprocesses from their parent process and provide a mechanism for defining declarative constraints on an ad-hoc sequence. 4. The pattern will record sequences of tasks claimed in run-time for each instance of an ad-hoc sub-process and provide valuable historical data for the discovery of soft structures. This gives certain guidance to the worker, recommending but not directing him on how to proceed in the work process. At the same time, it preserves the know-how codification feature of BPM. 5. The pattern should be usable within conventional modeling methods and implementable in various BPMSes. Therefore, to keep the approach as versatile as possible, it should only use standardized BPMN constructs that are available in most BPMSes.
Related Work
Our approach is a specific application of several autonomous principles, which were the subject of several research efforts in the past.
Research related to process design patterns has been well known since the very beginning of BPM era [START_REF] Van Der Aalst | Workflow patterns[END_REF], [START_REF] Aalst | Advanced workflow patterns[END_REF]. Most of these efforts focus primarily on describing the best practices for modeling certain logical structures in processes. Other efforts [START_REF] Van Der Aalst | Workflow patterns[END_REF] describe patterns for exception handling aimed towards the extension of process adaptability [START_REF] Russell | Arthur: Exception Handling Patterns in Process-Aware Information Systems[END_REF]. We can also find later updates of pattern approaches [START_REF] Russell | Workflow Control-Flow Patterns: A Revised View[END_REF], [START_REF] Russell | Workflow data patterns: identification, representation and tool support[END_REF] covering more recent advancements in BPM. Such patterns are generally applicable in any process and business context. In opposition, we propose a more specific pattern for handling ad-hocness which is suitable for situations described later in this paper.
We can find several sources related to the characteristics of process flexibility in [START_REF] Sadiq | Pockets of flexibility in workflow specifications[END_REF] and [START_REF] Pesic | Constraint-Based Workflow Management Systems: Shifting Control to Users[END_REF]. A very relevant topic is research related to the flexibility of processes, particularly a construct called pockets of flexibility [START_REF] Sadiq | Pockets of flexibility in workflow specifications[END_REF], and the later effort to solve this problem with constraints [START_REF] Sadiq | Specification and validation of process constraints for flexible workflows[END_REF], [START_REF] Mangan | On building workflow models for flexible processes[END_REF]. The constraint approach is partially used in our work as well. We build our pattern on top of these approaches and use the terminology established mainly in [START_REF] Sadiq | Pockets of flexibility in workflow specifications[END_REF]. However, these approaches are rather general and discuss the ad-hoc principles in a general workflow-modeling perspective. Since this research was published, BPM has made big steps forward and we are therefore able to be more specific and propose solutions, which are directly applicable in the context of currently available BPM technologies. Another interesting approach, which could solve our problem, is based on Worklets [START_REF] Adams | Worklets: A service-oriented implementation of dynamic flexibility[END_REF], [START_REF] Aalst | Flexibility as a service[END_REF]. However, this is something purely implementationspecific and therefore does not meet our requirement in versatility.
Furthermore, we can also find attempts to achieve flexibility by the adaptation of process definitions, such as [START_REF] Weber | Change patterns and change support features in process-aware information systems[END_REF]. Certain principles used here are also highly relevant in our context. Surveys assessing the current advancements in research on process flexibility such as [START_REF] Schonenberg | Process flexibility: A survey of contemporary approaches[END_REF] can be found as well.
Probably the most complete work about declarative approach to process definitions can be found in [START_REF] Pesic | Constraint-Based Workflow Management Systems: Shifting Control to Users[END_REF] and consequently in more recent publications [START_REF] Van Der Aalst | Declarative workflows: Balancing between flexibility and support[END_REF], [START_REF] Pesic | Declarative workflow[END_REF] related to complementary Declare tooling. Some case studies from practical applications of this approach exist such as [START_REF] Mulyar | Declarative and procedural approaches for modelling clinical guidelines: addressing flexibility issues[END_REF].
Relevant recommendation-based approach is described in [START_REF] Schonenberg | Supporting flexible processes through recommendations based on history[END_REF]. Research focused on process discovery and mining which is discussed at the end of this paper can be found in [START_REF] Van Der Aalst | Finding structure in unstructured processes: The case for process mining[END_REF] and [START_REF] Maggi | User-guided discovery of declarative process models[END_REF].
Our research context: BPM4SME methodology for agile BPM adoption
The presented approach is a part of a more comprehensive research effort, BPM4SME methodology for small-scale BPM adoptions. As confirmed by related research sources [START_REF] Singer | Business process management -s-bpm a new paradigm for competitive advantage[END_REF], [START_REF] Mertens | How bpm impacts jobs: An exploratory field study[END_REF], the adoption of the BPM paradigm in the Small and Medium Enterprise (SME) sector has several specific obstacles. We try to overcome these obstacles and develop a methodology, which provides helpful guidelines for successful and flexible BPM end-to-end adoptions in organizations of SME sizing. Our methodology is built on the following key principles. This paper is mainly contribution to the second one.
1. The application of agile collaboration while defining the business motivation model, process architecture, and process design 2. The creation of patterns for designing non-restrictive processes, which do not interfere with the turbulent character of SME business 3. A design for lightweight BPMS-based systems resulting in easy customizable solutions with low initial and consequent maintenance costs 4. A simplified documentation structure where documented processes can be easily transformed into their executable form and vice versa
An agile practice research approach is applied; thusly we verify each component of the methodology in a practical project in an actual business environment at the end of each milestone. Results of those projects are regularly published and discussed with practitioners from different business environments. Thus far, the methodology has been applied in practice in the following projects:
1. A BPM adoption in a web-app oriented SME -the software company IT Logica s.r.o. [START_REF] Kolar | Process analysis at it logica s.r.o. Business analytical document[END_REF].
2. Process design and optimization in the context of human resources in the ICT department of Masaryk University in Brno [START_REF] Kolar | Process analysis at ict department faculty of arts masaryk unversity. Business analytical document[END_REF]. 3. Analysis at the headquarters of Masaryk University, several results discussed in the paper related to collaboration in process design [START_REF] Kolar | Collaborative process design in cloud environment[END_REF] and [START_REF] Kolar | Agile bpm in the age of cloud technologies[END_REF]. 4. An end-to-end implementation of a BPMS-based solution in a global insurance company, presented at the end of this paper.
Proposed Solution
It is now appropriate to clarify certain terminology: task vs. activity. According to the BPMN standard, activity is a process element, which can contain either one task or the nested sub-process consisting of several tasks and other modeling elements. Our pattern is intended to handle not just ad-hoc sequences of tasks but also sequences of activities. Therefore, starting for now, to stay consistent with BPMN, we will use the term of activity without differentiating between task and nested sub-process.
We now present our pattern with a step-by-step application in process design. To model and execute ad-hoc sub-process according to our approach, the following steps must be performed:
The Separation of Ad-hoc Process Parts
The first step is to identify parts of existing processes where an ad-hoc order of activities is desired. We should include all process activities which can be ordered ad-hoc and separate them into one or more sub-process. All activities in one adhoc sub-process should be logically related and ideally belonging to one unit of work. Such a separation assures that the rest of the process will be isolated from the on-demand order of activity execution. The separated sub-process will be managed from the parent process. This means that it can be multi-instantiated, repeated several times, or terminated on-demand by events triggered from the parent process.
The Application of an Ad-hoc Pattern
As depicted in (Fig. 1), the pattern consists of three basic sections. The two on the left serve for assigning tasks. This can be done either by performers, which can freely choose any allowed activity they want, or the assignment can be directed from the Managing process with the use of inter-process communication. These sections are synchronized and re-executed in loops. In these sections we also record the sequence of performed activities, evaluate Constraint Business Rules on the recorded sequence and determine which activities are allowed to be assigned in each process state. We will explain the details of the constraint concept in following paragraph. In the activity section, assigned activities are being performed and they are either completed or re-assigned. This mechanism Fig. 1. Process pattern is used for delegation, which is common in the context of ad-hoc processes. The process can be terminated either by the intervention of the Managing process or it can terminate itself after conditions defined by the business rules are met.
The Definition of Constraints
The pattern contains business rule tasks, which are responsible for deciding which activities can be assigned to a performer in a certain moment. As an input, this business rule takes a list of assigned activities ordered by time, evaluates constraints that can prohibit the assigning of certain activities in the next assignment step, and produces a list of activities, which can be assigned as an output. Those lists are saved into dedicated objects in process data in the case of data-model-based BPMS, or into an external business object in the case of pure service-based BPMS.
For particular constraint scenarios, we can use the declarative structural patterns defined in a constraint-based approach [START_REF] Pesic | Constraint-Based Workflow Management Systems: Shifting Control to Users[END_REF]. In this way we can use the rule language to define complex Linear Time Logic (LTL) based constraints. In practice this concept can be eventually replaced by simple if-then-else rules. Nevertheless, the sophisticated usage of the rule language for defining adequate LTL [START_REF] Rozier | Survey: Linear temporal logic symbolic model checking[END_REF] provides mechanisms for an implementation of the declarative constraintbased approach [START_REF] Pesic | Constraint-Based Workflow Management Systems: Shifting Control to Users[END_REF] within a conventional BPMN 2.0 based BPMS.
The Identification of Common Patterns in Activity Order
Suppose we have an ad-hoc process modeled according to our pattern with constraint rules. To achieve the last goal and satisfy the requirement for detecting the relationship between activity order and case characteristics, we record each path through ad-hoc process and group similar sequences into a set of common patterns. Consequently we compare these patterns to the active process instance and we are able to provide soft recommendations based on similarities. For that, we have to use two mechanisms not described by the pattern. First, we have to use some mechanism to find similarities. This has to be performed by a service outside of the process. Secondly, we use such a similarity detection mechanism to provide just-in-time intentions to the user in a user interface. For example, we can highlight certain activities, which are usually successors of the last claimed activity. The implementation of this construct can be different for each BPMS and therefore we do not propose any general solution.
Approach Limitations
Let us discuss the limitations of our approach. Despite the fact that we try to propose a generally applicable solution, we presume that the used BPMS will provide several core functionalities. First, it must implement BPMN 2.0 specifications to the extent in which used modeling elements are covered. Second, for a full-blown constraint driven solution, an adequate Business Rule Management System (BRMS) must be used. Third, there must be a mechanism for in-process reassigning of activity executors. All of these features are generally available in many BPMS products, however, not necessarily in all of them. Last, we expect a certain business setting in which we are able to identify Managing processes in the process architecture; moreover, the ad-hocness is mostly present in nested processes, not processes on the top level. Our pattern is also generally applicable for cases where the last condition is not met. However, some modifications of terminology and eventually of the pattern itself may be needed.
Application of the Approach in Insurance Business
We are going to present a real BPM project from an insurance business. The problem we present in this paper was introduced in this project. We searched for an optimal solution, which led to the creation of the presented approach. The light-weighted version of the approach was later applied in this project and led to the process redesign, which satisfied given requirements on extended process flexibility. Certain customizations of the approach were made in accordance to specifics of used Bizagi BPMS.
The project was elaborated in a global insurance company. The company recently acquired several smaller insurance companies across Eastern Europe. These acquired companies had similar business processes customized to local regulations and business environments in particular countries. Acquisitions were transformed into Business Units (BU) of the new owner. The transformation typically brought about the need for process unification and the development of a centralized process-driven Information System, which could be used across these newly created BUs. For the implementation of such a system, Bizagi BPMS was chosen as a main BPM platform.
Project Goals
Three main goals were set in the project:
1. The unification of processes across all business units with small customizations per country respecting differences in country-specific legal restrictions and locally used systems 2. The consequent unification of process monitoring and reporting processes, which could enable the mother company to compare business results achieved across BUs 3. The integration of locally used systems to collect critical business data in one system.
A Change in Requirements and Process Redesign
According to the initial settings and consequent prototype developed by the BPMS supplier Bizagi, all of the processes were modeled as hard-ordered and the groups of related ad-hoc activities were typically concentrated into one formbased blob task executed in loops. Therefore, performers of blob tasks had to wait for other performers to complete their all activites before they could claim the blob task. From the outer process, there was no control over activities performed inside these blob tasks.
The problem was identified after initial testing in one BU, where related adhoc parts of the process were performed by various performers. Local project coordinators in the BU complained about inefficient collaboration and they expressed several change requests to solve this issue. Such change requests were analyzed on the side of the BPM team and led to the advancement of the presented approach and consequent application of the pattern to the process redesign. An important influence on the redesign had also the RIVA method [START_REF] Ould | Business Process Management: A Rigorous Approach[END_REF] used in the initial process design. About 15 core processes were implemented in the first iteration. However, some of those were nested in each other. A need for ad-hocness was identified in three cases. We mention two of the most important top-level processes to understand the context and then we take a closer look at the Quote evaluation sub-process, one of three processes, where our pattern was applied.
The Opportunity management process
This is the most important top-level process used for managing potential business opportunities, the state of each opportunity, and communication with the potential customer. The process is started by the first contact with a potential customer and ends ideally with the acceptance of a proposed offer from the customer's side, which leads to consequent signatures of the policy document. This process usually instantiates the important underwriting process described below. The underwriting process is executed in parallel with Opportunity management and the results of underwriting are sent back to the Opportunity Management process.
The Underwriting process
This top-level process is selecting risks for insurance in respect of a plan and classifying members according to their degrees of insatiability so that the appropriate premium may be charged and the terms offered may be reviewed. This is the process responsible for handling particular offers consisting of the assessment of the customer's requests regarding insurance products. Underwriting itself, and carrying out the quoting process and agreements on particular business contracts is the ideal case.
The Quote evaluation sub-process
In this process parameters for each product included in the resulting quotes are considered. There are one or more Underwriters assigned to the preparation of each particular product and one Chief Underwriter responsible for the whole quote. He must confirm every completed product preparation. He also has the rights to re-initiate any of the Prepare product activities, assign particular Underwriter to a particular product preparation activity, and terminate the whole process by quote completion. This sub-process was the subject of redesign in accordance with our pattern and the result is depicted in (Fig. 2).
We applied several specific customizations of the general pattern presented in this paper due to the specifics of Bizagi BPMS:
1. Business rules were hard-coded into a complex event gateway 2. Re-assignation messaging was not applied as Bizagi BPMS has built-in features for delegation 3. The activity of Manage Quote allows the Quote supervisor to perform all management tasks, such as activity re-assignation, delegation, termination of activities, parametric changes in constraint conditions, etc. and has the same role as the Managing process in our pattern
Results
Redesigned processes were deployed for testing in one BU and offered to others. Within two months, other local project coordinators of three other countries also requested the redesign of their version of the process according to this approach. Finally, four out of five BUs are now working according to the new processes. In two BUs the process change was deployed into production environment, in the other two the change was made during the pilot system testing period. For evaluation of results, three employees from each BU were asked to provide feedback.
In each BU, each of the following roles was represented by one individual -Project Coordinator (PC) -the person responsible for the definition of requirements on the system, customization of the processes in respective BU and moderation of the communication between the users and the system developers. -Chief Underwriter (CU) -the person responsible for managing the underlying sub-processes, thus the performer of the Managing Process -Underwriter (U) -the employee performing particular ad-hoc ordered product preparation tasks.
Each of them was asked to compare the former and the redesigned process and rate the impact of the process redesign against the aspects listed in (Table 5.3). For each aspect, there were the following possible ratings: "-1" for worse than before the change, "0" for no improvement, "1" for improvement and "-" for no answer.
.
BU1 (production) BU2 (production) BU3 (pilot) BU4 (pilot) Sum per aspect <-12,12> Role PC CU U PC CU U PC CU U PC CU U Process model clarity -1 0 0 0 -1 0 0 -1 0 0 0 0 -3 Work-flow flexibility 1 1 1 0 1 1 - 1 1 - 1 1 9 Time efficiency - 0 1 - 0 1 - 1 1 1 0 1 6 Process manageability - 1 1 1 1 1 1 0 1 - 1 -1 7 Sum per role <- 4,4> 0 2 3 1 1 3 1 1 3 1 2 1 Sum per BU <-12,12> 5 5 5 4
According to acquired figures in (Table 5.3), there is a slight decrease of process model clarity observed by the Project Coordinators and some of the Chief Underwriters, the main users of the process model. On the other hand, there was a significant improvement of work-flow flexibility observed by all the Underwriters and more than half of the Chief Underwriters, as they could operate more flexibly without dependence on each other's tasks. As we can see in (Table 5.3), the overall perception of the process redesign was rather positive, especially in BUs where the system was deployed into the production environment with more intensive system usage. The best rating was generally given by Underwriters, thus we claim that the highest improvement was perceived by ad-hoc task performers.
Discussion and Conclusions
The presented approach evolved from the request for process redesign during the elaboration of the presented project. We developed the approach as a general solution of the change request, as generally applicable pattern for the modeling of ad-hoc processes with standard BPMN. We consider the approach versatile enough to be used within most modern BPMS platforms. We further customized this approach for purposes of the presented project and applied it during the process redesign. The redesign impact was evaluated by several process performers and well accepted across four out of five BUs of the company, as we describe in Results section. Our future aim is to verify this approach in different business environments and to perform further extensions to establish a versatile best practice, which can be used in the context of our BPM4SME methodology.
During the advancement of the approach, we noticed interesting fact that will be the subject of further research. The recorded sequences of activities in ad-hoc parts of the process produce data, which can be used for several purposes. First, we are able to use it to find common ordering patterns related to process data. Second, we can use them to improve our constraints on fly to make them more restrictive to achieve higher determinism. Third, these data can be used for a semi-automatic process discovery. Therefore, in the future, we plan to perform a process discovery experiment in an organization interested in BPM adoption.
Concerning limitations, we have to admit that it can be used to extend process Flexibility and partially process Adaptability. However, it does not provide any improvement to process Dynamism as it was defined at the beginning of this paper. Therefore, once we want to add completely new activities into an ad-hoc process, we still have to solve the same problems related to process model changes in a traditional approach to process modeling. This problem remains unsolved and we are not able find any satisfactory solution applicable on a BPMN level. There were some discussions about generic activities, but they are not defined in BPMN standards and such a solution could be done by hacking particular BPMS. All other proposed solutions ended up with this result as well. Something similar is described by [START_REF] Adams | Worklets: A service-oriented implementation of dynamic flexibility[END_REF], but this is also a proprietary solution and far from being generalized the way our pattern is. Therefore, this problem, which could significantly extend our approach, remains as another challenge for further research.
Fig. 2 .
2 Fig. 2.Figure 2. (Evaluate quote process) | 36,882 | [
"1002447"
] | [
"412450",
"412450",
"412450"
] |
01474749 | en | [
"shs",
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01474749/file/978-3-642-41641-5_16_Chapter.pdf | Jörg Becker
Nico Clever
Justus Holler
Johannes Püster
Maria Shitkova
Integrating Process Modeling Methodology, Language and Tool -A Design Science Approach
Keywords: Business Process Modeling, Business Process Modeling Tool, Process Modeling Methodology, Business Process Modeling Language
Providing high quality process models in a timely manner can be of major impact on almost all process management projects. Modeling methodologies in the form of normative procedure models and process modeling guidelines are available to facilitate this cause. Modeling languages and according tools, however, do neglect the available methodologies. Our work searches to close this research gap by proposing a modeling environment that integrates insights from modeling methodologies with a modeling language and a tool. Main features are a simple modeling language that generalizes most existing languages, four layers of abstraction and semantic standardization through a glossary and use of attributes. Our approach allows for rapid preparation of modeling activities and ensures high model quality during all modeling phases, thus minimizing rework of the models. The prototype was evaluated and improved during two practical projects.
Introduction
Business process modeling has received considerable attention in practice and theory during the last decades. Following Becker and Kahn [START_REF] Becker | The Process in Focus[END_REF], "a process is a completely The modeling environment proposed in this paper was developed according to the design science research methodology (DSRM) introduced by Peffers et al. [START_REF] Peffers | A Design Science Research Methodology for Information Systems Research[END_REF]. The DSRM consists of six consecutive phases of which the last phase, communicating the research results, is achieved with this article. The remaining five phases, along with the respective research method, are depicted in Fig. 1. To identify the problem and motivate our research, we conducted a keyword based database search [START_REF] Webster | Analyzing the Past to Prepare for the Future: Writing a Literature Review[END_REF][START_REF] Vom Brocke | Reconstructing the Giant: On the Importance of Rigour in Documenting the Literature Search Process[END_REF]. Based on the literature review we provide a line of argumentation to define the objectives of the proposed solution. In order to enable the subsequent implementation of our solution, a conceptual model of the modeling environment is developed in the design phase. Afterwards, the artifact was implemented as a web-based tool to demonstrate its practicability. To demonstrate the applicability of the modeling environment, we present the findings of two case studies projects with archetypical small and medium sized enterprise in which our prototype was used.
Related Work
The literature review revealed several noticeable procedure models for process modeling differentiating themselves in number and naming of the phases, the purpose and intention of the procedures are nevertheless comparable. Kettinger et al. [START_REF] Kettinger | Business Process Change : A Study of Methodologies , Techniques , and Tools[END_REF], performed an analysis of 25 business process reengineering (BPR) methodologies used in practice by different organizations and constructed a generalized procedure model with six phases. Due to its consolidating nature, modeling related steps are mostly concealed within the general steps of initiation, diagnosis and redesign. Allweyer [START_REF] Allweyer | Geschäftsprozessmanagement. W3[END_REF] describes a BPR procedure model of five phases that could also be a special case of the framework by Kettinger et al. [START_REF] Kettinger | Business Process Change : A Study of Methodologies , Techniques , and Tools[END_REF]. A more abstract model is proposed by Schmelzer and Sesselmann [START_REF] Schmelzer | Geschäftsprozessmanagement in der Praxis[END_REF] that hides modeling related activities within the phases of process identification, implementation, control and optimization. In the work of Becker et al. [START_REF] Becker | Process Management: A Guide for the Design of Business Processes[END_REF], a procedure for process-oriented reorganization projects is described that consists of seven phases, including phases for the preparation of modeling, process framework design, as-is modeling and to-be modeling.
As depicted in Fig. 2, the procedures can be grouped into four steps and differentiated in the proportion of modeling related activities. All procedures start with a preparation step that is not directly related to model construction. It is followed by a step of process modeling, analysis and optimization that includes a high proportion of modeling related activities. The subsequent implementation and evaluation of the processes is less focused on the models. As modeling activities are the initial steps before the actual implementation of changes in the organization, it is important to assure their quality at earlier phases, because error correction at later stages is expensive [START_REF] Moody | Theoretical and practical issues in evaluating the quality of conceptual models: current state and future directions[END_REF][START_REF] Vergidis | Business process analysis and optimization: beyond reengineering[END_REF]. We will, therefore, focus on the phases of preparation and modeling, analysis and optimization, because they determine the quality of generated models and, thus, influence the success and costs of the whole project.
Normative approaches to ensure high model quality during preparation of modeling and the actual modeling activities are available in the form of guidelines. Six guidelines of modeling (GoM) are proposed by Becker et al. [START_REF] Becker | Guidelines of Business Process Modeling[END_REF]. They are divided into mandatory and optional guidelines that give general advice on how modeling should be conducted. Because of their general nature, the GoM have been criticized of being too theoretical and abstract, meaning they cannot directly be operationalized [START_REF] Mendling | Seven Process Modeling Guidelines (7PMG)[END_REF]. Mendling et al. [START_REF] Mendling | Seven Process Modeling Guidelines (7PMG)[END_REF], therefore, present seven process modeling guidelines (7PMG) that provide concrete actions for modelers. Complementing the guidelines on what to do, pitfalls to avoid are listed by Rosemann [START_REF] Rosemann | Potential pitfalls of process modeling: part A[END_REF][START_REF] Rosemann | Potential pitfalls of process modeling: part B[END_REF]. A detailed overview of the available guidelines and pitfalls is given in Fig. 3. Because the GoM are more abstract, all guidelines and pitfalls presented in Mendling et al. [START_REF] Mendling | Seven Process Modeling Guidelines (7PMG)[END_REF][START_REF] Rosemann | Potential pitfalls of process modeling: part A[END_REF][START_REF] Rosemann | Potential pitfalls of process modeling: part B[END_REF] are related to at least one or more GoM. While adherence to the described guidelines can increase model quality and reduce costs, empirical results also indicate that it can cause an increase in perceived usefulness, perceived ease of use and satisfaction with the modeling language [START_REF] Recker | Modeling with tools is easier, believe me"-The effects of tool functionality on modeling grammar usage beliefs[END_REF].
Existing process modeling languages, such as Petri nets, EPC, BPMN and UML Activity Diagrams, however, offer a high degree of freedom by the design of their meta-models. Restrictions, e. g. on how language elements like activities, connectors and flow should be used, should therefore be enforced by the modeling tool.
The process modeling tool market is vast and ranges from simple tools like Microsoft Visio up to business process management suites with many functionalities that surplus modeling functionality (see [START_REF]Gartner: Magic Quadrant for Business Process Management Suites[END_REF] and [START_REF] Recker | Modeling with tools is easier, believe me"-The effects of tool functionality on modeling grammar usage beliefs[END_REF] for an overview on existing tools). Most tools support user in creating, standardizing, storing and sharing process models, i. e. they offer a replacement for pen and paper. Additional functionality for the analysis of process models has received academic attention lately and is already available in some tools [START_REF] Delfmann | Pattern Specification and Matching in Conceptual Models. A Generic Approach Based on Set Operations[END_REF]. As outlined by Mendling et al. [START_REF] Mendling | Seven Process Modeling Guidelines (7PMG)[END_REF], the modelers are, however, hardly supported in creating high quality and analyzable models because the available guidelines are not enforced or too abstract. We argue that this is caused by the degree of freedom in the modeling languages, which the tool vendors do not restrict [START_REF] Recker | Modeling with tools is easier, believe me"-The effects of tool functionality on modeling grammar usage beliefs[END_REF]. An exception is an approach for the standardization of model element labels by means of naming conventions for both allowed words and phrase structures that are enforced during the modeling process [START_REF] Delfmann | Unified Enterprise Knowledge Representation with Conceptual Models -Capturing Corporate Language in Naming Conventions[END_REF].
We conclude that most of the available guidelines are not enforced in available tools, because tool vendors do not to limit the degree of freedom which existing modeling languages allow for. We aim to close this research gap by proposing an integrated modeling environment that tries to enforce the guidelines summarized in Fig. 3.
The Modeling Environment
The proposed approach consists of a modeling language and a tool and was built with the goal to integrate principles from existing methodologies and guidelines. The first design principle of the environment was simplicity to keep business process modeling projects as simple as possible and as complex as necessary. Simplicity of the modeling language reduces the degrees of freedom and therefore fosters model quality by taking the available modeling methodologies and guidelines into account. Simplicity of the modeling tool enables a wider group of users to utilize the tool and, thus, facilitates distributed modeling, which reduces modeling costs [START_REF] Recker | Modeling with tools is easier, believe me"-The effects of tool functionality on modeling grammar usage beliefs[END_REF]. The second design principle is transparency, because the guidelines should be enforced already during modeling without the knowledge and additional interaction of the modeler.
The targeted audience of the modeling environment encompasses all enterprises that choose, or are required, to model, discuss and analyze their processes. Our environment, however, was not built to model workflow processes that can directly be transformed into executable application code, because workflow modeling and organization-oriented process modeling differ substantially in the required level of detail [START_REF] Becker | Process Management: A Guide for the Design of Business Processes[END_REF]. Opening the environment for modeling in such detail contradicts the goal of reducing the degree of freedom and taking the modeling guidelines into account. The environment thus does not support process modeling as a preparation for the implementation of a workflow engine. It aims to support process modeling in projects focused on organizational issues, for example BPR, organizational documentation, knowledge management or software selection.
To address the 7PMG and the GoM as summarized in Table 1, the following rationales of the environment are presented in the upcoming sections:
• Simple syntax of the modeling language
• Layer structure to control the leveling of modeling detail
• Variants and process element references to reduce model complexity
• Glossary and semantic standardization to eliminate naming conflicts
• Attributes to adapt the environment to a concrete modeling project
While certain guidelines are enforced by the modeling tool or are simply impossible to violate by language design, other aspects, such as the number of elements used per model, can only be facilitated but not completely restricted.
Syntax and Structure of the Modeling Language
While Petri-nets use 3 model elements (places, transitions, flow), EPC features over 20 elements and BPMN offers more than 90 elements, our modeling language only allows for two constructs: activities and flow. This decision was supported by empirical evidence which suggests that modelers use fractions of the available elements in other languages [START_REF] Siau | Theoretical vs. Practical Complexity[END_REF][START_REF] Zur Muehlen | How Much Language is Enough ? Theoretical and Practical Use of the Business Process Modeling Notation[END_REF]. Activities and flow are constructs that are available in virtually all process modeling languages. Therefore, we do not propose a new language, but use a subset of existing modeling languages. The meta-model of the proposed language is depicted in Fig. 4.
Our concept of flow does not include routing logic in the form of connectors in order to increase model quality [START_REF] Zur Muehlen | How Much Language is Enough ? Theoretical and Practical Use of the Business Process Modeling Notation[END_REF]. Activities are nonetheless allowed to have more than one preceding or succeeding process element, but cyclic edges are prohibited. The flow direction is top to bottom by convention within a model, and only one start and one end activity are allowed. Due to the secondary role of flow, all modeling detail is included in the activities and their attributes. Events are not available, because they are always connected to activities and their additional value can be included in a detailed description of the process elements, which we also call process bricks. Modeling within the environment is structured into four distinct layers, which enforce a comparable degree of modeling detail [START_REF] Mendling | Seven Process Modeling Guidelines (7PMG)[END_REF][START_REF] Becker | Guidelines of Business Process Modeling[END_REF]. An example of an instantiation of the meta-model is depicted in Fig. 5. The first layer is the process framework that depicts the process landscape and consists of process elements, which can be freely arranged and shaped to represent arbitrary process frameworks and thus do not need to be connected by control flow. Each process brick in the framework represents a main process. Main processes again contain process bricks and explicitly require control flow. Each process brick in a main process in turn is a detail process. The difference between main and detail processes is therefore only the level of detail. Each detail process consists of several process bricks.
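These structural rules lend themselves to automatic checking. The following Python sketch is purely illustrative and is not taken from the prototype (whose actual implementation is described in the Implementation section); all class and function names are our own. It captures the two constructs, the absence of routing logic, the prohibition of cycles, the single start and end activity, and the four-layer structure.

```python
from collections import defaultdict

# Illustrative sketch: a process consists of process bricks (activities)
# connected by plain flow edges; routing connectors do not exist.
LAYERS = ["process framework", "main process", "detail process", "process brick"]

class Process:
    def __init__(self, layer):
        assert layer in LAYERS
        self.layer = layer
        self.bricks = set()           # activity labels (verb-object phrases)
        self.flow = defaultdict(set)  # brick -> set of successor bricks

    def add_brick(self, label):
        self.bricks.add(label)

    def add_flow(self, source, target):
        # Flow is the only connector type; frameworks do not require flow at all.
        self.flow[source].add(target)

    def is_acyclic(self):
        visited, on_path = set(), set()
        def visit(node):
            if node in on_path:
                return False          # a cycle was found
            if node in visited:
                return True
            on_path.add(node)
            ok = all(visit(succ) for succ in self.flow[node])
            on_path.discard(node)
            visited.add(node)
            return ok
        return all(visit(brick) for brick in self.bricks)

    def has_single_start_and_end(self):
        targets = {t for succs in self.flow.values() for t in succs}
        starts = [b for b in self.bricks if b not in targets]
        ends = [b for b in self.bricks if not self.flow[b]]
        return len(starts) == 1 and len(ends) == 1

# A small detail process with illustrative verb-object labels:
p = Process("detail process")
for label in ["Check customer order", "Print invoice", "Send invoice"]:
    p.add_brick(label)
p.add_flow("Check customer order", "Print invoice")
p.add_flow("Print invoice", "Send invoice")
assert p.is_acyclic() and p.has_single_start_and_end()
```

A modeling tool can run such checks after every editing step, so that a cycle or a second end activity is rejected immediately instead of surfacing in a later quality review.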
To further reduce branching and the number of elements per process model, our modeling environment allows referencing existing elements of the same level and supports variants. The process brick "Print invoice", for example, can be defined in the detail process "Handle customer order" and reused by reference in the detail process "Revise complaint". Variants can exist on the main process and detail process level and allow the modeler to depict substantially different variations of the same process. The main process "Send invoice" could for example be constituted by the two variants "Send electronic invoice" and "Send manual invoice", which do not share any process bricks. For a more detailed description, please see [START_REF] Becker | Semantically Standardized and Transparent Process Model Collections via Process Building Blocks[END_REF].
Semantic Standardization through Glossary and Attributes
To semantically standardize the constructed models for analysis purposes and ensure high model quality, the proposed environment enforces naming conventions on the process brick labels. It builds on the approach by Delfmann et al. [START_REF] Delfmann | Unified Enterprise Knowledge Representation with Conceptual Models -Capturing Corporate Language in Naming Conventions[END_REF] and further extends it by eliminating the freedom of choice for phrase structures. Analogous to the syntax of our modeling technique, the semantic standardization uses the simplest structure available: the verb-object phrase structure. Phrase structures of this composition have been shown to be easier to understand than other phrase structures [START_REF] Mendling | Seven Process Modeling Guidelines (7PMG)[END_REF]. In the context of process modeling, verb and object can furthermore be interpreted as activity and business object.
The modeling environment contains a glossary that consists of business objects, activities, and relations between both. The business object "supplier basic data", for example, allows the activities "create" and "maintain", but does not allow "determine", which in turn could be used for "customer". The naming conventions are enforced whenever a process brick is created or edited: the modeler has to use a combination of an existing business object and activity to name the process brick, as shown in Fig. 5. Similar to Delfmann et al. [START_REF] Delfmann | Unified Enterprise Knowledge Representation with Conceptual Models -Capturing Corporate Language in Naming Conventions[END_REF] or standardization approaches based on ontologies, the glossary requires a high initial creation effort and subsequent maintenance. However, it allows the reuse of existing content, such as domain-specific business objects or generally applicable activities.
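How the glossary restricts labeling can be illustrated with a minimal sketch. The following Python fragment reuses the examples given above; the data structure and the function are our own illustration and not part of the prototype.

```python
# Illustrative glossary: business objects and the activities allowed for them.
GLOSSARY = {
    "supplier basic data": {"create", "maintain"},
    "customer": {"create", "maintain", "determine"},
    "invoice": {"print", "send"},
}

def validate_label(business_object: str, activity: str) -> bool:
    """Accept a process brick label only if it combines a registered
    business object with an activity that is allowed for this object."""
    return activity in GLOSSARY.get(business_object, set())

# Examples from the text:
assert validate_label("supplier basic data", "maintain")       # allowed
assert not validate_label("supplier basic data", "determine")  # rejected
assert validate_label("customer", "determine")                 # allowed
```

Because every label has to pass such a check, naming conflicts and synonyms cannot enter the model collection in the first place, which keeps the resulting models comparable for later analyses.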
Besides their label, process bricks can own an arbitrary number of attributes. There are a number of predefined attribute types available, which include text, number, selection, file, URL, reference and hierarchy. In order to constrain the number of attributes, attribute groups and their corresponding attributes are defined by an administrator, and normal users can only select from the offered set. Attributes of type hierarchy refer to hierarchies, which can also be modeled in the environment to document, for example, the organizational structure or the IT architecture, and to use this structured information to annotate process elements. Similar to the glossary, attributes can be used for process bricks on all layers, from framework to process building block. Attribution is used to increase the expressiveness of our modeling environment, because it allows for the simple annotation of detailed information without requiring new language elements.
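The attribute mechanism can be sketched in a similar way. In the following Python fragment, only the list of predefined attribute types is taken from the text; the concrete group and attribute names are invented examples in the spirit of the ERP selection project described in the first case study.

```python
from dataclasses import dataclass, field

# Predefined attribute types as listed in the text; everything else is illustrative.
ATTRIBUTE_TYPES = {"text", "number", "selection", "file", "URL", "reference", "hierarchy"}

@dataclass
class AttributeDefinition:
    name: str
    attr_type: str
    options: list = field(default_factory=list)   # e.g. the values of a selection

    def __post_init__(self):
        if self.attr_type not in ATTRIBUTE_TYPES:
            raise ValueError(f"unknown attribute type: {self.attr_type}")

@dataclass
class AttributeGroup:
    # Groups and their attributes are defined by an administrator;
    # normal users can only fill in values for the offered attributes.
    name: str
    attributes: list

erp_requirements = AttributeGroup("ERP requirements", [
    AttributeDefinition("Requirement description", "text"),
    AttributeDefinition("Fulfillment by vendor", "selection", ["full", "partial", "none"]),
    AttributeDefinition("Responsible unit", "hierarchy"),
])
```

Since a process brick simply stores values for the attributes of the groups assigned to it, new information needs can be covered without touching the modeling language itself.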
Implementation
The prototypical implementation of our modeling environment is a web-based business process modeling tool, as presented in [START_REF] Becker | icebricks -Business Process Modeling on the Basis of Semantic Standardization[END_REF]. It is implemented as a Ruby on Rails application and integrates a role and rights management concept to support distributed modeling in large scale modeling projects with many stakeholders. See Fig. 6 for a screenshot. Process bricks of all layers are labeled using the business objects and activities that can be maintained within the glossary. Analogously, attributes can be defined within the prototype and used on all layers they are assigned to. Hierarchies for organizational structure and IT-architecture can be created and maintained directly in the prototype.
Fig. 6. Screenshot of detail process editing in the prototype
In order to evaluate the modeling environment we applied a prototypical implementation in two consulting projects that required business process modeling. Characteristics of the case study projects are summarized in Table 2. We conducted semi-structured interviews and used focus groups with the respective project leaders and key users to analyze the prototype. In both case studies, we focused on answering the question if the prototype fulfilled its purpose as efficient and effective modeling environment. Findings from the first case study were used to improve the prototype before the start of the second case study [START_REF] Tremblay | The Use of Focus Groups in Design Science Research[END_REF]. In accordance with our research methodology, we applied the enhanced prototype to the second case study and intend to keep up the iterative improvement procedure through further application.
Project 1: Business Process Reorganization and ERP Selection
The first case study was conducted within a business process reorganization and ERP selection project at a German sports and fashion retailer. The company sells sports and fashion goods in over 60 stores. The purpose of the project was the reorganization of processes in both headquarters and stores, and the mutually consistent selection of a new ERP system to align processes and IT systems.
The project consisted of three phases. During the first phase, the as-is process models were documented. To-be processes were designed in the second phase that overlapped with the interrelated third phase of an ERP software selection process. All the phases were conducted by the company with the help of external consultants. Concerning the modeling methodologies presented in section three, the project encompassed modeling preparation and modeling conduction steps.
The modeling team consisted of external consultants and employees of the company headquarter, which were trained in the modeling environment during an initial workshop. The as-is models were created on the basis of interviews in all company departments. As the company had not undertaken any modeling activities before the project, the processes were designed on the basis of a reference model for retail processes, already available in the prototype [START_REF] Becker | A Reference Model for Retail Enterprises[END_REF]. Due to the restrictive nature of the prototype, the team did not have to discuss modeling guidelines on syntax, object types to be used, naming conventions, degree of detail or layout conventions. All adaption of the prototype to the project goals was achieved through the definition of attributes. During the first as-is modeling project phase, these company-specific attributes were defined and later on enhanced during the second to-be modeling phase. To support the final ERP choice, attributes to measure the IT-related requirements and their fulfillment were added to the modeling environment and filled out during workshop meetings with all stakeholders and process-based vendor presentations.
The case study revealed that the reduction in preparation activities was perceived very positively by the whole modeling team and allowed for a fast start into the project. Company employees named the simplicity of the modeling language as a driver of the initial acceptance of the project in all departments. The external consultants especially complimented the four-layer architecture, because it helped to standardize the degree of detail within the models and improved inter-model comparability. The effort to prepare the models for the semi-automated creation of an ERP requirements definition could, therefore, be kept low. The main argument for using the proposed modeling environment over other modeling languages or tools in new projects, stated by almost all team members interviewed, turned out to be the ability to adapt the environment to the modeling purpose through attributes, because all models could be re-used throughout the project phases by small additions to the attributes in use. Negative comments were received during the initial preparation of the glossary. These reservations were, however, largely dissolved during later stages of the project, when renaming tasks could be executed centrally.
Major criticism was furthermore voiced towards the general usability of the prototype, in particular to glossary management and the element naming. All members of the modeling team stated export (Word, Excel export and XML import/export) and analysis features, such as statistics on the used attributes, as most important missing functionalities.
We concluded that our proposed modeling environment facilitated cost-efficient modeling by accelerating modeling preparation activities and achieved high acceptance in all departments because of the simple modeling language. Re-work by the external consultants could be minimized, because model quality was ensured during all phases of the project. Adaption of the prototype to the project could be achieved through extensive use of attributes that facilitated model re-use. Nonetheless, the case study revealed improvement points, in particular usability aspects and the need for model export and analysis features.
Project 2: Organizational Documentation and Knowledge Management
After the prototype had been improved, we conducted a second case study with a leading German wholesaler of promotional material. The company was founded two decades ago and has grown ever since. In order to keep pace with the changing processes and the growing employee base, they decided to document their processes and use the documentation for knowledge management purposes. So far, the project has only included an as-is modeling phase, but as the ERP system of the company will undergo a major update by the time of this work, the models will be used to derive software test cases.
Similar to the first project, the company had not documented any processes before the project. The project team was therefore supported by external consultants. The processes were created on the basis of retail reference processes with our prototype during interviews with employees from all departments.
The second case yielded results similar to the first project. The modeling language and the enforced modeling structure were perceived very positively by all project stakeholders. All further information could be expressed using the attributes, which again allowed for extensive reuse of the models for both organizational documentation and knowledge management. No problems are expected for the preparation of the model-based software tests.
As the prototype was improved according to the findings from the first case study, discussions of the usability of the prototype gave much more positive responses but revealed additional issues that will be addressed in the next version of the prototype. The added export functionality could be fully tested during the project and was perceived as very helpful by the modeling team. In contrast, the functionality for analysis could not be properly evaluated, as the project did not encompass as-is analysis or to-be modeling steps, which would have required tool support for model analysis.
We conclude that all requirements of both projects could be met and the discussions led to a substantial improvement of the prototype. The positive comments about productivity and model quality furthermore indicate that the design goals of the modeling environment were achieved. As both companies represent typical small- or medium-sized enterprises, the application results are expected to be reproducible in other companies.
Summary, Limitations and Outlook
This paper presented a modeling environment, consisting of a language and a tool that integrate ideas from existing methodologies. It combines a simple modeling syntax that generalizes many existing languages with a four-layer structure to control the degree of modeling detail. The use of a glossary and attributes allows for the creation of standardized process models in a cost-efficient manner. The development process of the environment is based on the design science research methodology by [START_REF] Peffers | A Design Science Research Methodology for Information Systems Research[END_REF] and the resulting prototype advances the current state of the art, because it transparently operationalizes process modeling guidelines that have been proposed in literature. The prototype was successfully evaluated in two modeling projects as an easy and fast to use modeling environment for beginners but also for advanced modelers who furthermore especially appreciated the standardization regarding naming and level of detail.
The environment is limited to business modeling and analysis purposes, such as business process reengineering or knowledge documentation, and is not applicable to workflow modeling. It can, however, serve as a starting point for more detailed modeling by integrating workflow models as attributes of the process bricks. The case studies on the prototype moreover only cover German companies and, thus, did not consider social factors, such as values and beliefs. Furthermore, both cases were conducted within companies that had no modeling experience beforehand. An application within a company with an existing modeling landscape would be needed to reveal potential issues in model transformation. Similarly, we did not compare our approach to existing modeling languages and tools. Such a comparison could be useful to further improve the proposed environment.
During the application, we also identified potential for further research. One aspect that requires further investigation is the four-layer architecture and we intend to evaluate how it performs against structures with a different number of layers. The cases also revealed challenging usability issues, which we already address in research [START_REF] Becker | Towards A Usability Measurement Framework for Process Modelling Tools[END_REF] and subsequently intend to implement in the prototype. The proposed glossary also raised need for improvements, because it was the main source of reservations in our case studies. We intend to identify methods to improve the glossary management, for example by including existing verb-hierarchies. The prototype furthermore serves as a starting point for current research on model version control [START_REF] Clever | Growing Trees -A Versioning Approach for Business Process Models based on Graph Theory[END_REF] and modeling support features such as auto-suggestion. As it is the underlying idea behind our work, future research will also focus on the use of attributes in process modeling and the relation of attribute-based complexity with modeling language-based complexity.
Fig. 1. Research methods applied in this paper mapped to research phases
Fig. 2. Business process management procedures (Source: [2, 3, 16, 17])
Fig. 3. Overview of the available process modeling guidelines and potential pitfalls
Fig. 4. Meta-model of the modeling language
Fig. 5. Examples for framework, main process, detail process and process brick
Fig. 2 (content): a comparison of business process management procedures by Kettinger et al. (1997), Allweyer (2005), Schmelzer and Sesselmann (2008), and Becker et al. (2011), covering phases such as envision, initiate, diagnose, redesign, reconstruct and evaluate, modeling preparation, as-is analysis and modeling, to-be modeling and process optimization, implementation, and continuous process management; the figure also indicates the proportion of modeling-related activities (high vs. low) per phase.
Fig. 3 (content):
Guidelines of Modeling. Basic guidelines: syntactical and semantic correctness; relevance; economic efficiency. Optional guidelines: clarity; comparability; systematic design.
Seven Process Modeling Guidelines: G1 Use as few elements in the model as possible; G2 Minimize the routing paths per element; G3 Use one start and one end event; G4 Model as structured as possible; G5 Avoid OR routing elements; G6 Use verb-object activity labels; G7 Decompose a model with more than 50 elements.
Potential Pitfalls of Process Modeling: lack of strategic connections; lack of governance; lack of synergies; lack of qualified modelers; lack of qualified business representatives; lack of user buy-in; lack of realism; chicken-and-egg problem; lack of details; lost in translation; lack of complementary methodologies; l'art pour l'art; lost in syntactical correctness; lost in detail; lack of imagination; lost in best practice; design of to-be models solely centered on new IT; modeling success is not process success; lost in model maintenance.
Table 1. Guidelines and pitfalls for process modeling
7PMG | Related Guidelines of Modeling | Enforced or facilitated by
1 Use as few elements in the model as possible | Clarity, Comparability, Economic efficiency | facilitated by Syntax, Structure
2 Minimize the routing paths per element | Clarity, Correctness | enforced by Syntax
3 Use one start and one end event | Clarity, Comparability, Systematic design | facilitated by Syntax
4 Model as structured as possible | Clarity, Comparability, Economic efficiency | enforced by Structure, Variants
5 Avoid OR routing elements | Clarity | enforced by Syntax
6 Use verb-object activity labels | Clarity, Systematic design | enforced by Glossary
7 Decompose a model with more than 50 elements | Clarity, Systematic design | facilitated by Syntax, Structure
Table 2. Case study project characteristics
Characteristic | Case study 1 | Case study 2
Modeling purpose | Business process reorganization, ERP selection | Organizational documentation, knowledge management
Project duration | Winter 2010 - Summer 2011 | Winter 2011 - Summer 2012
Domain | Retail | Warehousing
Articles | ~20,000, sports and fashion | ~2,500, promotional material
Employees | >1,600 | >250 worldwide
Customers | B2C customers | ~6,000 B2B customers
Processes documented | 2 frameworks (store and headquarters), 13 main processes + 6 variants, 55 detail processes, 168 process building blocks | 1 framework, 12 main processes + 10 variants, 49 detail processes, 133 process building blocks
email: [email protected]
Mark Strembeck
email: [email protected]
An Experimental Study on the Design and Modeling of Security Concepts in Business Processes
Keywords: BPMN, Business Processes, Empirical Evaluation, Icons, Modeling, Security, Visualization
In recent years, business process models are used to dene security properties for the corresponding business information systems. In this context, a number of approaches emerged that integrate security properties into standard process modeling languages. Often, these security properties are depicted as text annotations or graphical extensions. However, because the symbols of process-related security properties are not standardized, dierent issues concerning the comprehensibility and maintenance of the respective models arise. In this paper, we present the initial results of an experimental study on the design and modeling of 11 security concepts in a business process context. In particular, we center on the semantic transparency of the visual symbols that are intended to represent the dierent concepts (i.e. the one-to-one correspondence between the symbol and its meaning). Our evaluation showed that various symbols exist which are well-perceived. However, further studies are necessary to dissolve a number of remaining issues.
Introduction
Over the last three decades, organizations moved towards a process-centered view of business activities in order to cope with rising complexity and dynamics of the economic environment (e.g., [START_REF] Zairi | Business Process Management: A Boundaryless Approach to Modern Competitiveness[END_REF]). Business processes consist of tasks which are executed in an organization to achieve certain corporate goals [START_REF] Zur Muehlen | Modeling Languages for Business Processes and Business Rules: A Representational Analysis[END_REF].
Business process models represent these processes of organizations. Typically, the business process models are executed via process-aware information systems (PAIS) (e.g., [START_REF] Weske | Business Process Management: Concepts, Languages, Architectures[END_REF]). Today, various business process modeling languages exist that support graphical representations of business processes such as the Business Process Model and Notation (BPMN) [START_REF]OMG: Business process model and notation (BPMN) version 2.0[END_REF], Unied Modeling Language (UML) Activity Diagrams [START_REF]OMG: Unied Modeling Language (OMG UML): Superstructure version 2.4.1[END_REF] or Event-driven Process Chains (EPC) [START_REF] Mendling | Metrics for Process Models: Empirical Foundations of Verication, Error Prediction and Guidelines for Correctness[END_REF][START_REF] Scheer | ARIS -Business Process Modeling[END_REF].
To protect sensitive organizational data and services, information systems security is constantly receiving more attention in research and industry (e.g., [START_REF] Johnson | Embedding Information Security into the Organization[END_REF]).
In many organizations, process models serve as a primary vehicle to eciently communicate and engineer related security properties (e.g., [START_REF] Strembeck | Scenario-Driven Role Engineering[END_REF]). However, contemporary process modeling languages, such as BPMN, EPCs or UML Activity diagrams, do not provide native language support to model process-related security aspects [START_REF] Leitner | Security policies in adaptive process-aware information systems: Existing approaches and challenges[END_REF][START_REF] Leitner | SPRINT-Responsibilities: design and development of security policies in process-aware information systems[END_REF]. As a consequence, while business processes can be specied via graphical modeling languages, corresponding security properties are usually only dened via (informal) textual comments or via ad hoc extensions to modeling languages (e.g., [START_REF] Wolter | Modelling security goals in business processes[END_REF][START_REF] Leitner | An analysis and evaluation of security aspects in the business process model and notation[END_REF]). For example in [START_REF] Leitner | An analysis and evaluation of security aspects in the business process model and notation[END_REF], we outlined current research and practice of security modeling extensions in BPMN. In addition, we conducted a survey to evaluate the comprehensibility of these extensions. The study showed that a mix of visual representations of BPMN security extensions (e.g., use of dierent shapes, use of text) exists. What is missing is a uniform approach for security modeling in BPMN.
Missing standardized modeling support for security properties in process models may result in signicant problems regarding the comprehensibility and maintainability of these ad hoc models. Moreover, it is dicult to translate the respective modeling-level concepts to actual software systems. The demand for an integrated modeling support of business processes and corresponding security properties has been repeatedly identied in research and practice (e.g., [START_REF] Wolter | Modelling security goals in business processes[END_REF][START_REF] Russell | Workow Resource Patterns: Identication, Representation and Tool Support[END_REF]).
In this paper, we present the preliminary results of an experimental study on the design and modeling of 11 security concepts on dierent abstraction levels in a business process context. In particular, we investigate the visualization of the following security concepts: Access control, Audit, Availability, Data condentiality, Data integrity, Digital signature, Encryption, Privacy, Risk, Role and User. This study aims at designing symbols that are semantics-oriented and user-oriented (see Section 2) as outlined in [START_REF] Mendling | On the usage of labels and icons in business process modeling[END_REF]. Based on the suggestions and ndings presented in [START_REF] Leitner | An analysis and evaluation of security aspects in the business process model and notation[END_REF][START_REF] Genon | Towards a more semantically transparent i* visual syntax[END_REF][START_REF] Moody | The physics of notations: Toward a scientic basis for constructing visual notations in software engineering[END_REF], we designed two studies to obtain graphical symbols for 11 security concepts. Subsequently, we evaluated the symbol set via expert interviews. As most symbols were well-perceived, we plan to use these results and reexamine the symbols that were misleading in further studies. This will yield the basis to convey security-related information in business process models in a comprehensible way.
Our paper is structured as follows. Section 2 introduces background information on the visualization of business processes and security concepts. In Section 3, we outline the methods applied in this paper and corresponding research questions. Next, Sections 4 and 5 describe the design and results of the two experimental studies we conducted to obtain a symbol set for security concepts.
The results of the evaluation of the symbols are presented in Section 6. Finally, in Section 7 we discuss results, preliminary options for integrating the symbols into BPMN and UML and impact on future research. Section 8 concludes the paper.
Related Work
Visual representations have a strong impact on the usability and eectiveness of software engineering notations [START_REF] Moody | The physics of notations: Toward a scientic basis for constructing visual notations in software engineering[END_REF]. The quality of conceptual models is essential to, e.g., prevent errors and to improve the quality of the corresponding systems [START_REF] Moody | Theoretical and practical issues in evaluating the quality of conceptual models: current state and future directions[END_REF]. Several frameworks exist that provide guidelines on how to design and evaluate visual notations (e.g., [START_REF] Moody | The physics of notations: Toward a scientic basis for constructing visual notations in software engineering[END_REF][START_REF] Blackwell | Cognitive dimensions of notations: Design tools for cognitive technology[END_REF]). For example, the Physics of Notations in [START_REF] Moody | The physics of notations: Toward a scientic basis for constructing visual notations in software engineering[END_REF] consists of nine principles to design visual notations eectively. Further language evaluation frameworks include the cognitive dimensions of notations [START_REF] Blackwell | Cognitive dimensions of notations: Design tools for cognitive technology[END_REF][START_REF] Green | Cognitive dimensions: Achievements, new directions, and open questions[END_REF] that provide a set of dimensions to assist designers of visual notations to evaluate these designs. A framework for evaluating the quality of conceptual models is presented in [START_REF] Krogstie | Process models representing knowledge for action: a revised quality framework[END_REF]. This approach considers various aspects such as learning (of a domain), current knowledge and the modeling activity. It also provides a dynamic view showing that change to a model might cause a direct change of the domain.
Visual Representations of Business Processes In the context of PAIS, recent publications show increased interest in the visual representation of process modeling languages. For example in [START_REF] Genon | Analysing the cognitive eectiveness of the BPMN 2.0 visual notation[END_REF], an evaluation of the cognitive eectiveness of BPMN using the Physics of Notations is performed. Further studies investigate certain characteristics such as routing symbols [START_REF] Figl | On the cognitive eectiveness of routing symbols in process modeling languages[END_REF] or the usage of labels and icons [START_REF] Mendling | On the usage of labels and icons in business process modeling[END_REF]. In this paper, we use the terms symbol and icon synonymously as icons are symbols that perceptually resemble the concepts they represent [17, p.765]. In [START_REF] Mendling | On the usage of labels and icons in business process modeling[END_REF], the following guidelines for icon development are outlined based on research in graphical user interface design: a Semantics-oriented: Icons should be natural to users, resemble to the concepts they refer to, and be dierent from each other (so that all icons can be easily dierentiated). Modeling of Security Concepts in Business Processes Typically, process models are created by process modelers or process managers in an organization. These managers have an expertise in process modeling, but are often not experts in security. A security expert provides know-how and collaborates with the process modeling expert to enforce security concerns in a process. Hence, the integrated modeling of security aspects in a process model is intended to provide a common language and basis between dierent domain experts. Recent publications try to provide a common language between domain experts (e.g., security experts) and process modelers by proposing process modeling extensions such as to the Unied Modeling Language (UML) (e.g., [START_REF] Lodderstedt | SecureUML: a UML-Based modeling language for model-driven security[END_REF][START_REF] Hoisl | Modeling support for condentiality and integrity of object ows in activity models[END_REF][START_REF] Sindre | Mal-Activity Diagrams for Capturing Attacks on Business Processes[END_REF]) or to BPMN (e.g., [START_REF] Wolter | Modelling security goals in business processes[END_REF][START_REF] Leitner | An analysis and evaluation of security aspects in the business process model and notation[END_REF]).
Methodology
The main goal of this paper is to assess the design and modeling of security concepts in business processes. Thereby, we expect to obtain an initial set of symbols for security concepts. In contrast to existing security extensions, we do not design these symbols from scratch though. In order to obtain a set of symbols for selected security concepts, we conducted two studies (Experiment 1 and 2) and evaluated the results via expert interviews. In particular, our research was guided by the following questions (RQ):
1 Which symbols can be used to represent 11 dierent security concepts?
2 How can the drawings from RQ1 be aggregated into stereotype symbols? 2.1 How do experts evaluate the one-to-one correspondence between the symbols and security concepts? 2.2 How do experts rate the resemblance between the symbols and concepts? 2.3 How can the stereotype symbols be improved?
3 How can the security symbols be integrated into business process modeling? 3.1 Are the stereotype symbols suitable to be integrated into business process models?
3.2 Which business process modeling languages are suggested for security modeling by experts? 3.3 In BPMN, which symbols should be related to which process elements? 3.4 Can color be useful to distinguish security symbols from BPMN standard elements?
Research question RQ1 investigates what kind of symbols people draw for 11 security concepts in Experiment 1. In the rst experiment, we retrieved the symbols by setting up an experiment where the participants were asked to draw intuitive symbols for security concepts. Based on these drawings, we aggregate the drawings into stereotype symbols (RQ2) based on frequency, uniqueness and iconic character in Experiment 2. For evaluation, we analyze the stereotype symbols with expert interviews (see Section 6). In particular, we will evaluate the one-to-one correspondence between symbols and concepts, the rating of resemblance between symbols and concepts and if these symbols can be improved.
With this expert evaluation, we hope to identify not only strengths and shortcomings of the symbols but also to gain insights on how to enhance the symbols such as with the use of hybrid symbols that combine graphics and text. Moreover, we investigate if security symbols can be integrated into business process modeling (RQ3). For example, we evaluate for each security concept which process elements in BPMN can be associated with it. Thereby, we expect to identify integration options for business process modeling languages.
Experiment 1: Production of Drawings
The rst experiment addresses research question RQ1 to identify which symbols can be used to represent security concepts (see Section 3). For this purpose, we adapted the experiment design of the rst experiment presented in [START_REF] Genon | Towards a more semantically transparent i* visual syntax[END_REF].
Participants and Procedure
In our rst experiment, we used a paper-based questionnaire to conduct a survey.
In total, 43 Bachelors' and Masters' students in Business Informatics at the University of Vienna and the Vienna University of Economics and Business lled out the questionnaire. Most participants had beginner or intermediate knowledge of business processes and/or security. We expect to nd this setting also in research and industry where experts from dierent domains (e.g., process modelers, security experts and business process managers) interact with each other to discuss and dene security in business processes.
The survey contained 13 stapled, one-sided pages and completing the survey took about 30 minutes. It consisted of two parts. The rst part, 2 pages long, presented the aim and collected demographic data of the participants, such as knowledge of business processes, knowledge of business process modeling languages and security knowledge. The second part consisted of 11 pages; one for each security concept. At the top of each page, a two-column table was displayed.
Its rst row contained the name of the security concept in English (see Table 1).
Additionally, we displayed the name of the respective concepts in German. In the last row, a denition of the concept was given. All denitions were taken from the internet security glossary [START_REF] Shirey | Internet Security Glossary[END_REF] except for Role and User which were taken from the RBAC standard in [START_REF] Council | Information technology -role based access control[END_REF]. Please note that a denition of Role is not given in [START_REF] Shirey | Internet Security Glossary[END_REF] and the User denition in [START_REF] Shirey | Internet Security Glossary[END_REF] and [START_REF] Council | Information technology -role based access control[END_REF] are very similar. Role is important as it is an essential concept for access control in PAIS (see [START_REF] Strembeck | Scenario-Driven Role Engineering[END_REF]).
The selection of the security concepts to be included in the survey was based on literature reviews and research projects. The aim was to consider concepts on dierent abstraction levels, including abstract concepts such as data integrity or condentiality but also to include its applications (e.g., digital signature (integrity) and encryption (condentiality)). In the middle of each page, a (3 inch x 3 inch) frame was printed. Participants were asked to draw in the frame what they estimate to be the best symbol to represent the name and the denition of a security concept. At the bottom of each page, we asked the participants to rate the diculty of drawing this sketch. Additionally, the participants were asked to describe the symbol with one to three keywords in case they want to clarify the sketch.
Results
In total, we received 473 drawings (blank and null drawings included). We observed that participants often did not only draw a single symbol for a concept Access control Protection of system resources against unauthorized access. Audit An independent review and examination of a system's records and activities to determine the adequacy of system controls, ensure compliance with established security policy and procedures, detect breaches in security services, and recommend any changes that are indicated for countermeasures.
Availability
The property of a system or a system resource being accessible and usable upon demand by an authorized system entity, according to performance specications for the system. Data condentiality The property that information is not made available or disclosed to unauthorized individuals, entities, or processes.
Data integrity
The property that data has not been changed, destroyed, or lost in an unauthorized or accidental manner. Digital signature A value computed with a cryptographic algorithm and appended to a data object in such a way that any recipient of the data can use the signature to verify the data's origin and integrity. Encryption Cryptographic transformation of data (called "plaintext") into a form (called "ciphertext") that conceals the data's original meaning to prevent it from being known or used.
Privacy
The right of an entity (normally a person), acting in its own behalf, to determine the degree to which it will interact with its environment, including the degree to which the entity is willing to share information about itself with others.
Risk
An expectation of loss expressed as the probability that a particular threat will exploit a particular vulnerability with a particular harmful result.
Role
A role is a job function within the context of an organization with some associated semantics regarding the authority and responsibility conferred on the user assigned to the role. User A user is dened as a human being. The concept of a user can be extended to include machines, networks, or intelligent autonomous agents.
but a combination of several symbols e.g., a desk in front of a matchstick man.
These drawings often included signs or symbols that resembled the majority drawings.
As can be seen in Figure 1, most participants stated that the task to draw a symbol for User, Encryption, Risk and Access control was easy or fairly easy.
On the other hand, it was fairly dicult or dicult for many participants to draw Audit, Data condentiality, Digital signature and Role. To answer research question RQ2, Experiment 2 is concerned with producing stereotypical symbols out of the sketches of Experiment 1 (adapted from [START_REF] Genon | Towards a more semantically transparent i* visual syntax[END_REF]).
Procedure
A stereotype is the best median drawing, i.e. the symbol which is most frequently used by people to depict a concept [START_REF] Genon | Towards a more semantically transparent i* visual syntax[END_REF]. The resulting set of stereotypes then constitutes our rst proposed set of hand-sketched symbols for visualizing security concepts. However, as mentioned in [START_REF] Genon | Towards a more semantically transparent i* visual syntax[END_REF], the drawing that is the most frequently produced to denote a security concept is not necessarily expressing the idea of the respective concept best. Thus, we subsequently evaluated the set of stereotypes via expert interviews (see Section 6).
In accordance with [START_REF] Genon | Towards a more semantically transparent i* visual syntax[END_REF], we applied a judges' ranking method in Experiment 2 to identify the stereotypes. We started by categorizing the drawings obtained from Experiment 1. We evaluated (a) the idea it represented, (b) whether it is a drawing or a symbol and (c) the uniqueness and dissimilarity between the drawings. Thereby, each author associated a keyword (i.e. category) that represented the idea with each drawing. Drawings representing the same idea for a particular security concept form a category. Each author performed the categorization independently. Subsequently, we analyzed each categorization and reviewed and agreed on a nal categorization in several rounds (see column Experiment 2 in Table 2 for the nal number of categories).
To select the stereotypes, we applied the following three criteria to determine the symbol that best expressed the idea of the respective security concept: (1)
Frequency of occurrence: For each security concept, we chose a drawing from each category that contained the largest number of drawings. (2) Distinctiveness and uniqueness: To avoid ambiguities and symbol overload [START_REF] Genon | Analysing the cognitive eectiveness of the BPMN 2.0 visual notation[END_REF][START_REF] Moody | The physics of notations: Toward a scientic basis for constructing visual notations in software engineering[END_REF], we tried to select symbols which are not too similar and can be easily distinguished from each other. (3) Iconic character: According to [START_REF] Moody | The physics of notations: Toward a scientic basis for constructing visual notations in software engineering[END_REF], users prefer real objects to abstract shapes, because iconic representations can be easier recognized in a diagram and are more accessible to novice users (see [START_REF] Petre | Why looking isn't always seeing: Readership skills and graphical programming[END_REF]). an idea that could be found in many other drawings. We assume that this is due to the high level of abstraction of the terms which leads to diculties in their visual representation. The results also indicate that the participants prefer real objects for representing security concepts (e.g., a house for Privacy).
Evaluation
This evaluation is concerned with validating the results retrieved form Experiments 1 and 2 via expert interviews and also to initially assess the use of the security symbols for business process models.
Participants and Procedure
For evaluation, a series of semi-structured interviews were conducted. A paperbased questionnaire served as the basis for these interviews. Moreover, one of the authors observed each expert while lling out the questionnaire. In addition to the questionnaire, a sheet with a list of security concepts and denitions (see Table 1) was provided to the expert. In total, we interviewed 6 experts from the security (2), process modeling (3) and visualization (1) domain. All experts have a high or intermediate expertise in both areas, process modeling and security.
The questionnaire consisted of three dierent parts. In the rst part of the interview, goals and purpose of the interview were presented. Then, demographic data of the experts were collected such as general level of knowledge of process modeling and security. The second part of the interview was concerned with investigating the stereotype symbols (see Figure 2). First, the experts matched the 11 security (stereotype) symbols with corresponding 11 security concepts using thinking aloud techniques (see [START_REF] Boren | Thinking aloud: reconciling theory and practice[END_REF]). With this setting, we expect to gain insight into how the symbols are matched by experts. After the matching, the interviewer pointed out his/her matching. Subsequently, the experts were specically questioned for the one-to-one correspondence between the symbols and their security concepts to evaluate the semiotic clarity of the symbols (see [START_REF] Moody | The physics of notations: Toward a scientic basis for constructing visual notations in software engineering[END_REF]). Furthermore, the experts were asked to rate the resemblance of the symbols with the concepts they represent. Additionally, we asked if the use of shapes (e.g., triangles or circles), document shapes or hybrid symbols (a combination of graphics and text) can be helpful for the stereotype symbols. The third part addressed the icons' suitability to be integrated into a business process modeling language.
Therefore, we asked if the stereotype symbols are suitable to be integrated into business process models and more specically into which business process modeling languages. For example, we analyzed to which BPMN elements the symbols could be related to and if color can be helpful to distinguish security symbols from standard BPMN elements.
Results
In the following, we will summarize the results according to each research question (see Section 3). Table 2 displays the quantitative results of the study: the number of collected drawings by concept in Experiment 1, the number of assigned categories per concept in Experiment 2 (see Section 5) and the number of correct matches of the one-to-one correspondence of the experts for evaluation (RQ2a).
RQ2a: How do experts evaluate the one-to-one correspondence between the symbols and security concepts? In general, all experts could relate most stereotype symbols to the list of security concepts (see Table 2). For example, all (6) experts could identify the stereotype symbols Audit, Availability, Risk and User (see Figure 2). However, Digital signature (2 of 6) and Access control (3) were the least recognized symbols.
In case of Digital signature, two experts related the pen symbol with the act of writing and signing. However, all other experts could not identify the symbol as pen and binary code. Two security experts could not relate the symbol to any concept or at least to the Data condentiality symbol (see Figure 2). Furthermore, two experts related the padlock symbol for Access control to the key symbol for Encryption as referring to locking and unlocking something. One expert assigned the concept encryption to the padlock symbol. One could not interpret the symbol at all. In addition, two experts pointed out that the padlock symbol used for Access control is also part of the Data condentiality symbol, which might lead to dierentiation problems.
RQ2b: How do experts rate the resemblance between the symbols and concepts? All experts agreed on a good resemblance of the symbols Encryption, Risk and User. Four of the experts assessed a good resemblance of the symbols Availability, Data condentiality and Data integrity. The expert opinions for Access control, Audit, Digital signature and Role varied and therefore no clear statement can be made. In the case of Audit, at rst, experts often associated the magnier to searching for something. After the interviewer referred to the denition of Audit, the expert could link the symbol to review and examine.
RQ2c: How can the stereotype symbols be improved? There were only few suggestions on how to improve the symbols. One important note, however, was the similarity of the Access control and Data condentiality symbol (due to the use of the padlock symbol) and of the Availability and Data integrity symbol (due to the check mark). Also, the relation of the padlock and the key symbol were associated with something that is in a locked or unlocked state. Hence, these symbols need to be reexamined in future studies.
Shapes The use of additional shapes such as triangles or circles around the symbols can be slightly or moderately helpful. Some experts pointed out that the complexity of most symbols should not be increased by additional shapes. However, the shapes in symbols Risk and Availability were well-perceived. Document Shapes In the rst experiment, many participants draw symbols using a document shape (e.g., symbol Data condentiality in Figure 2). The experts pointed out that these document shapes should be primarily used to display concepts in relation to data such as data integrity or condentiality. Additionally, the size of the symbol integrated in the document shape should be large enough to recognize the symbol.
Hybrid Symbols Most experts found that hybrid symbols combining graphics and text can be very and extremely helpful to display security concepts. However, it is important to use common abbreviations or the full name to display the security concepts.
RQ3a: Are the stereotype symbols suitable to be integrated into business process models? In general, the experts agreed that the symbols are suitable for the integration into business processes. However, they noted that some symbols should be reevaluated or redrawn to avoid symbol redundancy as stated in research question RQ2c. Furthermore, they stated that the use of legends could be helpful to novices.
RQ3b: Which business process modeling languages are suggested for security modeling by experts? The experts proposed mainly BPMN and UML. The choice for BPMN was motivated by the experts as it serves as de facto standard for business process modeling. In addition, UML is suggested because it oers integrated languages for specifying software systems from various perspectives, which includes the process and security perspectives.
RQ3c: In BPMN, which symbols should be related to which process elements? In the following, we will list the experts opinions (of at least 3 or more experts) on the linkage of security symbols and BPMN process elements (events, data objects, lanes, message events, tasks and text annotations).
Tasks can be associated to Access control, Audit, Privacy, Risk, Role and User. Hence, not only the authorization of end users to tasks is an important factor but also the supervision of these. Furthermore, events can be related to Audit and Risk. Data objects can be linked to Availability, Data condentiality, Data integrity, Digital signature and Encryption. This is not surprising as these security concepts are closely related to data. Moreover, message events are associated to Data condentiality, Data integrity, Digital signature and Encryption. As messages represent a piece of data this seems conclusive. Lanes can be linked to Role. As lanes can represent job functions or departments it seems feasible that lanes could be also linked to User. Lastly, Audit was the only symbol associated to text annotations. These suggestions provide an initial basis to further develop a security extension for BPMN. However, not only the semantic (semiotic) modeling but also the syntactic modeling is important and will be investigated in future work.
RQ3d: Can color be useful to distinguish security symbols from BPMN standard elements? Most experts state that color can be helpful to highlight the security symbols in BPMN. However, the use of color should be moderately handled such as using only one color or coloring the background of the symbol.
In conclusion, our evaluation showed that most symbols could be recognized by the experts. Some symbols such as Data condentiality and Access control should be reexamined to dissolve remaining issues (see RQ2b and RQ2c). Furthermore, the integration of security symbols into business processes was in general well-perceived.
Discussion
Threats of Validity In the rst experiment, we analyzed the drawings of 43 students. One can argue that this number is not enough to discover stereotype symbols for security concepts. As depicted in Section 5, we evaluated the frequency, uniqueness and iconic character between the drawings to develop the stereotype symbols. Most symbols could be easily identied except for Access control and Data condentiality. Our evaluation showed that even though we received a wide range of drawings, the experts rated the resemblance of symbols and their concepts in general positively.
Moreover, the 11 security concepts dier in their level of abstraction. For example, Privacy and Availability are highly abstract concepts, while Digital signature and Encryption are more low-level concepts (e.g., applications). In future studies, we need to investigate the need to translate the abstract concepts into further low-level (e.g., implementation relevant) concepts and their use in a business process context.
For evaluation, we interviewed six experts from the security and/or process modeling domain. The purpose of these interviews was to gain qualitative insights on the security symbols and to analyze the one-to-one correspondence matching of the symbols and concepts. Based on these interviews, we will further develop and evaluate the security symbols and continue our research centering on end user preferences.
Integration Scenarios for BPMN and UML. The BPMN [START_REF]OMG: Business process model and notation (BPMN) version 2.0[END_REF] metamodel provides a set of extension elements that assign additional attributes and elements to BPMN elements. In particular, the Extension element binds an ExtensionDefinition and its ExtensionAttributeDefinition to a BPMN model definition. These elements could be used to define, e.g., an encryption level or to state that a digital signature is required. Furthermore, new markers or indicators can be integrated into BPMN elements to depict a new subtype or to emphasize a specific attribute of an element. For example, additional task types could be established by adding indicators, similar to, e.g., the service task in the BPMN specification (see [START_REF]OMG: Business process model and notation (BPMN) version 2.0[END_REF]). The BPMN standard already specifies user tasks, i.e., tasks executed by humans. However, this does not specify how the user is authenticated (Access control) nor how the task shows up in the user's worklist (resolved via Role or User). We will investigate further if the assignment of Role or User to tasks is really needed, as lanes provide similar functionality in BPMN. For the BPMN symbols for data and message events, we would need to adapt these symbols and determine how to relate security concepts to them.
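As a rough sketch of how this mechanism could carry security attributes, the following Python fragment serializes a user task with a custom extension entry. Only the BPMN 2.0 model namespace and the standard extensionElements container are taken from the specification; the sec namespace, the securityRequirement element and its attributes are hypothetical placeholders for a future security extension.

```python
import xml.etree.ElementTree as ET

# Real BPMN 2.0 model namespace; "sec" is a hypothetical security-extension namespace.
BPMN_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"
SEC_NS = "http://example.org/bpmn/security"  # placeholder, not an existing standard
ET.register_namespace("bpmn", BPMN_NS)
ET.register_namespace("sec", SEC_NS)

definitions = ET.Element(f"{{{BPMN_NS}}}definitions")
process = ET.SubElement(definitions, f"{{{BPMN_NS}}}process", id="Process_1")
task = ET.SubElement(process, f"{{{BPMN_NS}}}userTask", id="Task_1", name="Approve payment")

# extensionElements is the standard BPMN container for third-party extensions.
ext = ET.SubElement(task, f"{{{BPMN_NS}}}extensionElements")
# Hypothetical extension entry: the task requires a digital signature and a given encryption level.
ET.SubElement(ext, f"{{{SEC_NS}}}securityRequirement",
              concept="Digital signature", encryptionLevel="AES-256")

print(ET.tostring(definitions, encoding="unicode"))
```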
In the case of UML, an integration of the security concepts is possible either by extending the UML metamodel or by defining UML stereotypes (see [START_REF]OMG: Unified Modeling Language (OMG UML): Superstructure version 2.4.1[END_REF]).
In particular, UML2 Activity models offer a process modeling language that allows modeling the control and object flows between different actions. The main element of an Activity diagram is an Activity. Its behavior is defined by a decomposition into different Actions. A UML2 Activity thus models a process, while the Actions that are included in the Activity can be used to model tasks.
Several security extensions to the UML already exist, for example SecureUML [START_REF] Lodderstedt | SecureUML: a UML-Based modeling language for model-driven security[END_REF]. However, this extension does not have any particular connection to process diagrams. In addition, several approaches exist to integrate various security aspects, such as role-based access control concepts [START_REF] Strembeck | Modeling Process-related RBAC Models with Extended UML Activity Models[END_REF][START_REF] Schefer-Wenzl | A UML Extension for Modeling Break-Glass Policies[END_REF][START_REF] Schefer | Modeling Support for Delegating Roles, Tasks, and Duties in a Process-Related RBAC Context[END_REF] or data integrity and data confidentiality [START_REF] Hoisl | Modeling support for confidentiality and integrity of object flows in activity models[END_REF], into UML Activity diagrams. However, in contrast to the approach presented in this paper, all other security visualizations only represent presentation options. They are suggested by the authors and not evaluated with respect to the cognitive effectiveness of the new symbols. Based on the integration options for BPMN, we derive the following suggestions for integrating the security symbol set into UML. Access control, Privacy, Risk, Role and User may be linked to a UML Action. Availability, Data confidentiality, Data integrity, Digital signature and Encryption can be assigned to UML ObjectNodes. Audit may be linked to EventActions or be integrated as a UML Comment.
Future Research. Several opportunities for future research emerge from our paper. As this initial study aimed at a preliminary design and modeling of security concepts, further research is necessary to fully develop security modeling extensions for business processes that can be interpreted by novices and experts, that are based on user preferences and that are easy to learn (see Section 2). For example, we plan to use the stereotype drawings as a basis to develop icons that can be integrated in process modeling languages. Therefore, we will investigate the icons in business processes, i.e., evaluate the icons in a specific context. Furthermore, an extensive survey could assess the end user preferences of security symbols and the interpretation of these symbols (in and out of a business process context). This could lead to a general approach to model security in business processes which might be adaptable to various business process modeling languages.
Conclusion
This paper presented our preliminary results of an experimental study on the design and modeling of security concepts in business processes. In our first study, we asked students to draw sketches of security concepts. Based on these drawings, we produced stereotype symbols considering the main idea the drawings represented, the frequency of occurrence and the uniqueness and dissimilarity between drawings. For evaluation, we interviewed experts from the area of process modeling and/or security. This evaluation showed that most symbols could be recognized based on the idea they represented. We received an even stronger acceptance for the one-to-one correspondence during the interviews when using a list of symbols and their concepts. In future studies, we aim to further analyze how our symbol set affects the cognitive complexity of corresponding models.
In addition, we will evaluate different symbol integration options into process modeling languages.
b) User-oriented: Icons should be selected based on user preferences and user evaluation.
c) Composition principle: The composition of icons should be easy to understand and learn.
d) Interpretation: The composition rules should be transferable to different models.
Fig. 1. Participant Rating of Difficulty of Drawing a Sketch
Fig. 2. Stereotype Drawings for Security Concepts
Table 1. Names and Definitions of Security Concepts in Experiment 1
Name | Definition
Table 2. Quantitative Evaluation Results for each Security Concept
 | Experiment 1 | Experiment 2 | Evaluation
Security Concept | No. of Drawings (out of 43) | No. of Categories | Correct Matchings (out of 6)
Access control | 42 | 15 | 3
Audit | 38 | 16 | 6
Availability | 38 | 15 | 6
Data confidentiality | 39 | 18 | 4
Data integrity | 41 | 9 | 5
Digital signature | 37 | 11 | 2
Encryption | 42 | 5 | 5
Privacy | 40 | 20 | 4
Risk | 40 | 14 | 6
Role | 38 | 15 | 4
User | 43 | 5 | 6
Acknowledgements The authors would like to thank the participants of the survey and the experts for their support and contributions.
| 40,436 | [
"1002455",
"1002456",
"1002457",
"1002458"
] | [
"300742",
"486555",
"152508",
"300742",
"152508"
] |
01474751 | en | [
"shs",
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01474751/file/978-3-642-41641-5_18_Chapter.pdf | Alexander Smirnov
Kurt Sandkuhl
email: [email protected]
Nikolay Shilov
Alexey Kashevnik
email: [email protected]
"Product-Process-Machine" System Modeling: Approach and Industrial Case Studies
Keywords: Product-process-machine modeling, enterprise engineering, knowledge model, product model, production model
Global trends in the worldwide economy lead to new challenges for manufacturing enterprises and to new requirements regarding modeling industrial organizations, like integration of real-time information from operations and information about neighboring enterprises in the value network. Consequently, there is a need to design new, knowledge-based workflows and supporting software systems to increase efficiency of designing and maintaining new product ranges, production planning and manufacturing. The paper presents an approach to a specific aspect of enterprise modeling, product-process-machine modeling, derived from two real-life case studies. It assumes ontology-based integration of various information sources and software systems and distinguishes four levels. The upper two levels (levels of product manager and product engineer) concentrate on customer requirements and product modeling. The lower two levels (levels of production engineer and production manager) focus on production process and production equipment modeling.
Introduction
Enterprise modeling has a long tradition in the area of industrial organization and production logistics, which is manifested in the field of enterprise engineering with its many techniques and developments (see, e.g. [START_REF] Martin | The great transition: using the seven disciplines of enterprise engineering to align people, technology, and strategy[END_REF][START_REF] Vernadat | Enterprise modelling and integration[END_REF]). Similar to business modeling or process modeling, the subjects of enterprise models in industrial organization are processes, organization structures, information flows and resources, but for manufacturing and logistics tasks, not for business processes. Global trends in the worldwide economy, like agile manufacturing, value networks or changeable production systems, lead to new challenges for manufacturing enterprises and to new requirements regarding modeling industrial organizations. Traditional enterprise models need to be enhanced with product knowledge, real-time information from operations, and information about neighboring enterprises in the value network. Consequently, there is a need to design new, knowledge-based workflows and supporting software systems to increase efficiency of designing and maintaining new product ranges, production planning and manufacturing.
However, implementation of such changes in large companies faces many difficulties because business processes cannot simply be stopped to switch between old and new workflows, old and new software systems have to be supported at the same time, the range of products, which are already in the markets, has to be maintained in parallel with new products, etc. Another problem is that it is difficult to estimate in advance which solutions and workflows would be efficient and convenient for the decision makers and employees. Hence, just following existing implementation guidelines is not advisable (confirmed, e.g., by Bokinge and Malmqvist in [START_REF] Bokinge | PLM implementation guidelines -relevance and application in practice: a discussion of findings from a retrospective case study[END_REF] for PLM), and the adaptation process to changes has to be iterative and interactive.
Within enterprise modeling for industrial organizations, this paper focuses on modeling an essential aspect of production systems, the product -process -machine (PPM) system. While a production system includes all elements required to design, produce, distribute, and maintain a physical product, the PPM system only includes design and production of a product, i.e. a PPM system can be considered a subsystem of a production system. The paper presents an approach to "Product-Process-Machine" system modeling, which focuses on integrating product, production and machine knowledge for iterative and interactive product lifecycle management (PLM). The main contributions of this paper are (a) from a conceptual perspective we propose to integrate information at the intersection of product, process and machine models essential for planning changes and assessing their effects, (b) from a technical perspective we propose to use a knowledge structure suitable for extension with domain specific components and (c) from an application perspective we show two industrial cases indicating flexibility and pertinence of the approach.
Based on related work in the field (section 2) and experiences from two industrial cases, the paper introduces the overall PPM system modeling approach (section 3) and shows its use for different roles in PLM (section 4). Section 5 discusses experiences and lessons learned, and summarizes the work.
Modeling PPM systems
Due to the trends and challenges identified in section 1, enterprises need to be quickly responsive to changes in their environment, which basically means the ability to identify (a) the potential options for reacting to changes and (b) the effects of acting in a certain way, without endangering current operations or customer relations. As a basis for this "agility", an integrated view on knowledge from the product, process and machine perspectives, together with mechanisms tailored to the needs of decision makers in the enterprise, is recommendable. In this context, an integrated view on knowledge should encompass:
- Dependencies and relationships within the PPM perspectives:
o Product perspective: products consist of components, fulfill customer requirements, are built according to design principles, use certain materials and apply specific design principles and rules.
o Process perspective: processes are cross-connected between each other by information, material and control flows.
o Machine perspective: machines consist of sub-systems offering capabilities depending on other sub-systems and production logistics components.
- Dependencies and relationships between the perspectives: products are developed and manufactured in processes; processes are performed using resources, like machines; machines are operated by roles, which in turn are responsible for certain processes and products or product parts.
The above are only some examples of information relevant for decision makers in manufacturing enterprises when evaluating options and their effects. These examples show that there are many relationships between different perspectives that have to be understood when developing and assessing options for reacting to environmental changes. This integrated view primarily requires information quantifying and qualifying the dependencies between the different perspectives, i.e. not all available information within the different perspectives has to be included. Details of product design or production operations as provided by CAD or CAX systems are usually not required. Most existing approaches in engineering for manufacturability, PLM and production line engineering strive for an information integration and functional integration covering all life cycle phases. Our focus is on integrating essential information for key roles in a change process (see section 3).
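A minimal sketch of how such cross-perspective dependencies could be recorded as typed relations is given below; the element and relation names are illustrative only and are not taken from the case studies.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Relation:
    source: str      # a product, process or machine element
    relation: str    # dependency type
    target: str

# Illustrative cross-perspective links of the kind listed above.
links = [
    Relation("ProductA", "consists_of", "ComponentA1"),               # product perspective
    Relation("AssemblyProcess", "controls_flow_to", "TestProcess"),   # process perspective
    Relation("MachineM1", "offers_capability", "Drilling"),           # machine perspective
    Relation("ProductA", "manufactured_in", "AssemblyProcess"),       # product <-> process
    Relation("AssemblyProcess", "performed_on", "MachineM1"),         # process <-> machine
]

def affected_by(element: str) -> list[Relation]:
    """Return all recorded dependencies touching an element, e.g., to assess change effects."""
    return [r for r in links if element in (r.source, r.target)]

print(affected_by("AssemblyProcess"))
```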
Important roles in an enterprise when it comes to evaluating options and effects are the product manager and the production engineer. They would be among the primary user groups of a PPM system model and knowledge base. The product manager is responsible for the short-term and mid-term development of a product in terms of translating customer requirements to product features (target setting), guidance and supervision of the design process from customer features to the way of implementing them as functions, control of variability indicated by new and existing product functions, interface to sales and marketing, etc. If changes in the environment occur, the product manager has to investigate options on the product level for how to act or react. In cooperation with the product manager, the production engineer is responsible for making the product "manufacturable", i.e. adjusting product design, material or features to what the machines in the production system can manufacture. Furthermore, the production engineer designs the overall production system by developing an appropriate composition of sub-systems and the flow of resources, materials and products.
Modeling in manufacturing and control has a long tradition, which does not only include the above mentioned perspectives, but also business aspects of manufacturing and the design, construction and operations part. This is manifested in reference models and frameworks, such as GERAM [START_REF] Williams | PERA and GERAM -enterprise reference architectures in enterprise integration[END_REF] and CIMOSA [START_REF] Kosanke | CIMOSA: enterprise engineering and integration[END_REF], and in a variety of standards for modeling the different perspectives (e.g. STEP [START_REF]ISO 10303 Industrial automation systems and integration -Product data representation and exchange[END_REF] and CMII [START_REF]CMII Standard for Product Configuration Management[END_REF]). Furthermore, large European research projects and networks, like ATHENA [START_REF] Ruggaber | ATHENA-Advanced Technologies for Interoperability of Heterogeneous Enterprise Networks and their Applications[END_REF], developed approaches for integrating enterprise system modeling and production system modeling into a joint framework. PLM systems address the complete lifecycle of products including production systems aspects and both, logistics in the supply chain and in production.
Integrated modeling of PPM systems has been investigated before. Many approaches exist which integrate two of the above mentioned perspectives. Examples are [START_REF] Osorio | A Modeling Approach towards an Extended Product Data Model for Sustainable Mass-Customized Products[END_REF] that combine product and machine perspective, [START_REF] Buchmann | Modelling Collaborative-Driven Supply Chains: The ComVantage Method[END_REF] where Buchmann and Karagiannis partly integrate process and product, and [START_REF] Mun | Information model of ship product structure supporting operation and maintenance after ship delivery[END_REF] with an approach for integrating product and machine perspective. Sandkuhl and Billig [START_REF] Sandkuhl | Ontology-based Artefact Management in Automotive Electronics[END_REF] propose the use of an enterprise ontology for product, process, resource and role modeling. However, this approach is limited to the application area of product families in automotive industries. Lillehagen and Krogstie [START_REF] Lillehagen | Active knowledge modeling of enterprises[END_REF] use active knowledge models for capturing dependencies between different knowledge perspectives. They use active knowledge models, which fulfill most of the requirements regarding the knowledge to be included, but their representation as visual models is not appropriate for reasoning and knowledge creation.
Approach
The proposed approach to PPM-System modeling originates from experiences in two industrial case studies. These cases served as a basis for developing the approach and were used for implementing it, with each of the cases implementing a different part. The first case is from a project with a global automation equipment manufacturer with more than 300 000 customers in 176 countries. The detailed description of this project is presented in [START_REF] Oroszi | Ontology-Driven Codification for Discrete and Modular Products[END_REF][START_REF] Smirnov | Knowledge Management for Complex Product Development: Framework and Implementation[END_REF]. The automation equipment manufacturer has a large number of products consisting of various components. The project aimed at product codification, i.e. structuring and coding the products and their components and defining rules for coding and configuration. This coding forms the basis for quickly adapting to new market requirements by easing the configuration of new products and the production system.
The second case is from manufacturing industries and resulted in a tool called DESO (Design of Structured Objects). This tool supports part of our approach and is capable of describing production processes and production facilities [START_REF] Golm | ProCon: Decision Support for Resource Management in a Global Production Network[END_REF][START_REF] Golm | Virtual Production Network Configuration: ACS-approach and tools[END_REF]. It distinguishes two planning levels: the central planning area and the decentralized planning area of distributed plants. Every production program project or planning activity is initiated by a request asking for manufacturing of a product in a predefined volume and timeframe. Starting from this information, the central planning staff has the task to design a production system capable of fulfilling the given requirements and consisting of different plants. The involved engineers prepare the requests for different plants using the production system design. The engineers harmonize and aggregate parallel incoming requests for different planning periods. The different plants offer their production modules as a contribution to the entire network system. On plant level, engineers have to analyze the manufacturing potentials concerning capacity and process capability of their facilities. Only in the plants is the specific and detailed knowledge and data regarding the selection and adaptation of the machining systems to the new production tasks available. The expertise for developing and engineering of the production modules is available only in the plants.
In the context of these two cases, we propose from a conceptual perspective to distinguish two levels when representing knowledge required for central and decentral planning. The approach is illustrated in fig. 1. The first level describes the structural knowledge, i.e. the "schema" used to represent what is required for the two planning levels. Knowledge represented by the second level is an instantiation of the first level knowledge; this knowledge holds object instances. The object instances have to provide information at the intersection of product, process and machine models essential for planning changes and assessing their effects. What information is essential can be judged from different perspectives. Our approach is to put the demand of the main target groups presented in section 2 into the focus, i.e. product manager and production engineer. Since the PPM perspectives cannot be seen as isolated (see section 2), the common and "interfacing" aspects are most important.
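As a minimal illustration of this two-level split (all class, attribute and instance names below are invented, not taken from the case), the structural level can be thought of as classes and the second level as objects instantiating them.

```python
from dataclasses import dataclass, field

@dataclass
class ProductFamily:          # first level: structural knowledge ("schema")
    name: str
    attributes: list[str]

@dataclass
class ProductInstance:        # second level: instantiation of the structural knowledge
    family: ProductFamily
    values: dict[str, str] = field(default_factory=dict)

    def is_valid(self) -> bool:
        # An instance may only set attributes that its family declares.
        return all(attr in self.family.attributes for attr in self.values)

valves = ProductFamily("ValveTerminal", ["size", "voltage", "mounting"])
order = ProductInstance(valves, {"size": "10", "mounting": "H-rail"})
print(order.is_valid())   # True
```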
The knowledge of the first level (structural knowledge) is described by a common ontology of the company's product families (classes). Ontologies provide a common way of knowledge representation for further processing. They have shown their usability for this type of tasks (e.g., [START_REF] Bradfield | A Metaknowledge Approach to Facilitate Knowledge Sharing in the Global Product Development Process[END_REF][START_REF] Chan | A framework of ontology-enabled product knowledge management[END_REF][START_REF] Patil | Ontology-based exchange of product data semantics[END_REF]).
The major ontology is in the center of the model. It is used to solve the problem of knowledge heterogeneity and enables interoperability between heterogeneous information sources due to provision of their common semantics and terminology [START_REF] Uschold | Ontologies: Principles, methods and applications[END_REF]. This ontology describes all the products (already under production and planned or future products), their features (existing and possible), production processes and production equipment. Population and application of this ontology are supported by a number of tools, described in detail in section 4. A knowledge map connects the ontology with different knowledge sources of the company.
At the level of product manager, the customer needs are analyzed. The parameters and terminology used by the customer often differ from those used by the product engineers. For this reason, a mapping between the customer needs and internal product requirements is required. Based on the requirements new products, product modifications or new production systems can be engineered for future production.
The approach distinguishes between virtual and real modules. Virtual modules are used for grouping technological operations from the production engineer's point of view. The real modules represent actual production equipment (machines) at the level of production manager.
The first case study addressed the requirements and support for product managers and product engineers. Production engineers and production managers are supported by the tool developed in the second case study. The first step to implementation of the approach is the creation of the ontology. This operation was done automatically based on existing documents and defined model-building rules. The resulting ontology consists of more than 1000 classes organized into a four-level taxonomy (fig. 2). Taxonomical relationships support inheritance, which makes it possible to define more common attributes for higher level classes and inherit them to lower level subclasses.
The same taxonomy is used in the company's PDM and ERP systems.
For each product family (class) a set of properties (attributes) is defined, and for each property the possible values and their codes are specified. The lexicon of properties is valid ontology-wide, i.e. the values can be reused for different families. Application of the common single ontology provides for the consistency of the product codes and makes it possible to reflect incorporated changes in the codes instantly.
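A rough sketch of such a codification scheme is shown below; the property order, the value codes and the resulting code format are hypothetical, since the actual coding rules of the company are not reproduced here.

```python
# Hypothetical codification sketch: each family property has coded values;
# a product code is the concatenation of the selected value codes.
FAMILY_PROPERTIES = {
    "ValveTerminal": [
        ("size",       {"Size 10": "10", "Size 14": "14"}),
        ("mounting",   {"H-rail mounting": "H", "Wall mounting": "W"}),
        ("connection", {"5 pin straight plug M12": "M12",
                        "Individual connecting cable": "C1"}),
    ],
}

def product_code(family: str, selection: dict[str, str]) -> str:
    parts = []
    for prop, value_codes in FAMILY_PROPERTIES[family]:
        parts.append(value_codes[selection[prop]])
    return "-".join(parts)

print(product_code("ValveTerminal",
                   {"size": "Size 10", "mounting": "H-rail mounting",
                    "connection": "Individual connecting cable"}))
# -> "10-H-C1"
```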
Complex Product Modeling
An experience from the first industrial case is that customers nowadays wish to buy complete customized solutions (referred to as "complex products") consisting of numerous products, rather than separate isolated products which have to be integrated into a solution. Whereas such complex products in the past were configured by experts based on the customer requirements, they nowadays to a large extent have to be configurable by the customers, which requires appropriate tool support and automation. However, inter-product relationships are very challenging. For example, the most common use case is the relationship between a main product and an accessory product. While both products are derived from different complex products, there are dependencies which assign a correct accessory to a configured main product. The dependencies are related to the products' individual properties and values. E.g., "1x3/2 or 2x3/2-way valve" cannot be installed on a valve terminal if its size is "Size 10, deviating flow rate 1". The depth of product-accessory relationships is not limited, so accessory-of-accessory combinations have also to be taken into account. These relationships can be very complex when it comes to defining the actual location and orientation of interfaces and mounting points between products. Complex product description consists of two major parts: product components and rules. Complex product components can be the following: simple products, other complex products, and application data. The set of characteristics of the complex product is a union of characteristics of its components. The rules of a complex product are the union of the rules of its components plus extra rules. Application data is an auxiliary component, which is used for introducing some additional characteristics and requirements to the product (for example, operating temperatures, certification, electrical connection, etc.). They affect availability and compatibility of certain components and features via defined rules. Some example rules are shown in Fig. 3. The figure presents a valve terminal (VTUG) and compatibility of electrical accessories option C1 (individual connecting cable) with mounting accessories (compatible only with H-rail mounting) and accessories for input-output link (not compatible with 5 pin straight plug M12). These rules are stored in the database and can later be used during configuration of the valve terminal for certain requirements.
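The VTUG rule described above can be approximated by a small compatibility check. This is a sketch only, with hypothetical attribute keys and rule encoding; the real rule base stored in the company database is considerably richer.

```python
# Sketch of the rule: electrical accessory option C1 (individual connecting cable)
# is compatible only with H-rail mounting and not with the 5 pin straight plug M12.
RULES = [
    {"if_option": "C1", "requires": ("mounting", "H-rail mounting")},
    {"if_option": "C1", "forbids":  ("io_link_accessory", "5 pin straight plug M12")},
]

def violations(configuration: dict[str, str]) -> list[str]:
    problems = []
    for rule in RULES:
        if rule["if_option"] not in configuration.values():
            continue  # rule is only relevant if the option is part of the configuration
        if "requires" in rule:
            key, value = rule["requires"]
            if configuration.get(key) != value:
                problems.append(f"option {rule['if_option']} requires {key} = {value}")
        if "forbids" in rule:
            key, value = rule["forbids"]
            if configuration.get(key) == value:
                problems.append(f"option {rule['if_option']} is incompatible with {value}")
    return problems

config = {"electrical_accessory": "C1", "mounting": "Wall mounting",
          "io_link_accessory": "5 pin straight plug M12"}
print(violations(config))   # reports both violations
```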
When the product model is finished, it is offered to the customers, i.e. the customers can configure required products and solutions themselves or with the assistance of product managers.
Production Facilities Modeling
The DESO system is a tool for management and structured storage of information in a knowledge domain, and for processing this information. Depending on the domain under consideration, the system can be extended by additional components for solving specific problems using the information stored in the DESO database. Up to now, components for enterprise production program planning, for production module design, and for industrial resources distribution and planning have been developed. Initially, the DESO system was developed in a project focusing on the early stages of planning investments including (a) derivation of production scenarios, (b) determination of investment cost, (c) assignment of locations and (d) estimation of variable product cost. The system aims at providing a knowledge platform enabling manufacturing enterprises to achieve reduced lead time and reduced cost based on customer requirements, and to improve customer satisfaction by means of improved availability, communication and quality of product information. It follows a decentralized method for intelligent knowledge and solutions access. The configuring process incorporates the following features: order-free selection, limits of resources, optimization (minimization or maximization), default values, and freedom to make changes in the Global Production Network model. The architecture of the system reflects the structure of the "Product-Process-Machine" system. It includes three main IT-Modules or software tools (fig. 4).
The hierarchy editor (fig. 5) is a tool for creating, editing and managing hierarchical relations between objects. These relations may show structures of objects, the sequence of operations for producing a part, possible accommodation alternatives, etc. The hierarchy editor supports inheriting subordinate objects, which allows creating complex hierarchical systems of objects in several stages and using templates that automate the user's work [START_REF] Golm | ProCon: Decision Support for Resource Management in a Global Production Network[END_REF]. DESO distinguishes between virtual and real modules. In accordance with the approach, the virtual modules are used for grouping technological operations from the production engineer's point of view (fig. 5). The real modules stand for the real equipment used for the actual production (fig. 6). The production engineer sets correspondences between the technological operations of virtual modules and the machines of real modules (fig. 7).
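A minimal sketch of the virtual/real module idea follows; operation, module and machine names are invented. The production engineer records which machine of a real module covers each technological operation grouped in a virtual module, and operations that are not yet covered become immediately visible.

```python
# Virtual modules group technological operations (production engineer's view);
# real modules list the actual machines (production manager's view).
virtual_modules = {"VM_Housing": ["milling", "drilling", "deburring"]}
real_modules = {"RM_Plant_A": ["CNC_mill_01", "drill_center_02"]}

# Correspondences set by the production engineer: operation -> machine (or None).
correspondence = {
    "milling":   "CNC_mill_01",
    "drilling":  "drill_center_02",
    "deburring": None,   # not yet covered by any machine
}

def uncovered_operations(vm: str) -> list[str]:
    """Operations of a virtual module without an assigned machine in a real module."""
    return [op for op in virtual_modules[vm] if not correspondence.get(op)]

print(uncovered_operations("VM_Housing"))   # ['deburring']
```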
Summary and Discussion
The paper presents an approach to product-process-machine modeling derived from two real life case studies. Compared to existing approaches in the field, which strive for the integration of all available information in the different PPM models, our focus is on integrating essential information for two key roles in an adaptation process to new market requirements, i.e. product manager and production engineer. The approach is based on the idea of integrating various information sources and software systems and distinguishes four levels. The upper two levels (levels of product manager and product engineer) concentrate on customer requirements and product modeling. The lower two levels (levels of production engineer and production manager) focus on production process and production equipment modeling.
As already mentioned, just following existing guidelines for implementing new workflows often is not possible for a number of reasons. Engineers and managers do not have sufficient information to decide in advance which solution would be more convenient and efficient for them. As a result, the implementation of new workflows is more a "trial-and-error" process.
This was more visible when working with product managers and product engineers in the first case study. In the second case study (aimed at the levels of production engineer and production manager) this issue was less obvious, because the "explorative" production planning could be done in parallel with the actual one. In this context, the modeling of the "product-process-machine" system proved to be an efficient solution.
The model built enabled automation of a number of processes previously done manually. The main advantages of the developed solution are [START_REF] Smirnov | Knowledge Management for Complex Product Development: Framework and Implementation[END_REF]:
- Automatically creating master data in SAP models;
- Automatically creating data for the configuration models and services;
- Automatically generating an ordering sheet for the print documentation (this ordering sheet was earlier generated manually with much effort);
- Automatically generating a product and service list which is needed in the complete process of implementing new products.
Based on the experiences from the two cases, our conclusion is that the aim of integrating information from the perspectives product, process and machine was achieved and that it supports identifying potential options in case of changes and the effects of these options. Examples are parallel incoming requests with respect to product features or their effects on re-scheduling and re-configuring the production facilities. Product managers and production engineers are supported in their tasks and responsibilities, both with respect to the central planning level and for the distributed plants.
Experience from the implementation of the mentioned projects shows that deep automation of the Product-Process-Machine system could be achieved if it is considered as one complex system. This requires consideration of all levels of the production system indicated in sec. 3. To facilitate implementation of such projects, first the structural information has to be collected, followed by identification of the relationships. This can be done only through long-term, time-consuming communication with experts from the company. As a consequence, we consider using typical structural models or recurring "patterns" for such models as promising and beneficial for such processes.
The limitation of the approach is that it focuses only on the "integrating links" between the different perspectives, i.e. we do not attempt to integrate all existing information regarding construction, design, operation of administrative aspects of products and production systems. Future work will include conceptual extensions and gathering more experiences from practical cases. Conceptual extension will be directed to support more roles in the area of industrial organization by implementing additional features. An example is the extension towards integration of suppliers or partners in the value network.
Fig. 1. Multi-level enterprise modelling concept
Fig. 2. Main window of the product hierarchy description tool [START_REF] Oroszi | Ontology-Driven Codification for Discrete and Modular Products[END_REF]
Fig. 3. Solution rules example [START_REF] Smirnov | Knowledge Management for Complex Product Development: Framework and Implementation[END_REF]
Fig. 4. Product-Process-Machine system modeling
Fig. 5. Hierarchy editor of DESO [START_REF] Golm | ProCon: Decision Support for Resource Management in a Global Production Network[END_REF]
Fig. 6. Real module description
Acknowledgements
This paper was developed in the context of the project COBIT sponsored by the Swedish Foundation for International Cooperation in Research and Higher Education. Some parts of the work have been sponsored by grants 12-07-00298 of the Russian Foundation for Basic Research, project # 213 of the research program #15 "Intelligent information technologies, mathematical modelling, system analysis and automation" of the Russian Academy of Sciences, and project 2.2 "Methodology development for building group information and recommendation systems" of the basic research program "Intelligent information technologies, system analysis and automation" of the Nanotechnology and Information technology Department of the Russian Academy of Sciences. | 28,837 | [
"992760",
"977635",
"992762",
"992761"
] | [
"471046",
"82150",
"452135",
"471046",
"471046"
] |
01474753 | en | [
"shs",
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01474753/file/978-3-642-41641-5_20_Chapter.pdf | Nuno Ferreira
email: [email protected]
Nuno Santos
email: [email protected]
Pedro Soares
email: [email protected]
Ricardo J Machado
Dragan Gašević
email: [email protected]
A Demonstration Case on Steps and Rules for the Transition from Process-level to Software Logical Architectures in Enterprise Models *
Keywords: Enterprise Information Systems, Enterprise Modeling, Requirement Elicitation, Model Transformation, Transition to Software Requirements
At the analysis phase of an enterprise information system development, the alignment between the process-level requirements (information systems) and the product-level requirements (software system) may not be properly achieved. Modeling the processes for the enterprise's business is often insufficient for implementation teams, and implementation requirements are often misaligned with business and stakeholder needs. In this paper, we demonstrate, through a real industrial case, how transition steps and rules are used to assure that process- and product-level requirements are aligned, within an approach that supports the creation of the intended requirements. The input for the transition steps is an information system logical architecture, and the output is a product-level (software) use case model.
Introduction
During an enterprise information system development process, assuring that functional requirements fully support the stakeholder's business needs may become a complex and inefficient task. Additionally, the "newfound" paradigm of IT solutions (e.g., Cloud Computing) typically results in more difficulties for defining a business model and for eliciting product-level functional requirements for any given project. If stakeholders experience such difficulties then software developers will have to deal with incomplete or incorrect requirements specifications, resulting in a real problem.
When there are insufficient inputs for a product-level approach to requirements elicitation, using a process-level perspective is a possible alternative, in order to create an information system logical architecture which is then used for eliciting software (product-level) requirements.
The first effort should be to specify the requirements of the overall system in the physical world; then to determine necessary assumptions about components of that physical world; and only then to derive a specification of the computational part of the control system [START_REF] Maibaum | On specifying systems that connect to the physical world[END_REF]. There are similar approaches that tackle the problem of aligning domain specific needs with software solutions. For instance, goal-oriented approaches are a way of doing so, but they don't encompass methods for deriving a logical representation of the intended system processes with the purpose of creating context for eliciting product-level requirements.
Our main problem, and the main topic this paper addresses, is assuring that product-level (IT-related) requirements are perfectly aligned with process-level requirements, and hence, are aligned with the organization's business requirements. The process-level requirements express the need for fulfilling the organization's business needs, and we detail how they are characterized within our approach further in section 2. These requirements may be supported by analysis models that are implementation-agnostic [START_REF] Yue | A Systematic Review of Transformation Approaches between User Requirements and Analysis Models[END_REF]. According to [START_REF] Yue | A Systematic Review of Transformation Approaches between User Requirements and Analysis Models[END_REF], the existing approaches for transforming requirements into an analysis model typically (i) require more than acceptable user effort to document requirements, (ii) are not efficient enough (e.g., they need more than one or two transformation steps), and (iii) are not able to (semi-)automatically generate a complete (i.e., static and dynamic aspects) and consistent analysis model, which is expected to model both the structure and behavior of the system at a logical level of abstraction.
In this paper we present a demonstration case in which we illustrate the transition between the process-level requirements of the intended system and the technological requirements that the same system must comply with. The transition is part of an approach that expresses the project goals and allows creating context to implement a software system. The entire approach is detailed in [START_REF] Ferreira | Transition from Process-to Product-level Perspective for Business Software[END_REF] as a V+V process, based on the composition of two V-shaped process models (inspired by the "Vee" process model [START_REF] Haskins | Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities[END_REF]). This way, we formalize the transition steps between perspectives that are required in order to align the requirements within the V+V process presented in [START_REF] Ferreira | Transition from Process-to Product-level Perspective for Business Software[END_REF]. The requirements are expressed through logical architectural models and stereotyped sequence diagrams [START_REF] Ferreira | Aligning Domain-related Models for Creating Context for Software Product Design[END_REF] in both a process- and a product-level perspective.
This paper is structured as follows: section 2 briefly presents the macro-process for information systems development based on both process- and product-level V-Model approaches; section 3 describes the transition steps and rules between both perspectives; in section 4 we present a real industrial case on the adoption of the transition steps between both V-Model executions; in section 5 we compare our approach with other related work; and in section 6 we present the conclusions.
A Macro-process Approach to Software Design
The development process of information systems can be regarded (in a simple way) as a cascaded lifecycle (i.e., a development process only initiates when the previous has ended), if we consider typical and simplified phases: analysis, design and implementation. Our approach encompasses two V-shaped process models, hereafter referred to as the V+V process. The main difference between our proposed approach and other information system development approaches is that it is applicable for eliciting product-level requirements in cases where there is no clearly defined context for eliciting product requirements within a given specific domain, by first eliciting process-level requirements and then evolving to the product-level requirements, using a transition approach that assures an alignment between both perspectives. Other approaches (described further in section 5) typically apply to a single perspective.
Fig. 1. V+V process framed in the development macro-process
The first V-Model (at process-level) is composed by Organizational Configurations (OC) [START_REF] Ferreira | Aligning Domain-related Models for Creating Context for Software Product Design[END_REF], A-type and B-type sequence diagrams [START_REF] Ferreira | Aligning Domain-related Models for Creating Context for Software Product Design[END_REF] (stereotyped sequence diagrams that use, respectively, use cases and architectural elements from the logical architecture), and (business) Use Case models (UCs) that are used to derive (and, in the case of B-type sequence diagrams, validate) a process-level logical architecture (i.e., the information system logical architecture). Use cases are mandatory to execute the 4SRS method. Since the term process has different meanings depending on the context, in our process-level approach we acknowledge that: (i) real-world activities of a software production process are the context for the problem under analysis; (ii) in relation to a software model context [START_REF] Conradi | Process Modelling Languages[END_REF], a software process is composed of a set of activities related to software development, maintenance, project management and quality assurance. For the scope definition of our work, and according to the previously exposed acknowledgments, we characterize our process-level perspective by: (i) being related to real-world activities (including business); (ii) when related to software, those activities encompass the typical software development lifecycle. Our process-level approach is characterized by using refinement (as one kind of functional decomposition) and integration of system models. Activities and their interface in a process can be structured or arranged in a process architecture [START_REF] Browning | Modeling impacts of process architecture on cost and schedule risk in product development[END_REF]. We frame the process-level V-Model (the first V-Model of Fig. 1) in the analysis phase, creating the context for product design (CPD). In its vertex, the process-level 4SRS (Four-Step Rule-Set) method execution (see [START_REF] Ferreira | Derivation of Process-Oriented Logical Architectures: An Elicitation Approach for Cloud Design[END_REF] for details about the process-level 4SRS method) assures the transition from the problem to the solution domain by transforming pro-
cess-level use cases into process-level logical architectural elements, and results in the creation of a validated architectural model which allows creating context for the product-level requirements elicitation and in the uncovering of hidden requirements for the intended product design.
The second V-Model (at product-level) is composed by the Mashed UCs model (a use case model composed by use cases derived from the transition steps but not yet being the final product use case model), A-type and B-type sequence diagrams, and (software) Use Case models (UCs) that are used to derive (and validate) a product-level logical architecture (i.e., the software system logical architecture). By product-level, we refer to the typical software requirements. The second execution of the V-Model is done at a product-level perspective and its vertex is supported by the product-level 4SRS method detailed in [START_REF] Machado | Transformation of UML Models for Service-Oriented Software Architectures[END_REF]. The product-level V-Model gathers information from the CPD in order to create a new model referred to as Mashed UCs. The creation of this model is detailed in the next section of this paper as transition steps and rules. The product-level V-Model (the second V-Model of Fig. 1) enables the transition from analysis to design through the execution of the product-level 4SRS method (see [START_REF] Machado | Transformation of UML Models for Service-Oriented Software Architectures[END_REF] for details about the product-level 4SRS method). The resulting architecture is then considered a design artifact that contributes to the creation of context for product implementation (CPI) as information required by implementation teams. Note that the design itself is not restricted to that artifact, since in our approach it also encompasses behavioral aspects and non-functional requirements representation.
Fig. 2. Derivation of software system logical architectures by transiting from information system logical architectures
The information regarding each of the models and their usage within the V+V process is detailed in [START_REF] Ferreira | Transition from Process-to Product-level Perspective for Business Software[END_REF] and, as such, it is not our purpose to thoroughly describe them here; we leave the reader with just a brief explanation of their meaning. The OC model is a high-level representation of the activities (interactions) that exist between the business-level entities of a given domain. A-type and B-type sequence diagrams are stereotyped sequence diagram representations used to describe interactions in the early analysis phase of system development. A-type sequence diagrams use actors and use cases. B-type sequence diagrams use actors and architectural elements depicted in either the information system or the software system logical architecture.
As depicted in Fig. 2, the result of the first V-Model (process-level) execution is the information system logical architecture. The architectural elements that compose this architecture are derived (by performing transition steps) into product-level use cases (Mashed UCs model). The result of the second V-Model (product-level) execution is the software system logical architecture.
Process-to Product-level Transition
As stated before, a first V-Model (at process-level) can be executed for business requirements elicitation purposes, followed by a second V-Model (at product-level) for defining the software functional requirements. The V+V process is useful for both stakeholders, organizations and technicians, but it is necessary to assure that both V-Models properly reflect the same system.
This section begins by presenting a set of transition steps whose execution is required to create the initial context for product-level requirements elicitation, referred to as Mashed UC model. The purpose of the transition steps is to assure an aligned transition between the process-and product-level perspectives in the V+V process, that is, the passage from the first V-Model to the second one. By defining these transition steps, we assure that product-level (software) use cases (UCpt's) are aligned with the architectural elements from the process-level (information system) logical architectural model (AEpc's); i.e., (software) use case diagrams are reflecting the needs of the information system logical architecture. The application of these transition rules to all the partitions of an information system logical architecture gives origin to a set of Mashed UC models (preliminary product-level use case models).
To allow the recursive execution of the 4SRS method [START_REF] Machado | Refinement of Software Architectures by Recursive Model Transformations[END_REF][START_REF] Azevedo | Refinement of Software Product Line Architectures through Recursive Modeling Techniques in On the Move to Meaningful Internet Systems[END_REF], the transition from the first V-Model to the second V-Model must be performed by a set of steps. The output of the first V-Model must be used as input for the second V-Model; i.e., we need to transform the information system logical architecture into product-level use case models. The transition steps to guide this mapping must be able to support the change from business to technology. These transition steps (TS) are structured as follows:
TS1 - Architecture Partitioning, where the process-level architectural elements (AEpc's) under analysis are classified by their computation execution context with the purpose of defining software boundaries to be transformed into product-level (software) use cases (UCpt's);
TS2 - Use Case Transformation, where AEpc's are transformed into software use cases and actors that represent the system under analysis, through a set of transition patterns that must be applied as rules;
TS3 - Original Actors Inclusion, where the original actors that were related to the use cases from which the architectural elements of the process-level perspective were derived (in the first V-Model execution) must be included in the representation;
TS4 - Redundancy Elimination, where the model is analyzed for redundancies;
TS5 - Gap Filling, where the necessary information of any requirement that is not yet represented is added, in the form of use cases.
During the execution of these transition steps, transition use cases (UCtr's) bridge the AEpc's and serve as a basis to elicit UCpt's. UCtr's also provide traceability between the process- and product-level perspectives using tags and annotations associated with each representation.
The identification of each partition is first made using the information that results from the packaging and aggregation efforts of the previous 4SRS execution (step 3 of the 4SRS method execution as described in [START_REF] Ferreira | Derivation of Process-Oriented Logical Architectures: An Elicitation Approach for Cloud Design[END_REF]). Nevertheless, this information is not enough to properly identify the partitions. Information gathered in OC's and in the process-level B-type sequence diagrams must also be taken into account. A partition is created by identifying all the relevant architectural elements that belong to all B-type sequence diagrams that correspond to a given organizational configuration scenario. By traversing the architectural elements that comply with the scenario definition (for each B-type sequence diagram and aligned with the packages and aggregations presented in the information system logical architecture), it is possible to properly identify the partitions that support the interactions depicted in the OC's.
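As a sketch of this traversal (diagram, scenario and element names are invented), a partition can be computed as the union of the architectural elements appearing in the B-type sequence diagrams that realize one organizational-configuration scenario.

```python
# Hypothetical B-type sequence diagrams: for each diagram, the process-level
# architectural elements (AEpc's) that appear in it.
b_type_diagrams = {
    "BSD_publish_service": {"AE1.i", "AE2.c", "AE3.d"},
    "BSD_consume_service": {"AE2.c", "AE4.i"},
    "BSD_supplier_upload": {"AE5.i", "AE6.c"},
}

# Organizational-configuration scenarios and the diagrams that realize them.
scenarios = {
    "OC_platform_execution": ["BSD_publish_service", "BSD_consume_service"],
    "OC_supplier_execution": ["BSD_supplier_upload", "BSD_publish_service"],
}

def partition(scenario: str) -> set[str]:
    """Union of all AEpc's appearing in the B-type diagrams of a scenario."""
    elements: set[str] = set()
    for diagram in scenarios[scenario]:
        elements |= b_type_diagrams[diagram]
    return elements

print(sorted(partition("OC_platform_execution")))
```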
The rules to support the execution of transition step 2 are applied in the form of transition rules and must be applied in accordance with the stereotype of the envisaged architectural element. There are three stereotyped architectural elements: d-type, which refer to generic decision repositories (data), representing decisions not supported computationally by the system under design; c-type, which encompass all the processes focusing on decision making that must be supported computationally by the system; and i-type, which refer to process' interfaces with users, software or other processes. The full descriptions of the three stereotypes are available in [START_REF] Ferreira | Derivation of Process-Oriented Logical Architectures: An Elicitation Approach for Cloud Design[END_REF].
The defined transition rules (TR), from the logical architectural diagram to the Mashed UC diagram are as follows:
TR1 -an inbound c-type or i-type AEpc is transformed into an UCtr of the same type (see Fig. 3). By inbound we mean that the element belongs to the partition under analysis;
Fig. 3. TR1 -transition rule 1
TR2 - an inbound d-type AEpc is transformed into an UCtr and an associated actor (see Fig. 4). This is due to the fact that d-type AEpc's correspond to decisions not supported computationally by the system under design and, as such, they require an actor to activate the depicted process. TR3 - an inbound AEpc, with a given name x, which also belongs to an outbound partition, is transformed into an UCtr of name x and an associated actor of name y, which is responsible for representing the outbound actions associated with UCtrx (see Fig. 5).
Fig. 5. TR3 -transition rule 3
The connections between the use cases and actors produced by the previous rules must be consistent with the existing associations between the AEpc's. The focus of this analysis is on the UCtr's, and it is addressed by the following two transition rules.
TR4 - an inbound d-type UCtr of name x with connections to a UCtr (of any type) of name y and to an actor z gives place to two UCtr's, x and y, maintaining the original types (see Fig. 6). Both are connected to the actor z. This means that all existing connections on the original d-type AEpc that were maintained during the execution of TR2 or TR3 are transferred to the created actor.
Fig. 6. TR4 -transition rule 4
The previous rule is executed after TR1, TR2 or TR3, so it only needs to set the required association between the UCtr's and the actors; that is to say, after all transformations are executed (TR1, TR2, and TR3), a set of rules is executed to establish the correct associations to the UCtr's.
TR5 - an inbound UCtr of name x with a connection to an outbound AEpc of name y (note that this is still an AEpc, since it was not transformed into any other concept in the previous transition rules) gives place to both an UCtr named x and to an actor named y (see Fig. 7). AEpc's that were not previously transformed are now transformed by the application of this TR5; this means that all AEpc's which exist outside the partition under analysis having connections with inbound UCtr's will be transformed. A special application of TR5 (described as TR5.1) can be found in Fig. 8, where we can see an UCtr with a connection to an outbound AEpc and another connection to an actor. In this case, TR5 is applied and the resulting UCtr is also connected to the original actor. Note that an UCtr belonging to multiple partitions is, first and foremost, an inbound UCtr due to being under analysis. The application of these transition steps and rules to all the partitions of an information system logical architecture gives origin to a set of Mashed UC models, as we illustrate in the next section using a real industrial case. In the remaining transition steps, the purpose is to promote completeness and reliability in the model. The model is complete after adding the associations that initially connected the actors (the ones who trigger the AEpc's) and the AEpc's, and then mapping those associations to the UCtr's. The model is reliable since the enforcement of the rules eliminates redundancy and assures that there are no gaps in the UCtr's associations and related actors. Only after the execution of all the transition steps do we consider the resulting model as containing product-level use cases (UCpt's).
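To make the rule set above more concrete, the following Python sketch applies TR1-TR3 and a simplified form of TR5 to an invented, already partitioned architecture. It is our own simplified reading, not code from the approach itself; TR4's transfer of connections and the subsequent steps TS3-TS5 are deliberately left out.

```python
from dataclasses import dataclass, field

@dataclass
class AEpc:
    name: str
    kind: str          # "c", "i" or "d" stereotype
    partitions: set    # partitions the element belongs to

@dataclass
class MashedModel:
    use_cases: dict = field(default_factory=dict)   # UCtr name -> stereotype
    actors: set = field(default_factory=set)
    links: set = field(default_factory=set)         # associations in the resulting model

def apply_transition_rules(elements, associations, partition):
    """Simplified sketch of TS2 for one partition: TR1-TR3 plus a reduced TR5."""
    model = MashedModel()
    inbound = {e.name: e for e in elements if partition in e.partitions}
    for e in inbound.values():
        model.use_cases[e.name] = e.kind              # every inbound AEpc yields a UCtr (TR1)
        if e.kind == "d" or len(e.partitions) > 1:    # d-type (TR2) or multi-partition (TR3)
            actor = f"{e.name} actor"                 # ...also gets an associated actor
            model.actors.add(actor)
            model.links.add((e.name, actor))
    for a, b in associations:                         # re-establish associations
        a_in, b_in = a in inbound, b in inbound
        if a_in and b_in:
            model.links.add((a, b))                   # kept between two UCtr's
        elif a_in or b_in:
            uc, outbound = (a, b) if a_in else (b, a)
            model.actors.add(outbound)                # outbound AEpc becomes an actor (TR5)
            model.links.add((uc, outbound))
    return model

elements = [
    AEpc("{AE1.c} Decide request", "c", {"P1"}),
    AEpc("{AE2.d} Approve manually", "d", {"P1"}),
    AEpc("{AE3.i} Publish result", "i", {"P1", "P2"}),
    AEpc("{AE4.i} External interface", "i", {"P2"}),
]
associations = [("{AE1.c} Decide request", "{AE4.i} External interface"),
                ("{AE1.c} Decide request", "{AE2.d} Approve manually")]

print(apply_transition_rules(elements, associations, "P1"))
```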
Applicability of the Transition Steps: The ISOFIN Project
The applicability of the transition steps and rules is demonstrated with a real project: the ISOFIN project (Interoperability in Financial Software) [START_REF] Dillon | [END_REF]. This project aimed to deliver a set of coordinating services in a centralized infrastructure, enacting the coordination of independent services relying on separate infrastructures. The resulting ISOFIN platform allows for the semantic and application interoperability between enrolled financial institutions (Banks, Insurance Companies and others).
The global ISOFIN architecture relies on two main service types: Interconnected Business Service (IBS) and Supplier Business Service (SBS). IBSs concern a set of functionalities that are exposed from the ISOFIN core platform to ISOFIN Customers. An IBS interconnects one or more SBS's and/or IBS's, exposing functionalities that relate directly to business needs. SBS's are a set of functionalities that are exposed from the ISOFIN Suppliers' production infrastructure. From the demonstration case, we first present in Fig. 9 a subset of the information system logical architecture that resulted from the execution of the 4SRS method at a process-level perspective [START_REF] Ferreira | Derivation of Process-Oriented Logical Architectures: An Elicitation Approach for Cloud Design[END_REF], i.e., from the execution of the first (process-level) V-Model. The information system logical architecture is composed of architectural elements that represent processes executed within the ISOFIN platform.
In Fig. 10, we depict the execution of TS1 on a subset of the entire information system logical architecture (composed of the same architectural elements as Fig. 9), i.e., the partitioning of the information system logical architecture by marking its architectural elements in partition areas, each concerning the context where services are executed. This resulted in two partitions: (i) the ISOFIN platform execution functionalities (in the area marked as P1); and (ii) the ISOFIN supplier execution functionalities (in the area marked as P2). The identification of the partitions enables the application of the transition steps, which in turn allows the second V-Model to advance the macro-process execution into the product implementation. Presenting the information that supported the decisions regarding the partitions in the case of the ISOFIN project is out of the scope of this paper.
Fig. 10. Partitioning of the information system logical architecture (TS1)
Fig. 11 shows the filtered and collapsed diagram that resulted from the P2 partition, which (in the demonstration case) is the partition under analysis. P2 includes the architectural elements that belong to both partitions and that must be considered when applying the transition rules. After being filtered and collapsed, the partitioned information system logical architecture is composed not only of the architectural elements that belong to the partition under analysis, but also of additional architectural elements belonging to other partitions that have an association (i.e., the dashed and/or straight lines between architectural elements) with architectural elements belonging to the partition under analysis (e.g., {AE3.6.i} Generate SBS Code belongs to P1, but possesses an association with {AE3.7.1.i} Remote SBS Publishing Interface, which belongs to both the P1 and P2 partitions). Keeping these outbound AEpc's ensures that information about outbound interfaces is preserved.
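The filtering and collapsing described above amounts to a simple reachability criterion: keep every element of the partition under analysis, plus any outside element directly associated with one of them. The sketch below is an illustration only, assuming elements and associations are available as plain Python collections; it is not taken from the tool support of the method.

def filter_partition(elements, associations, partition):
    """elements: dict mapping element name -> set of partition labels.
    associations: iterable of (name, name) pairs (dashed or straight lines).
    Returns the element names kept in the filtered, collapsed diagram."""
    inbound = {name for name, parts in elements.items() if partition in parts}
    outbound_kept = set()
    for a, b in associations:
        if a in inbound and b not in inbound:
            outbound_kept.add(b)   # outbound element associated with an inbound one
        if b in inbound and a not in inbound:
            outbound_kept.add(a)
    return inbound | outbound_kept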
The model is now ready to be transformed. It is during TS2 that the perspective is altered from process- to product-level. We now execute the transition rules presented in the previous section on our demonstration case. The transformation from the source model (the model in Fig. 11) to the target model (the model in Fig. 12), together with the TR applied in each case, is summarized in Table 1, which allows a better understanding of the application of the TRs and of the result of the transformation executed in TS2.
In Fig. 12, we depict the Mashed UC model resulting from the application of the transition rules in TS2. The resulting model, however, still presents redundancies and gaps, so it is necessary that the remaining TS are executed. It is possible to objectively recognize the effect of the application of some of the transition rules previously described. TR1 was the most applied transition rule; one example is the transformation of the AEpc named {AE2.1.c} Access Remote Catalogs into the UCtr named {U2.1.c} Access Remote Catalogs. One example of the application of TR2 is the transformation of the AEpc named {AE2.6.2.d} IBS Deployment Decisions into the UCtr named {U2.6.2.d} Define IBS Deployment and the actor named IBS Developer. TR3 was applied, for instance, in the transformation of the AEpc named {AE3.7.1.c} Define SBS Information into the UCtr named {U3.7.1.c} Define SBS Information and the actor named SBS Publisher. Finally, we can recognize the application of TR5.1 in the transformation of the AEpc named {AE3.6.i} Generate SBS Code into the actor named SBS Developer. All the other actors result from the execution of TS3. Note, for instance, that the actor SBS Developer results from the execution of TS4: the actor produced by the application of TR2 and TR5.1 and the original actor included in TS3 correspond to the same actor, which creates the need to eliminate the generated redundancy. The resulting model also allows the identification of potential gaps in use cases or actors (in the execution of TS5), but in this case that was not required.
After the execution of the transition steps, the Mashed UC model is used as input for the product-level 4SRS method execution [START_REF] Machado | Transformation of UML Models for Service-Oriented Software Architectures[END_REF] in order to derive the software system logical architecture for the ISOFIN platform. Such an architecture is the main output of the second (product-level) V-Model execution. We depict in Fig. 13 the entire software system logical architecture (the second architecture of Fig. 13) obtained after the execution of the V+V process (and, as represented in Fig. 2, derived by transforming product-level use cases into architectural elements using the product-level 4SRS method), having as input the information system logical architecture (the first architecture of Fig. 13) previously presented. The software system logical architecture is composed of architectural elements (depicted in the zoomed area) that represent functionalities executed in the platform. The alignment between the architectural elements in both perspectives is supported by the transition steps. It would be impossible to elicit requirements for a software system logical architecture as complex as the ISOFIN platform (the overall information system logical architecture was composed of nearly 80 architectural elements, and the resulting software system logical architecture of nearly 100) by adopting an approach that only considers the product-level perspective.
Comparison with Related Work
There are many approaches that allow deriving, at a given level, a view of the intended system to be developed. Our approach clearly starts at a process-level perspective and, by successive model derivation, creates the context for transforming the requirements expressed in an information system logical architecture into a product-level context for requirements specification. Other approaches provide similar results for a subset of our specification. For instance, KAOS, a goal-oriented requirement specification method, provides a specification that can be used in order to obtain architecture requirements [START_REF] Jani | Experience Report: Deriving architecture specifications from KAOS specifications[END_REF]. This approach uses two step-based methods, which output a formalization of the architecture requirements for each method, each of them providing a different view of the system. The organization's processes can be represented by an enterprise architecture [START_REF]TOGAF -The Open Group Architecture Framework[END_REF], and this representation can be extended by including in the architecture modeling concerns such as business goals and requirements [START_REF] Engelsman | Extending enterprise architecture modelling with business goals and requirements[END_REF]. However, such proposals do not intend to provide information for implementation teams during the software development process, but rather to provide stakeholders with strategic business requirements. The relation between what the stakeholders want and what implementation teams need requires an alignment approach to assure that there are no missing specifications in the transition between phases. An alignment approach also based on architectural models is presented in [START_REF] Strnadl | Aligning Business and It: The Process-Driven Architecture Model[END_REF].
In [START_REF] Dijkman | An algorithm to derive use cases from business processes[END_REF], a mapping technique and an algorithm are specified for mapping business process models, using UML activity diagrams and use cases, so that functional requirements specifications support the enterprise's business processes. In our approach, we use an information system logical architecture diagram instead of an activity diagram, since an information system logical architecture provides a fundamental organization of the development, creation, and distribution of processes in the relevant enterprise context [START_REF] Winter | Essential Layers, Artifacts, and Dependencies of Enterprise Architecture[END_REF]. Model-driven transformation approaches were already used for developing information systems in [START_REF] Iribarne | A Model Transformation Approach for Automatic Composition of COTS User Interfaces in Web-Based Information Systems[END_REF]. In the literature, model transformations are often related to the Model-Driven Architecture (MDA) [20] initiative from the OMG. MDA-based transformations are widely used but, as far as the authors know, the supported transformations do not regard a perspective transition, i.e., they are perspective agnostic, since they concern model transformations within a single perspective (typically the product-level one). For instance, [START_REF] Kaindl | Can We Transform Requirements into Architecture?[END_REF] describes MDA-based transformations from use cases and scenarios to components, but only in a product-level perspective. Even in cases where MDA transformations are executed using different source and target modeling languages (there is a plethora of such cases in the literature, like, for instance, [START_REF] Bauer | A Model-driven Approach to Designing Cross-Enterprise Business Processes[END_REF], where a source model in Business Process Modeling Notation (BPMN) is transformed into a target model in Business Process Execution Language (BPEL)), the transformation only regards a single perspective. The concerns that must be assured by transiting between perspectives are not dealt with by any of the previous works.
The existing approaches for model transformation attempt to provide automated or automatic execution. [START_REF] Yue | A Systematic Review of Transformation Approaches between User Requirements and Analysis Models[END_REF] provides a systematic review and evaluation of existing work on transforming requirements into an analysis model and, according to the authors, none of the compared approaches provides a practical automated solution. The transition steps and rules presented in this work intend to provide a certain level of automation to our approach and to improve the efficiency, validation, and traceability of the overall V+V process.
Conclusions
In this paper, we demonstrated through a real industrial case the transition from previously elicited process-level requirements to requirements in a product-level perspective, included in an elicitation approach based on two V-Models (the V+V process). We illustrated a demonstration case that elicits requirements for developing a complex interoperable platform, adopting a model-based approach to create context for business software implementation teams in situations where requirements cannot be properly elicited. Our approach is supported by a set of transition steps and transition rules that use an information system logical architecture as basis to output a product-level use case model. By adopting the approach, requirements for specifying software system functionalities are properly aligned with organizational information system requirements in a traceable way.
Our approach uses software engineering techniques, such as operational model transformations, to assure the execution of a process that begins with business needs and ends with a logical architectural representation. It is a common fact that domain-specific needs, namely business needs, are a fast-changing concern that must be tackled. Information system architectures must be modeled in a way that potentially changing domain-specific needs are local in the architecture representation of the intended service. Our proposed V+V process encompasses the derivation of a logical architecture representation that is aligned with domain-specific needs; any change made to those domain-specific needs is reflected in the logical architectural model, and the transformation is properly assured. Since the Mashed UC model is derived from a model transformation based on mappings, traceability between AEpc's and UCpt's is guaranteed, and thus any necessary change to product-level requirements due to a change in a given business need is easily identified alongside the models.
Fig. 4. TR2 -transition rule 2
Fig. 7. TR5 -transition rule 5
Fig. 8. TR5.1 -transition rule 5.1
Fig. 9. Subset of the ISOFIN information system logical architecture
Fig. 11. Filtered and collapsed architectural elements (TS1)
Fig. 12. Mashed UC model resulting from the transition from process- to product-level
Fig. 13. Subset of the ISOFIN software system logical architecture after the V+V process
Transition steps: TS1 -Architecture Partitioning; TS2 -Use Case Transformation; TS3 -Original Actors Inclusion; TS4 -Redundancy Elimination; TS5 -Gap Filling
Table 1. Executed transformations to the model
Process-level (transformation source) | TR | Product-level (transformation target)
AEpc {AE2.1.c} Access Remote Catalogs | TR1 | UCtr {U2.1.c} Access Remote Catalogs
AEpc {AE2.3.1.c} IBS Internal Structure Specification | TR1 | UCtr {U2.3.1.c} Define IBS Internal Structure
AEpc {AE2.6.1.i} Generate IBS Code | TR1 | UCtr {U2.6.1.i} Generate IBS Code
AEpc {AE2.6.2.d} IBS Deployment Decisions | TR2 | UCtr {U2.6.2.d} Define IBS Deployment; Actor IBS Developer
AEpc {AE2.6.2.i} IBS Deployment Process | TR1 | UCtr {U2.6.2.i} Deploy IBS
AEpc {AE2.7.i} Execute IBS Publication in Catalog | TR1 | UCtr {U2.7.i} Publish IBS Information
AEpc {AE2.7.c} IBS Publication Decisions | TR1 | UCtr {U2.7.c} Define IBS Information
AEpc {AE2.11.i} Execute Publishing Info Integration | TR1 | UCtr {U2.11.i} Integrate Publishing Information
AEpc {AE2.11.c} Global Publishing Integration Decisions | TR1 | UCtr {U2.11.c} Define Global Publishing Information
AEpc {AE3.6.i} Generate SBS Code | TR5.1 | Actor SBS Developer
AEpc {AE3.7.1.i} Remote SBS Publishing Interface | TR3 | UCtr {U3.7.1.i} Publish SBS Information; Actor SBS Developer
AEpc {AE3.7.1.c} Remote SBS Publishing Information | TR3 | UCtr {U3.7.1.c} Define SBS Information; Actor SBS Publisher
* This work has been supported by project ISOFIN (QREN 2010/013837), Fundos FEDER through Programa Operacional Fatores de Competitividade -COMPETE and by Fundos Nacionais through FCT -Fundação para a Ciência e Tecnologia within the Project Scope: FCOMP-01-0124-FEDER-022674. | 39,742 | ["1002459", "1002460", "1002461", "991637", "1002440"] | ["486560", "486561", "486561", "300854", "486532"] |
01474754 | en | ["shs", "info"] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01474754/file/978-3-642-41641-5_4_Chapter.pdf | John Krogstie
email: [email protected]
Evaluating Data Quality for Integration of Data Sources
Keywords: Product modelling, data integration, data quality
Data can be looked upon as a type of model (on the instance level), as illustrated e.g., in the product models in CAD and PLM systems. In this paper we use a specialization of a general framework for assessing the quality of models to evaluate the combined quality of data, for the purpose of investigating potential challenges when doing data integration across different sources. A practical application of the framework, assessing the potential quality of different data sources to be used together in a collaborative work environment, is used to illustrate the usefulness of the framework for this purpose. An assessment of specifically relevant knowledge sources (including the characteristics of the tools used for accessing the data) has been done. This has indicated opportunities, but also challenges, when trying to integrate data from different data sources typically used by people in different roles in an organization.
Introduction
Data quality has for a long time been an established area [START_REF] Batini | Data Quality: Concepts, Methodologies and Techniques Springer[END_REF]. A related area that was established in the nineties is quality of models (in particular quality of conceptual data models) [START_REF] Moody | Metrics for Evaluating the Quality of Entity Relationship Models[END_REF]. Traditionally, one has here looked at model quality for models on the M1 (type) level (to use the model levels found in e.g., MOF [START_REF] Booch | The Unified Modeling Language: User Guide Second Edition[END_REF]). On the other hand, it is clear especially in product and enterprise modeling that there are models on the instance level (M0), an area described as containing data (or objects in MOF terminology). Thus our hypothesis is that also data quality can be looked upon relative to more generic frameworks for quality of models. Integrating data sources is often incorrectly regarded as a technical problem that can be solved by the IT professionals themselves without involvement from the business side. This widespread misconception focuses only on the data syntax and ignores the semantic, pragmatic, social and other aspects of the data being integrated, which can lead to costly business problems further on.
Discussions on data quality must be looked upon in concert with discussions on data model (or schema) quality. Comprehensive and generic frameworks for evaluating modelling approaches have been developed [START_REF] Krogstie | Model-based development and evolution of information systems: A quality approach[END_REF][START_REF] Lillehagen | Active Knowledge Modeling of Enterprises[END_REF][START_REF] Nelson | A conceptual modeling quality framework[END_REF], but these can easily become too general for practical use. Inspired by [START_REF] Moody | Theorethical and practical issues in evaluating the quality of concep tual models: Current state and future directions[END_REF], suggesting the need for an inheritance hierarchy of quality frameworks, we have earlier provided a specialization of the generic SEQUAL framework [START_REF] Krogstie | Model-based development and evolution of information systems: A quality approach[END_REF] for the evaluation of the quality of data and their accompanying data models [START_REF] Krogstie | A Semiotic Framework for Data Quality[END_REF]. Whereas the framework used here is the same as in [START_REF] Krogstie | A Semiotic Framework for Data Quality[END_REF], the application of the framework for looking at quality aspects when integrating data sources is novel to this paper.
In section 2, we present the problem area and case study for data integration. Section 3 provides a brief overview of SEQUAL, specialized for data quality assessment. An example of action research on the case, using the framework in practice is provided in section 4. In section 5, we conclude, summarizing the experiences applying the SEQUAL specialization.
Description of the Problem Area of the Case-Study
LinkedDesign 1 is an ongoing international project that aims to boost the productivity of engineers by providing an integrated, holistic view on data, actors and processes across the full product lifecycle. To achieve this there is a need to evaluate the appropriateness of a selected number of existing data sources, to be used as a basis for the support of collaborative engineering in a Virtual Obeya [START_REF] Aasland | An analysis of the uses and properties of the Obeya[END_REF]. Obeya -Japanese for "large room" -is a term used in connection with project work in industry, where one attempted to collect all relevant information from the different disciplines involved in the same physical room. Realizing a Virtual Obeya means to provide a "room" with similar properties, which is not a physical room, but exists only on the net.
Fig. 1: Approach to knowledge access and creation in a Virtual Obeya [START_REF] Aasland | An analysis of the uses and properties of the Obeya[END_REF]
The selected data sources are of the types found particularly relevant in the use cases of the project. When we look at the quality of a data source (e.g., a PDM tool), we look
Introduction to Framework for Data Quality Assessment
SEQUAL [START_REF] Krogstie | Model-based development and evolution of information systems: A quality approach[END_REF] is a framework for assessing and understanding the quality of models and modelling languages. It has earlier been used for the evaluation of modelling and modelling languages from a large number of perspectives, including data [START_REF] Krogstie | Quality of Conceptual Data Models[END_REF], object [START_REF] Krogstie | Evaluating UML Using a Generic Quality Framework[END_REF], process [START_REF] Krogstie | Quality of Business Process Models[END_REF][START_REF] Recker | Ontology-versus pattern-based evaluation of process modeling language: A comparison[END_REF], enterprise [START_REF] Krogstie | Assessing Enterprise Modeling Languages using a Generic Quality Framework[END_REF], and goal-oriented [START_REF] Krogstie | Using Quality Function Deployment in Software Requirements Specification[END_REF][START_REF] Krogstie | Integrated Goal, Data and Process Modeling: From TEMPORA to Model-Generated Work-Places[END_REF] modelling. Quality has been defined referring to the correspondence between statements belonging to the following sets: G, the set of goals of the modelling task. L, the language extension. D, the domain, i.e., the set of all statements that can be stated about the situation.
Domains can be divided into two parts, exemplified by looking at a software requirements specification model: o Everything the computerized information system is supposed to do. This is termed the primary domain. o Constraints on the model because of earlier baselined models. This is termed the modelling context. In relation to data quality, the underlying data model is part of the modelling context. M, the externalized model itself. K, the explicit knowledge that the audience have of the domain. I, the social actor interpretation of the model T, the technical actor interpretation of the model
The main quality types are:
o Physical quality: The basic quality goal is that the externalized model M is available to the relevant actors (and not others) for interpretation (I and T).
o Empirical quality deals with comprehensibility of the model M.
o Syntactic quality is the correspondence between the model M and the language extension L.
o Semantic quality is the correspondence between the model M and the domain D. Perceived semantic quality is the similar correspondence between the social actor interpretation I of a model M and his or her current knowledge K of domain D.
o Pragmatic quality is the correspondence between the model M and the actor interpretation (I and T) of it. Thus, whereas empirical quality focuses on whether the model is understandable according to some objective measure that has been discovered empirically in e.g., cognitive science, we at this level look at to what extent the model has actually been understood.
o The goal defined for social quality is agreement among the actors' interpretations.
o The deontic quality of the model relates to whether all statements in the model M contribute to fulfilling the goals of modelling G, and whether all the goals of modelling G are addressed through the model M.
When we structure different aspects according to these levels, one will find that there might be conflicts between the levels (e.g., what is good for semantic quality might be bad for pragmatic quality and vice versa). This will also be the case when structuring aspects of data quality. We here discuss means within each quality level, positioning the areas that are specified by Batini et al. [START_REF] Batini | Data Quality: Concepts, Methodologies and Techniques Springer[END_REF], Price et al. [START_REF] Price | A Semiotic Information Quality Framework[END_REF][START_REF] Price | A semiotic information quality framework: Development and comparative analysis[END_REF] and Moody [START_REF] Moody | Metrics for Evaluating the Quality of Entity Relationship Models[END_REF]. Points from these previously described in [START_REF] Krogstie | A Semiotic Framework for Data Quality[END_REF] are emphasised using italic.
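When such a framework is used to compare several concrete data sources (as in Section 4), it can help to record findings per quality level in a small, uniform structure. The sketch below is merely one possible representation introduced here for illustration; it is not part of SEQUAL or of any tool mentioned in this paper.

from dataclasses import dataclass, field
from enum import Enum

class QualityLevel(Enum):
    PHYSICAL = 1
    EMPIRICAL = 2
    SYNTACTIC = 3
    SEMANTIC = 4
    PRAGMATIC = 5
    SOCIAL = 6
    DEONTIC = 7

@dataclass
class SourceAssessment:
    source: str                                   # e.g. "Excel", "KBEdesign", "Teamcenter"
    findings: dict = field(default_factory=dict)  # QualityLevel -> list of free-text findings

    def add(self, level: QualityLevel, finding: str) -> None:
        self.findings.setdefault(level, []).append(finding)

# Example use:
excel = SourceAssessment("Excel")
excel.add(QualityLevel.SYNTACTIC, "no explicit category information for exported data")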
Physical Data Quality
Aspects of persistence, data being accessible (Price) for all (accessibility (Batini)), currency (Batini) and security (Price) cover aspects on the physical level. This area can be looked upon relative to measures of persistence, currency, security and availability that apply also to all other types of models. Tool functionality in connection with physical quality is based on traditional database-functionality.
Empirical Data Quality
This is addressed by understandable (Price). Since data can be presented in many different ways, this relates to how the data is presented and visualized. How to best present different data depends on the underlying data-type. There are a number of generic guidelines within data visualization and related areas that can be applied. For computer-output specifically, many of the principles and tools used for improving human computer interfaces are relevant at the empirical level.
Syntactic Data Quality
From the generic SEQUAL framework we have that there is one main syntactic quality characteristic, syntactical correctness, meaning that all statements in the model are according to the syntax and vocabulary of the language. Syntax errors are of two kinds: Syntactic invalidity, in which words not part of the language are used. Syntactic incompleteness, in which one lacks constructs or information to obey the language's grammar.
Conforming to metadata (Price), including that the data conform to the expected data type (as described in the data model), is part of syntactic data quality. This will typically be related to syntactic invalidity, e.g., when the data is of the wrong data type.
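A simple, hedged illustration of such a check is given below: records are validated against a declared column-type mapping, and the two kinds of syntax errors mentioned above show up as missing fields (incompleteness) or wrongly typed values (invalidity). The schema and field names are invented for the example.

# Hypothetical declared metadata: column name -> expected Python type.
SCHEMA = {"part_id": int, "name": str, "weight_kg": float}

def syntactic_violations(record: dict) -> list[str]:
    """Return one message per field that does not conform to the declared metadata."""
    problems = []
    for column, expected in SCHEMA.items():
        if column not in record:
            problems.append(f"missing field '{column}' (syntactic incompleteness)")
        elif not isinstance(record[column], expected):
            problems.append(f"'{column}': expected {expected.__name__}, got "
                            f"{type(record[column]).__name__} (syntactic invalidity)")
    return problems

print(syntactic_violations({"part_id": "A-12", "name": "bracket"}))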
Semantic Data Quality
When looking upon semantic data quality relative to the primary domain of modelling, we have the following properties: Completeness in SEQUAL is covered by completeness (Batini), mapped completely (Price), and mapped unambiguously (Price). Validity in SEQUAL is covered by accuracy (Batini), both syntactic and semantic accuracy as Batini has defined it, the difference between these is rather to decide on how incorrect the data is, phenomena mapped correctly (Price), properties mapped correctly (Price) and properties mapped meaningfully (Price). Since the rules of representation are formally given, consistency (Batini)/mapped consistently (Price) is also related to validity. The use of meta-data such as the source of the data is an important mean to support validity of the data.
Properties related to the model context concern the adherence of the data to the data model. One would, for instance, expect that:
o All tables of the data model include tuples.
o The data is according to the constraints defined in the data model.
A sketch of how such expectations can be checked automatically is given below.
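The following is a minimal sketch against an SQLite database, covering the two expectations above (non-empty tables and referential-integrity constraints); it is only an illustration and assumes the data model has been loaded into SQLite with foreign keys declared.

import sqlite3

def model_context_checks(conn: sqlite3.Connection) -> list[str]:
    findings = []
    # Expectation 1: every table of the data model should contain tuples.
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for table in tables:
        count = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        if count == 0:
            findings.append(f"table '{table}' contains no tuples")
    # Expectation 2: the data respects the constraints defined in the data model
    # (here limited to referential integrity, via SQLite's built-in check).
    for violation in conn.execute("PRAGMA foreign_key_check"):
        findings.append(f"foreign key violation in table '{violation[0]}'")
    return findings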
The possibility of ensuring high semantic quality of the data is closely related to the semantic quality of the underlying data model. When looking upon semantic quality of the data model relative to the primary domain of modelling, we have the following properties: Completeness (Moody and Batini) (number of missing requirements) and integrity (Moody) (number of missing business rules).
Completeness (Moody) (number of superfluous requirements) and integrity (Moody) (number of incorrect business rules) relates to validity. The same applies to Batini's points on correctness with respect to model and correctness with respect to requirements.
Pragmatic Data Quality
Pragmatic quality relates to the comprehension of the model by the participants. Two aspects can be distinguished: That the interpretation by human stakeholders of the data is correct relative to what is meant to be expressed. That the tool interpretation is correct relative to what is meant to be expressed.
Starting with the human comprehension part, pragmatic quality on this level is the correspondence between the data and the audience's interpretation of it.
The main aspect at this level is interpretability (Batini), that data is suitably presented (Price) and data being flexibly presented (Price). Allowing access to relevant metadata (Price) is an important mean to achieve comprehension.
Social Data Quality
The goal defined for social quality is agreement. The area quality of information source (Batini) touches important mean for the social quality of the data, since a high quality source will increase the probability of agreement.
In some cases one needs to combine different data sources. This consists of combining the data models, and then transferring the data from the two sources into the new schema. Techniques for schema integration [START_REF] Francalanci | View integration: A survey of current developments[END_REF] are specifically relevant for this area.
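The first of these two steps often starts from simple attribute-correspondence proposals. The fragment below only illustrates the idea with name-similarity matching from the Python standard library; real schema-integration techniques go far beyond this, and the attribute names are invented.

from difflib import SequenceMatcher

def propose_matches(schema_a, schema_b, threshold=0.6):
    """Propose attribute correspondences between two schemas by name similarity."""
    def similarity(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()
    proposals = []
    for a in schema_a:
        best = max(schema_b, key=lambda b: similarity(a, b))
        score = similarity(a, best)
        if score >= threshold:
            proposals.append((a, best, round(score, 2)))
    return proposals

print(propose_matches(["PartNo", "Description", "Weight"],
                      ["part_number", "descr", "mass_kg"]))

Agreement among the stakeholders on such proposed correspondences is exactly where the social quality level comes into play.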
Deontic Data Quality
Aspects on this level relates to the goals of having the data in the first place. Aspects to decide volatility (Batini) and timeliness (Batini)/ timely (Price) needs to relate to the goal of having and distributing the data. The same is the case for type-sufficient (Price), the inclusion of all the types of information important for its use.
Application of the Framework
Looking at the sets of SEQUAL in the light of the case of the LinkedDesign project, we have the following: G: There are goals on two levels. The goal to be achieved when using the base tool and the goal of supporting collaborative work using data from this tool as one of several sources of knowledge to be combined in the Virtual Obeya. Our focus in the case is on this second goal. L: The language is the way data is encoded (e.g., using some standard), and the language for describing the data model/meta-model. M: Again on two levels, the data itself and the data-model. A: Actors i.e., the people in different roles using the models, with a specific focus on the collaborators in the use-cases of the project. K: The relevant explicit knowledge of the actors (A) in these roles T: Relates to the possibilities of the languages used to provide tool-support in handling the data (in the base tools, and in the Virtual Obeya) I: Relates to how easy it is for the different actors to interpret the data as it can be presented (in the base tool, and also in a Virtual Obeya) D: Domain: The domain can on a general level be looked upon relative to the concepts of an upper-level ontology. We focus on perspectives captured in the generic EKA -Enterprise Knowledge Architecture of Active Knowledge Models (AKM) since these have shown to be useful for context-based user interface development in other projects [19, chapter 5]. Thus we look on information on: Products, tasks, goals and rules (from standards to design rules), roles (including organizational structure and persons, and their capabilities) and tools. Based on this we can describe the quality of data more precisely for this case: Physical quality relates to:
o If the data is available in a physical format (and in different versions when relevant) so that it can be reused in the Virtual Obeya.
o Possibility to store relevant meta-data, e.g., on context.
o Availability of data for update or annotation/extension in the user interface.
o Availability of data from other tools.
o Data only available for those that should have access in case of there being security aspects.
Empirical quality is not directly relevant when evaluating the data sources per se.
Guidelines for this are relevant when we look at how data can be presented in tools (and in the Virtual Obeya).
Syntactic quality. Are the data represented in a way that follows the defined syntax, including standards for the area?
Semantic quality. Do the data sources potentially contain the expected type of data? Note that we here look at the possibility of representing the relevant types of data; obviously the level of completeness is dependent on what is represented in the concrete case. Tools might also have mechanisms for supporting the rapid development of complete models.
Pragmatic quality. Is data of such a type that it can be easily understood (or visualized in a way that can be easily understood) by the stakeholders?
Social quality. Is there agreement on the quality of the data among the stakeholders? Since different data comes from different tools and often needs to be integrated in the Virtual Obeya, agreement on the interpretation of data and on the quality of the data sources among the involved stakeholders can be important.
Deontic quality: Shall we, with the help of data from the data source, be able to achieve the goals of the project? Whereas the treatment at the other levels is meant to be generic, we have here the possibility to address the particular goals of the case explicitly. An important aspect of the case is to reduce waste in lean engineering processes [START_REF] Manyika | Using technology to improve workforce collaboration[END_REF]. In LinkedDesign, the use case partners and other project partners have prioritized the waste areas, and we have used this input to come up with the following list of waste to be avoided as the most important:
o Searching: time spent searching for information
o Under-communication: excessive or not enough time spent in communication
o Misunderstanding:
o Interpreting: time spent on interpreting communication or artifacts
o Waiting: delays due to reviews, approvals etc.
o Extra processing: excessive creation of artifacts or information
Evaluations of Relevant Tool-types
In this project, based on the needs of the use cases, we have focused on the following concrete tools and tool types in the assessment: Office automation: Excel; Computer-Aided Design (CAD): PDMS, Autocad, Catia V5; Knowledge-based Engineering (KBE): KBEdesign; Product Lifecycle Management (PLM/PDM): Teamcenter, Enovia; Enterprise Resource Planning (ERP): SAP ERP (R/3), MS Dynamics.
Not all the case organizations used all tool-types. We here focus on one of the organizations, which had a need for integration of Excel, KBE and PLM data. In the following we present the treatment of these areas.
Quality of Excel Data
Much data and information relevant for engineers and other business professionals is developed and resides in office automation tools like Excel [START_REF] Hermans | Analyzing and Visualizing Spreadsheets[END_REF]. Features supporting physical quality of Excel data: Data in tools like Excel can be saved both in the native format (.xls, .xlsx), in open standards such as .html, .xps, .dif, and .csv-files, and in open document formats (e.g., .ods), thus Excel-data can be made available in well-established forms following de jure and de facto standards, and thus can be easily made available for visualization and further use. One can also export e.g., PDF-versions of spreadsheets for making the information available without any possibility for interaction. Ensuring secure access to the data when exported is only manually enforced. Since the format is known, it is possible to save (updated) data from e.g., a Virtual Obeya, feeding this back to the original spreadsheet. Features supporting empirical quality of Excel data: Excel has several mechanisms for data-visualizations in graphs and diagrams to ensure nice-looking visualizations and these visualizations can be made available externally for other tools. The underlying rules and macros in the spreadsheets are typically not visualized. Features supporting syntactic quality of Excel data: Although the syntax of the storage-formats for Excel is well-defined, and standard data-types can be specified, there is no explicit information on the category of data (e.g., if the data represents product information). (Calculation) rules can be programmed, but these are undefined (in the formal meaning of the word), and the rules are in many export formats (such as .csv) not included. Features supporting semantic quality of Excel data: You can represent knowledge of all the listed categories in a spreadsheet, but since the data-model is implicit, it is not possible to know what kind of data you have available without support from the human developer of the data, or by having this represented in some other way. Features supporting pragmatic quality of Excel data: As indicated under empirical quality you can present data in spreadsheets visually, which can be shared (and you can potentially update the visualization directly), but as discussed under semantic quality, one do not have explicit knowledge of the category of the data represented. Features supporting social quality of Excel data: Since Excel (and other office automation tools) typically are personal tools (and adapted to personal needs, even in cases where a company-wide template has been the starting point), there is a large risk that there are inconsistencies between data (and the underlying data model) in different spreadsheets and between data found in spreadsheets and in other tools.
Features supporting deontic quality of Excel data:
Where much engineering knowledge is found in spreadsheets, it can be important to be able to include this in aggregated view in a Virtual Obeya. On the other hand, an explicit meta-model for the data matching a common ontology must typically be made in each case, thus it can be costly to ensure that all relevant data is available. As long as you keep to the same (implicit) meta-model for the data in the spreadsheet, you can update the data in the Virtual Obeya and have it transferred to the original data source. On the other hand, if you need to annotate the data with new categories it is not easy to update the spreadsheet without also updating the explicit meta-model without manual intervention.
Looking at the waste forms, we have the following:
o Searching: When Excel is used, there is often data in a number of different Excel sheets developed by a number of different people, and it is hard to know that one has the right version available.
o Under-communication: There is no explicit data model, thus the interpretation of data might be based on labels only, which can be interpreted differently by different persons. A number of (calculation) rules are typically captured in Excel sheets without being apparent.
o Misunderstanding: Due to potentially different interpretations of terms, misunderstandings are likely.
o Interpreting: Since the meaning of data is under-communicated, the time to interpret might be quite long.
o Waiting: If data must be manually transformed to another format to be usable, this might be an issue.
o Extra processing: Due to the versatility of tools like Excel, it is very easy to represent additional data and rules, even if they are not deemed useful by the organization.
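Because the Excel file formats are well established (see the physical quality discussion above), round-tripping values between a spreadsheet and an aggregated view such as the Virtual Obeya is technically straightforward, as the hedged sketch below illustrates using the openpyxl library; the file name, sheet name and cell address are purely illustrative.

from openpyxl import load_workbook

# Hypothetical spreadsheet used by an engineer for commission calculation.
wb = load_workbook("commission_rates.xlsx")
ws = wb["Rates"]

value = ws["B2"].value                   # read a value for display in the aggregated view
if isinstance(value, (int, float)):
    ws["B2"] = round(value * 1.05, 2)    # write an updated value back to the source
wb.save("commission_rates.xlsx")

Note that this only works as long as the implicit meta-model of the sheet is left untouched, which is exactly the limitation discussed under deontic quality.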
Quality of Data in KBE Tools
KBE -Knowledge based engineering has its roots in applying AI techniques (especially LISP-based) on engineering problems. In [START_REF] Rocca | Knowledge based engineering: Between AI and CAD. Review of a language based technology to support engineering design[END_REF], four approaches/programming languages are described: IDL, GDL, AML, and Intent!, all being extensions of LISP.
In LinkedDesign, one particular KBE tool is used; KBEdesign™. The KBeDesign™ is an engineering automation tool developed for Oil & Gas offshore platform engineering design and construction, built on top of a commercial Knowledge Based Engineering (KBE) application (Technosofts AML), being similar to the AML sketcher. In the use case, there are two important data sources: The representation of the engineering artifacts themselves, and the way the engineering rules are represented (in AML) as part of the code.
Features supporting physical quality of KBE data: Knowledge and data are hardcoded in the AML framework. There exist classes for exporting the AML code into XML (or similar); however, some information might be lost in this process. There are also classes for querying the AML code for the information you want, along with classes for automatic report creation. It is possible in KBEdesign to interact with most systems in principle. What is so far implemented are import/export routines to analysis software like GeniE and STAAD.Pro. Drawings can be exported to DWG (AutoCAD format). When the model is held within the tool, access rights can be controlled, but it is hard to enforce this when the model is exchanged with other tools. There is limited support for controlling versions, both in the rule-set and in the models developed based on the rule-set. As for the rules, these are part of the overall code, which can be versioned. Some rules related to model hierarchy and metadata (not geometry) for export to CAD and PLM systems are stored in a database and can be set up per project. Some capability to import data contained in the CAD-system PDMS is implemented.
Features supporting empirical quality of KBE data: Geometric data can be visualized as one instantiation of a model with certain input parameters. There are also multiple classes for different kind of finite element analysis of the model. Whereas the engineering artifact worked on is visualized in the work-tool, the AML-rules are not available for the engineer in a visual format. For those developing and maintaining the rule-base, these are represented in a code-format (i.e., structured text).
Features supporting syntactic quality of KBE data: In AML, datatypes are not defined. Programs might run even with syntax errors in formulas, as there are both default values and other mechanisms in place to ensure that systems can run with blank values. The data is stored in a proprietary XML format, although, as indicated, it is also possible to make the model available using CAD standards, but then only the information necessary for visualization is available. Options are available within AML for import and export to industry-standard file formats, including IGES, STEP, STL, and DXF. New STEP standards going beyond the current standards for CAD tools that are interesting in connection with KBE codification are: the standard for construction history that is used to transfer the procedure used to construct the shape, referred to as ISO 10303-55; standards for parameterization and constraints for explicit geometric product models, providing an indication of what is permissible to change, referred to as ISO 10303-108 for single parts and ISO 10303-109 for assemblies; and the standard for what is known as 'design features', referred to as ISO 10303-111.
Features supporting semantic quality of KBE data:
The focus in KBEDesign is the representation of product data. AML is used to represent engineering rules. There are also possibilities in the core technology to represent process information related to the products. Note that an OO-framework has some well-known limitations in representing rules, e.g., for representing rules spanning many classes [START_REF] Høydalsvik | On the purpose of object-oriented analysis[END_REF]. The AML framework also supports dependency tracking, so that if a value or rule is updated, everything that uses that value or rule is also changed. Dynamic instantiation is supported, providing potential short turnaround for changes to the rule-set.
Features supporting pragmatic quality of KBE data:
The experiences from the use case indicate that it is very important to be able to provide rule visualizations, and that these can be annotated with meta-data and additional information. Standard classes in the AML framework allow you to query AML models, generating reports. Data can be visualized any way you want in AML, and if the required visualization is not part of the standard AML framework, then it can be created. It is practical to have everything working in the same environment, but it can be difficult for non-experienced users to find the right functionality. Features supporting social quality of KBE data: KBE is a particular solution for engineering knowledge, and experiences from the use case indicate that there is not always agreement on the rules represented. The KBEDesign tool is used for developing oil-platform-designs, but for other engineering and design tasks, other tools are used. Export to tools used company-wide such as PDMS is important to establish agreement, and thus, social quality of the models.
Features supporting deontic quality of KBE data: An important aspect with object-oriented, rule-based approaches is the potential for supporting reuse across domains. Summarizing relative to factors for waste reduction in lean engineering:
o Searching: Representing all rules in the KBE-system is useful in this regard, but they are to a limited degree structured, e.g., relative to how rules influence each other, which rules are there to follow a certain standard etc.
o Under-communication: Since AML-rules are accessible as code only, it can be hard to understand why different design decisions are enforced.
o Misunderstanding: Can result from not having access to the rules directly.
o Interpreting: Additional time might be needed for interpretation for the above mentioned reason.
o Waiting: If not getting support quickly for updating rules (if necessary), this can be an issue. The use of dynamic instantiation described under semantic quality can alleviate this; on the other hand, one needs people with specific coding skills to add or change rules.
o Extra processing: Might need to represent rules differently to be useful in new situations. On the other hand, if using the abstraction mechanism in a good way, this can be addressed.
Quality of Data in PDM/PLM Tools
Product lifecycle management (PLM) is the process of managing the entire lifecycle of a product from its conception, through design and manufacture, to service and disposal. Whereas CAD systems focus primarily on early phases of design, PLM attempts to take a full lifecycle view. PLM intends to integrate people, data, processes and business systems and provides a product information backbone for companies and their extended enterprise. There are a number of different PDM/PLM-tools. Some tools that were previously CAD tools like Catia have extended the functionality to become PLM-tools. The following is particularly based on literature review and interview with representatives for Teamcenter, which according to Gartner group is the market leader internationally for PLM tools. There is typically a core group of people creating information for such tools, and a vast group of people consuming this information.
Features supporting physical quality of PDM/PLM data: Core product data is held in an internal database supported by a common data model. The data can be under revision/version and security (access) control. Some data related to the product might be held in external files e.g., office documents. There can also be integration to CAD tools and ERP-tools (both ways). For Teamcenter for instance, there is CADintegration (with Autocad, Autodesk, SolidWorks, Unigraphics, I-deas NX, Solid Edge, Catia V5, Pro Engineer) and ERP-integration (bi-directional with SAP ERP (R/3), MS Dynamics and Oracle). In addition to access on workstation, it is also possible to access the data on mobile platforms such as iPAD. Data can also be shared with e.g., suppliers supporting secure data access across an extended enterprise. This kind of functionality should also make it easier to support the access of data in the PLM-system from outside (e.g., also from a Virtual Obeya). Teamcenter have multi-site functionality, but it does not work well to work towards the same database over long distances.
Features supporting empirical quality of PDM/PLM data: PLM tools typically support 2D and 3D visualization of the products within the tool. These are typically made in CAD tools. CAD tools typically have good functionality to visualize the product data in 3D. Because of its economic importance, CAD has been a major driving force for research in computational geometry and computer graphics and thus for algorithms for visualizations that one typically focus on as means under the area of empirical quality.
Features supporting syntactic quality of PDM/PLM data: Storage of PLM-data is typically done according to existing standards. PLM XML is supported in Teamcenter, in addition to the formats needed for export to CAD and ERP tools mentioned under physical quality.
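Since PLM data can typically be exported in XML-based formats such as PLM XML, downstream aggregation (for example in a Virtual Obeya) can start from plain XML processing. The sketch below uses only the Python standard library; the file, tag and attribute names are illustrative and do not reflect the actual PLM XML schema.

import xml.etree.ElementTree as ET

tree = ET.parse("structure_export.xml")   # hypothetical export from the PLM system
root = tree.getroot()

# List elements that look like parts, tolerating namespace prefixes in the tag names.
for element in root.iter():
    if element.tag.split("}")[-1] == "Part":
        print(element.get("id"), element.get("name"))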
Features supporting semantic quality of PDM/PLM data As the name implies, the main data kept in PLM systems is product data, including data relevant for the process the product undergoes through its lifecycle. Schedule information and workflow modeling is supported in tools such as Teamcenter, but similar to CAD tools, the function of the parts in the product is not represented in most tools. Compliance management modules can support representation of regulations (as a sort of rules).
Features supporting pragmatic quality of PDM/PLM data: Relevant context information can be added to the product description supporting understanding. PLM systems have become very complex and as such more difficult to use and comprehend. The size of the products (number of parts) has also increased over the years. Whereas a jet engine in the 1960s had 3000 parts, in 2010 it might have 200000 parts.
Reporting is traditionally in Excel, but newer tools can support running reports on the 3D-model, presenting the results as annotation to this. The Teamcenter tool has been reported to be hard to learn if you are not an engineer.
Features supporting social quality of PDM/PLM data: PLM systems are systems for integrating the enterprise. When implementing PLM-systems one needs to agree on the system set-up, data-coding etc. across the organization. Thus when these kinds of systems are successfully implemented, one can expect there to be high agreement on the data found in the tool in the organization. Note that a similar issue that is found in ERP systems, the so-called work and benefit disparity might occur (this problem was originally described in connection to so-called groupware systems [START_REF] Grudin | Groupware and social dynamics: eight challenges for developers[END_REF]). Company-wide application often require additional work from individuals who do not perceive a direct benefit from the use of the application. When e.g., creating new parts, a large number of attributes need to be added, thus it takes longer time to enter productinformation in the beginning.
Features supporting deontic quality of PDM/PLM data: Looking at the waste forms, we conclude the following:
o Searching: Large models and a lot of extra data might make it difficult to get an overview and find all the (and only the) relevant information. On the other hand, since one has a common data model, it should be easier to find all the data relevant for a given product.
o Under-communication: Since extra data has to be added up front for use later in the product life cycle, there is a danger that not all necessary data is added (or that it is added with poor quality), which can lead to the next two issues.
o Misunderstanding: Can be a result of under-communication.
o Interpreting: When engineers and other groups need to communicate, one should also be aware of possible misunderstandings, given that it seems to be hard to learn these tools if you are not an engineer. Also, given that only a few people are actually adding data, a lot of people need to interpret these models without actively producing them.
o Waiting: It can be a challenge when a change is done for this to propagate also to e.g., ERP systems and supplier systems. For some types of data this propagation is automatic.
o Extra processing: Necessary to add data up front. It can be a challenge, when you need to perform changes, to have the data produced in earlier phases updated.
Conclusion
Above, we have seen three assessments done using the specialization of SEQUAL for data quality on specifically relevant knowledge sources to be used in a Virtual Obeya. This has highlighted opportunities, but also challenges, when trying to integrate data from different knowledge sources typically used by people in different roles in an organization in a common user interface supporting collaboration. In particular, it highlights how different tools have a varying degree of explicit meta-model (data model), and that this is available to a varying degree. E.g., in many export formats one loses some of the important information on product data. Even when different tools support e.g., process data, it is often process data at different granularity. The tools alone all have challenges relative to waste in lean engineering. In a Virtual Obeya environment one would explicitly want to combine data from different sources in a context-driven manner to address these reasons for waste. Depending on the concrete data sources to combine, this indicates that it is often a partly manual job to prepare for such matching. Also, the different level of agreement on data from different sources (social quality) can influence the use of schema and object matching techniques in practice.
As with the quality of a BPM [START_REF] Krogstie | Quality of Business Process Models[END_REF] and data models [START_REF] Krogstie | Quality of Conceptual Data Models[END_REF], we see some benefit both for SEQUAL and for a framework for data quality by performing this kind of exercise: Existing work on data and information quality, as summarized in [START_REF] Batini | Data Quality: Concepts, Methodologies and Techniques Springer[END_REF][START_REF] Price | A Semiotic Information Quality Framework[END_REF][START_REF] Price | A semiotic information quality framework: Development and comparative analysis[END_REF], can be positioned within the generic SEQUAL framework as described in Section 3. These existing overviews are weak on explicitly addressing areas such as empirical and social quality, as also described in Section 3.2 and Section 3.6. Guidelines and means for empirical quality can build upon work in data and information visualization. The work by Batini and Price et al., on the other hand, enriches the areas of in particular semantic and pragmatic data quality, as described in Section 3.4 and Section 3.5. The framework, especially the differentiation between the different quality levels, has been found useful in the case on which we have reported in Section 4, since it highlights potential challenges of matching data from different sources as discussed above. On the other hand, to be useful, an additional level of specialization of the quality framework was needed. Future work will be to devise more concrete guidelines and metrics and evaluate the adaptation and use of these empirically in other cases, especially how to perform trade-offs between the different data quality types. Some generic guidelines for this exist in SEQUAL [START_REF] Krogstie | Model-based development and evolution of information systems: A quality approach[END_REF], which might be specialised for data quality and quality of conceptual data models. We will also look at newer work [START_REF] Batini | Methodologies for data quality assessment and improvement[END_REF][START_REF] Jiang | Measuring and Comparing Effectiveness of Data Quality Techniques[END_REF] in the area in addition to the work we have mapped so far. The rapid changes to data, compared to conceptual models, indicate that guidelines for achieving and keeping model quality might need to be further adapted to be useful when achieving and keeping data quality. We will also look more upon the use of the framework when integrating data from less technical areas such as CRM and ERP data.
- Searching: time spent searching for information
- Under-communication: excessive or not enough time spent in communication
- Misunderstanding
- Interpreting: time spent on interpreting communication or artifacts
- Waiting: delays due to reviews, approvals etc.
- Extra processing: excessive creation of artifacts or information

4.1 Evaluations of Relevant Tool-types
In this project, based on the needs of the use cases, we have focused on the following concrete tools and tool types in the assessment:
- Office automation: Excel
- Computer-Aided Design (CAD): PDMS, Autocad, Catia V5
- Knowledge-based Engineering (KBE): KBEdesign
- Product Lifecycle Management (PLM/PDM): Teamcenter, Enovia
- Enterprise Resource Planning (ERP): SAP ERP (R/3), MS Dynamics
For standards for parameterization and constraints for explicit geometric product models, providing an indication of what is permissible to change, refer to ISO 10303-108 for single parts and ISO 10303-109 for assemblies. For the standard for what is known as 'design features', refer to ISO 10303-111.
Acknowledgements
The research leading to these results was done in the LinkedDesign project that has received funding from the European Union Seventh Framework Programme ([FP7/2007-2013]) under grant agreement n°284613 | 41,310 | [
"977578"
] | [
"50794"
] |
01474758 | en | [
"shs",
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01474758/file/978-3-642-41641-5_8_Chapter.pdf | Stefan Hofer
Modeling the Transformation of Application Landscapes
Keywords: application landscape, enterprise architecture, transformation, migration, co-evolution
Many of today's IT projects transform application landscapes. Transformation is a challenging task that has a significant effect on an organization's business processes and the organization itself. Although models are necessary to accomplish this task, there are no specialized modeling approaches for transformation. We describe what such a specialized modeling approach should be capable of. This will allow the adaption of existing approaches and thus support the transformation of application landscapes.
Introduction
Enterprises use models of application landscapes for many purposes. One of them is to support the transformation of application landscapes like in the following example:
An insurance company introduces a new customer relationship management (CRM) system which replaces a legacy application and provides new possibilities for handling documents. In addition, a back-office system replaces the previous custom-made software for commission calculation. Millions of data records need to be migrated to the new systems.
Several applications (e.g. offer calculation, electronic application form, and bookkeeping) need to be adapted so they can exchange data with the new systems.
Releasing all changes in a big bang approach seems too risky, so the company opts for an incremental transition in two steps:
1. Transition to an intermediate to-be landscape.
2. Transition to the final to-be landscape.
The transformation affects the users because they have to adapt their work processes. For example, the offer calculation process has to be altered to avoid data duplication. Instead of entering the customer's data into the offer calculator, it has to be entered into the CRM system and then imported by the calculator.
Transformation may be triggered by business needs, legislation and strategic technological decisions. An organization, its business processes and its application landscape are interwoven, and changes to one of them are likely to affect the others. This effect is called co-evolution (see [START_REF] Mitleton-Kelly | Co-Evolution Of Diverse Elements Interacting Within A Social Ecosystem[END_REF]). Thus, transformation requires knowledge about applications and their dependencies, knowledge on how applications support business processes, and knowledge on how users work with applications. To acquire that knowledge, a vast amount of information has to be gathered and analyzed; information that changes frequently and cannot be gained by measuring and automated analysis only. Observations, interviews and assumptions add to the body of acquired knowledge. Models are a means to record this knowledge and make it accessible.
A large number of modeling tools and notations support modeling of application landscapes (Buckl et al., for example, list 41 such tools in [START_REF] Buckl | Enterprise Architecture Management Pattern Catalog[END_REF]). We claim that although they have been used to help transform IT landscapes, they are not adjusted well enough for that purpose. For example, they give little guidance on how to create models from a huge amount of possibly contradicting information. Furthermore, it is often neglected in modeling that technical aspects and domain aspects of an application landscape need to be analyzed in conjunction with one another.
Our claim will be elaborated by presenting these contributions:
We define what transformation of application landscapes means and how models are used in that area.
We present six requirements that should be fulfilled by modeling approaches that are used to support application landscape transformation.
We will show how well existing modeling approaches fulfill these requirements.
In conclusion, the reader will see that modeling approaches are currently not well suited for application landscape transformation and that adapted approaches would be useful.
The contributions presented in this paper are the result of ongoing research.
Five real-world projects were analyzed to identify what characterizes transformation and to derive requirements for modeling approaches. The projects were conducted by companies from different domains, namely banking, logistics, and wholesale. The author was actively involved in three of those projects. Literature ( [START_REF] Engels | Quasar Enterprise. dpunkt[END_REF] and [START_REF] Matthes | Softwarekarten zur Visualisierung von Anwendungslandschaften und ihren Aspekten[END_REF]) served as additional input for the concept of transformation.
All results presented in this paper were evaluated by seven experts that helped transform application landscapes as project manager, IT department manager, consultant, or software architect. None of the experts are affiliated with the author's employer or research group.
Transformation of Application Landscapes
The term application landscape refers to the entirety of the business applications and their relationships to other elements, e.g. business processes in a company [4, p. 12]. More precisely, we will use this term only if the applications are used in the context of human work and some of the applications are directly used by people. As a counterexample, a group of applications that jointly and fully automatically carries out business processes is not considered an application landscape in the context of this paper.
Transformation is inevitable in the life cycle of application landscapes. We use the term to describe substantial, business-critical changes in an application landscape that have a significant impact on an organization's business processes and on the people that work with the applications. Hence, transformation is a form of co-evolution, which means that "the evolution of one domain is partially dependent on the evolution of the other [...], or that one domain changes in the context of the other" [START_REF] Mitleton-Kelly | Co-Evolution Of Diverse Elements Interacting Within A Social Ecosystem[END_REF]. In this paper, we focus on the impact of transformation on business processes and on the people that execute them. This view was influenced by the concept of application orientation, which is one of the main pillars of a software development approach called the Tools & Materials approach (see [START_REF] Züllighoven | Object-Oriented Construction Handbook[END_REF]): "Application orientation focuses on software development, with respect to the future users, their tasks, and the business processes in which they are involved." [23, p. 4]. Accordingly, we aspire towards an application-oriented approach to the transformation of application landscapes. Yet, there are other aspects of application landscapes that are of importance for transformation, such as costs, maintainability, security, and scalability. They will, however, not be covered by this paper.
Transformation is usually carried out as a project that includes several types of activities:
Collect information: Technical aspects (e.g. dependencies) and business aspects (e.g. supported business processes) of the application landscape have to be gathered. This is either done before or during modeling.
Evaluate and decide: The goals of the transformation have to be defined and the current state of the application landscape has to be evaluated. For all affected applications, the necessary changes need to be identified.
Plan: Planning activities for transformation are comparable with regular IT projects. A lot of tasks have to be aligned in order to fit into the overall roadmap.
Involve the organization: Lots of communication is required to explain the goals, decisions and deadlines to all those affected by the transformation.
Execute: The transformation is executed. The operations that constitute a transformation are:
- bringing an application into service
- changing an application
- placing an application out of operation
More accurately, these operations are relevant to transformation projects only if they affect several applications (e.g. by adding a new dependency to the application landscape). Hence, end-to-end tests of business processes are required to ensure that the application landscape works as expected.
In this paper, projects that include these activities and transform an application landscape are called transformation projects. Although the list describes what activities are typical for transformation projects, it may not be complete.
As mentioned in Sect. 1, this list of activities was derived from real-world transformation projects and evaluated by experts.
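To make the "collect information" and "evaluate and decide" activities more tangible, the following Python sketch records applications, their dependencies and the business processes they support, and asks which processes are potentially affected when one application changes. It is illustrative only: the element names and the impact rule are assumptions, not a prescribed meta-model, and the application and process names simply echo the introductory insurance example.

from dataclasses import dataclass, field

@dataclass
class Application:
    name: str
    depends_on: set = field(default_factory=set)  # names of applications this one depends on
    supports: set = field(default_factory=set)    # names of business processes it supports

# A fragment of a landscape, echoing the introductory example.
landscape = {
    "CRM": Application("CRM", supports={"offer calculation"}),
    "OfferCalculator": Application("OfferCalculator", depends_on={"CRM"},
                                   supports={"offer calculation"}),
    "Bookkeeping": Application("Bookkeeping", depends_on={"OfferCalculator"},
                               supports={"bookkeeping"}),
}

def affected_processes(changed_app):
    """Processes supported by the changed application or by applications that
    (transitively) depend on it -- a first cut at 'evaluate and decide'."""
    affected = {changed_app}
    grew = True
    while grew:
        grew = False
        for app in landscape.values():
            if app.name not in affected and app.depends_on & affected:
                affected.add(app.name)
                grew = True
    processes = set()
    for name in affected:
        processes |= landscape[name].supports
    return processes

print(affected_processes("CRM"))   # {'offer calculation', 'bookkeeping'}

Real projects obviously need far richer information than this, but even such a minimal structure shows why technical aspects (dependencies) and domain aspects (supported processes) have to be captured together.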
Modeling in Transformation Projects
The requirements we will lay down in this paper aim at increasing the value that models provide to transformation projects. To understand what value that is and who benets from it, we will discuss the following questions:
How and what for are models used? What kinds of models are relevant for transformation projects? What is modeled? Who uses the models?
We will answer these questions in the following subsection. Examples for use of models in transformation projects conclude this section.
Characteristics of Modeling in Transformation Projects
In transformation projects, models are often used to evaluate the current state of an application landscape and to develop possible future states. The main purpose of the latter is to explore possibilities and to anticipate the consequences of transformation. This is of particular importance for the activities summarized as evaluate and decide in the previous section. In general, the purpose of a model covers a variety of different intentions and aims such as "perception support for understanding the application domain, explanation and demonstration [...], optimization of the origin, hypothesis verification through the model, construction of an artifact or of a program" [20, p. 86].
Transformation projects require models that depict the context in which an application landscape is used: the terminology of the domain, business processes, and how the application landscape supports these processes. Models that represent a domain are called conceptual models. They may be interpreted as "a collection of specification statements relevant to some problem" [12, p. 42].
Other types of models used in transformation projects show the internal structure of an application landscape: its software systems and their dependencies. Models depict either what software systems and dependencies exist generally in the landscape or how they work together during the execution of certain business processes. Yet other types of models focus on the dependencies between software systems and the hardware they run on.
The activities described in Sect. 2 involve various stakeholders like domain experts, IT experts, managers, and users. Since different kinds of models are commonly used to support these activities, modelers are relevant stakeholders too. A stakeholder's view on the application landscape is shaped by their goals and activities and may differ substantially from other stakeholders' views. A view can be expressed with one or several models.
Examples
The following examples illustrate how models are used in transformation projects:
As-is models of the structure of the application landscape foster discussion about the applications that might be affected by a transformation. Also, the models serve as a baseline for the development of to-be models that show possible future states of the application landscape's structure.
As-is and to-be models are used to analyze which dependencies have to be added, deleted, or modified and which interfaces need to be changed or introduced. Some transformation projects are carried out in several releases, transitioning incrementally from the as-is state to the to-be state. Models are used to determine the technical and process-related dependencies. This allows deciding which changes will be carried out in which transition.
Models are used to develop to-be processes that fit the to-be state of the application landscape. The transformation's consequences on the way users work with their applications are communicated with the help of such models.
To-be models allow cross-checking the planned to-be state of the application landscape and its intended use. Such cross-checks are useful to detect shortcomings in the planned to-be states.
Models are used to develop test cases that ensure that the application landscape supports the business processes as expected during transitions and in the final to-be state.
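The second example above can be made concrete with a small sketch (the application names and dependency pairs below are hypothetical, chosen in the spirit of the introductory scenario): comparing the dependency sets of an as-is model and a to-be model yields the dependencies a transition has to add or remove.

# Dependencies as (from_application, to_application) pairs -- hypothetical values.
as_is = {("OfferCalculator", "LegacyCRM"), ("Bookkeeping", "CommissionCalc")}
to_be = {("OfferCalculator", "CRM"), ("Bookkeeping", "BackOffice")}

to_add = to_be - as_is      # interfaces that have to be introduced
to_remove = as_is - to_be   # interfaces that can be retired

print("add:", sorted(to_add))
print("remove:", sorted(to_remove))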
Creation and use of models like in these examples require a suitable modeling approach. In the next section, we will describe what constitutes such an approach.
Six Requirements for a Modeling Approach for Transformation Projects
A graphical modeling approach should provide a modeling language and a modeling procedure (see [START_REF] Karagiannis | Metamodelling Platforms[END_REF]). In addition, we consider tool support essential for use in real-world projects. Furthermore, any modeling approach should provide means to achieve high quality of models. Quality attributes that generally apply to models are for example correctness, consistency and comprehensibility (see [START_REF] Mohagheghi | Towards a Tool-Supported Quality Model for Model-Driven Engineering[END_REF]).
We claim that there are additional requirements for modeling approaches that are due to the nature of transformation projects. Also, we argue that stakeholders would benefit from using a modeling approach that meets these requirements. The following collection of requirements was compiled from an application-oriented point of view. Hence, the requirements focus primarily on how application landscape, business processes and the people who execute them are intertwined. Twelve requirements derived from the same five real-world projects described in Sect. 1 were evaluated by the same experts who assessed what activities are typical for transformation projects. The six requirements rated as most relevant are:
Requirement 1: The modeling approach should make the available information manageable.
A model is an abstraction of an original (e.g. an application landscape) that contains only selected properties of that original. It is generally assumed that all the properties of the original are known and that the selection of properties that are represented in the model is driven by the goal of the model. However, this assumption does not hold in the context of application landscapes. There is extensive information available about an application landscape and its use.
In large organizations this information changes constantly. Since transformation projects do not only require technical information but also information on how the landscape is used, the problem is aggravated:
"The only complete specification of a system is the system itself, and the only complete specification of the use of a system is an infinite log of its actual use [...]." [5, p. 255]
For these reasons, it is not possible to create a complete model within limited time and effort. Information on complex application landscapes is both incomplete and beyond comprehensibility. Therefore, a modeling approach should provide some guidance on how to create and use models in such an environment.
Requirement 2:
The modeling approach should be able to express contradictions.
It is tempting to assume that models of application landscapes show facts. After all, applications are technical systems and information about them can be measured or at least gathered automatically. But even if that were the case with all the technical information, some interpretation is necessary to create models from that information. In addition to technical information, transformation projects require information about how people use an application landscape (see Sect. 3).
In some cases, log files can be used to analyze application usage (see [START_REF] Van Der Aalst | Intra-and Inter-Organizational Process Mining: Discovering Processes Within and Between Organizations[END_REF]). But to a certain extent modelers have to rely on interviews and observations. Hence, modeling in the context of application landscapes is a social process. It has to account for the personal goals and needs of the people involved. For example, a stakeholder might present an assumption as a fact, withhold information, or (consciously or not) falsify information. In such an environment, contradictions will emerge.
Requirement 3: The modeling approach should be able to express how an application landscape supports business processes.
In transformation projects, stakeholders use models to understand which business processes depend on which applications. However, this information does not suffice to plan how work processes and organizational units are affected by the transformation. This requires knowledge of how exactly applications are used in business processes. In particular, this information is needed for testing.
Requirement 4: The modeling approach should be able to express an application landscape's dependencies even for business processes that use several applications and are carried out by more than one organizational unit. Models of such processes are prone to errors, as the people that are involved in them are usually familiar with fragments of the process only. The information they can provide on how application landscapes and business processes work together may be inaccurate. The division of work results in little understanding of overall processes. Modeling approaches should consider that.
Requirement 5: The modeling approach should be able to express dependencies between applications even if they cannot be mapped to technical interfaces.
There are various kinds of dependencies in an application landscape like calls (of functions, methods etc.), shared data, shared hardware (e.g. same network segment), and shared runtime environments (e.g. virtual machine). At least in theory, some information about dependencies can be gathered by analyzing interface access. This increases confidence in the information that is depicted in a model. But there is another kind of dependency that does not correspond to any technical interface and can only be recognized by analyzing business processes: dependency by time and order. For example, a stakeholder may use the results of one task (carried out with application A) to decide how to carry out another task with application B. If application A was to be changed or replaced in a transformation project, the way stakeholders use application B could be affected. Such dependencies have to be considered in transformation projects and in modeling.
Requirement 6: The modeling approach should be able to express how an application landscape changes over time.
As illustrated by the example in Sect. 1, application landscapes undergo a series of changes until their desired state is reached at the end of a transformation project. Thus, it is important for stakeholders to know how and when changes will affect their work processes. This is not just a matter of project planning but of communication.
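One possible way to capture both of the last two requirements is sketched below, purely as an illustration: the kind names, attributes and dates are assumptions and are not part of any of the notations evaluated in the next section. Every dependency carries an explicit kind, including a time/order kind that corresponds to no technical interface (Requirement 5), and a validity interval so that the landscape can be queried for a given date (Requirement 6).

from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional

class DependencyKind(Enum):
    CALL = "call"                  # maps to a technical interface
    SHARED_DATA = "shared data"    # maps to a technical interface
    TEMPORAL_ORDER = "time/order"  # no technical interface involved

@dataclass
class Dependency:
    source: str
    target: str
    kind: DependencyKind
    valid_from: date
    valid_to: Optional[date] = None   # None = still valid

dependencies = [
    Dependency("OfferCalculator", "LegacyCRM", DependencyKind.CALL,
               date(2005, 1, 1), date(2013, 6, 30)),
    Dependency("OfferCalculator", "CRM", DependencyKind.CALL, date(2013, 7, 1)),
    # The result of a task done with the CRM system influences how bookkeeping
    # is carried out, although no interface connects the two applications.
    Dependency("Bookkeeping", "CRM", DependencyKind.TEMPORAL_ORDER, date(2013, 7, 1)),
]

def landscape_at(day):
    """Dependencies valid on a given day -- one way to express change over time."""
    return [d for d in dependencies
            if d.valid_from <= day and (d.valid_to is None or day <= d.valid_to)]

for d in landscape_at(date(2014, 1, 1)):
    print(d.source, "->", d.target, "(" + d.kind.value + ")")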
In the next section, we will evaluate how well existing modeling approaches meet the requirements.
Evaluation of Existing Modeling Approaches
The goal of this evaluation is to test our claim that existing approaches are not suited well enough for transformation projects. As mentioned in Sect. 1 there is a large number of modeling languages, frameworks and tools that deal with application landscapes. Our evaluation focuses on approaches that fulfill certain criteria or are open for extension so that the criteria could be fulfilled by adapting the approach. The criteria are:
The approach consists of a modeling notation, a methodology for creation and use of models, and tool support.
The approach can express different views on an application landscape (as described in Sect. 3). It is able to depict technical information and domain knowledge.
Since these criteria are possibly matched by many professional modeling tools we chose one tool as a representative and omitted other commercial products from the evaluation.
In the following sub-sections we will give a short introduction to the approaches that were included in the evaluation. The section is concluded with the results of the evaluation.
UML
The Unified Modeling Language (UML, see [START_REF] Rumbaugh | The Unied Modeling Language reference manual[END_REF]) has its origins in the area of software engineering but lays claim to be much more versatile: "UML is a general purpose language, that is expected to be customized for a wide variety of domains" [17, p. 211].
UML is adaptable and can be modified to depict application landscapes.
Such an adaption is reported by Heberling et al. in [START_REF] Heberling | Visual Modelling and Managing the Software Architecture Landscape in a large Enterprise by an Extension of the UML[END_REF]. Countless modeling tools support UML. However, UML does not include a methodology for how to create or use models:
"[UML] is methodology-independent. Regardless of the methodology that you use to perform your analysis and design, you can use UML to express the results." [START_REF]Object Management Group: Introduction to OMG's Unied Modeling Language[END_REF] UML was included in the evaluation for its extensibility and widespread adoption.
ArchiMate
ArchiMate was developed to model enterprise architectures (which application landscapes are a part of). It does not include a methodology but an informal description of usage scenarios that are expressed as a collection of viewpoints (see [START_REF]The Open Group: N116 ArchiMate 2.0 Viewpoints Reference Card[END_REF]). Since version 2.0, ArchiMate can be used in combination with The Open Group Architecture Framework (TOGAF, see [START_REF] Jonkers | Using the TOGAF 9.1 Architecture Content Framework with the ArchiMate 2.0 Modeling Language[END_REF]) for enterprise architecture development. However, TOGAF does not provide any guidelines on how to create or use ArchiMate models.
Like UML, ArchiMate is a standardized and established modeling language that is supported by many tools and was thus included in the evaluation.
EAM Pattern Catalog
The EAM patterns described in the Enterprise Architecture Management Pattern Catalog [START_REF] Buckl | Enterprise Architecture Management Pattern Catalog[END_REF] are a collection of problems and fitting solution patterns in the area of IT enterprise architecture management. This descriptive approach presents best practices for analysis, graphical representation, and information modeling. The patterns aim at enhancing existing approaches. For example, the catalog's methodology patterns concretize TOGAF and the viewpoint patterns show applications of UML, ArchiMate, and software maps (see [START_REF] Buckl | Generating Visualizations of Enterprise Architectures using Model Transformations[END_REF]). Due to the nature of this approach, the criterion of tool support can be neglected.
Methodology patterns describe the use of models. Since this approach tends to interpret graphical models as mere visualization of an underlying information model, there is no guidance on how to create models. Yet, this approach was included in the evaluation because it is grounded in practice and many of the patterns are implemented by professional modeling tools.
MEMO
Multi-perspective enterprise modeling (MEMO, see [START_REF] Frank | Multi-perspective enterprise modeling: foundational concepts, prospects and future research challenges[END_REF]) is a framework for the development of domain-specific modeling languages for use in enterprise modeling. Examples of such languages are the business process modeling language OrgML and the IT Modeling Language (ITML, see [START_REF] Kirchner | Entwurf einer Modellierungssprache zur Unterstützung der Aufgaben des IT-Managements[END_REF]) for IT management.
All MEMO languages share a common meta-model that ensures interoperability of languages. MEMO is meant to be extended and provides a meta-methodology for modeling that can be used to develop a language-specific methodology. A tool prototype demonstrates that tool support is feasible.
MEMO was included in the evaluation because it provides adequate concepts to create a modeling approach for transformation projects.
ADOit 5.0
ADOit is a commercial tool for architecture management developed by the BOC Group [START_REF]ADOit Product Website[END_REF]. Its meta-model can be adapted to meet the requirements of transformation projects. Yet, already the meta-model's default configuration suffices to model technical and domain aspects of application landscapes. Although ADOit is not coupled to a specific methodology for creation and use of models, there are predefined views and queries that suggest certain usage scenarios.
ADOit was included in the evaluation because of its adaptable meta-model and the availability of a free-of-charge community edition. The tool is relevant to the German market and documentation is available. The vendor proved to be accessible for discussion. However, it should be noted that these criteria might also be met by other vendors (and their tools, respectively).
BEN
The Business Engineering Navigator (BEN, see [START_REF] Winter | Business Engineering Navigator[END_REF]) developed at the University of St. Gallen is an approach for managing IT enterprise architecture. It offers support for modeling IT and its relation to business processes. BEN provides some guidance on how to analyze models of application landscapes and tool support is available. It was included in the evaluation because it covers application landscapes from a business engineering perspective.
Evaluation
In this section, we rate how well the approaches that were described briefly in the preceding sections meet the requirements laid down in Sect. 4. The results are summed up in Table 1 and explained in the remainder of this section. We use the following values for the rating:
++ Requirement is fulfilled.
+ Rudimentary but insufficient solution for requirement.
= Requirement not fulfilled but approach offers means for enhancement.
− Requirement not fulfilled.

Table 1. Results of the Evaluation
Requirement UML ArchiMate EAM-Patterns MEMO ADOit BEN
1 manageable information = = + = =
2 contradictions + =
3 business process support ++ + + + +
4 dependencies across boundaries ++ ++ + + + +
5 non-technical dependencies = = = =
6 change over time + + = ++ +

Requirement 1 (manageable information): To make the amount of information manageable, one can simply include less of it in a model, either by constricting the modeling language or by switching to a more coarse-grained, generalized perspective that omits detail. Even if a modeling approach does not support these mechanisms explicitly they can always be applied by convention. If an approach enforces such conventions (like UML by providing profiles, see [START_REF] Rumbaugh | The Unied Modeling Language reference manual[END_REF]) we rated it with =. The EAM patterns approach was rated + because for every concern it addresses corresponding information model patterns. These patterns help a modeler to identify which information is needed to provide a model for the given concern.
Requirement 2 (contradictions): Contradictions will catch a stakeholder's eye during modeling or during use of models. The approaches presented above give little guidance on how to model and contradictions are not mentioned at all. However, some approaches offer generic means to at least annotate contradictions. UML provides stereotypes and tagged values (see [START_REF] Rumbaugh | The Unied Modeling Language reference manual[END_REF]) for that purpose and ADOit's meta-model could be adapted to achieve a similar possibility.
Requirement 3 (business process support): Several approaches can express which activities an application is involved in. UML offers activity diagrams and partitions (see [START_REF] Rumbaugh | The Unied Modeling Language reference manual[END_REF], often called "swim lanes" in other approaches). ADOit allows modelers to link activities to applications. In addition, it provides several queries to analyze the usage of an application landscape in business processes. ArchiMate's business layer only allows for modeling of coarse-grained process chains but they can be linked with services and components in the application layer. Several similar viewpoint patterns can be found in the EAM pattern catalog. MEMO's meta-model includes a relationship between business processes and applications (see [START_REF] Frank | Multi-perspective enterprise modeling: foundational concepts, prospects and future research challenges[END_REF]) which allows for the creation of a MEMO language that fulfills this requirement.
Requirement 4 (dependencies across boundaries): The dependencies described in this requirement can be expressed with UML's activity diagrams and ArchiMate (for example with the introductory and layered viewpoint described in [START_REF]The Open Group: N116 ArchiMate 2.0 Viewpoints Reference Card[END_REF]). An overview of such dependencies is provided by so-called Process Support Maps that show which applications support which business processes. Such visualizations are described in the EAM patterns catalog (e.g. viewpoint patterns V-29 and V-30, see [START_REF] Buckl | Enterprise Architecture Management Pattern Catalog[END_REF]), in ADOit, and in BEN.
Requirement 5 (non-technical dependencies): To express dependencies between applications that cannot be mapped to technical interfaces, both modeling language and methodology have to be considered. Since no approach provides both, none was rated + or ++. Approaches with extensive possibilities to depict dependencies or generic relation types were rated =.
Requirement 6 (change over time): The approaches included in the evaluation provide three different means to express how an application landscape changes over time: The first is provided by EAM patterns, BEN, and ADOit, which allow modeling the life cycle of applications. The second possibility is a model of the application landscape that combines as-is and to-be applications. ArchiMate (viewpoint Implementation and Migration, see [START_REF]The Open Group: N116 ArchiMate 2.0 Viewpoints Reference Card[END_REF]) and ADOit support this kind of model. Additionally, ADOit offers a time-based filter that helps tracking changes over time. Third, BEN's tool support allows for pairwise comparison of models. This functionality can be used to compare as-is and to-be models with each other.

Conclusions and Further Work
In this paper, we introduced a type of project that deals with extensive changes to an organization's application landscape: transformation projects. We look at transformation from an application-oriented point of view that focuses on how an application landscape is used in business processes. This point of view relies on models of the application landscape and its use.
We argued that specific requirements for modeling exist in this area and that it would be beneficial for modeling approaches to meet these requirements. This would improve their suitability for transformation projects. Six such requirements were presented. An evaluation of existing approaches showed that some approaches provide means to fulfill some of these requirements. None of the approaches met all the requirements. However, it is not necessary to invent a completely new modeling approach for transformation projects. We plan to show that existing approaches can be complemented so that they are better suited for transformation projects. Therefore, we will create an enhanced approach to show the feasibility of this idea. The enhanced approach will then be evaluated to test our initial claim: Fulfilling the presented requirements leads to a more useful modeling approach for transformation projects.
"1002469"
] | [
"419674"
] |
01474781 | en | [
"shs",
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01474781/file/978-3-642-41641-5_12_Chapter.pdf | Céline Décosse
email: [email protected]
Wolfgang A Molnar
email: [email protected]
Henderik A Proper
A Qualitative Research Approach to Obtain Insight in Business Process Modelling Methods in Practice
Keywords: enterprise modelling in practice, information systems method evaluation criteria, qualitative research approach
In this paper we are concerned with the development of an observational research approach to gain insights into the performance of Business Process Modelling Methods (BPMMs) in practice. In developing this observational approach, we have adopted an interpretive research approach. More specifically, this involved the design of a questionnaire to conduct semi-structured interviews to collect qualitative research data about the performance of BPMMs. Since a BPMM is a designed artefact, we also investigated Design Science Research literature to identify criteria to appreciate the performance of BPMMs in practice. As a result, the questionnaire that was used to guide the interview is based on a subset of criteria of progress for information systems theories, while the observational research approach we adopted involves the collection of qualitative data from multiple stakeholder types. As a next step, the resulting questionnaire was used to evaluate the performance of an actual BPMM in practical use: the DEMO method. Though the analysis of the collected qualitative data of the DEMO case has not been fully performed yet, we already foresee that part of the information we collected provides new insights compared to existing studies about DEMO, as does the fact that a variety of stakeholder types have been approached to observe the use of DEMO.
Introduction
In this paper we are concerned with the development of an observational research approach to obtain more insights into the actual performance of Business Process Modelling Methods (BPMM) in practice. Our initial research goal was to gain more insight into the performance of the DEMO (Design and Engineering Methodology for Organisations) method, which is a specific BPMM that is employed by enterprise architects to design concise models of organizations processes. Indeed, we knew that several projects had been performed with DEMO [START_REF] Dias | Using enterprise ontology for improving emergency management in hospitals[END_REF][START_REF] Dias | Using Enterprise Ontology for Improving the National Health System-Demonstrated in the Case of a Pharmacy and an Emergency Department[END_REF][START_REF] Dias | Using Enterprise Ontology Methodology to Assess the Quality of Information Exchange Demonstrated in the case of Emergency Medical Service[END_REF][START_REF] Guerreiro | Enterprise dynamic systems control enforcement of run-time business transactions -Lecture notes[END_REF][START_REF] Maij | Use cases and DEMO: aligning functional features of ICT-infrastructure to business processes[END_REF][START_REF] Henriques | Enterprise Governance and DEMO -Guiding enterprise design and operation by addressing DEMO's competence, authority and responsibility notions[END_REF][START_REF] Pombinho | Towards Objective Business Modeling in Enterprise Engineering-Defining Function, Value and Purpose[END_REF][START_REF] Op 't Land | Towards a fast enterprise ontology based method for post merger integration[END_REF][START_REF] Nagayoshi | A study of the patterns for reducing exceptions and improving business process flexibility[END_REF][START_REF] Barjis | A business process modeling and simulation method using DEMO[END_REF][START_REF] Dumay | Evaluation of DEMO and the Language / Action Perspective after 10 years of experience[END_REF][12]. For some of these projects, the application of DEMO seems to be very promising, e.g. DEMO helped to "construct and analyse more models in a shorter period of time" (p.10) [START_REF] Dias | Using Enterprise Ontology for Improving the National Health System-Demonstrated in the Case of a Pharmacy and an Emergency Department[END_REF]. Therefore, we were curious about the performance of DEMO in practice. In addition, we had access to practitioners who have used DEMO in their projects, and who would agree to have these projects investigated by researchers.
However, rather than limiting ourselves to DEMO only, we decided to generalize our effort to BPMMs in general. In other words, rather than developing an observational approach to only observe the performance of DEMO in practice, we decided to develop an approach to observe the practical performance of BPMMs in general (still using DEMO as a specific case). Having insights about a BPMM in practice is valuable because it is easier to select, promote, improve or even better use a BPMM when knowing what can be expected when using it in practice. In doing so, we were also inspired by Winter (p.471) [START_REF] Winter | Design science research in Europe[END_REF]: "Not every artefact construction, however, is design research. 'Research' implies that problem solutions should be generic to some extent, i.e., applicable to a set of problem situations", where in our case the constructed artefact is the observation approach for the performance of BPMM in practice.
In developing the observation approach, we chose the interpretive research approach as a starting point, where qualitative research data pertaining to the use and performance of the specific BPMM will be collected with semi-structured interviews. The original contribution of this paper is the way in which we defined the themes and questions of these interviews: we selected a subset of criteria of progress for information systems theories proposed by Aier and Fischer [START_REF] Aier | Criteria of progress for information systems design theories[END_REF]. We then performed the interviews to investigate the use of DEMO in practice in several projects and contexts by several types of stakeholders acquainted with DEMO. Although the analysis of the data has not been completely performed, we nevertheless already present some first insights about DEMO in practice.
The paper is structured as follows: in the introduction we defined the problem to be addressed. Section 2 introduces definitions and positions DEMO as a specific BPMM in these definitions. In section 3, we reviewed the literature about the evaluation of DEMO in practice. Section 4 is the core of the paper: it presents the proposed research approach for getting insights about a BPMM in practice. Section 5 deals with the validity of this proposed observation approach by applying it to the use of DEMO in practice as a case study. A few first insights we gained about the use of DEMO in practice, based on the conducted interviews, are briefly presented in section 6. We then underline some limitations of our work and conclude in section 7.
DEMO as a method
Method. Unlike process definitions, method definitions often refer to a modelling language and to "underlying concepts" [START_REF] Mettler | Situational maturity models as instrumental artifacts for organizational design[END_REF], "hidden assumptions" [START_REF] Kensing | Towards Evaluation of Methods for Property Determination: A Framework and a Critique of the Yourdon-DeMarco Approach[END_REF] or "way of thinking" [START_REF] Seligmann | Analyzing the structure of IS methodologies, an alternative approach[END_REF]. In the information technology domain, March and Smith define a method as "a set of steps (…) used to perform a task." [START_REF] March | Design and natural science research on information technology[END_REF]. The definition proposed by Rescher [START_REF] Rescher | Methodological pragmatism: A systems-theoretic approach to the theory of knowledge[END_REF] and adopted by Moody [START_REF] Moody | The method evaluation model: a theoretical model for validating information systems design methods[END_REF] is more general: as methods define "ways of doing things, methods are a type of human knowledge (the "knowledge how"). Mettler and Rohner (p.2) [START_REF] Mettler | Situational maturity models as instrumental artifacts for organizational design[END_REF] bring a prescriptive flavour and an ideal goal to method definition: "methods (…) focus on the specification of activities to reach the ideal solution (how)". They distinguish methods "key activities" (that aim to reach a business goal) from their "underlying concepts" (that is "the conceptual view on the world that underlies the performance of the activities") [START_REF] Mettler | Situational maturity models as instrumental artifacts for organizational design[END_REF]. This distinction seems to be consistent with the view of Seligman et al. [START_REF] Seligmann | Analyzing the structure of IS methodologies, an alternative approach[END_REF] on information systems methodologies, that they characterise with "5 ways": ─ the "way of thinking", defined as "hidden assumptions" that are "used to look at organisations and information systems" [START_REF] Kensing | Towards Evaluation of Methods for Property Determination: A Framework and a Critique of the Yourdon-DeMarco Approach[END_REF][START_REF] Seligmann | Analyzing the structure of IS methodologies, an alternative approach[END_REF], ─ the "way of working" (how to do things), ─ the "way of controlling" (how to manage things), ─ the "way of modelling", which they define as the "network of [the method's] models, i.e. the models, their interrelationships and, if present, a detailed description of the model components and the formal rules to check them", ─ the "way of supporting", which is about the tools supporting the method.
In the current paper, we want to have an insight about methods that are used as BPMMs in practice. The method definitions above are a way for the researcher to establish a set of themes to interview stakeholders about without forgetting aspects of methods that are recurrent in the literature.
Method and modelling language. When modelling languages are mentioned in the methods definitions above, they are presented as being part of methods: March and Smith explain that "although they may not be explicitly articulated, representations of tasks and results are intrinsic to methods." (p.257) [START_REF] March | Design and natural science research on information technology[END_REF]. For Seligman et al. [START_REF] Seligmann | Analyzing the structure of IS methodologies, an alternative approach[END_REF], the way of modelling is one of the features of a method, as is the "way of working". DEMO, language or method? Primarily, an artefact! Whereas authors argue that DEMO is a modelling language [START_REF] Hommes | Assessing the quality of business process modelling techniques[END_REF], almost all other recent publications about DEMO usually consider it as a method [START_REF] Weigand | LAP: 10 years in retrospect[END_REF][START_REF] Khavas | The Adoption of DEMO in Practice -Dissertation for a Master of Science in Computer Science[END_REF]. Winter et al. argue that a method and recommendations concerning the representation of a model can be seen as aspects of an artefact, and then propose the following artefact definition: "A generic artefact consists of language aspects (construct), aspects referring to result recommendations (model), and aspects referring to activity recommendations (method) as well as instantiations thereof (instantiation)." (p.12) [START_REF] Winter | Method versus model -two sides of the same coin? Advances in Enterprise Engineering III[END_REF].
Literature review on the evaluation of DEMO in practice
This literature review aims at investigating whether DEMO has been evaluated in practice and how. Many papers [START_REF] Dias | Using enterprise ontology for improving emergency management in hospitals[END_REF][START_REF] Dias | Using Enterprise Ontology for Improving the National Health System-Demonstrated in the Case of a Pharmacy and an Emergency Department[END_REF][START_REF] Dias | Using Enterprise Ontology Methodology to Assess the Quality of Information Exchange Demonstrated in the case of Emergency Medical Service[END_REF][START_REF] Guerreiro | Enterprise dynamic systems control enforcement of run-time business transactions -Lecture notes[END_REF][START_REF] Maij | Use cases and DEMO: aligning functional features of ICT-infrastructure to business processes[END_REF][START_REF] Henriques | Enterprise Governance and DEMO -Guiding enterprise design and operation by addressing DEMO's competence, authority and responsibility notions[END_REF][START_REF] Pombinho | Towards Objective Business Modeling in Enterprise Engineering-Defining Function, Value and Purpose[END_REF][START_REF] Op 't Land | Towards a fast enterprise ontology based method for post merger integration[END_REF][START_REF] Nagayoshi | A study of the patterns for reducing exceptions and improving business process flexibility[END_REF][START_REF] Barjis | A business process modeling and simulation method using DEMO[END_REF][START_REF] Dumay | Evaluation of DEMO and the Language / Action Perspective after 10 years of experience[END_REF][12] deal with case studies in which DEMO has been used to design situational DEMO based methods or to propose ontologies. Besides, qualities of DEMO models have been studied in several evaluations [START_REF] Hommes | Assessing the quality of business process modelling techniques[END_REF][START_REF] Huysmans | Using the DEMO methodology for modeling open source software development processes[END_REF]. We found two papers dealing with a partial evaluation of DEMO in practice across several cases. The first one [START_REF] Ven | The adoption of demo: A research agenda[END_REF] studies the adoption of DEMO by DEMO professionals in practice, in order to improve this adoption (Table 1). This study is restricted to the adoption of DEMO in practice so the use of DEMO in practice is not the core of the study [START_REF] Ven | The adoption of demo: A research agenda[END_REF]. The second one [START_REF] Dumay | Evaluation of DEMO and the Language / Action Perspective after 10 years of experience[END_REF] investigates DEMO as a means of reflecting upon the Language/Action perspective (LAP) (Table 2). The DEMO related part of this paper aims at finding out how the actual application of DEMO differs from its intended application. Besides, only DEMO professionals were asked to answer the survey so the study only reflects why DEMO certified practitioners adopted (or not) DEMO, it provides less insights about people aware of the existence of DEMO who are not DEMO professionals. Dumay et al. focused on how the professional application of DEMO differs from its intended application so only practitioners have been involved in their study [START_REF] Dumay | Evaluation of DEMO and the Language / Action Perspective after 10 years of experience[END_REF].
Results: The DEMO in practice evaluation part of this study "answers the question how the professional application of DEMO differs from its intended application." (p.80)
To the best of our knowledge, no study has been performed with the aim of giving a holistic view on the use of DEMO in practice, both by approaching a variety of themes and a diversity of stakeholder profiles. The subject of the current paper is to define an observational approach to do so.
Observational approach for BPMMs performance in practice
4.1 Observational approach overview: an interpretive approach
This section discusses the set up of the observational research approach that is to be used to gain insights about the performance of a BPMM in practice. These insights are provided by exploring stakeholders' views about the use of a BPMM in practice.
For this exploratory purpose, we adopted a qualitative research approach, because it is aimed at understanding phenomena and provides modes and methods for analysing text [START_REF] Myers | Qualitative research in information systems[END_REF]. Qualitative research can be positivist, interpretive, or critical [START_REF] Myers | Qualitative research in information systems[END_REF]. We adopted an interpretive approach because it allows to produce "an understanding of the context of the information system, and the process whereby the information system influences and is influenced by the context" [START_REF] Walsham | Interpreting information systems in organizations[END_REF](p. [START_REF] Guerreiro | Enterprise dynamic systems control enforcement of run-time business transactions -Lecture notes[END_REF][START_REF] Maij | Use cases and DEMO: aligning functional features of ICT-infrastructure to business processes[END_REF]. "Interpretive methods of research adopt the position that our knowledge of reality is a social construction by human actors" [START_REF] Walsham | The Emergence of Interpretivism in IS Research[END_REF]. We found it suitable for exploring the use of a method, because a method is an artefact that is designed, performed and evaluated by human people.
To collect these qualitative data, we selected the semi-structured interview technique. Qualitative interviews are one of "the most important data gathering tools in qualitative research" (p.2) [START_REF] Myers | The qualitative interview in IS research: Examining the craft[END_REF]. The reason is that it is "permitting us to see that which is not ordinarily on view and examine that which is looked at but seldom seen" (p.vii) [START_REF] Rubin | Qualitative interviewing: The art of hearing data[END_REF]. We created a questionnaire to be used as a guideline by the interviewer during the interviews to discover what can be expected from DEMO in practice.
Whether a method or a language, DEMO is a designed artefact; so are BPMMs. We considered that to gain insights about what can be expected from a designed artefact, we could use artefact evaluation criteria. We selected those criteria by reasoning on the criteria of progress for information systems theories proposed by Aier and Fischer [START_REF] Aier | Criteria of progress for information systems design theories[END_REF]. We based the questions of the questionnaire on the artefact evaluation criteria we selected, and then gathered these questions into themes to ease the interviewing process. We then obtained an interview questionnaire about IS (Information Systems) DS (Design Science) artefact evaluation, to which we added a few complementary questions that are specific to DEMO.
The scope of the current paper does not involve the mode of analysis of the collected data; it is concerned with the identification of the observational approach for gaining insights about a BPMM in practice and with the first insights about DEMO provided by the interviews that have been performed.
Interview guideline setup
Identification of the interview themes
The themes of the questionnaire used as an interview guideline have been identified according to the goal of the questionnaire: to gain insights about DEMO in practice. Themes have been identified from the design science literature regarding artefact and method evaluation, so that the core of the questionnaire might be used as a guideline for any IS method evaluation; from the literature about DEMO, regarding stakeholders' feedback about DEMO; and during a brainstorming with fellow researchers to complete the above points.
In 2010, Aier and Fischer proposed a set of Criteria of Progress for Information Systems Design Theories [START_REF] Aier | Criteria of progress for information systems design theories[END_REF]. We call this set of criteria CriProISDT (Utility, Internal Consistency, External consistency, Broad purpose and scope, Simplicity, Fruitfulness of further research [START_REF] Aier | Criteria of progress for information systems design theories[END_REF]). Aier and Fischer based their reflection, amongst other elements, on "evaluation criteria for IS DSR artefacts" ("IS DSR" stands for "Information Systems Design Science Research") and reviewed the literature about that. In particular, they used March and Smith's set of criteria for IS DSR artefact evaluation [START_REF] March | Design and natural science research on information technology[END_REF] (Completeness, Ease of use, Effectiveness, Efficiency, Elegance, Fidelity with real world phenomena, Generality, Impact on the environment and on the artefacts' users, Internal consistency, Level of detail, Operationality, Robustness, Simplicity, Understandability). Then, Aier and Fischer focused on IS DSR artefact evaluation criteria that they considered to be independent of any particular artefact type (method, model, construct, instantiation), which is interesting for our purpose. They established a table comparing the evaluation criteria for IS DSR artefacts by March and Smith with CriProISDT. Adopting Aier and Fischer's position that "evaluation criteria for IS DSR artefacts should be strongly related to those for IS design theories" [START_REF] Aier | Criteria of progress for information systems design theories[END_REF], we chose to use this comparison the other way round: we selected amongst CriProISDT the criteria that we thought may be applicable to artefacts (called the "CriProISDT subset"), and we then used the comparison table to retrieve the "matching" IS DSR artefact evaluation criteria. By doing that we obtained a set of artefact evaluation criteria (called AEC) that are: generic to all types of artefact evaluation; based on a recent literature review; and in line with one of the most well-known sets of criteria for IS DSR artefact evaluation (March and Smith's), although refining it.
Criteria we selected to be included in the CriProISDT subset are: Utility, External consistency, Broad purpose and scope.
Utility (usefulness).
The reason for selecting utility is that DSR literature emphasizes that DSR products "are assessed against criteria of value or utility" (p.253) [START_REF] March | Design and natural science research on information technology[END_REF]. Aier and Fischer define "utility" in (p.158) [START_REF] Aier | Criteria of progress for information systems design theories[END_REF] and "usefulness" in [START_REF] Aier | Scientific Progress of Design Research Artefacts[END_REF] as "the artefact's ability to fulfil its purpose if the purpose itself is useful. The purpose of an artefact is only useful if it is relevant for business." Following the comparison table, "matching" IS DSR artefacts evaluation criteria are: ease of use, effectiveness, efficiency, impact on the environment and on the artefacts' users and operationality. We call this list the "utility list".
External consistency, Broad purpose and scope.
Many authors underline the interdependence between a DSR artefact and its performance environment; that evaluation criteria and results are environment dependent. [START_REF] Aier | Criteria of progress for information systems design theories[END_REF][START_REF] March | Design and natural science research on information technology[END_REF][START_REF] Hevner | Design Science in Information Systems Research[END_REF][START_REF] Gregor | The anatomy of a design theory[END_REF]. So we selected the criteria of "external consistency" ("fidelity with real world phenomena" [START_REF] March | Design and natural science research on information technology[END_REF]) and "broad purpose and scope" because they are related to the performance environment of an artefact. Following the comparison table, "matching" IS DSR artefacts evaluation criteria are: fidelity with real world phenomena, generality. We call this list the "context list". We remove from this list the "robustness" criteria, which is mainly aimed at algorithmic artefact evaluation.
IS DSR artefact evaluation criteria we used.
By aggregating the "utility list" and the "context list", we obtain a list of IS DSR artefact evaluation criteria, with the definitions adopted or proposed by Aier and Fischer in [START_REF] Aier | Criteria of progress for information systems design theories[END_REF] when they can be applied to artefacts: ease of use, effectiveness, efficiency, impact on the environment and on the artefacts' users, operationality, fidelity with real world phenomena (external consistency), generality.
This list actually extends the list of criteria for methods evaluation proposed by March and Smith: ease of use, efficiency, operationality, generality [START_REF] March | Design and natural science research on information technology[END_REF]. We can see that because of the systematic way of collecting IS DSR criteria we adopted, the following criteria are not included: completeness, elegance, internal consistency, level of detail, robustness, simplicity, understandability [START_REF] March | Design and natural science research on information technology[END_REF].
─ ease of use: the artefact shall be easily usable [START_REF] Aier | Criteria of progress for information systems design theories[END_REF];
─ effectiveness: the degree to which the artefact meets its goal and achieves its desired benefit in practice [START_REF] Venable | A comprehensive framework for evaluation in design science research[END_REF]. Questions about the added value of the method under evaluation were therefore included in the questionnaire;
─ efficiency: the degree to which the modelling process utilises resources such as time and people [START_REF] March | Design and natural science research on information technology[END_REF]; a quotient of output and input [START_REF] Aier | Criteria of progress for information systems design theories[END_REF]. The notion of "return on modelling effort" conveys the same idea. "If an artefact resulting from a design theory is used very often, its efficiency might be the best criterion for measuring its utility." (p.149)[14]
─ impact on the environment and on the artefacts' users: a side effect. "Side effects can increase or decrease utility" (p.164) [START_REF] Aier | Criteria of progress for information systems design theories[END_REF]. "A critical challenge in building an artefact is anticipating the potential side effects of its use, and insuring that unwanted side effects are avoided" (p.254)[19]
─ operationality: "the ability to perform the intended task or the ability of humans to effectively use the method if it is not algorithmic" [14, 19]
─ fidelity with real world phenomena (external consistency): questions about the extent to which the constructs of the method under evaluation reflect business concepts that stakeholders have an interest to model with BPMMs were included in the questionnaire.
─ generality: the same as "broad purpose and scope" (p.164) [START_REF] Aier | Criteria of progress for information systems design theories[END_REF]. Questions about the possibility to tailor a BPMM to a specific business context are included in the questionnaire. Besides, as DSR artefacts address classes of problems [START_REF] Hevner | Design Science in Information Systems Research[END_REF][START_REF] Venable | Identifying and Addressing Stakeholder Interests in Design Science Research: An Analysis Using Critical Systems Heuristics[END_REF], questions about the kinds of problems for which it is interesting to use the method under evaluation are included in the questionnaire.
Themes of the interview guideline.
For the fluidity of the interview, the questions have been gathered into themes (C to G). Themes A and B complement these themes. Stakeholders are part of the "impact on the environment and on the artefacts' users" criterion. Because different stakeholders have different purposes and because the definition of utility is related to a purpose, we defined several stakeholder types [START_REF] Venable | Identifying and Addressing Stakeholder Interests in Design Science Research: An Analysis Using Critical Systems Heuristics[END_REF]. "The utility of an artefact is multi-dimensional: one dimension for each stakeholder type" (p.10) [START_REF] Aier | Theoretical Stability of Information Systems Design Theory Evaluations Based upon Habermas's Discourse Theory[END_REF]. The stakeholder types we a priori thought of were related to their role regarding the method under evaluation (here, DEMO):
─ Designers: they took part in the creation or evolution of DEMO;
─ Sponsor: owner of the engineering effort; this stakeholder pays or is financially responsible for the project in which DEMO has been applied;
─ Manager of the engineering effort: project manager, for example;
─ Modeller: this stakeholder created DEMO models;
─ Final beneficiaries: they benefit from the use of the method.
Themes are common to all stakeholder types except the designers: a theme (G) dedicated only to the latter ones has been included in the questionnaire.
In themes A and B, many questions are about the stakeholders, so that new stakeholder types may emerge from the analysis of the interviews. Indeed, the semi-structured interview technique provides guidance to help collect data on the themes the researcher is interested in, but the kind and scope of the answers are not predefined. Actually, surprising answers are an asset provided they relate to the goal of the study: they may enable the researcher to reconsider themes, sampling, questions and research approach.
With regard to the question whether DEMO is considered a language or a method.
As our goal is to have an insight about DEMO in practice, we asked the interviewees what they would call DEMO and whether they would consider it as being prescriptive or descriptive. Besides, we ensured that interviews were not exclusively "method oriented" by selecting evaluation criteria that are common to all types of designed artefacts. Winter et al. argue that a method and a model can be seen as aspects of an artefact (p.12) [START_REF] Winter | Method versus model -two sides of the same coin? Advances in Enterprise Engineering III[END_REF]. This is especially convenient in the case of asking questions about DEMO. For this reason and because of the double definition of DEMO, we added questions that are specifically related to methods as defined in Section 2, and to languages as being part of methods.
Overview of the questionnaire structure
Each set of questions is related to a stakeholder type and a theme, and is aimed at collecting data exclusively about either stakeholders' intentions or stakeholders' experience. The purpose of this is to collect stakeholders' a priori views when intending to do something ("What were your original intentions/expectations when you…?") and their a posteriori views once they have experienced it ("What is your experience about…?") 3. The resulting questionnaire structure is depicted in Table 3.
Questions are actually often similar between stakeholder types: such a structure to design a questionnaire is only a tool for the researcher to think of many types of questions related to the goal of the study. Themes A and B are about the knowledge of the context and about the stakeholders.
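To illustrate the structure of Table 3 only, the guideline can be thought of as a set of question entries indexed by theme, stakeholder type and phase (intention versus experience). The following sketch is not part of the research instrument; it is a hypothetical illustration in Python, and all names in it are assumptions chosen for readability.

from dataclasses import dataclass, field
from enum import Enum

class Phase(Enum):
    INTENTION = "intention"    # a priori views: "What were your original intentions/expectations when you ...?"
    EXPERIENCE = "experience"  # a posteriori views: "What is your experience about ...?"

@dataclass
class Question:
    theme: str             # e.g. "C" (typical context of use of the method)
    stakeholder_type: str  # e.g. "Designer", "Sponsor", "Modeller"
    phase: Phase
    text: str

@dataclass
class InterviewGuideline:
    questions: list[Question] = field(default_factory=list)

    def for_stakeholder(self, stakeholder_type: str) -> list[Question]:
        # Select the subset of questions asked to one stakeholder type.
        return [q for q in self.questions if q.stakeholder_type == stakeholder_type]

Such a representation makes it easy to check that every theme is covered, for every stakeholder type, with both intention and experience questions.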
Reflections about the proposed research approach validity
The themes of the questionnaire used as a guideline in the semi-structured interview technique influence the answers that are given by the respondents. So, we found it necessary to justify our position for referring to design science literature to define a set of criteria to evaluate a method. These criteria will be used to gain insights on a method in the current study, not to evaluate it.
Information Systems literature.
As BPMMs may not be information technology related, the question arises as to whether information systems literature is relevant to study them [START_REF] Winter | Design science research in Europe[END_REF]. In this paper we assume that as BPMMs are often implemented in the context of IS projects, IS literature is relevant to reflect upon BPMMs.
Information Systems Design Sciences literature.
BPMM evaluation can be considered a wicked problem, because it has a critical dependence upon human cognitive and social abilities to produce effective solutions (evaluation depends both on the performance of the method under evaluation on the one hand and on the evaluation process itself on the other hand) and because it is strongly context dependent [START_REF] Hevner | Design Science in Information Systems Research[END_REF]. Such wicked problems can be addressed by the iterative nature of design science research [START_REF] Hevner | Design Science in Information Systems Research[END_REF].
Besides, the design science paradigm explores the art of building and evaluating artefacts, especially information systems related artefacts, with a strong importance given to behavioural aspects [START_REF] Hevner | Design Science in Information Systems Research[END_REF]. In DS, evaluation of an artefact is performed against the criterion of utility, which is a practical perspective. We can therefore investigate the design science literature with benefit when approaching the question of how to gain insights about BPMMs in practice. In short, evaluating BPMMs in practice is a practical problem; as such, it can be approached with DS literature [START_REF] Wieringa | Design science as nested problem solving[END_REF].
Design Science literature, Design Science Research literature or both?
Winter makes the following difference between IS design research and IS design science: "While design research is aimed at creating solutions to specific classes of relevant problems by using a rigorous construction and evaluation process, design science reflects the design research process and aims at creating standards for its rigour" (p.471) [START_REF] Winter | Design science research in Europe[END_REF]. With this definition, we should rather investigate DS research literature than DS literature. But since, on the one hand, DSR literature and DS literature are not always "self-labelled" this way and, on the other hand, reflections and criteria of progress that are applicable to DS can sometimes be related to those of DSR [START_REF] Aier | Criteria of progress for information systems design theories[END_REF], we will investigate both DS literature and DSR literature.
Early experience report about the DEMO case study
This section describes how we started to implement the proposed research approach, and presents some of the preliminary insights we gained about DEMO in practice. The data analysis is still to be performed.
Data collection
Using the questionnaire as a guideline for our semi-structured interviews [START_REF] Myers | The qualitative interview in IS research: Examining the craft[END_REF], we collected 13 interviews. Multiple stakeholder types were represented. Interviews took place in the interviewees' offices, except one, which was performed via Skype. Each interviewee was interviewed individually by two researchers: one interviewer asking the questions and one "shadow" interviewer completing questions and taking notes. Interviewees agreed to be interviewed and to have the interviews recorded. Only one interviewee asked us not to disclose his name. Immediately after each interview, the interviewer and the "shadow" wrote down notes about the interview, covering, for example, what had been said or the interviewee's reaction to some specific subject. In order to capture the actual experience of the individuals in practice with DEMO, interviewers tried to avoid "leading the witness" by following the interview guideline, asking questions in which the answer is not included, and not giving their own opinions. They attempted to reduce their role to that of information collectors, influencing the content of the collected data as little as possible and encouraging interviewees to keep talking. One of the interviewers was a DEMO expert; the other was a business analyst whose knowledge about DEMO could be summarised in a few short lines.
The questionnaire used as a guideline contained about 100 questions, but as not every question was aimed at every stakeholder type, around 60 questions were asked of each interviewee; a question was skipped when the interviewee had already provided the information spontaneously while answering another question. The average duration of the interviews was one hour and thirty minutes. A total of twenty hours of recordings was collected.
Interview data analysis and initial insights about DEMO in practice
Interviews have been transcribed by the interviewers. The transcripts have been coded [START_REF] Miles | Qualitative Data Analysis: An Expanded Sourcebook[END_REF]. The full analysis still has to be performed, so only first insights can be presented here. DEMO is seen as a way of thinking that comes with a way of modelling. Interviewees mentioned a set of concepts helping enterprise engineering analysts to analyse organizations and to reveal what is actually going on with respect to responsibility, authority, roles and transactions. According to the interviewees, DEMO seems to be suited to complex problems. Interviewees often mentioned that DEMO models are implementation independent and that applying DEMO to produce DEMO models requires abstract thinking, whereas reading DEMO models seems to require only a few hours of training. When interviewees were asked about DEMO's return on modelling effort, they were all very positive about it, sometimes adding "provided it is used by trained people". Several interviewees deplored a lack of interfaces with other methods. Still, all interviewees would use DEMO again if they had to work on their project again.
Conclusion and further work
The current paper is about setting up an observational research approach for an exploratory goal: gaining insights about BPMMs in practice. We adopted a qualitative research approach with semi-structured interviews for collecting research data. To define the interview guideline, we relied on criteria of progress for information systems design theories.
The main criterion against which we could assess the proposed observational research approach may be the appropriateness of the insights we obtained during the interviews with respect to the purpose of the research effort. Still, the interview guideline themes are only one parameter influencing this appropriateness: among other things, the interviewee sampling, the interviewees' background compared to the interviewers' background, the way of asking questions, the interviewers' attitude and the interviewees' degree of remembrance regarding the case studies also influence the nature and quality of the collected data.
This paper provides an interview guideline structure that may be adapted from the DEMO interview experience and then potentially used to gain insights about other BPMMs. Although the interview analysis has not been performed yet, we may already say that during the interviews no understanding problems between interviewers and interviewees occurred, the collected information was actually related to the themes and questions, and new information appeared compared to the literature review about DEMO. Besides, the diversity of interviewed stakeholder types allowed the collection of various points of view, sometimes conflicting ones. All interviewees encouraged us both to carry on with the DEMO investigation effort and to contact them again in case we would need further information.
The proposed observational research approach has some limitations, amongst which are the following ones:
─ As we defined the list of evaluation criteria with a systematic process (gathering two lists), we should investigate, for each criterion we included or excluded, what the implications are; we may then (certainly) reintegrate some criteria.
─ Aier and Fischer explain that the set of criteria of progress they propose for information systems design theories [START_REF] Aier | Criteria of progress for information systems design theories[END_REF] might not be complete. For this reason too, we should reflect upon the completeness of the criteria we proposed.
─ The criteria we proposed are generic to all types of artefacts; further reflection is required on whether to add artefact-type-specific criteria (model, construct, method, instantiation).
─ We have not reflected upon the limitations that are inherent to the interview technique for evaluating a method.
─ Whatever the research approach used to gain insights about an artefact, the influence of some parameters should be discussed, namely the amount of knowledge the interviewers and researchers have about the artefact they want to gain insights on.
─ Various frameworks have been proposed to evaluate methods in the IS literature, e.g. [START_REF] Vavpotic | An approach for concurrent evaluation of technical and social aspects of software development methodologies[END_REF]. However, they have not been taken into account in the current paper because our scope is about gaining insights about a method, so only method evaluation criteria were used for this purpose.
Some of these limitations may be addressed in future work: we plan to analyse the interviews, which will generate insights about the application of DEMO in practice and allow us to reflect upon the practical use of the themes and evaluation criteria proposed in the current paper. Depending on the findings, we may adapt these criteria and investigate BPMMs other than DEMO in case studies, in order to gain a variety of experiences with the proposed observational research approach.
─ A - Interview situation: location, date, duration, language of the interview
─ B - Interview context: actual context of use of DEMO in a particular project; also allows to determine how much the person remembers about the project
─ C - Typical context of use of the Method: recommendations, factors of influence
─ D - Use of the Method in practice
─ E - Organisation fit: necessary skills to apply the Method and satisfaction about the Method
─ F - Method chunks identification
─ G - Method construction (only for the designer stakeholder type)
Stakeholder types.
Table 1. Khavas, 2010 [24] - Master thesis: the adoption of DEMO in practice

Source: Khavas, 2010 [24] - Master thesis
Subject: The adoption of DEMO in practice
Motivation: Ensure the adoption of DEMO in practical fields. This problem has been introduced in [24].
Research questions: 1. What is the adoption rate of DEMO among DEMO Professionals in practice? 2. What are the factors that can influence the decision of a DEMO Professional to adopt or ignore DEMO? (p.2)
Sample: DEMO professionals [24]
Approach: ─ White-box approach: to define questions, DEMO is first thoroughly studied through a literature review. Then two surveys have been performed. ─ A researcher who understands DEMO asks people who master DEMO about adoption matters.
Methods: Based on a literature review about what DEMO is and how it works on the one hand and about method adoption on the other hand, Khavas elaborated first a quantitative survey and later a qualitative analysis based on semi-structured interviews.
Results: ─ Identification of several levels of adoption in an organization: individual, project, unit or organization. ─ Identification of factors that influence adoption. ─ Recommendations to DEMO professionals to ease the adoption of DEMO.
Table 2. Dumay et al., 2005, [11] - Professional versus intended application of DEMO

Source: Dumay et al., 2005, [11] - Conference Proceedings
Subject: Subject of the DEMO evaluation included in the paper: find out how the professional application of DEMO differs from its intended application.
Motivation: "Devise several recommendations on how the Language/Action Perspective (LAP) can improve its footprint in the community of Information Systems Development practice." (p.78) LAP is an approach for the design of Information Systems.
Research questions: 1. What is the relationship between DEMO theory and its intended application? 2. How does the professional application of DEMO differ from its intended application? 3. Can LAP unify the apparent incompatible social and technical perspectives present in Information Systems Development practice? (p.78)
Sample: DEMO practitioners
Approach: DEMO evaluation by practitioners is a means of evaluating in practice the LAP. The idea is to study DEMO theory, then identify the proposed
Methods: ─ To study DEMO theory: the framework proposed by Mingers and Brockles [START_REF] Mingers | Multimethodology: Towards a framework for mixing methodologies[END_REF] to analyse methodologies has been used. ─ To study DEMO in practice: a survey has been sent by email to practitioners about DEMO application contexts (domain, duration, projects). Then a 4-hour workshop has been organised with the 19 practitioners amongst the survey respondents willing and able to attend. The subject of the workshop was DEMO areas of application.
Table 3. Structure of the questionnaire used as a semi-structured interview guideline

Themes | Designer | Sponsor | …
A | Questions | Questions | Questions
B | Questions | Questions | Questions
C | Intention and experience questions | Intention and experience questions | Intention and experience questions
D | Intention and experience questions | Intention and experience questions | Intention and experience questions
… | Intention and experience questions | Intention and experience questions | Intention and experience questions
3 "Theories seek to predict or explain phenomena that occur with respect to the artefact's use
(intention to use), perceived usefulness, and impact on individuals and organizations (net
benefits) depending on system, service, and information quality (DeLone and McLean 1992;
Seddon 1997; DeLone and McLean 2003)", cited by Hevner et al. (p.77)[35] | 45,713 | [
"1002473",
"1002474",
"1002475"
] | [
"371421",
"371421",
"371421",
"300856"
] |
01474783 | en | [
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01474783/file/978-3-642-41641-5_19_Chapter.pdf | Graham Mcleod
email: [email protected]
A Business and Solution Building Block Approach to EA Project Planning
Keywords: Enterprise Architecture, Building Blocks, Project Scope, Planning, TOGAF, Inspired EA Frameworks
Many EA groups struggle to establish an overall programme plan in a way that is integrated, achievable and understandable to the stakeholder and sponsor community as well as the downstream implementation groups, including IT, Process Management, Human Resources and Product Management. This paper presents an approach that achieves these objectives in a simple way. The approach is currently being implemented in a fairly new enterprise architecture function within an aggressively expanding Telco, with promising results. The problem is introduced and a solution including a meta model and visual representations is discussed. Early findings indicate that the technique is simple to apply as well as effective in establishing shared understanding between the EA function, project sponsors, project stakeholders and IT personnel. The technique is explicated with an example that should make it easy for others to replicate in their own setting.
Introduction to the Problem
The author and colleagues are engaged in a consulting capacity with a rapidly expanding Telco in South Africa. The organization has a newly established enterprise architecture (EA) function. EA is gaining good traction and driving several themes, including: building the EA capability and governance; implementing an ambitious five year growth strategy; providing architectural oversight to active projects and supporting a collection of projects focussed on improvements to the core value chain. A traditional value chain approach, ala Porter, has been employed [START_REF] Porter | How information gives you competitive advantage[END_REF]. A blend of the Inspired [START_REF] Mcleod | The Inspired Enterprise Architecture Frameworks[END_REF][START_REF] Mcleod | An Inspired Approach to Business Architecture[END_REF] and TOGAF [START_REF]TOGAF ® Version 9[END_REF] EA methods and frameworks is being used, with the emphasis on the former. This paper describes a situation and solution relative to the "enhancement of core value chain" projects. The solution will in time be more broadly applied in the strategic theme as well. We believe the approach has general applicability for EA teams in other organizations and settings.
The situation relative to the core value chain involved a number of projects which had already begun, prior to the architecture oversight. Two prominent ones were Quoting and Billing. The core value chain and their position in this can be seen in Figure 1.
The Quote and Bill value chain elements were being addressed by active projects at the time of this intervention. Note: The value chain model did not initially include the support capabilities, viz. Product Management and Data and Information Management. The latter was also an active (but early stage) project in the environment. Product Management was a capability supported by a business department. Both the Quote and Billing projects highlighted the need for more capable, flexible, integrated and consistent Product Management, leading to it being identified as a focus area and the creation of a new project to address the related requirements. The evolution of the models and approach is described in the following sections.
Problems in stakeholder expectations, sponsor communication and development project alignment were highlighted by a steering body meeting which reviewed the Quotations project. It emerged that there was little consensus upon the scope of the project, the dependencies between various elements and the release plan. Stakeholders and the sponsor were unclear what would be delivered, in what tranches and when. The EA function was asked to audit the project and make recommendations. The audit found that there was a traditional Business Requirements Specification (BRS) which had been done by a business analyst after extensive interviews with personnel in the line functions affected. The BRS effectively sketched an "end state" for the complete quoting automation, including embedded capabilities. The latter included product management, customer information, existing install base information and document generation capabilities. The scope of the BRS was thus large. It was not broken into any releases or delivery packages. The BRS had a tentative solution design, but this was not comprehensive. The major contribution of the BRS was the documentation of the business requirements at a logical level.
The development team in IT, meanwhile, was pursuing an agile project management approach [START_REF]Agile Software Development: Current Research and Future Directions[END_REF][START_REF] Dyba | Empirical studies of agile software development: A systematic review[END_REF][START_REF] Vlaanderen | The agile requirements refinery: Applying SCRUM principles to software product management[END_REF], with a backlog of requirements managed in a tool. This detailed requirements at a more fine grained level and the team had identified some delivery tranches, which were variously dubbed versions and releases in discussions. The information was not in a form that could be easily packaged for consumption by the sponsor, stakeholders, EA or program management.
Programme management had visibility of the project via the previously mentioned steering body reviews. The discussions were ineffective as the terminology between releases and versions was not standardised, the agile requirements were not visible, and the allocation of capabilities between versions/releases was fluid. The problems could be summarised as follows: scope of requirements not clear; decomposition into releases not clear; definition of release and version terminology not standardised; lack of agreement between stakeholder expectations and project team plan; lack of visibility of capabilities to be delivered, dependencies and timing at program and project management levels; duplication of effort across projects; lack of traceability of requirements from traditional BRS to agile environment; confusion between requirements and solution elements; unclear link between value chain and supporting projects.
The audit was followed by a facilitated session where we teased out the capabilities contained in the BRS and the agile requirements at a business level and defined these as inter-related business building blocks (BBBs). TOGAF [4 section 37.2] recommends using building blocks to define required components of architecture (Architecture Building Blocks) and solution (Solution Building Blocks). TOGAF has a somewhat schizophrenic definition of building blocks: on one hand defining them thus:
"Architecture Building Blocks (ABBs) typically describe required capability and shape the specification of Solution Building Blocks (SBBs). For example, a customer services capability may be required within an enterprise, supported by many SBBs, such as processes, data, and application software.
Solution Building Blocks (SBBs) represent components that will be used to implement the required capability. For example, a network is a building block that can be described through complementary artifacts and then put to use to realize solutions for the enterprise. "
but also at another point [section 33.2] defining them as the elements in the content or meta model:
"The content metamodel provides a definition of all the types of building blocks that may exist within an architecture, showing how these building blocks can be described and related to one another. For example, when creating an architecture, an architect will identify applications, ''data entities'' held within applications, and technologies that implement those applications. These applications will in turn support particular groups of business user or actor, and will be used to fulfil ''business services''."
This usage is more akin to the "atomic elements" of the Zachman framework [START_REF] Zachman | Extending and formalizing the framework for information systems architecture[END_REF]. Our usage will be more aligned with the capability based definition. We chose to refer to the higher level, capability based building blocks which are independent of technology and implementation choices as Business Building Blocks (BBBs), while we use a similar approach to TOGAF for Solution Building Blocks (SBBs). The former relate to capabilities that the business requires. They can encapsulate services, functionality, data and user access requirements. The latter relate to components chosen to meet the needs of requirements identified in BBBs. They will typically represent commercial off the shelf systems (COTS), data sources or key technologies.
The workshop on BBBs resulted in a whiteboard / Powerpoint model shown as Figure 2. Boxes denote Business Building Blocks, i.e. capabilities; green arrows denote dependencies; dashed arrows denote events; green bubbles are release numbers and yellow text blocks are building block identities. Generic support capabilities were normalised from the initial diagram and are shown at the base as horizontal lines; user interface modes are shown across the top as horizontal lines. Looking at the figure, we can see that Release 1 would comprise: a web interface; Product Definition (for a limited product set); Price Calculation and an Audit Trail. Release 2 would add: Product Definition for additional products; Workflow and Event handling; Document composition.
Fig. 2. Business Building Blocks for Quoting
We standardised terminology to describe a release as a package of capability that would be delivered to the business. Versions may be used by the maintenance team within IT to refer to adjustments of a release without major new functionality.
The initial iteration did not have the release annotations (green callout bubbles) or the block ids (yellow text). The initial diagram was validated with the business analyst, the IT project leader and participants from the Billing project who would deal with the results and data downstream. The programme manager, who was also a key player in defining the longer term strategy, participated in identifying dependencies and desirable business capabilities in terms of value. The block ids were subsequently added to provide concise and unambiguous reference as well as support traceability.
A later audit and similar building block definition for the Billing project surfaced common requirements in the Product Management capability. Quoting and Billing projects had been addressing this independently. Other projects in the organization were also finding this a dependency and trying to address shortfalls. Consequently, the EA function initiated a separate Product Management project. Defining the BBBs for Quoting also highlighted the need for a capable document composition solution. This was required in other areas of the business and a project was initiated to investigate this.
A further scoping issue was related to content of the delivered solution in terms of product/service types to be addressed per release. The initial release of Quoting would only provide support for one major product. Subsequent releases will add additional products, which in some cases involves not only the product modeling and capture into the solution systems, but additional system and interfacing capability, e.g. to handle "bundles" which are products composed from other offerings. We did not want to complicate the BBB diagram with this dimension, so opted to create a matrix which provides a view of capability, release timing and content coverage. The latter is provided in the cells as either (i) a list of product identities or (ii) sub-capabilities of building blocks (also shown in Figure 1 as bullet points within the block text). This is shown as Table 1.
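As an illustration of how such a release matrix could be held and queried programmatically, the following sketch uses a simple in-memory structure. It is an assumption for illustration only, not the tooling used in the initiative; the capability names and release contents are taken from the Quoting example above.

# Map (capability, release) to the content covered in that release.
release_matrix = {
    ("Web interface", 1): [],
    ("Product Definition", 1): ["limited product set"],
    ("Price Calculation", 1): [],
    ("Audit Trail", 1): [],
    ("Product Definition", 2): ["additional products"],
    ("Workflow and Event handling", 2): [],
    ("Document composition", 2): [],
}

def release_contents(release: int) -> dict[str, list[str]]:
    """Answer 'what does release N deliver, and with which content coverage?'."""
    return {cap: content for (cap, rel), content in release_matrix.items() if rel == release}

print(release_contents(1))
# {'Web interface': [], 'Product Definition': ['limited product set'], 'Price Calculation': [], 'Audit Trail': []}

Such a structure keeps the capability, release timing and content coverage dimensions separate, which is exactly what the matrix adds on top of the building block diagram.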
Taking it Further
Following the development of the BBB diagram, addition of release tagging, and the release content and timing matrix, a meta model was developed to allow repository and tool support for the building block model techniques. The organization uses the EVA Netmodeler web and repository based EA toolset [START_REF] Inspired | EVA Netmodeler[END_REF]. Customising the meta model and defining suitable visual notation "model types" allows support for the required techniques in a shared repository. The EA team believed this would enhance rigour, promote sharing and provide visibility across projects. The resultant meta model is shown as Figure 3.
Building Block Categories allow differentiating between business and solution building blocks. Note that multiple own-type relationships exist between building blocks. These are necessary to capture the semantics between building blocks within a layer, as well as between business and solution building blocks. Content elements are related to a building block and a release. They occur in varieties of Feature, Product/Service and Location. These correspond to the cells in the previously presented release matrix (Table 1). The relationship via Model to Value Adding Activity links the building block model to the relevant value chain step.
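A minimal sketch of this meta model as record types is given below. It only mirrors the structure described above; the class and field names are assumptions chosen for readability, and the actual schema is the one defined in the EVA Netmodeler repository.

from dataclasses import dataclass, field
from enum import Enum

class BlockCategory(Enum):      # Building Block Category
    BUSINESS = "Business building block"
    SOLUTION = "Solution building block"

class ContentKind(Enum):        # varieties of Content Element
    FEATURE = "Feature"
    PRODUCT_SERVICE = "Product/Service"
    LOCATION = "Location"

@dataclass
class Release:
    number: int
    target_date: str            # e.g. "April 2013"

@dataclass
class ContentElement:
    kind: ContentKind
    description: str
    release: Release            # a content element is related to a release ...

@dataclass
class BuildingBlock:
    id: str                     # e.g. "BBB-07" (hypothetical identifier)
    name: str
    category: BlockCategory
    depends_on: list["BuildingBlock"] = field(default_factory=list)   # block-to-block relationships within a layer
    realised_by: list["BuildingBlock"] = field(default_factory=list)  # business block -> supporting solution blocks
    content: list[ContentElement] = field(default_factory=list)       # ... and to a building block
    value_adding_activity: str = ""  # link, via the model, to the relevant value chain step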
We were able to define visual model types in the tool to create equivalent business building block diagrams to the Powerpoint model shown earlier. See Figure 4. These do not have the visual cues for release mapping and BBB id but these attributes are captured behind the scenes and can be reported upon or navigated easily.
We also created a visual model type for a solution building block (SBB) diagram. This was completed for the Quoting project, with input from the project team, the business analyst and members of the Billing team who had high knowledge of the existing infrastructure and telecomms processes in general. The resultant SBB model is shown as Figure 5. This shows actual incumbent or anticipated application system components, technologies and data sources. In building this view across the two projects, the need for a messaging infrastructure and consistent data for shared domain objects was apparent. The former was represented in the diagram as an Enterprise Service Bus (ESB) while the latter translates to the Master Data Management (MDM) element. Projects are now underway in the environment to address these requirements. The revised building block models and the release matrix were presented to stakeholders and the project sponsor in different sessions. Both sessions went extremely well. Stakeholders were easily able to apprehend the business capabilities that were represented and the relationships and dependencies between them. They were also able to grasp the implications of dependencies for release composition. It was clear to them what each release would deliver in terms of functionality and content. With the release matrix, this also relates to delivery time expectations. The three deliverables thus greatly facilitated accurate communication and arriving at a consensus view with stakeholders. Stakeholders expressed their appreciation that they now had a grasp of the scope, capabilities, delivery schedule, release breakdown and dependencies which would enable them to plan properly and engage meaningfully with the development team.
The project sponsor was also delighted with the results, exclaiming that she, for the first time in months, had an accurate and usable picture of what the project was about, what it would deliver, how it was put together and what would be required to deliver it. The programme manager/strategist also expressed her high satisfaction with the clarity which had been achieved.
From an EA perspective we were keen to expand the approach to other projects and to ensure that the clarity prevailed. We thus initiated a number of other activities:
• The business analyst was asked to bring the BBB perspective into the BRS. This was achieved with the BBB ids shown earlier. The BRS was tagged with the relevant ids to show where the requirements mapped onto building blocks. We also encouraged the programme manager and business analyst to align the programme management milestones with the BBB view and to update the project charter to reflect the release plan. In the end they chose to do the alignment in the milestones in the programme management tool, Sciforma [START_REF] Sciforma | Sciforma 5.0 Programme Management[END_REF] and to keep the release map matrix in the BRS.
• The development team was asked to tag the requirements in the agile management tool with the BB ids, so that we could track progress on development, testing and delivery of BB capabilities.
• The Billing team was engaged to produce similar models for their scope.
• A BBB model was developed for the Product Management capability in the organization. This, together with a phase plan, is now driving improvements in this area
• Other teams (Unified Communications and Data and Information Management) have been engaged and are producing similar models.
Sponsors, stakeholders, EA personnel, programme managers and the strategist have all embraced the approach and are very positive about its benefits. The jury is still out on the development group response. They have complied, but we suspect that, for them, there is a bit more overhead than for the other groups, while they see less direct benefit to themselves. There may also be an element of resentment that an oversight group has interfered in a work process with which they were happy. We hope that over time they will be convinced of the benefits in terms of improved stakeholder communication and a happy sponsor.
To summarise, the benefits include:
• Clear scope of projects via BBBs representing business capabilities and their collation into a picture per project
• Clear tie between value chain and project scope via decomposition of value chain activities into models holding the BBBs supporting that business capability
• Clear release content and planning, via the tagging of the BBBs and the collation of these into releases with delivery times
• Improved communications and agreement between strategy, programme management, sponsor, stakeholders and project team via accessible shared and simple pictures and matrix
• Identification of common requirements at both business and technical levels via visibility in respective building block diagrams
• Common basis for prioritisation within and across projects using the shared definitions of building blocks, dependencies identified and stakeholders knowledge of business issues and benefits
• Simple views at business and technical levels which are not arduous to produce and allow rich discussion and a shared understanding
• Traceability of requirements from business to solution to implementation effort via the linkages created between the building blocks views, the BRS and the agile requirements management
Limitations
The experience is reported for one organisation only. The techniques have now been applied to several projects and seem to work equally well across these. Some of these projects are core value chain and targeted at operational improvement while some are more strategic, aimed at business change (e.g. introduction of a new product category). Others, such as Product Management and Data and Information Management are broad supporting capabilities across the value chain. However, all projects are still in the same organisational setting and culture. From our experience in many organisations, industries and EA groups we believe that they will have broad applicability and will work in many settings, but this is no guarantee that they will actually work, or work as well there.
Some of the benefits mentioned, particularly those related to commonality across projects and improved traceability are likely to be reduced, or harder to obtain, if the techniques are not supported by a shared and capable repository.
The EA function is working to integrate the approach with business cases and prioritization in the organization. The models will also be used to underpin estimating for capital budgeting purposes.
More work can be done in the area of integration and feedback with the agile management practices used in the development group.
Integration with programme management is underway, but this could be enhanced, and possibly to some extent automated by feeding project, building block and release information across to the programme management tool.
Integration with portfolio management could be investigated for better reuse of information about the existing solution building blocks and infrastructure elements. Full baselines are still being built in the environment, and this integration would require those baselines as well as a mapping to reference models (e.g. the Telecommunications Application Model (TAM), the Shared Information/Data (SID) model and the eTOM process model [START_REF]FrameWorkx[END_REF] from the TM Forum).
A longitudinal assessment of how the techniques work down the full project lifecycle for requirements traceability should also be undertaken.
Fig. 1. Core value chain and support capabilities
Fig. 3. Meta model for supporting the building block approach
Fig. 4.
Fig. 5. Solution building blocks model
Table 1. Release matrix

Target dates: Release 1 - April 2013; Release 2 - June 2013.

Target Capability | Systems Interdependencies | Releases 1 to 5
Product Definitions | Tribold | EIA, EIA
Price Calculation | Tribold | EIA
Audit Trail | | EIA
Proposal Document Creation | Qvidian | EIA
Workflow | | EIA
Customer Account Create/Read | Siebel | EIA
Sales Order/Work Order Creation | Siebel | EIA
Installs Project Management | Siebel | EIA
Customer Asset Read | Siebel | EIA
Opportunity Management | Siebel |
Pipeline Management | Siebel |
Sales Compensation Calculations | Oracle Incentive Compensation |
Web/Mobile Access | |
Reporting & BI | Microstrategy |
3rd Party Communication | Informatica |
Procurement/Stores | |
Knowzone Sales Document Creation | Knowzone |
Authority Matrix | 22,250 | [
"1002476"
] | [
"303907"
] |
01474795 | en | [
"shs",
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01474795/file/978-3-642-41641-5_3_Chapter.pdf | Georgios Plataniotis
Sybren De Kinderen
email: [email protected]
Dirk Van Der Linden
Danny Greefhorst
email: [email protected]
Henderik A Proper
An Empirical Evaluation of Design Decision Concepts in Enterprise Architecture
Keywords: Enterprise Architecture, Design Rationale, Design Decision concepts, Evaluation, Survey
Enterprise Architecture (EA) languages describe the design of an enterprise holistically, typically linking products and services to supporting business processes and, in turn, business processes to their supporting IT systems. In earlier work, we introduced EA Anamnesis, which provides an approach and corresponding meta-model for rationalizing architectural designs. EA Anamnesis captures the motivations of design decisions in enterprise architecture, alternative designs, design criteria, observed impacts of a design decision, and more. We argued that EA Anamnesis nicely complements current architectural languages by providing the capability to learn from past decision making. In this paper, we provide a first empirical grounding for the practical usefulness of EA Anamnesis. Using a survey amongst 35 enterprise architecture practitioners, we test the perceived usefulness of EA Anamnesis concepts, and compare this to their current uptake in practice. Results indicate that while many EA Anamnesis concepts are perceived as useful, the current uptake in practice is limited to a few concepts -prominently 'rationale' and 'layer'. Our results go on and show that architects currently rationalize architectural decisions in an ad hoc manner, forgoing structured templates such as provided by EA Anamnesis. Finally, we interpret the survey results discussing for example possible reasons for the gap between perceived usefulness and uptake of architectural rationalization.
Introduction
Enterprise Architecture (EA) modeling languages, such as the Open Group standard language ArchiMate [START_REF]The Open Group: ArchiMate 2.0 Specification[END_REF], connect an organization's IT infrastructure and applications to the business processes they support and the products/services that are in turn realized by the business processes. Such a holistic perspective on an enterprise helps to clarify the business advantages of IT, analyze cost structures and more [START_REF] Lankhorst | Enterprise architecture at work: Modelling, communication and analysis[END_REF].
While EA modeling languages allow for modeling an enterprise holistically, the design decisions behind the resulting models are often left implicit.
As discussed in our earlier work [START_REF] Plataniotis | Ea anamnesis: towards an approach for enterprise architecture rationalization[END_REF], the resulting lack of transparency on design decisions can cause design integrity issues when architects want to maintain or change the current design [START_REF] Tang | A rationale-based architecture model for design traceability and reasoning[END_REF]. This means that, due to a lack of insight into the rationale, new designs are constructed in an ad hoc manner, without taking into consideration constraints implied by past design decisions. Also, according to a survey on software architecture design rationale [START_REF] Tang | A survey of architecture design rationale[END_REF], a large majority of architects (85.1%) acknowledged the importance of design rationalization for justifying designs.
Furthermore, anecdotal evidence from six exploratory interviews we conducted with senior enterprise architects suggests that enterprise architects are often external consultants. This situation widens the architectural knowledge gap, since without rationalization architects lack insight into the design decision making of an organization that is new to them.
In earlier work [START_REF] Plataniotis | Ea anamnesis: towards an approach for enterprise architecture rationalization[END_REF][START_REF] Plataniotis | Capturing decision making strategies in enterprise architecture -a viewpoint[END_REF][START_REF] Plataniotis | Relating decisions in enterprise architecture using decision design graphs[END_REF], we introduced an approach for the rationalization of enterprise architectures by capturing EA design decision details. We refer to this approach as EA Anamnesis, from the ancient Greek word ανάμνησις (/ænæmˈniːsɪs/), which denotes memory and repair of forgetfulness. The EA Anamnesis meta-model is grounded in similar approaches from the software engineering domain, prominently in the Decision Representation Language (DRL) [START_REF] Lee | Extending the potts and bruns model for recording design rationale[END_REF]. At this stage, EA Anamnesis complements the ArchiMate modeling language [START_REF]The Open Group: ArchiMate 2.0 Specification[END_REF] by conceptualizing decision details (alternatives, criteria, impacts) and by grouping EA decisions in three different enterprise architecture layers (Business, Application, Technology) in accordance with the ArchiMate specification.
In this paper we empirically evaluate the design decision concepts from the EA Anamnesis meta-model by means of a survey amongst enterprise architecture practitioners. On the one hand, our study shows that a majority of EA practitioners deem EA Anamnesis concepts, such as "motivation" and "observed impact", useful, in that these concepts help them with the maintenance and justification of enterprise architectures. On the other hand, however, our study shows a limited uptake of rationalization in practice. For one, while many architects capture a decision's motivations, there is less attention for capturing the observed impacts of decisions. Finally, we find that, currently, there is little reliance on a structured rationalization approach such as provided by EA Anamnesis. Rather, rationalization of decisions (if any) is done in an ad hoc manner, relying on unstructured tools such as MS Word or Powerpoint. Also, we speculate that the gap between perceived usefulness and uptake in practice is, at least partially, due to a lack of awareness of rationalization and of the potential usefulness it has for architectural practice.
This paper is structured as follows. Sect. 2 presents the EA Anamnesis concepts and a short illustration of them. Sect. 3 presents the evaluation setup, while Sect. 4 presents the results of our study. Subsequently, in Sect. 5 we discuss the survey results. Sect. 6 concludes.
Background
To make the paper self contained, this section presents the design rationale concepts of the EA Anamnesis approach that were confronted to practitioners during our study (in Sect. 2.1), accompanied by an illustration of our approach with a case study from the insurance sector (in Sect. 2.2).
EA Anamnesis Design Decision Concepts
In this paper we focus on decision detail concepts that provide qualitative rationalization information for design decisions. According to [START_REF] Tang | A rationale-based architecture model for design traceability and reasoning[END_REF], architectural rationale can be divided into three different types: qualitative design rationale, quantitative design rationale and alternative architecture rationale.
The meta-model of the EA Anamnesis approach is depicted in Fig. 1. To limit survey length we focus our study on a set of key concepts. Concepts of the meta-model that provide additional details, such as title (a descriptive name of a decision), are not discussed.
Below we provide a brief description of the concepts used in our survey.
Rationale: The reason(s) that leads an architect to choose a specific decision among the alternatives. According to Kruchten [START_REF] Kruchten | An ontology of architectural design decisions in software intensive systems[END_REF] a rationale answers the "why" question for each decision.
Alternative: This concept illustrates the EA decisions that were rejected (alternatives) in order to address a specific EA issue [START_REF] Kruchten | Building up and reasoning about architectural knowledge[END_REF][START_REF] Tyree | Architecture decisions: Demystifying architecture. Software[END_REF].
Layer: In line with the ArchiMate language [START_REF]The Open Group: ArchiMate 2.0 Specification[END_REF], an enterprise is specified in three layers: Business, Application and Technology. Using these three layers, we express an enterprise holistically, showing not only applications and physical IT infrastructure (expressed through the application and technology layers), but also how an enterprise's IT impacts/is impacted by an enterprise's products and services and its business strategy and processes.
Observed impact: The observed impact concept signifies an unanticipated consequence of an already made decision on an EA artifact. This is opposed to anticipated impacts.
For us the main usefulness of capturing observed impacts is that they can be used by architects to avoid decisions with negative consequences in future designs of the architecture.
Impact (Decision traceability): the "Impact" concept makes explicit relationships between EA decisions. For example, how an IT decision affects a business process level decision or vice versa.
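Purely as an illustration of how these concepts fit together, a rationalized decision can be pictured as a small record. The sketch below is not the EA Anamnesis tooling or its formal meta-model; it is a hypothetical Python rendering, and the field names are assumptions chosen for readability.

from dataclasses import dataclass, field
from enum import Enum

class Layer(Enum):      # the three ArchiMate layers used to group decisions
    BUSINESS = "Business"
    APPLICATION = "Application"
    TECHNOLOGY = "Technology"

@dataclass
class Decision:
    id: str
    title: str                    # descriptive name of the decision
    layer: Layer
    rationale: str                # why this decision was chosen over the alternatives
    alternatives: list[str] = field(default_factory=list)      # rejected EA decisions
    impacts: list[str] = field(default_factory=list)           # ids of related decisions (decision traceability)
    observed_impacts: list[str] = field(default_factory=list)  # unanticipated consequences noticed after the fact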
Illustrative example
We now briefly illustrate how the concepts of our approach can be used to express architectural design rationale, using a fictitious insurance case presented in our previous work [START_REF] Plataniotis | Ea anamnesis: towards an approach for enterprise architecture rationalization[END_REF].
ArchiSurance is an insurance company that sells car insurance products using a direct-to-customer sales model. The architectural design of this sales model, created in the EA modeling language ArchiMate, is depicted in Fig. 2.
Two business services support the sales model of ArchiSurance: "Car insurance registration service" and "Car insurance service". ArchiMate helps us to understand the dependencies between different perspectives on an enterprise. For example, in Fig. 2 we see that the business service "Car insurance registration service" is realized by a business process "Register customer profile". In turn, we also see that this business process is supported by the application service "Customer administration service".
Although disintermediation reduces operational costs, it also increases the risk of adverse risk profiles [START_REF] Cummins | The economics of insurance intermediaries[END_REF], incomplete or faulty risk profiles of customers. These adverse profiles lead insurance companies to calculate unsuitable premiums or, even worse, to wrongfully issue insurances to customers. As a response, ArchiSurance decides to use intermediaries to sell its insurance products. After all, compiling accurate risk profiles is part of the core business of an intermediary [START_REF] Cummins | The economics of insurance intermediaries[END_REF].
In our example scenario, an external architect called John is hired by ArchiSurance to help guide the change to an intermediary sales model. John uses ArchiMate to capture the impacts that selling insurance via an intermediary has in terms of business processes, IT infrastructure and more. For illustration purposes we will focus on the translation of the new business process "Customer profile registration" to EA artifacts in the application layer. The resulting ArchiMate model is depicted in Fig. 3.
In Fig. 3 we see for example how a (new) business process "customer profile registration", owned by the insurance broker (ownership being indicated by a line between the broker and the business process), is supported by the IT applications "customer administration service intermediary" and "customer administration service ArchiSurance". For this simplified scenario, 13 architectural design decisions were taken. These design decisions, in terms of our design decision concepts, were captured with EA Anamnesis by John during the transformation process.
Let us assume that a newly hired Enterprise Architect, Bob, wants to know the rationale behind the architectural design that supports the new business process of Archisurance. To this end, he relies on decision rationales captured by John. Table 1 shows one such rationalized decision: design decision 13, for the IT application "customer administration service intermediary".
As can be observed, design decision 13 regards the acquisition of the Commercial off-the-shelf (COTS) application B. Bob can determine the alternatives, "COTS application A" and "upgrade of the existing IT application". Furthermore, Bob determines that John's rationale for the selection of COTS application B was that COTS application B was more scalable.
Next, let us assume that Bob is interested in reviewing the relationship of this individual decision with other decisions. Firstly, by examining the Layer field he can identify that this decision is an application layer decision. Moreover, by examining the Impact relationships field, he can understand that this decision is related to 2 other decisions: decision 07 (a business layer decision) and decision 10 (an application layer decision).
Last but not least, Bob can inspect the possible unanticipated outcomes of this decision. By examining the Observed impact field, he is aware of an issue that arose in the customer profile registration business process because of the unfamiliarity of clerks with the new application interface.
For a more detailed illustration of EA Anamnesis approach, how the different concepts are interrelated and how this rationalization information is visualized, see earlier work [START_REF] Plataniotis | Relating decisions in enterprise architecture using decision design graphs[END_REF].
Evaluation
In this section we describe the objectives of our study, the evaluation method for the validation of our decision design concepts, and limitations and considerations of the evaluation.
Objectives
The main objective of this study is to identify the usefulness of design rationale approaches in the context of Enterprise Architecture. As we mentioned in the introduction, anecdotal interviews with EA practitioners gave us a first insight regarding the perceived usefulness of design rationale approaches in EA. In particular, we aim at identifying the perception of EA practitioners regarding our design rationale concepts.
For our study we address three research questions:
Study setup
Participants: Participants were gathered during a professional event on enterprise architecture organized by the Netherlands Architecture Forum (NAF). NAF is a leading Dutch (digital) architecture organization, concerned with the professionalization of Enterprise and IT Architecture. A total of 65 people started the survey, 35 of whom actively finished the study. Given the different focus of the individuals, the number of participants for each individual part of the survey fluctuated between 33 and 35. The majority of the participants were of Dutch nationality, had at least several years of professional experience in enterprise architecture, and were fluent in the language the survey was taken in (English).

Materials: The questions and input used for this survey derived from previous research and professional workshops on the use and creation of architecture principles in Dutch knowledge management and enterprise modeling organizations. The data analyzed and used for this study derives from a subset of the total survey, which contained additional sections dealing with other, related, factors of architecture principle creation and use. All questions were presented in English because non-Dutch speakers were expected. Furthermore, the survey was planned to be extended to other European countries afterwards.

Method: The survey consisted mostly of structured and closed questions. The participants were given the context that the questions dealt with the larger area of architecture principles, specifically introducing them to the fact that principles provide a foundation for EA decisions, and what factors are important for such decisions.
To investigate to what extent the concepts of EA Anamnesis are grounded in reality, we queried for each concept (a short explanation of each concept was provided) whether participants considered them to 1) help with the maintenance of an enterprise architecture, 2) help them to justify an enterprise architecture, and 3) be currently actively documented in the participant's organization or professional experience.
For each of these dimensions participants could answer whether they disagreed, agreed, or strongly agreed with the dimension applied to the given concept. The format of the answers was adopted from a bigger survey, which was executed by an outside party. The outside party had already structured the questions' format of this survey and as such we adopted the same answering formats in order to reduce any potential confusion as much as possible. To follow up on the current practical state of design decisions, we then enquired whether any standardized approaches or processes existed for the capturing of EA design decisions.
Participants were given the choice of either stating that, for their organization, such approaches exist, do not exist, or that they were uncertain of their existence. In the case of nonexistence of documentation approaches, participants had the possibility to expose the reasons for this through a hybrid structure with predefined answers as well as free text comments.

Data analysis: The data resulting from the main questions (whether our use case concepts help in the maintenance and justification of an EA and whether they are documented) were quantified by assuming that "strongly agree" implied "agree", so that such answers could be treated as "agrees". Based on this, we calculated the total amount of "agree" and "disagree" answers for each of our concepts as they pertained to the investigated dimensions. Of course, questions that were not filled in were disregarded in our calculation. While the size of these groups did not differ much (resp. 33, 34 and 35 participants), and comparison between them should thus be a valid endeavor, care should also be taken not to assume they represent a breakdown of opinions in the exact same group. The data resulting from the question regarding the use of standardized templates for documenting EA design decisions were analyzed in a straightforward way, calculating the percentages of yes, no and uncertain answers for the group (n=35) of participants who answered this question.
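As an illustration, this aggregation can be reproduced in a few lines. The sketch below is hypothetical: pandas is assumed as tooling and the column names are invented for illustration; neither is part of the original study.

import pandas as pd

# Hypothetical export: one row per participant, one column per (concept, dimension)
# question, with answers "disagree", "agree" or "strongly agree"; blanks = not answered.
answers = pd.read_csv("survey_export.csv")

def agreement_share(column):
    """Treat 'strongly agree' as 'agree' and return the shares per category."""
    filled = column.dropna()                      # unanswered questions are disregarded
    positive = filled.isin(["agree", "strongly agree"])
    return {
        "positive": positive.mean(),              # agree + strongly agree
        "negative": 1.0 - positive.mean(),
        "strongly agree": (filled == "strongly agree").mean(),
        "n": len(filled),
    }

for question in ["rationale_helps_maintenance", "rationale_helps_justification"]:
    print(question, agreement_share(answers[question]))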
Survey limitations
The main difficulty in executing this study was that our questions had to be integrated into a larger study, of which the structure and answer formats were already determined. Unfortunately, the opportunity to conduct a dedicated survey on design rationale with such a number of participants was quite limited due to the limited availability of practitioners. Therefore, we were restricted in the number of questions we could incorporate into this larger study.
Thus, in order to ensure that participants would not be confused by radically different question and answering formats, we had to deal with a suboptimal set of answers for our first question. Ideally, the question of whether certain concepts apply to a given dimension would be asked on a Likert scale, with equal amounts of negative and positive answers. However, as the goal of the wider survey was to elicit as many (strong) opinions as possible from practitioners, it was chosen to use answer structures which contained no neutral ground and thus forced people to make a polarized choice.
We will take these issues into account during the analysis of our data, and attempt to account for the possible loss of nuance.
Results
Tables 2, 3 and 4 show the survey results: to what extent EA Anamnesis's concepts help the EA practitioner to (1) maintain the architecture and (2) justify the architecture, by which we mean that the EA Anamnesis concepts can aid in motivating design decisions, and (3) to what extent EA practitioners document the EA Anamnesis concepts in current practice.
For each question, we provide a division into "positive" and "negative", and a subsequent division of "positive" into "agree" and "strongly agree". We do this for the sake of transparency: on the one hand, we want to show aggregate results on positive reactions to a concept, but on the other hand we do not want to hide that the questions were posed in a possibly biased manner (as discussed in Sect. 3.3).
Furthermore, Table 5 shows us to what extent practitioners use standardized templates to capture EA design rationales. In case practitioners forego the use of standardized templates, Table 6 shows why this is so, by means of closed answers (such as "no time/budget") and open answers, whereby the architects could provide a plaintext description (such as "Enterprise Architecture is not mature enough").
Discussion
Generally, the results from Tables 2 and 3 indicate that EA practitioners perceive that the EA Anamnesis concepts will help them with the maintenance and justification of enterprise architecture designs. This can be concluded from the fact that, for each concept, a majority of architects agrees with its usefulness for both maintenance and justification. Yet, the results from Table 4 indicate that while the design rationale concepts are considered useful, the majority of them are not documented by practitioners. While many EA practitioners capture the rationale for a decision (70%) and the EA layer (79%), a majority of them do not capture the observed impact, the decision impact or the rejected alternatives.
Moreover, in cases where practitioners document decisions, 40% of them use standardized templates for documentation, while 23% of them are not aware of the existence of such templates. The remaining 37% of practitioners, who do not use standardized templates, find that standardized templates are not useful (30%), that there are no available resources in terms of time/budget (3%), or that there is no suitable tool for this (9%). Furthermore, 58% of the EA practitioners do not use standardized templates because they feel covered by documenting design decisions inside MS Word/PowerPoint. Others indicate that enterprise architecture is not a mature practice in their organization.
A possible reason for the currently limited rationalization of Enterprise Architecture designs is that practitioners are insufficiently aware of the potential usefulness of design rationale techniques. This may be caused by the relative immaturity of the Enterprise Architecture field compared to areas in which decision rationalization and their tool support is well established, such as the field of Software Architecture.
Let us now discuss our findings per concept:
Rationale: The Rationale concept, which captures why a decision is taken, is considered an important concept by the majority of practitioners. Specifically, 91% believe that this concept helps with the maintenance of the EA, and 82% believe that it helps to justify existing enterprise architectures. Interestingly, as opposed to the other concepts, the current practice of documenting the rationale of decisions is quite high (70%). We argue that this happens because architects usually have to justify their design decisions to other stakeholders and to the management of the organization.
Rejected alternatives: The majority of practitioners (74%) acknowledge that captured information on rejected alternatives assists them with the maintenance of the enterprise architecture, and 71% of them feel helped with the justification of the enterprise architecture. Practitioners seem to understand that this information provides a better insight into the rationalization process. We speculate that rejected alternatives, in combination with selection criteria, provide them with additional rationalization information by indicating the desired qualities which were not satisfied by these alternatives.
However, Table 4 indicates that only 27% of the EA practitioners capture rejected alternatives. We reason that the effort of capturing rejected alternatives, in combination with unawareness of the potential usefulness of this information, does not motivate practitioners to document this concept. Even if this information is documented, the added value it provides is limited because of the lack of structured documentation. However, when rejected alternatives are combined with other rationalization concepts (such as criteria) they do allow one to better trace the decision making process, as is commonly done in structured rationalization templates for software architecture (see e.g. [START_REF] Tyree | Architecture decisions: Demystifying architecture. Software[END_REF]).
Layer: 91% of the practitioners agree that the concept of layer helps them with the maintenance of an enterprise architecture. The proportion of practitioners who agree that this concept helps them to justify enterprise architectures is 62%. Although the proportion itself is quite supportive, we can observe quite a big difference compared with the question on "helps with maintenance". We argue that this is because the Layer concept is not a justification concept in itself, but when it is combined with the other design rationale concepts it can actually contribute to justification. For example, design decisions that belong to the business layer can impact decisions in the application layer.
Observed impact: A majority of enterprise architects (77%) recognize that explicit information on observed impacts helps them with the maintenance of the enterprise architecture. We speculate that practitioners, while they maintain existing architectures, are expected to use information about the unanticipated outcomes of past decisions in the enterprise to avoid past mistakes. Furthermore, 82% of enterprise architects agree that the observed impact concept helps them with the justification of the EA.
Interestingly however, despite the fact that practitioners recognize the usefulness of capturing the observed impact, only 23% of them have a standard practice to document this concept. We believe that when an unanticipated outcome of a design decision is observed, practitioners focus on immediately solving the issue. From a short-term perspective, the documentation of this observed impact is a minor issue for them. In the long term, however, documenting observed impacts raises awareness of unanticipated outcomes. Another reason could be the lack of a structured environment for architectural rationalization, which would allow architects to relate observed impacts to decisions, layers (impacts on a business process or IT level), and more.
Impact (Decision traceability):
A majority of the practitioners (86%) find that the impact concept can assist them with the maintenance of the enterprise architecture. Moreover, 74% indicate that this concept helps them with the justification of the enterprise architecture. Our approach provides impact (decision traceability) information by making explicit how design decisions are related to each other. The different types of decision relationships, described by the decision relationships concept, provide different types of impact traceability. Regarding the documentation practice, some of the practitioners (42%) capture this concept but the majority of them (58%) still do not document it. In our view this indicates a tendency of practitioners to interrelate their design decisions and EA artifacts. On the other hand, we think that the capturing of decision impacts is still limited since architects lack structured ways to capture design decisions, as we can see in Table 6.
Conclusion
In this paper, we reported on a first empirical evaluation of the EA Anamnesis approach for architectural rationalization. Using data from a survey amongst enterprise architecture practitioners, we found that EA Anamnesis concepts are largely perceived as useful to architectural practice. Yet, we also found that the uptake of rationalization in practice is currently limited to only a few concepts, prominently "rationale". Furthermore, these few concepts are captured in an ad hoc manner, thereby forgoing structured rationalization approaches such as EA Anamnesis.
Finally, we speculated on (1) the distinction between perceived usefulness of rationalization concepts on the one hand, and the uptake in practice on the other, and (2) the seeming current limited use of a structured template for rationalization. A possible explanation is the relative immaturity of the field of Enterprise Architecture, compared to fields where rationalization is well accepted, such as Software Architecture. Such immaturity manifests itself in a lack of awareness of rationalization, including recognizing its potential usefulness for tracing design decisions, as well as in a lack of structured templates for documenting design decisions in enterprise architecture.
However, as we tested only the perceived usefulness of the EA Anamnesis concepts, an in-depth case study should be used to further investigate the claims made in this article. For one, the difference between perceived usefulness and uptake may also be caused by the effort that it takes to capture rationalization information, in addition to a lack of structured templates and of awareness of its usefulness.
Fig. 1. The EA Anamnesis meta-model
Fig. 2. ArchiSurance direct-to-customer EA model
Fig. 3. ArchiSurance intermediary EA model
Question 1: Do enterprise architecture practitioners perceive EA Anamnesis's concepts as useful for the justification and maintenance of EA designs?
Question 2: To what extent do EA practitioners currently capture EA Anamnesis's concepts?
Question 3: If rationalization information is captured, to what extent are structured templates used (such as provided by EA Anamnesis)?
Table 1. EA decision 13 details

Title:                 Acquisition of COTS application B
EA issue:              Current version of customer administration application is not capable to support maintenance and administration of intermediaries application service
Layer:                 Application
Impact relationships:  Business: Decision 07; Application: Decision 10
Alternatives:          COTS application A; Upgrade existing application (in-house)
Rationale:             Scalability: Application is ready to support new application services
Observed Impact:       Reduced performance of customer registration service business process
Table 2. To what extent study participants (n=35) find that EA Anamnesis's concepts help with the maintenance of the enterprise architecture.

Helps with the maintenance of EA
Concept                 Negative   Positive   Positive-Agree   Positive-Strongly agree
Rationale               9%         91%        42%              49%
Rejected Alternatives   26%        74%        43%              31%
EA Layer                9%         91%        46%              45%
Observed Impact         23%        77%        43%              34%
Decision Impact         14%        86%        40%              46%
Table 3. To what extent study participants (n=35) find that EA Anamnesis's concepts help with the justification of the enterprise architecture.

Helps with the justification of EA
Concept                 Negative   Positive   Positive-Agree   Positive-Strongly agree
Rationale               18%        82%        29%              53%
Rejected Alternatives   29%        71%        44%              27%
EA Layer                38%        62%        38%              24%
Observed Impact         18%        82%        50%              32%
Decision Impact         26%        74%        44%              29%
Table 4. To what extent study participants (n=33) currently document the EA Anamnesis concepts.

Current documentation practice
Concept                 Negative   Positive   Positive-Agree   Positive-Strongly agree
Rationale               30%        70%        55%              15%
Rejected Alternatives   73%        27%        27%              0%
EA Layer                21%        79%        40%              39%
Observed Impact         73%        27%        24%              3%
Decision Impact         58%        42%        36%              6%
Table 5. To what extent study participants (n=35) use a standardized template for documenting EA design decisions.

Question                                                                                   Uncertain   Yes    No
Does your organization use a standardized template for documenting EA design decisions?   23%         40%    37%
Table 6. The proportions of the reasons that practitioners (n=33) do not use standardized templates for documenting EA design decisions.

Not useful                                              30%
No available resources (time/budget)                     3%
No suitable tool                                         9%
Decisions documented in MS Word/PowerPoint instead      58%
Acknowledgments. This work has been partially sponsored by the Fonds National de la Recherche Luxembourg (www.fnr.lu), via the PEARL programme. | 33,141 | [
"1002482",
"1002483",
"1002484",
"1002485",
"1002475"
] | [
"371421",
"300856",
"452132",
"452132",
"371421",
"300856",
"452132",
"486583",
"371421",
"300856",
"452132"
] |
01474936 | en | [
"phys"
] | 2024/03/04 23:41:46 | 2006 | https://hal.science/hal-01474936/file/MS2006%20El%20Gharbi%20Paper.pdf | N El Gharbi
email: [email protected]@mail.com
A Benzaoui
Paper No A006 SIMULATION OF AIR-CONDITIONNED OPERATING THEATRES
Keywords: Operating theatre, aerodynamics simulation, turbulent model, comfort, Airflow, Indoor air quality
A hospital is a place where people whose health is weakened or vulnerable coexist with pathogenic micro-organisms able to worsen their health. The quality of the air in a hospital must conform to precise criteria in everyday usage, and in particular in the areas of the buildings where risks of specific pollution exist, such as operating rooms. In this paper, we present a model and three-dimensional numerical studies of an operating room. The results can be used when new operating rooms are designed or existing ones are modified, in order to avoid and control risks. The aim of the airflow modelling, by analysing the coherence of the air streams, is to control contamination in the operating area by evacuating the contamination generated within the operating room. It allows a clear understanding of the complex coupled phenomena thanks to animation and virtual 3D reality. The use of real data from the studied operating room (geometry, volume, extracted and blown air flow, temperature and hygrometry) makes it possible to determine the air drainage, the temperature distribution, and the zones of poor air distribution. To be able to find the exact distribution zones of the contaminants, the chosen turbulence model has to be accurate, with a good ability to predict the recirculation of the air. The obtained results show that only one of the four tested models can correctly capture the recirculation of the air.
INTRODUCTION
The air conditioning system of a hospital operating room must provide a comfortable and healthy environment for the patient and the surgical team. Thermal comfort can be achieved by controlling the temperature, the humidity, and the air flow. A healthy environment can be achieved by minimizing the risk of contamination through appropriate filtration and an appropriate air distribution scheme. To ensure these optimal conditions, the airflow in a conditioned operating room must be studied by numerical simulation, which is not only a powerful tool for anticipation but also, through its ability to compute the various aerodynamic parameters at any point, a genuine tracing tool.
Our purpose in this study is to carry out a numerical simulation of a test case of an operating room with a diagonal air distribution system, to visualize the zones of air recirculation and stagnation which favour the accumulation of contaminants, and then to derive the minimal conditions which make it possible to obtain an indoor air quality free of these stagnation or recirculation zones. The choice of the turbulence model is then justified.
Basic equations of the RNG k-ε model (Table 1):
Continuity: div(ρU) = 0
X-momentum: div(ρU U) = -∂p/∂x + div(μ_eff grad U) + S_U
Y-momentum: div(ρV U) = -∂p/∂y + div(μ_eff grad V) - ρg(1 - βΔT) + S_V
Z-momentum: div(ρW U) = -∂p/∂z + div(μ_eff grad W) + S_W
Thermal energy: div(ρT U) = div(Γ_T,eff grad T) + S_T
RH-equation: div(ρ RH U) = div(Γ_RH,eff grad RH) + S_RH
k-equation: div(ρk U) = div(α_k μ_eff grad k) + G_k + G_b - ρε + S_k
ε-equation: div(ρε U) = div(α_ε μ_eff grad ε) + C_1ε (ε/k)(G_k + C_3ε G_b) - C_2ε ρ ε²/k - R_ε + S_ε
with μ_eff = μ_l + μ_t, μ_t = ρ C_μ k²/ε, Γ_T,eff = Γ_RH,eff = μ_eff / Pr_eff, Pr_eff = 0.9, C_μ = 0.0845, C_1ε = 1.42, C_2ε = 1.68,
S_U = ∂/∂x(μ_eff ∂U/∂x) + ∂/∂y(μ_eff ∂V/∂x) + ∂/∂z(μ_eff ∂W/∂x),
S_V = ∂/∂x(μ_eff ∂U/∂y) + ∂/∂y(μ_eff ∂V/∂y) + ∂/∂z(μ_eff ∂W/∂y),
S_W = ∂/∂x(μ_eff ∂U/∂z) + ∂/∂y(μ_eff ∂V/∂z) + ∂/∂z(μ_eff ∂W/∂z),
G_k = μ_t (∂U_i/∂x_j + ∂U_j/∂x_i) ∂U_i/∂x_j,
G_b = -g_i (μ_t / (ρ σ_t)) ∂ρ/∂x_i,
R_ε = C_μ ρ η³ (1 - η/η_0) / (1 + βη³) · ε²/k, with η ≡ Sk/ε, η_0 = 4.38, β = 0.012.
CASE STUDY: OPERATING THEATRE
An operating room assigned to the neurosurgery department is selected for this study. The air arriving in the room is first filtered in the air treatment station, using primary filters of 88% and 95% effectiveness, and then a second time at the inlet diffuser using HEPA (High Efficiency Particulate Air) filters. For our study, we consider the room empty, without staff, patient or medical equipment, in order to characterize the air recirculation without the presence of disturbances which could increase the number of recirculation zones.
ADOPTED METHODOLOGIES
The first phase is the determination of our operating room characteristics:
Specific
• Area with very high risks for the patients (class 4 according to NF S 90-351 or ISO 5 according to ISO'S DIN 14644-1).
Aerodynamic
• The distribution of air is diagonal
Geometrical
The following parameters were measured:
• Volume of the room.
• Dimensions of the walls and of the doors.
• Dimensions and positions of the various objects in the room (Scialytic-operating table -medical equipment -staffs).
• Dimensions and positions of the inlet and outlet diffuser.
The second phase consists in calculating the heat flows released by the occupants, the lighting, the machines and transmission, in order to use them as boundary conditions.
Validation study of turbulent model
To further evaluate the performance of different turbulence models for the prediction of 3D ventilation flows, an isothermal ventilation flow in a 3D partitioned room was chosen as a test case. Buchanan (1997), quoted by [3], studied a ventilated room experimentally; the experiment was carried out in a reduced room model at 1/10 scale, with dimensions (width × depth × height) = (0.915 × 0.46 × 0.3) m, one inlet and one outlet, and a partition wall at the middle of the room reaching half the room height.
Measure and conditions
The Reynolds number of the inlet airflow was determined to be 1600, based on the vertical inlet velocity U_y. Measurements were carried out along the inlet jet center line and along a line on the symmetry plane at half the height of the partition wall (y = 0.075 m); only the vertical velocity components (Y velocities) at different locations were measured.
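For reference, this Reynolds number is presumably built in the usual way from the inlet quantities; the characteristic length is not stated in the text, the inlet hydraulic diameter D_h being a common choice, and ν denotes the kinematic viscosity of air:

Re = U_y D_h / ν ≈ 1600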
Prediction with different turbulent models
Four two-equation turbulence models with associated wall treatments were simulated: the standard k-ε model with the standard wall function, the RNG k-ε model and the realizable k-ε model, both with the non-equilibrium wall function, and the SST k-ω model. They were numerically tested for the same configuration given by Buchanan and compared to his experimental results along the two lines: the mid-height line and the jet center line (Figure 2 and Figure 3). We obtained the following results: from x = 0 m to x = 1 m, both the RNG k-ε and the realizable k-ε models yielded smooth predicted velocity profiles that agree well with the experimental data, but for the inlet plane, the RNG k-ε model predicts more correctly than the realizable k-ε model. We can conclude that with the two-equation RNG k-ε model combined with the non-equilibrium wall function, a good agreement is achieved between predictions and measurements.
Current study
The RNG k-ε model was used as the computational tool in the current study. The basic equations are listed in Table 1.
A non-uniform grid (1,037,497 tetrahedral cells) was used, with a refinement around the inlet and outlet diffusers, the lamps and the operating table; the following heat flows were taken into account:
Table 2. The heat dissipation
The fluid is assumed incompressible and a second-order scheme is used to discretize the diffusion terms. The solution is obtained in primitive variables (P-V) and the pressure-velocity coupling is handled by the SIMPLEC algorithm.
The convergence criterion required the maximum residual to be less than a prescribed value, taken equal to 10^-6 for the energy equation and 10^-4 for the other equations. Four planes were selected, each one located in the middle of an inlet diffuser as represented in Figure 5, at:
• z = 1.13 m in the middle of the first inlet diffuser.
• z = 2.54 m in the middle of the second inlet diffuser.
• z = 3.95 m in the middle of the third inlet diffuser.
• z = 4.47 m in the middle of the fourth inlet diffuser.
On each explored plane, we notice the presence of two zones:
- The first shows a correct circulation of the air between the inlet and the outlet.
- The second shows a recirculation area, in addition to a zone of air stagnation.
This pattern is perceptible in each examined plane, and particularly in the one containing the operating table, whose presence can disturb the air jet and separate the room into two parts, creating two recirculation zones. We can see a very intense stagnation area on the left side and close to the corner of the right part. The danger is that stagnation zones are potential sources of pollution: the air, which is originally clean, can become contaminated and the situation can worsen, because most of the microbes are enclosed in this area, which favours their germination and represents a real danger for the patients, the medical personnel and the sterilized material. Nevertheless, the air is well exchanged above the table and on its right side. In Figure 6 (a) and (d), the situation becomes more alarming since the recirculation zones occupy an increasingly large volume and only very little space allows the air to be renewed. We are constrained to say that the air is rather badly distributed and, consequently, this type of diffuser is not suitable for an operating room.
CONCLUSION
The modelling of airflow in a closed room and its aeraulic simulation make it possible to optimize the airflow distribution according to the various parameters (operating table, patient, staff, medical material, ...) in order to decrease the risks of contamination for the patient and the medical personnel. For this, we adopted the RNG k-ε turbulence model after having carried out tests with other models (standard k-ε, realizable k-ε and SST k-ω). This model enables us to better locate the various recirculation zones and the zones of low velocities. We can then recommend certain modifications or new solutions in order to obtain a better distribution of the inlet air and to avoid as much as possible the causes of contamination.
NOMENCLATURE
Fig. 1. Geometry of the operating room
According to the literature, Nielsen P.V. (1974) was one of the first researchers to use the k-ε turbulence model in 2D to study air movement and heat transfer in a conditioned room [5]. Murakami et al. (1994), from the University of Tokyo, quoted by [6], added a wall function to the standard k-ε model to simulate the air flow in a clean room where the number of Air Changes per Hour (ACH) is very high. Chow and Yang (2003) used the standard k-ε model with the eddy viscosity hypothesis for their numerical predictions of the ventilation of an operating theatre in a hospital of Hong Kong [2]. Similar work was done by Monika et al. (2004), who used the same turbulence model and represented the flow near the boundaries using the standard logarithmic law [5]. They focused on contaminant diffusion in an experimental operating room. Chen (1995) [1] studied five modified k-ε models and compared the numerical results obtained with existing experimental results. The RNG k-ε model was found slightly better than the standard k-ε model for simulating displacement ventilation air flow. Luo (2003) [3] compared three turbulence models, RNG k-ε, realizable k-ε and SST k-ω, where an enhanced near-wall treatment was used for the two k-ε models. The study of 3D displacement ventilation shows that the SST k-ω model predicts the velocity profiles better in the vicinity of the floor. For our study the choice of the turbulence model is crucial to correctly predict the air recirculation, since such recirculation favours the accumulation of contaminants. Four two-equation turbulence models were studied:
• the standard k-ε model with the standard wall function;
• the realizable k-ε model with the non-equilibrium wall function;
• the RNG k-ε model with the non-equilibrium wall function;
• the SST k-ω model.
The choice of the wall treatment associated with these models is taken from the results of the literature review [3].
Fig. 2. Comparison of the predicted velocity profiles with measurements at the mid-height line
Fig. 5. Planes of study
Fig. 6. Different cases of study: vector and iso-velocity plots
Table 1. Basic equations of the RNG k-ε model.
"942079"
] | [
"6199",
"6199"
] |
01468881 | en | [
"shs",
"info"
] | 2024/03/04 23:41:46 | 2014 | https://hal.univ-reunion.fr/hal-01468881/file/DM%20%26%20AT%203%20Simon%20ECDM%202014%20reflection.pdf | Jean Simon
email: [email protected]
DATA PREPROCESSING ACCORDING TO ACTIVITY THEORY
Keywords: Educational data mining, Activity theory, data preprocessing, teacher training
In this paper, we propose one possible way to preprocess data according to Activity theory. Such an approach is new and particularly interesting in Educational Data Mining. In a first time, we present the methodology we have adopted and, in a second, the application of this methodology to the analysis of the traces left during five years by the preservice teachers of the Reunion Island teacher training school.
INTRODUCTION
This paper relies on the following four-point approach: 1. Activity theory (Engeström, 87) is frequently used to study human activity, especially CSCL (Computer Supported Collaborative Learning), because this theory is particularly suitable for understanding what people do when they collaborate. 2. One major problem in data mining is data preprocessing: the better the data are preprocessed, the more information we obtain from their treatment. 3. The idea developed here is thus to use domain knowledge to preprocess the data before their treatment; the advantage is that any kind of data mining algorithm can then be used afterwards. 4. One possible source of domain knowledge to study CSCL is Activity theory; preprocessing based on Activity theory should make it possible to get more interesting results. First, we present the methodology which develops the preceding points; second, we show the relevance of this methodology on a concrete case.
METHODOLOGY
According to Activity theory (AT) (Engeström, 87), in an activity the subject pursues a goal that results in an outcome. For this, he uses tools and acts within a community. His relation to this community is defined by rules. To achieve the goal, it may be necessary to establish a division of labor within the community. For example, in the context of hunting, there will be hunters and beaters (Kuutti, 96). Activity theory can be represented by the activity system diagrams of Figure 1.
Figure 1. The triangles of AT according to [START_REF] Engeström | Learning by expanding: An Activity-Theoretical Approach to Developmental Research[END_REF]. Moreover, AT considers three levels of human activity (Kuutti, 96): activity, action, and operation. We will not use them here, but this could be a point to develop further. Using AT to understand what happens on a platform is common in the field of CSCL. As early as 1996, [START_REF] Han | Activity theory as a Potential Framework for Human-Computer Interaction Research[END_REF] suggested doing so. For [START_REF] Stahl | Computer support for collaborative learning: Foundations for a CSCL community[END_REF], AT is suitable for analyzing CSCL because it proposes general structures of the broader effective context. In the same way, for [START_REF] Halverson | Activity theory and Distributed Cognition: Or what does CSCW need to do with theories ?[END_REF], even if the theory cannot satisfy all the needs of the researcher in CSCL, it is powerful for at least three reasons: 1. the theory defines its theoretical objects well, which is useful to manipulate data; 2. in this theory, the individual is at the center of everything; 3. activity system diagrams highlight the processes and show both descriptive and rhetorical power. For his part, [START_REF] Lewis | An Activity theory framework to explore distributed communities[END_REF] explains how AT focuses on the interdependent parameters which exist in collaborative learning. He shows how each triad helps to understand what people do on a CSCL platform, for example how the triad Community-Subject-Object highlights the role of the trainer. These are the reasons why we use AT in what follows.
Educational data mining. In education, the use of groupware and learning management systems keeps growing. Most of these systems record the traces of users' activity. The study of these huge masses of data is important to understand the behavior of the users and to improve learning. It can be done by data mining. This approach consists in using algorithms (decision tree construction, rule induction, artificial neural networks, etc. (Romero & Ventura, 07)) which make it possible to discover behavioral rules or to classify or cluster data. In the context of education, we speak of Educational Data Mining (EDM). For [START_REF] Babič | The state of educational data mining in 2009: a review and future visions[END_REF], EDM studies a variety of areas, including individual learning from educational software, computer supported collaborative learning, ... For [START_REF] Romero | Educational Data Mining: a Survey from 1995 to 2005[END_REF], there are four important steps in data mining: collect data; preprocess data; apply data mining; interpret, evaluate and deploy the results. Data preprocessing is an important step because data tend to be incomplete, noisy and inconsistent (Han & Kamber, 06). Data preprocessing includes the following steps (Han & Kamber, 06): data cleaning for correcting errors; data integration when the data come from different sources; data selection; data transformation. Among data transformations, [START_REF] Romero | Educational Data Mining: a Survey from 1995 to 2005[END_REF] propose data enrichment, which consists of calculating new attributes from the existing ones. In business data mining (Talavera & Gaudioso, 04), or in medicine (Lin & Hang, 06), domain knowledge is used to derive those new attributes. As far as we know, this is not yet done in EDM.
Using AT for data preprocessing. So, we propose here to use domain knowledge to enrich the data during the preprocessing step. One good candidate to represent the domain knowledge is Activity theory. We have seen that AT is used in CSCL studies, but it is also used more and more in data mining. For instance, [START_REF] Verbert | Dataset driven research for improving recommender systems for learning[END_REF] see the datasets as activity streams and they reorganize their categorizations by using AT. Babic & Wagner (2007) proposed to use it to understand the activity on a platform through the three levels (activity, action, operation). Reimann et al. (2009) see the groups as activity systems and the log file as containing records of these structured activities. However, none of these studies uses it to preprocess data as we want to do here. What we propose is that "the preprocessing step will then consist in mapping original data to a higher-level, analysis oriented representation" (Talavera & Gaudioso, 04) and, to do this, we will use AT. For example, it is possible on any platform to count the different actions (document deposits, document readings, ...) and then to study which actions, which features of the tool, are the most used. Interpreted in terms of AT, by doing this we consider a single node, the node "tool". If we apply a first preprocessing to our data, user identification (Romero & Ventura, 07), in other words if we associate with any action (reading, deposit, etc.) the user who did it, we obtain a finer analysis. We will be able to discover that different users do not use the same features, that a trainer, for example, deposits more documents than a trainee. Interpreted in terms of AT, by doing this we study the dyad "subject-tool".
Data useful for AT accessible on a CSCL platform. If we take the diagrams of Figure 1, we immediately identify three types of data readily available: the tool: these are the traces of actions on the platform and the objects on which these actions operate (e.g. document deposit, document reading); the subject: these are the users registered on the platform; the group: every CSCL platform keeps traces of the groups created on it. Moreover, the traces of the links between these three types of data are also easily accessible. On all platforms, it is recorded who the members of a particular group are (dyad: subject-group) and who did what (dyad: subject-tool). From these two dyads, it is easy to establish the third one (dyad: group-tool). The node "objective" is rarely the subject of a specific trace. It may be possible to find it through the name of the group or the name of the main folder that the group shares. Concerning the last two nodes, "division of labor" and "rules within the group", it is exceptional to have explicit traces of them. This is not surprising because, most of the time, these two nodes are the goal of the study, what we try to understand, as we shall see in the example of the third part. So they cannot be the subject of data preprocessing. Thus, we obtain Figure 2: Figure 2. AT for data mining. Solid lines indicate data which are easily found among the traces left on a CSCL platform and dotted lines the information that it will be necessary to infer. Technically, it is easy to see how we can preprocess the data. In the case mentioned above, instead of studying single data items such as "actions", we study couples "subject-action": who is doing what on the platform. In the same way, we can go on and work with 3-tuples "subject-group-action" and, when we can identify the objective, 4-tuples "group-objective-subject-action", which correspond to the solid lines of Figure 2. With these 4-tuples, we see that it is easy to put the focus on one of the nodes, the subject, the tool, the goal or the group, without losing the data interdependencies. In fact, by this preprocessing, we inject into the data their interdependency, which Lewis considers to be central in AT. Once those 4-tuples are built, they can feed any kind of algorithm (decision tree construction, Bayesian learning, ...).
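As an illustration of this enrichment step, the sketch below builds the (group, objective, subject, action) 4-tuples from raw events. It is a hypothetical reconstruction: the event fields and the two mappings are assumptions, not the actual BSCW log schema.

from collections import namedtuple

# Assumed raw inputs: a list of events and two mappings reconstructed from the platform.
# event        = {"user": "u42", "action": "read", "folder": "TICE_group_7"}
# folder_group = {"TICE_group_7": "group_7"}   # shared base folder -> group
# group_goal   = {"group_7": "TICE"}           # objective inferred from the folder name

Tuple4 = namedtuple("Tuple4", ["group", "objective", "subject", "action"])

def enrich(events, folder_group, group_goal):
    tuples = []
    for e in events:
        group = folder_group.get(e["folder"])
        if group is None:                      # trace not attached to any shared folder
            continue
        tuples.append(Tuple4(group, group_goal.get(group, "unknown"),
                             e["user"], e["action"]))
    return tuples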
APPLICATION
The Reunion Island teacher training school trains primary school teacher trainees (PE2s). At the end of their year of training, they should be able to teach in primary schools. PE2s have used a CSCW platform which allowed them to pool and share their class preparation work. With their trainers, the platform has served various purposes: to deposit documents ("collective memory"), to improve lesson plans, to help trainees online when they are in charge of a class, and to validate the C2i2e, which certifies that the trainee is able to use ICT in education.
As platform, BSCW [3] was chosen, essentially because users may structure the spaces they have created there as they wish. For this study, we analyze the traces left on it by the PE2s during 5 years. 1167 PE2s (from 343 in 2005 to 155 in 2009) used the platform and left more than 3,000,000 traces there. We want to show that, according to the type of group or the type of objective, the activity is not the same. For that, we follow the various points raised in the methodology and apply them to those traces. First, we present the results of an analysis of the raw data; second, an analysis of data that have been preprocessed according to AT. In those sections, we perform a simple statistical treatment; this is why, thirdly, we summarize a piece of research presented in (Simon & Ralambondriany, 12) where we used data mining techniques.
Analysis of the raw data. When trainers and trainees work on BSCW, they leave traces of what they do on the platform. It is possible to connect each trace to the user who left it ("user identification" preprocessing). For each user, we were able to say how many readings he has done, how many documents he has deposited, how many folders he has shared, etc. The users were anonymized: they were just "numbers". We obtain Table 1. We see that 81% (11308 among 13936) of the documents are deposited by the PE2s and 19% (13936 - 11308 = 2628) by the trainers. We also see that while 93% of the PE2s (1089/1167) read documents found on the platform, only 72% of them deposit documents (840/1167). We can conclude that not all PE2s cooperate; we find the "lurker" phenomenon well known in forums. However, if we analyze more precisely, we will see that the situation is more complex.
Preprocessing data. On BSCW, when a user wants to create a group, he must first create the folder in which the resource will be placed and then invite other members to share this folder. It is then possible to connect all the traces to this basic folder: the members of the group but also the objects and the events. In other words, it is rather easy to reconstitute the 3-tuples (group, subject, action on the tool). Moreover, as a name is associated with each of these folders, a name often indicating the goal, it is also possible to build 4-tuples (group, goal, subject, action). We can then study the different groups, the different objectives, the different subjects or the different actions. So, after this preprocessing, we decided to make two distinctions: according to the composition of the group (with or without trainers: PE2s vs PE2s+trainers) and according to their goal ("TICE" groups and others).
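The two distinctions can then be derived from the shared base folders. The following sketch is hypothetical: it assumes that group membership lists, folder names and the set of trainer accounts are available, which is not guaranteed by the platform itself.

def classify_groups(members, folder_names, trainers):
    """Split groups along the two axes used in the study.

    members      : dict group -> set of user ids sharing the base folder
    folder_names : dict group -> name of the base folder
    trainers     : set of user ids known to be trainers
    """
    labels = {}
    for group, users in members.items():
        composition = "PE2s+trainer" if users & trainers else "PE2s alone"
        goal = "TICE" if "tice" in folder_names.get(group, "").lower() else "other"
        labels[group] = (composition, goal)
    return labels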
Analysis according to the composition of the group. The 1167 PE2s constituted more than 960 groups. 668 were groups of PE2s only and 292 were groups including at least one trainer. One PE2, of course, could be in several groups. We made the distinction between "PE2s alone" groups (third column) and "PE2s+trainer" groups (fourth column) because we wanted to know if trainees would use the platform without being forced to by the trainers. Among the "PE2s+trainer" groups, we also wanted to focus on the groups whose objective was ICT for education ("TICE" groups, fifth column). As we can see, the PE2s freely use the platform: the number of groups shared only by them is significantly greater than the number of groups shared with trainers (668 vs. 292). However, the activity is much lower in groups shared only by PE2s than in groups shared with trainers. Thus, most figures in the PE2s groups are lower than the figures in the PE2s+trainers groups: fewer deposits and fewer readings. A possible explanation for this higher activity in PE2s+trainers groups is that there is a "teacher effect" that incites students to work more. For example, the trainee will be "invited" by the trainer to go and consult the documents that the trainer has deposited for him.
Analysis according to the objective. With the titles of the folders, it was very easy to isolate the groups named "TICE" (ICT for education). The folders associated with these groups were used to validate a certificate according to which the trainee is able to use ICT in education in a correct way (C2i2e). All those groups have a trainer as a member. We can therefore compare those "TICE" groups with all the groups with trainers ("PE2s+trainers") in Table 2. In the "TICE" groups, almost every PE2 works: on average 11 out of 13 deposit (vs 5 out of 20 for "PE2s+trainer") and 13 out of 13 read (vs 15 out of 20). Thus, even when the groups have the same composition, we can see that, according to the objective, the activity is not the same. However, the "TICE" groups seem to reveal a strong cooperation and to be similar. We will see that this is not the case, which is a problem for the institution.
Data mining on the groups with trainers. Once the preprocessing is done, it is possible to perform more complex treatments and to use any kind of data mining algorithm. In (Simon & Ralambondriany, 2012), we analyzed the groups with a trainer of the year 2006-2007. We successively performed a Principal Component Analysis, a Ward hierarchical clustering and, finally, the k-means algorithm. In this way, we gathered the groups with a trainer into 7 clusters that we named according to their features: C1 "groups with no activity", C2 "individualized accompaniment", C3 "weak cooperation", C4 "dissemination to a small group", C5 "dissemination to a big group", C6 "strong cooperation" and C7 "accompaniment during training course" (see Figure 3). In this way, we were able to see that some "TICE" groups belonged to cluster C3 instead of C6. This was bad because all the "TICE" groups should be similar and reveal a strong cooperation. Thus, this clustering revealed that some "TICE" groups do not follow the "rules" defined by the institution. Interpreted in the terms of AT, there was a contradiction at the node "rules". Just for information, group 7, which stands out in this figure by its high activity, was created to provide an answer, just in time and just enough, to trainees during their internship in the classroom when they had to face pupils.
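The analysis chain of (Simon & Ralambondriany, 2012) can be approximated with standard libraries. The sketch below is not the authors' original code: the feature matrix, the number of principal components and the seeding of k-means with the Ward centroids are our assumptions.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster

# X: one row per group with trainer, one column per activity indicator
# (e.g. number of deposits, readings, producers, readers) - assumed shape.
X = np.load("group_features.npy")

X_scaled = StandardScaler().fit_transform(X)
X_pca = PCA(n_components=2).fit_transform(X_scaled)       # 1. Principal Component Analysis

ward = linkage(X_pca, method="ward")                       # 2. Ward hierarchical clustering
init_labels = fcluster(ward, t=7, criterion="maxclust")    #    cut the dendrogram into 7 clusters

# 3. k-means, seeded with the Ward centroids, refines the partition into the 7 final clusters.
centroids = np.vstack([X_pca[init_labels == c].mean(axis=0) for c in range(1, 8)])
labels = KMeans(n_clusters=7, init=centroids, n_init=1).fit_predict(X_pca)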
The wealth of data mining relies on its bottom-up approach. The researcher starts from the data and expects that the machine will propose a categorization of these data. In this approach, it is assumed that, somehow, the researcher has no a priori knowledge about the data. This approach is very interesting because it can orient research in unexpected directions. However, it often happens that the proposed categorization is not exploitable for the researcher [START_REF] Talavera | Mining student data to characterize similar behavior groups in unstructured collaboration spaces[END_REF]. One possible way to overcome this problem is to incorporate domain knowledge in the data mining process. To represent the domain knowledge of CSCL, Activity theory is a good candidate. Incorporating AT in the data mining process has already been done in various ways (Babic & Wagner, 07), (Reimann et al., 09), [START_REF] Verbert | Dataset driven research for improving recommender systems for learning[END_REF]. However, in those studies, in some manner, AT and the data mining algorithm are tied together. To avoid this, we propose to incorporate the domain knowledge in the data during the preprocessing step. In this way the choice of the data mining algorithm becomes free: it can go from simple statistical treatments to more complex algorithms (rule induction, artificial neural networks, Bayesian learning, ...).
As we have seen, it is simple to do and we obtain results that are easily exploitable. The reason is that this kind of preprocessing is an injection of prior knowledge into the data: here, we inject into the data their interdependency. When we applied this approach to data produced by preservice teachers with their trainers on a CSCL platform, we were able to show that, according to the group in which they operate and the objective that these groups pursue, the activity produced by preservice teachers is not the same.
A further step could consist in preprocessing the data according to the different levels of Activity theory (activity, action, operation), but it is not certain that we would obtain significant results, because most of the traces left on the platform fall under the "operation" level.
To conclude, as [START_REF] Halverson | Activity theory and Distributed Cognition: Or what does CSCW need to do with theories ?[END_REF] says, "AT is powerful because it names and names well, but this both binds and blinds its practitioners to see things in those terms." It is possible that, by reducing the hypothesis space as we do by using AT, we prevent ourselves from revealing hypotheses that could be very interesting, and so we lose a part of the wealth of data mining.
Figure 3. Clustering of the "TICE" groups of the year 2006-2007 (Simon & Ralambondriany, 12)
Table 1. Analysis of the traces left by PE2s during 5 years

PE2s on the platform over 5 years
Total number of PE2s                                              1167
Total number of documents shared by the PE2s on the platform    13936
Total number of documents produced by the PE2s on the platform  11308
Number of PE2 producers                                            840
Number of PE2s' readings                                         57916
Number of PE2 readers                                             1089
Table 2. Comparison of the groups with or without trainers

Groups with or without trainer over 5 years         all    PE2s   PE2s + trainer   TICE
Total number of groups                               960    668    292              68
Total number of PE2s                                1167   1167   1167             884
Average number of PE2s for one group                  15     13     20               13
Average number of documents for one group             15      6     34               41
Average number of PE2 producers for one group          3      2      5               11
Average number of readings by PE2 for one group       60     29    132              135
Average number of PE2 readers for one group           10      8     15               13
"13953"
] | [
"54305"
] |
01475042 | en | [
"sdv",
"stat",
"info"
] | 2024/03/04 23:41:46 | 2017 | https://hal.science/hal-01475042/file/SFRP%202017%20Time%20Serie%20Segmentation.pdf | Le Brusquet
Modelling the Extremely Low Frequencies Magnetic Fields Times Series Exposure by Segmentation
Introduction
The ELFSTAT project, funded by the French ANSES (2015-2019, grant agreement n. 2015/1/202), aims at characterizing children's exposure to Extremely Low Frequency Magnetic Fields (ELF-MF) in real exposure scenarios using stochastic approaches. The present paper gives details about the first step of the project: this step aims at developing stochastic models of personal exposure from a dataset of recorded ELF-MF signals. These recordings come from the ARIMMORA project [START_REF] Struchen | Analysis of children's personal and bedroom exposure to ELF-MF in Italy and Switzerland[END_REF]: 331 children wore an EMDEX meter and their exposure was measured for about 3 days. The stochastic models will then be used to construct realistic simulations of ELF-MF time series. Figure 1 gives an example of a 24-hour ELF-MF signal. The majority of ELF-MF time series are characterized by abrupt changes in structure, such as sudden jumps in mean level or in dispersion around a mean level. The developed model consists in detecting these changes and in modeling the signal between two consecutive changes by a stationary process. These stationary processes are described by parametric models. We chose autoregressive (AR) models because of their ability to characterize mean effects, variance effects and time-correlation effects with a small number of parameters. The full model is obtained by modeling the distribution of the parameters over the whole dataset of 331 recordings. Figure 2 gives a schematic of the full approach.
Data sources
The 331 individuals of the database are distributed according to the geographical location (Switzerland or Italy) and the season (winter or summer), as shown in the following table. For each individual, we consider a full day of measurements (0h-24h). As the interval between two successive observations is 30 seconds, 2880 values per recording are considered.
Change-point detection and segment modeling
We consider changepoints (or equivalently breakpoints) to be the time points that divide a data set into distinct homogeneous segments (or equivalently stationary segments); the boundaries of segments can be interpreted as changes in the physical system. The problem of changepoint estimation has attracted significant attention and is required in a number of applications including financial data [START_REF] Killick | Optimal detection of changepoints with a linear computational cost[END_REF], climate data [START_REF] Davis | Structural break estimation for nonstationary time series models[END_REF], biomedical data [START_REF] Fryzlewicz | Wild binary segmentation for multiple change-point detection[END_REF] and signal processing [START_REF] Korkas | Multiple change-point detection for non-stationary time series using Wild Binary Segmentation[END_REF]. Different authors propose various approaches to the problem of changepoint detection or segmentation of time series. This issue is thoroughly surveyed in [START_REF] Basseville | Detection of abrupt changes: theory and application[END_REF], where different methods are proposed and an exhaustive list of references is given. Some authors study the estimation of a single changepoint [START_REF] Davis | Testing for a change in the parameter values and order of an autoregressive model[END_REF], while others extend it to the multiple changepoints problem. In this last case, the number of changepoints is unknown and it is a challenge to jointly estimate the number of changepoints, their locations, and also provide an estimation of the model representing each segment. In this work we consider the [START_REF] Killick | Optimal detection of changepoints with a linear computational cost[END_REF] procedure because of its ability to jointly estimate the number of changepoints, their locations and the AR model parameters of each segment. To summarize, the segmentation aims to:
(1) find the periods of stability and homogeneity in the behaviour of the time series;
(2) identify the locations of change, called changepoints;
(3) represent the regularities and features of each segment (estimate the model of each segment by a parameter set such as changepoint location, segment amplitude, segment duration and segment regularity). Figure 1 gives the segmentation results for the given example; a minimal code sketch of this segmentation step is given below.
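As a rough illustration of this segmentation step, the sketch below uses the ruptures Python package (one available implementation of the PELT procedure of Killick et al.) to detect changepoints and then fits an AR model to each resulting segment with statsmodels. The penalty value, the AR order and the synthetic signal are illustrative assumptions, not the settings actually used in the project.

```python
import numpy as np
import ruptures as rpt
from statsmodels.tsa.ar_model import AutoReg

def segment_and_fit(signal, penalty=10.0, ar_order=1):
    """Detect changepoints with PELT, then fit an AR model on each segment."""
    algo = rpt.Pelt(model="l2").fit(signal)          # "l2" targets shifts in mean level
    breakpoints = algo.predict(pen=penalty)          # e.g. [t1, t2, ..., len(signal)]

    segments, start = [], 0
    for end in breakpoints:
        seg = np.asarray(signal[start:end], dtype=float)
        piece = {"duration": end - start, "mean": float(seg.mean())}
        if len(seg) > ar_order + 2:                  # enough points to fit an AR model
            fit = AutoReg(seg - seg.mean(), lags=ar_order).fit()
            piece["ar_coeffs"] = list(fit.params[1:])   # skip the constant term
            piece["noise_var"] = float(np.var(fit.resid))
        segments.append(piece)
        start = end
    return segments

# Toy example: a piecewise-stationary signal sampled every 30 s (2880 points per day).
rng = np.random.default_rng(0)
toy = np.concatenate([0.05 + 0.01 * rng.standard_normal(1000),
                      0.20 + 0.05 * rng.standard_normal(1880)])
print(segment_and_fit(toy)[:2])
```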
Statistical characterization of model parameters
In order to produce a complete model of the time series, the marginal and joint probability distributions of all model parameters are required. The parameters of the piecewise autoregressive model at time 𝑡 are: the duration 𝜏 𝑡 and the coefficients of the AR model (amplitude or mean 𝜇 𝑡 , noise variance 𝜎 𝑡 2 and the autoregressive coefficients 𝜙 𝑡 ). These 4 parameters can be modeled separately (with a parametric probability distribution or not) or jointly (rather in a nonparametric way, because of the difficulty of finding a multidimensional probability distribution that efficiently models the parameter set). Figure 3 shows that the parameters are correlated and thus cannot be modeled separately. We used Multivariate Kernel Density Estimation [START_REF] Scott | Multivariate density estimation: theory, practice, and visualization[END_REF], a classical nonparametric method. The advantage is that once we have an estimate of this density, we can simulate realistic realizations of these parameters.
Figure 1: Example of ELF-MF recording (black points) and its obtained segmentation (in red).
Figure 2: Schematic of the full approach.
Figure 3 also gives the Multivariate KDE contours and 5000 simulations of these parameters obtained via the Multivariate KDE. Concerning the fitted distributions, a first result shows no significant difference between the summer database and the winter database. Finally, Figure 4 presents an example of a simulated ELF-MF exposure time series based on the piecewise AR model; a minimal sketch of this simulation step is given below.
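A minimal sketch of this simulation step, assuming the per-segment parameters (duration, mean, AR(1) coefficient, noise standard deviation) have already been collected into an array: a Gaussian kernel density estimate is fitted jointly with scipy and then resampled to generate synthetic segments, which are concatenated into a simulated exposure series. The bandwidth choice, the AR(1) restriction and the made-up example parameters are simplifying assumptions, not the project's actual settings.

```python
import numpy as np
from scipy.stats import gaussian_kde

def fit_parameter_kde(params):
    """params: array of shape (n_segments, 4), columns = (duration, mean, phi, sigma)."""
    return gaussian_kde(params.T)              # joint nonparametric density estimate

def simulate_series(kde, n_segments, rng=None):
    """Draw segment parameters from the KDE and simulate a piecewise AR(1) series."""
    rng = rng or np.random.default_rng()
    draws = kde.resample(n_segments).T          # shape (n_segments, 4)
    pieces = []
    for duration, mean, phi, sigma in draws:
        n = max(int(round(duration)), 2)
        phi = np.clip(phi, -0.99, 0.99)         # keep each AR(1) segment stationary
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + abs(sigma) * rng.standard_normal()
        pieces.append(mean + x)
    return np.concatenate(pieces)

# Illustration only: made-up parameter sample standing in for the fitted values.
rng = np.random.default_rng(1)
fake_params = np.column_stack([rng.gamma(5.0, 60.0, 200),      # durations (samples)
                               rng.lognormal(-3.0, 0.5, 200),  # segment means
                               rng.uniform(0.1, 0.9, 200),     # AR(1) coefficients
                               rng.lognormal(-4.0, 0.5, 200)]) # noise std deviations
series = simulate_series(fit_parameter_kde(fake_params), n_segments=30)
```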
Figure 3: Scatter plot of calculated model parameters (𝜙 𝑡 , 𝜎 𝑡 ) (black points), the confidence areas of the fitted distribution and simulated points (blue points).
Figure 4: Example of simulated ELF-MF time series.
Table: Distribution of the 331 recordings by location and season.
         Switzerland population   Italy population
Winter   79 individuals           86 individuals
Summer   80 individuals           86 individuals | 6,522 | [
"21973",
"781259",
"781260",
"781261",
"779197"
] | [
"1289",
"1289",
"467967",
"467967",
"467967",
"467967",
"149966",
"528442",
"247965"
] |
01475080 | en | [
"info"
] | 2024/03/04 23:41:46 | 2016 | https://inria.hal.science/hal-01475080/file/LNAI_2015_VocabularyIncreasing_v1808.pdf | Irina Illina
Dominique Fohr
Georges Linarès
Imane Nkairi
Temporal and Lexical Context of Diachronic Text Documents for Automatic Out-Of-Vocabulary Proper Name Retrieval
Keywords: speech recognition, out-of-vocabulary words, proper names, vocabulary augmentation
Proper name recognition is a challenging task in information retrieval from large audio/video databases. Proper names are semantically rich and are usually key to understanding the information contained in a document. Our work focuses on increasing the vocabulary coverage of a speech transcription system by automatically retrieving proper names from contemporary diachronic text documents. We propose methods that dynamically augment the automatic speech recognition system vocabulary using lexical and temporal features of diachronic documents. We also study different metrics for proper name selection in order to limit the vocabulary augmentation and therefore the impact on ASR performance. Recognition results show a significant reduction of the proper name error rate using an augmented vocabulary.
Introduction
The technologies involved in information retrieval from large audio/video databases are often based on the analysis of large, but closed, corpora. The effectiveness of these approaches is now acknowledged, but they nevertheless have major flaws, particularly concerning new words and proper names. In our work, we are particularly interested in Automatic Speech Recognition (ASR) applications. Large vocabulary ASR systems are faced with the problem of out-of-vocabulary (OOV) words. This is especially true in new domains, where named entities are frequently unexpected. OOV words are words which are present in the input speech signal but not in the ASR system vocabulary. In this case, the ASR system fails to transcribe the OOV words and replaces them with one or several in-vocabulary words, impacting the transcript intelligibility and introducing recognition errors.
Proper Name (PN) recognition is a complex task, because PNs are constantly evolving and no vocabulary will ever contain all existing PNs: for example, PNs represent about 10% of the words of English and French newspaper articles and they are more important than other words in a text for characterizing its content [START_REF] Friburger | Textual Similarity Based on Proper Names[END_REF]. Bechet and Yvon [START_REF] Bechet | Les Noms Propres en Traitement Automatique de la Parole[END_REF] showed that 72% of OOV words in a 265K-word lexicon are potentially PNs.
Increasing the size of ASR vocabulary is a strategy to overcome the problems of OOV words. For this purpose, the Internet is a good source of information. Bertoldi and Federico [START_REF] Bertoldi | Lexicon adaptation for broadcast news transcription[END_REF] proposed a methodology for dynamically extending the ASR vocabulary by selecting new words daily from contemporary news available on the Internet: the most recently used new words and the most frequently used new words were added to the vocabulary. Access to large archives, as recently proposed by some institutions, can also be used for new word selection [START_REF] Allauzen | Diachronic vocabulary adaptation for broadcast news transcription[END_REF].
Different strategies for new word selection and vocabulary extension have been proposed recently. Oger et al. [START_REF] Oger | Local methods for on-demand out-of-vocabulary word retrieval[END_REF] proposed and compared local approaches: using the local context of the OOV words, they build efficient queries to submit to a web search engine, and the retrieved documents are used to find the targeted OOV words. Bigot et al. [START_REF] Bigot | Person name recognition in ASR outputs using continous context models[END_REF] assumed that a person name is a latent variable produced by the lexical context it appears in, i.e. the sequence of words around the person name, and that a spoken name can therefore be derived from ASR outputs even if it has not been proposed by the speech recognition system.
Our work uses context modeling to capture the lexical information surrounding PNs so as to retrieve OOV proper names and increase the ASR vocabulary size. We focus on exploiting the lexical context together with temporal information from diachronic documents (documents that evolve through time): we assume that time is an important feature for capturing name-to-context dependencies. Like the approaches of Bigot et al. [START_REF] Bigot | Person name recognition in ASR outputs using continous context models[END_REF] and Oger et al. [START_REF] Oger | Local methods for on-demand out-of-vocabulary word retrieval[END_REF], we use the notion of proper name context. However, our approaches focus on exploiting the documents' temporality using diachronic documents. Our assumption is that PNs are often related to an event that emerges in a specific time period in diachronic documents and evolves through time. For a given date, the same PNs would occur in documents that belong to the same period. Temporal context has been used before by Federico and Bertoldi [START_REF] Bertoldi | Lexicon adaptation for broadcast news transcription[END_REF] to cope with language and topic changes, typical of new domains, and by Parada et al. [START_REF] Parada | Contextual Information Improves OOV Detection in Speech[END_REF] to predict OOV words in recognition outputs. Compared to these works, our work extends the vocabulary using shorter and more precise time periods to avoid excessive vocabulary growth. We seek a good trade-off between lexical coverage and the increase in vocabulary size, which can otherwise dramatically increase the resources required by an ASR system.
Moreover, the approaches presented in [START_REF] Bertoldi | Lexicon adaptation for broadcast news transcription[END_REF][14] are a priori approaches that increase the lexicon before the speech recognition of the test documents and are based only on the dates of the documents. This type of technique has the disadvantage of a very large increase of the lexicon and ignores the context in which the missing words appear in the documents to recognize. Our proposal uses a first decoding pass to extract information about the lexical context of OOV words, which should lead to a more accurate model of this context and avoid an excessive increase in vocabulary size.
This paper is organized as follows. The next section of this paper provides the proposed methodology for new PN retrieval from diachronic documents. Section 3 describes preliminary experiments and results. The discussion and conclusion are presented in the last section.
Methodology
Our general idea consists in using the lexical and temporal context of diachronic documents to derive OOV proper names automatically. We assume that missing proper names can be found automatically in contemporary documents, that is to say documents corresponding to the same time period as the document we want to transcribe. We hypothesize that proper names evolve through time and that, for a given date, the same proper names occur in documents that belong to the same period. Our assumption is that the linguistic context might contain the relevant OOV proper names or provide specific information about the missing proper names.
We propose to use text documents from the diachronic corpus that are contemporaneous with each test document. We want to build a locally augmented vocabulary. So, we have a test audio document (to be transcribed) which contains OOV words, and we have a diachronic text corpus, used to retrieve OOV proper names. An augmented vocabulary is built for each test document.
We assume that, for a certain date, if proper names co-occur in diachronic documents, it is very likely that they co-occur in the test document corresponding to the same time period. These co-occurring PNs might contain the targeted OOV words. The idea is to exploit the relationship between PNs for a better lexical enrichment.
To reduce the OOV proper name rate, we suggest building a PN vocabulary that is added to the large vocabulary of our ASR system. In this article, different PN selection strategies are proposed to build this proper name vocabulary:
─ Baseline method: selecting the diachronic documents using only a time period corresponding to the test document.
─ Local-window-based method: same strategy as the baseline method, but a co-occurrence criterion is added to exploit the relationship between proper names for a given period of time.
─ Mutual-information-based method: same strategy as the baseline method, but a mutual information metric is used to better choose OOV proper names.
─ Cosine-similarity-based method: same strategy as the baseline method, but the documents are represented by word vector models.
In all proposed methods, documents of the diachronic corpus have been processed by removing punctuation marks and by turning texts to lower case, like in ASR outputs.
Baseline method
This method consists in extracting a list (collection) of all the OOV proper names occurring in the diachronic corpus, using a time period corresponding to the test document. Proper names are extracted from the diachronic corpus using Treetagger, a tool for annotating text with part-of-speech and lemma information [START_REF] Schmid | Probabilistic part-of-speech tagging using decision trees[END_REF]. Only the new proper names (with respect to the standard vocabulary of our recognition system) are kept. Then, our vocabulary is augmented with the collection of extracted OOV proper names. An augmented lexicon is built for each test file and for each time period. This period can be, for example, a day, a week or a month. The OOV PN pronunciations are generated using a phonetic dictionary or a grapheme-to-phoneme tool [START_REF] Illina | Grapheme-to-Phoneme Conversion using Conditional Random Fields[END_REF]. This method recalls a large number of OOV proper names from the diachronic corpus, and we therefore consider it as our baseline. The problem with this approach is that, if the diachronic corpus is large, the tradeoff between lexical coverage and lexicon growth can be poor. Moreover, only temporal information about the document to transcribe is used. In the methods presented in the following, the lexical context of PNs is taken into account to better select OOV proper names.
Local-window-based method
To have a better tradeoff between the lexical coverage and the increase of lexicon size, we will use a local lexical context to filter the selected PNs. We assume that a context is a sequence of (2N+1) words centered on one proper name. Each PN can have as many contexts as occurrences.
In this method, the goal is to use the in-vocabulary proper names of the test document as an anchor to collect linked new proper names from the diachronic corpus. The OOV proper names that we need to find might be among the collected names. This method consists of several steps:
A) In-vocabulary PN extraction from each test document:
For each test document from the test corpus, we perform automatic speech recognition with our standard vocabulary. From the obtained transcription (which can contain recognition errors) we extract all PNs (in-vocabulary PNs).
B) Context extraction from diachronic documents:
After extracting the list of in-vocabulary proper names from the test document transcription, we can start extracting their "contexts" in the diachronic set. Only documents that correspond to the same time period as the test document are considered. In this method, a context refers to a window of (2N+1) words centered on one proper name. We tag all diachronic documents that belong to the same time period as our test document. Words that have been tagged as proper names by Treetagger are kept, and all the others are replaced by "X". In this step, the substitution with "X" preserves the absolute positions of the words composing the context. We go through all tagged contemporary documents from the diachronic corpus and extract all contexts corresponding to all occurrences of the in-vocabulary PNs of the test document: in the (2N+1) window centered on an in-vocabulary proper name, we select all words that are not labelled as "X" and that are new proper names (i.e. not already in our vocabulary). The idea behind using a centered window is that the short-term local context may contain missing proper names.
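A simplified sketch of this window-based collection step is given below. It assumes the diachronic documents have already been tokenized and tagged (e.g. with Treetagger), so that each token carries a flag indicating whether it is a proper name; the function and variable names are illustrative, not the actual implementation.

```python
from collections import Counter

def collect_window_pns(tagged_docs, anchor_pns, vocabulary, n=50):
    """Collect new proper names found within +/- n tokens of an anchor PN.

    tagged_docs : list of documents, each a list of (token, is_proper_name) pairs
    anchor_pns  : in-vocabulary proper names recognized in the test document
    vocabulary  : the standard ASR vocabulary (set of lower-cased words)
    """
    anchors = {p.lower() for p in anchor_pns}
    counts = Counter()
    for doc in tagged_docs:
        for i, (token, is_pn) in enumerate(doc):
            if not is_pn or token.lower() not in anchors:
                continue
            window = doc[max(0, i - n): i + n + 1]      # the (2N+1)-word context
            for word, w_is_pn in window:
                if w_is_pn and word.lower() not in vocabulary:
                    counts[word] += 1
    return counts

def filter_by_occurrences(counts, occ=1):
    """Keep only the new PNs seen more than `occ` times (cf. step C)."""
    return {pn for pn, c in counts.items() if c > occ}
```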
C) Vocabulary Augmentation:
From the extracted new PNs obtained in step B, we keep only the new PNs whose number of occurrences is greater than a given threshold. Then we add them to our vocabulary. Their pronunciations are generated using a phonetic dictionary or an automatic phonetic transcription tool.
Using this methodology, we expect to extract a reduced list (compared to the baseline) of all the potentially missing PNs.
Mutual-information-based method
In order to reduce the vocabulary growth, we propose to add a metric to our methodology: the mutual information (MI) [START_REF] Church | Word association norms, mutual information, and lexicography[END_REF]. The MI-based method consists in computing the mutual information between the in-vocabulary PNs found in the test document and other PNs that have appeared in contemporary documents from the diachronic set. If two PNs have high mutual information, it would increase the probability that they occur together in the test document.
In probability theory and information theory, the mutual information of two random variables is a quantity that measures the mutual dependence of the two random variables. Formally, the mutual information of two discrete random variables X and Y can be defined as:
I(X;Y) = \sum_{x \in X} \sum_{y \in Y} p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)}    (1)
In our case, X and Y represent proper names, with X=1 if the corresponding proper name is present in a document and X=0 otherwise (and similarly for Y). For example, the joint probability of two proper names appearing in the same document is estimated by their co-occurrence frequency over the selected diachronic documents:
p(X=1, Y=1) = \frac{N_{X,Y}}{N_{doc}}    (2)
where N_{X,Y} is the number of selected diachronic documents containing both proper names and N_{doc} is the total number of selected documents.
The higher the probability of the co-occurrence of two proper names in the diachronic corpus, the higher the probability of their co-occurrence in a test document.
Finally, we compute the mutual information for all pairs (X, Y), where X is an in-vocabulary PN extracted from the test document and Y is an OOV proper name extracted from the contemporary documents of the diachronic corpus.
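The sketch below shows one way to estimate these quantities from document-level presence indicators, approximating p(x, y) by co-occurrence frequencies over the selected diachronic documents; the data layout and the small smoothing constant are assumptions made for illustration.

```python
import math

def mutual_information(docs_with_x, docs_with_y, n_docs, eps=1e-12):
    """MI between two binary presence variables, estimated from document counts.

    docs_with_x, docs_with_y : sets of document ids containing each proper name
    n_docs                   : total number of selected diachronic documents
    """
    n11 = len(docs_with_x & docs_with_y)
    n10 = len(docs_with_x) - n11
    n01 = len(docs_with_y) - n11
    n00 = n_docs - n11 - n10 - n01
    px = [(n00 + n01) / n_docs, (n10 + n11) / n_docs]    # p(X=0), p(X=1)
    py = [(n00 + n10) / n_docs, (n01 + n11) / n_docs]    # p(Y=0), p(Y=1)
    pxy = [[n00 / n_docs, n01 / n_docs],
           [n10 / n_docs, n11 / n_docs]]
    mi = 0.0
    for x in (0, 1):
        for y in (0, 1):
            if pxy[x][y] > 0:
                mi += pxy[x][y] * math.log(pxy[x][y] / (px[x] * py[y] + eps))
    return mi

# Candidate OOV PNs whose MI with an in-vocabulary anchor exceeds a threshold
# (e.g. 0.001, as in Table 5) would be added to the selected PN list.
```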
Compared to the local-window-based method, only step B is modified.
B) Context extraction from diachronic documents:
After extracting the list of in-vocabulary proper names from the test document transcription, we can start extracting their "contexts" in the diachronic set. Only documents that correspond to the same time period as the test document are considered. The list of in-vocabulary words from the test document transcription is extracted as in the local-window-based method. As previously, we tag all diachronic documents that belong to the same time period as our test document, and words that have been tagged as proper names by Treetagger are kept. Finally, the mutual information between each word from the in-vocabulary PN list and each new PN extracted from the diachronic documents is calculated. If two PNs have high mutual information, this increases the probability that they both appear in the document to transcribe.
Using this methodology, we expect to extract a shortlist (compared to the baseline method) of potentially missing PNs.
Cosine-similarity-based method
In this method we want to consider additional lexical information to model the context: we will use not only proper names (as in the previous methods) but also verbs, adjectives and nouns. These words are extracted using Treetagger. They are lemmatized because we are interested in the semantic information. The other words are removed.
In this method we propose to use the term vector model [START_REF] Singhal | Modern Information Retrieval: A Brief Overview[END_REF]. This model is an algebraic representation of the content of a text document, in which documents are represented by word vectors. The proximity between documents is often calculated using the cosine similarity. We therefore represent diachronic documents and documents to transcribe as word vectors and use the cosine similarity between the vector models of these documents to extract relevant PNs.
We use the same steps as in the previous methods, with the following modifications:
A) In-vocabulary PN extraction from each test document:
As in the previous methods, each test document is transcribed using the speech recognition system and the standard vocabulary. Then, each document to be transcribed is represented by the histogram of occurrences of its component words: a bag-of-words (BOW) vector. As stated above, only verbs, adjectives, proper names and common nouns are lemmatized and considered.
B) Context extraction from diachronic documents:
Each diachronic document selected according to the time period is also represented by a BOW in the same way as above. Then, we build the list of new PNs by choosing, from the selected diachronic documents, the words that have been labeled as "proper name" and that are not in the standard vocabulary. This list is built in the same manner as in the previous methods. For each PN of this list, we calculate a PNvector. To do so, a common lexicon is first built: it contains the list of words (verbs, adjectives, nouns and proper names) that appear at least once in the selected diachronic documents or in the document to transcribe. Then, every BOW is projected onto the common lexicon. Finally, the PNvector is calculated as the sum of the BOWs of the selected diachronic documents in which the new PN appears.
For each PN from this list, we calculate the cosine similarity between its PNvector and the BOW of the document to transcribe. The new OOV PNs whose cosine similarity is greater than a threshold are selected.
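A compact sketch of this selection rule, with BOW vectors stored as sparse word-count dictionaries over the common lexicon; the helper names and the default threshold are illustrative assumptions.

```python
import math
from collections import Counter

def cosine(u: Counter, v: Counter) -> float:
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(u[w] * v[w] for w in set(u) & set(v))
    norm = math.sqrt(sum(c * c for c in u.values())) * \
           math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

def select_pns(test_bow, pn_vectors, threshold=0.05):
    """Keep the new PNs whose PNvector is close enough to the test-document BOW.

    pn_vectors : dict mapping a candidate OOV proper name to its PNvector,
                 i.e. the sum of the BOWs of the diachronic documents it occurs in.
    """
    return {pn for pn, vec in pn_vectors.items()
            if cosine(test_bow, vec) >= threshold}
```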
C) Vocabulary Augmentation:
This step is the same as step C of the local-window-based method.
Compared with the previous methods, the cosine method takes into account broader contextual information, using not only proper names but also the verbs, adjectives and nouns present in the selected diachronic documents and in the test document.
Experiments
Test corpus
To validate the proposed methodology, we used as test corpus five audio documents extracted from the ESTER2 corpus [START_REF] Galliano | The ESTER Phase II Evaluation Campaign for the Rich Transcription of French Broadcast News[END_REF] (see Table 1). The objective of the ESTER2 campaign was to assess the automatic transcription of broadcast news in French. The campaign targeted a wide variety of programs: news, debates, interviews, etc. In this preliminary experiment (Table 2), in-vocabulary PN extraction is performed from the manual transcriptions of the test documents instead of the automatic speech transcriptions. The goal of this preliminary study is to validate our proposed approaches.
Diachronic corpus
As diachronic corpus, we have used the Gigaword corpora: Agence France Presse (AFP) and Associated Press Worldstream (APW). French Gigaword is an archive of newswire text data and the timespans of collections covered for each are as follows: for AFP May 1994 -Dec 2008, for APW Nov 1994 -Dec 2008. The choice of Gigaword and ESTER corpora was driven by the fact that one is contemporary to the other, their temporal granularity is the day and they have the same textual genre (journalistic) and domain (politics, sports, etc.).
Using Treetagger, we have extracted 45981 OOV PNs from 6 months of the diachronic corpus. From these OOV PNs, only 103 are present in the test corpus, which corresponds to 71% of recall. It shows that it is necessary to filter this list of PNs to have a better tradeoff between the PN lexical coverage and the increase of lexicon size.
Transcription system
The ANTS (Automatic News Transcription System) [START_REF] Illina | The Automatic News Transcription System: ANTS, some Real Time Experiments[END_REF] used for these experiments is based on context-dependent HMM phone models trained on 200 hours of broadcast news audio. The recognition engine is Julius [START_REF] Lee | Recent Development of Open-Source Speech Recognition Engine Julius[END_REF]. Using the SRILM toolkit [START_REF] Stolcke | SRILM -An Extensible Language Modeling Toolkit[END_REF], the language model is estimated on text corpora of about 1800 million words. The text corpus comes from newspaper articles (Le Monde), broadcast transcriptions and data collected on the Internet. The language model is re-estimated for each augmented vocabulary. The baseline phonetic lexicon contains 218k pronunciations for the 97k words.
Experimental results
Baseline results
We call selected PNs the new proper names that we were able to retrieve from diachronic documents using our methods. We call retrieved OOV PNs the OOV PNs that belong to the selected PN list and that are present in the test documents. We build a specific augmented vocabulary for each test document, each chosen period and each method. The augmented vocabulary contains all words of standard vocabulary and the selected PNs given by the chosen method and corresponding to the chosen period. So, we need to estimate the language model (n-gram probabilities) for these retrieved OOV PNs. For this we have chosen to completely re-estimate the language model for each augmented vocabulary using the entire text corpus (see section 3.3). The best way to incorporate the new PNs in the language model is beyond the scope of this paper.
Our results are presented in terms of Recall (%): the number of retrieved OOV PNs divided by the number of OOV PNs contained in the document to transcribe. We place ourselves in the context of speech recognition. In this context, a PN that is present in the document to recognize but missing from the vocabulary of the recognition system will produce a significant error, because the PN cannot be recognized. However, adding to the vocabulary a PN that is not present (pronounced) in the test file will have little influence on the recognized sentence (although adding too many words may increase the confusion between words and thus cause errors). So, in our case, recall is more important than precision, and we therefore present the results in terms of recall.
For the recognition experiments, the Word Error Rate (WER) is given. In order to investigate whether time is a significant feature, we studied 3 time intervals in the diachronic documents:
─ 1 day: using the same day as the test document;
─ 1 week: from 3 days before to 3 days after the test document date;
─ 1 month: using the current month of the test document.
As we build an augmented vocabulary for each test file, the results presented in Table 3 are averaged over all test files. Table 3 shows that using diachronic documents whose date is closest to that of the test document (the document to transcribe) greatly reduces the number of added new proper names while maintaining an attractive recall. For example, limiting the time interval to 1 month reduces the set of PN candidates to 13069 (Table 3) while still retrieving 67.6% of the missing OOVs, compared to 45981 candidates (6 months) for almost the same recall (67.6%, cf. section 3.2). Moving from a one-month time period to one day, we reduce the number of selected PNs by a factor of 14 (13069/925) while the recall is reduced by a factor of 1.5. This result confirms the idea that using temporal information reduces the list of new PNs selected for the augmented vocabulary while maintaining a good recall. In the rest of this article, we study these three time periods (one day, one week and one month).
Local-window-based results
Table 4 presents the results for the local-window-based method on the test corpus for the three studied time periods and for different window sizes. The threshold occ is used to keep only the selected PNs whose number of occurrences is greater than occ. Compared to the baseline method, the local-window-based method significantly reduces the number of selected proper names and only slightly decreases the recall. For example, for a period of one day and a window size of 100, the number of selected PNs is divided by 3.6 compared to the baseline method, while the recall drops by about 6 points (253.4 versus 925 and 37.9% versus 44.0%). For the period of one month, the filter divides the number of selected PNs by 11 while losing only 9 points of recall compared to the baseline method. This shows the effectiveness of the proposed local-window-based method. We notice, however, that we were not able to recall 68% of the missing PNs as we did using the baseline: 58.6% is the maximal recall obtained with this methodology.
Mutual-information-based results
Table 5 shows the results for the method based on mutual information using different time periods and thresholds. A candidate PN is added to the selected PN list if its mutual information with an in-vocabulary PN is greater than the threshold. The best recall is obtained using a time period of one week. As for the local-window-based method, using diachronic documents from a one-day period is sufficient to obtain a recall of over 30%. For the recognition experiments (see the automatic speech recognition results below), we set the threshold to 0.001 for all time periods.
Cosine-similarity-based results
The results for the method based on cosine similarity are shown in Table 6. In order to further reduce the number of selected PNs, we keep only the PNs whose number of occurrences is greater than a threshold depending on the time period.
As for the mutual-information-based method, considering only a one-day time period to retrieve new PNs seems unsatisfactory. The best compromise between the recall and the number of selected PNs is obtained for the period of one month and a threshold of 0.05 (59.8% recall).
Automatic speech recognition results
For validating the proposed approaches, we performed the automatic transcription of the 5 test documents using an augmented lexicon generated by the three proposed methods.
We generate an augmented vocabulary for each test file, for each time period and for each PN selection method. To generate the pronunciations of the added PNs, we use an automatic approach based on Conditional Random Fields (CRF). We chose this approach because it has shown very good results compared to the best approaches of the state of the art [START_REF] Illina | Grapheme-to-Phoneme Conversion using Conditional Random Fields[END_REF]. A CRF [START_REF] Lafferty | Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data[END_REF] is a probabilistic model for labeling or segmenting structured data such as sequences, trees or trellises. CRFs can take long-term relations into account, are trained discriminatively and converge to a global optimum. Using this approach, we obtained a precision and recall of more than 98% for the phonetization of French common nouns (BDLex) [START_REF] Illina | Grapheme-to-Phoneme Conversion using Conditional Random Fields[END_REF]. In the context of this article, we trained our CRF with 12,000 phonetized proper names.
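As an illustration of this kind of phonetizer, the sketch below uses sklearn-crfsuite (one possible CRF toolkit, not necessarily the one used by the authors). The character-level feature set and the assumption that each training word is already aligned grapheme-by-phoneme are deliberate simplifications.

```python
import sklearn_crfsuite

def char_features(word, i):
    """Simple per-character features: the character and its immediate neighbours."""
    return {
        "char": word[i],
        "prev": word[i - 1] if i > 0 else "<s>",
        "next": word[i + 1] if i < len(word) - 1 else "</s>",
        "is_first": i == 0,
        "is_last": i == len(word) - 1,
    }

def word_to_features(word):
    return [char_features(word, i) for i in range(len(word))]

def train_g2p(train_pairs):
    """train_pairs: list of (word, phoneme_labels) with one label per character."""
    X = [word_to_features(w) for w, _ in train_pairs]
    y = [list(labels) for _, labels in train_pairs]
    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
    crf.fit(X, y)
    return crf

def phonetize(crf, word):
    """Predict one phoneme label per character of an unseen proper name."""
    return crf.predict_single(word_to_features(word))
```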
Table 7 presents the recognition results for the five test documents using the local-window-based method for two time periods (one week and one month) and a window size of 100. Compared to our standard lexicon, the augmented lexicon gives a significant decrease of the word error rate for the one-week period (confidence interval ± 0.4%). We recall that the rate of OOV PNs in the test corpus is about 1% (cf. section 3.1), so the expected improvement can hardly exceed that rate. The results for the mutual-information method (threshold 0.001) are presented in Table 8. On average, the augmented lexicon slightly reduces the Word Error Rate.
For the three methods, performance depends on the test document. For documents 1 and 2, regardless of the time period used to create the augmented lexicon, the word error rate improvement is significant. But for document 3, no improvement, or even a slight degradation, is observed. This can be due to the fact that the OOV proper names in the test document are not observed in the corresponding diachronic documents. The detailed results show that the performance in terms of WER depends on the type of document: for some broadcast programs we do not observe any improvement (for example, a debate on nuclear power), while for others a strong improvement is reached (news).
Finally, the three proposed methods give about the same performance.
If we consider only the recognition of proper names, using the standard lexicon the Proper Name Error Rate (PNER) is 47.7%. However, using the augmented lexicon obtained with the cosine method (one-month time period), the PNER drops to 37.4%. We therefore observe a large decrease of the PNER, which shows the effectiveness of the proposed methods.
Conclusions and discussion
This article has focused on the problem of out-of-vocabulary proper name retrieval for vocabulary extension using diachronic text documents (documents that change over time). This work is performed in the framework of automatic speech recognition. We investigated methods that augment the vocabulary with new proper names, using lexical and temporal features. The idea is to use in-vocabulary proper names as an anchor to collect new linked proper names from the diachronic corpus. Our context models are based on co-occurrences, mutual information and cosine similarity.
Experiments have been conducted on broadcast news audio documents (ESTER2 corpus) using AFP and APW text data as a diachronic corpus. The results validate the hypothesis that exploiting time and the lexical context can help to retrieve the missing proper names without excessive growth of the vocabulary size. The recognition results show a significant reduction of the word error rate using the augmented vocabulary and a large reduction of the proper name error rate.
An interesting perspective could be to exploit "semantic" information contained in the test document: when a precise date is recognized in a test document, the diachronic documents around this date could be used to bring in new proper names. Our future work will also focus on investigating the use of several Internet sources (Wiki, texts, videos, etc.).
Table 1. Dates of the test documents.
       Doc1         Doc2         Doc3         Doc4         Doc5
Date   2007/12/20   2007/12/21   2008/01/17   2008/01/18   2008/01/24
Table 2. Proper name coverage in test documents.
       Number of diff. words   Number of occur.   In-vocab PNs   OOV PNs   OOV PN occur.
Doc1   1350                    4099               86             44        93
Doc2   1446                    4604               89             39        70
Doc3   1958                    11803              43             25        63
Doc4   2107                    10152              90             39        71
Doc5   1432                    7867               48             27        107
All    -                       38525              -              -         404
Table 2 presents the occurrences of all PNs (in-vocabulary and out-of-vocabulary) in each test document with respect to our 97k ASR vocabulary. To artificially increase the OOV rate, we randomly removed 75 proper names occurring in the test set from our 97k ASR vocabulary. We call this vocabulary the standard vocabulary. Finally, the OOV proper name rate is about 1% (404/38525).
Table 3. Coverage in test documents of retrieved OOV PNs, averaged over all test files.
Time period   Selected PNs   Retrieved OOV PNs   Recall (%)
1 day         925            16                  44.0
1 week        4305           21                  58.6
1 month       13069          24                  67.6
Table 4. Local-window-based results according to window size and time period.
Time period       Window size   Selected PNs   Retrieved OOV PNs   Recall (%)
1 day (occ>0)     50            164.6          11.8                33.9
                  100           253.4          13.2                37.9
1 week (occ>1)    50            344.0          16.4                47.1
                  100           596.4          17.4                50.0
1 month (occ>2)   50            589.8          19.0                54.6
                  100           1137.2         20.4                58.6
Table 5. Mutual-information-based results according to threshold and time period.
Time period       Threshold   Selected PNs   Retrieved OOV PNs   Recall (%)
1 day (occ>0)     0.05        10.6           5.0                 14.4
                  0.01        295.0          12.8                36.8
                  0.005       421.2          14.2                40.8
                  0.001       531.2          15.4                44.25
1 week (occ>1)    0.05        3.8            3.0                 8.6
                  0.01        50.8           8.8                 25.3
                  0.005       228.6          12.0                34.5
                  0.001       947.8          18.4                52.9
1 month (occ>2)   0.05        2.6            1.6                 4.6
                  0.01        21.2           7.2                 20.7
                  0.005       56.4           9.4                 27.0
                  0.001       806.4          17.2                49.4
Table 6. Cosine-similarity-based results according to threshold and time period.
Time period       Threshold   Selected PNs   Retrieved OOV PNs   Recall (%)
1 day (occ>0)     0.025       813.4          15.4                44.3
                  0.05        437.6          14.4                41.4
                  0.075       131.4          11.2                32.2
                  0.1         51.8           8.4                 24.1
1 week (occ>1)    0.025       1880.0         19.4                55.8
                  0.05        1127.6         18.8                54.0
                  0.075       431.6          17.0                48.9
                  0.1         152.0          13.4                38.5
1 month (occ>2)   0.025       3795.6         21.4                61.5
                  0.05        2473.8         20.8                59.8
                  0.075       1010.2         19.4                55.8
                  0.1         334.4          17.0                48.9
Table 7. Word Error Rate (%) for the local-window-based method according to time period (local window size 100).
        Standard lexicon   Augmented lexicon (1 week)   Augmented lexicon (1 month)
Doc 1   19.7               17.7                         17.6
Doc 2   20.9               19.9                         20.1
Doc 3   28.3               28.2                         28.7
Doc 4   24.5               23.9                         24.5
Doc 5   36.5               36.0                         36.6
All     27.1               26.4                         26.9
Table 8. Word Error Rate (%) for the mutual-information-based method according to time period (threshold 0.001).
        Standard lexicon   Augmented lexicon (1 week)   Augmented lexicon (1 month)
Doc 1   19.7               18.0                         18.4
Doc 2   20.9               19.7                         20.0
Doc 3   28.3               28.1                         28.2
Doc 4   24.5               24.2                         24.2
Doc 5   36.5               36.1                         36.0
All     27.1               26.6                         26.7
Table 9. Word Error Rate (%) for the cosine-similarity-based method according to time period (augmented lexicon: 1 week with threshold 0.025 and occ>1; 1 month with threshold 0.05 and occ>2).
Acknowledgements
The authors would like to thank the ANR ContNomina SIMI-2 of the French National Research Agency (ANR) for funding. | 33,796 | [
"15663",
"15652",
"4977"
] | [
"420403",
"420403",
"100376",
"420403"
] |
01475084 | en | [
"sdv",
"stat"
] | 2024/03/04 23:41:46 | 2017 | https://hal.science/hal-01475084/file/SFRP%202017%20Infant%20Exposure.pdf | Fiocchi
Le Brusquet
ELFSTAT PROJECT : ASSESSMENT OF INFANT EXPOSURE TO EXTREMELY LOW FREQUENCY MAGNETIC FIELDS (ELF-MF, 40-800 HZ) AND POSSIBLE IMPACT ON HEALTH OF NEW TECHNOLOGIES
Extremely low frequency magnetic fields (ELF-MF) have been classified as possibly carcinogenic to humans based on reasonably consistent epidemiological data for childhood leukaemia [START_REF]IARC Monographs on the evaluation risks to humans[END_REF]. Despite this classification and the consequent implementation of numerous health risk assessment processes to evaluate the possible risk of ELF exposure for children, the real everyday exposure to ELF-MF in Europe is not well known. Indeed, only a few studies have analysed children's exposure to ELF-MF by collecting personal measurements, correlating the daily exposure patterns with children's movement and behaviour [START_REF] Forssén | Relative contribution of residential and occupational magnetic field exposure over twenty-four hours among people living close to and far from a power line[END_REF][START_REF] Struchen | Analysis of children's personal and bedroom exposure to ELF-MF in Italy and Switzerland[END_REF][START_REF] Magne | Exposure of the french population to 50 Hz magnetic field: general results and impact of high voltage power line[END_REF][START_REF] Magne | Analysis of high voltage network and train network in the EXPERS study[END_REF][START_REF] Bedja | Methodology of a study on the French population exposure to 50 Hz magnetic fields[END_REF][START_REF] Magne | Exposure of children to extremely low frequency magnetic fields in France: Results of the EXPERS study[END_REF]. Furthermore, the exposure assessment for MF sources other than power lines has not yet been addressed. Therefore, an improved knowledge of these exposure contributions is needed to better understand biological mechanisms and to interpret previous epidemiological studies. A correct assessment of the induced fields in tissues should also be carried out. Indeed, so far, estimates of induced fields are limited to the exposure of a few children's anatomies [START_REF] Dimbylow | Development of pregnant female, hybrid voxel-mathematical models and their application to the dosimetry of applied magnetic and electric fields at 50 Hz[END_REF][START_REF] Bakker | Children and adults exposed to low frequency magnetic fields at the ICNIRP reference levels: Theoretical assessment of the induced electric fields[END_REF] and to foetal exposure [START_REF] Bakker | Children and adults exposed to low frequency magnetic fields at the ICNIRP reference levels: Theoretical assessment of the induced electric fields[END_REF][START_REF] Dimbylow | The effects of body posture, anatomy, age and pregnancy on the calculation of induced current densities at 50 Hz[END_REF][START_REF] Cech | Fetal exposure to low frequency electric and magnetic fields[END_REF][START_REF] Zupanic | Numerical Assessment of Induced Current Densities for Pregnant Women Exposed to 50 Hz Electromagnetic Field[END_REF][START_REF] Liorni | Dosimetric study of fetal exposure to uniform magnetic fields at 50 Hz[END_REF]. Personal exposure measurements and computational dosimetry contribute to provide a picture of the impact of exposure on health. However, due to the high variability of real exposure scenarios, exposure assessment by means of those tools turns into a highly time-consuming process. The ELFSTAT project started in November 2015 and is funded by the French ANSES (2015-2019, Grant agreement n. 2015/1/202). The main purpose of ELFSTAT is to characterize children's exposure to low frequency magnetic fields (MF, from 40 to 800 Hz) in real exposure scenarios using stochastic approaches.
Both the global exposure at the personal level and tissue dosimetry due to far-field and near-field sources will be investigated. Finally, a prediction of the impact of new technologies (e.g. smart grids, electric vehicles) on children's exposure will be carried out, extending the frequency range to the intermediate frequencies (IF).
The ELFSTAT project aims to develop stochastic models able to provide exposure assessments of children in several exposure conditions, hence accounting for the high variability of real exposure scenarios, on the basis of relatively few experimental and/or computational data. The project is divided into three Work Packages (WP):
WP1 - Stochastic models: the aim is to develop stochastic models of personal exposure and tissue dosimetry. The exposure will be characterized in terms of magnetic field amplitude at the personal level and of induced electric field for tissue dosimetry.
WP2 - Children's exposure assessment to ELF-MF: the aim is to characterize children's exposure to low frequency magnetic fields (MF) from 40 to 800 Hz using the stochastic models developed in WP1. Furthermore, appropriate indicators to represent children's exposure based on the stochastic exposure assessments will be developed.
WP3 - Exposure to new-technology ELF devices: the aim is to evaluate the impact of new energy technologies on children's exposure. A systematic literature review of research papers on the ELF-MF exposure due to new technologies will be conducted, and the change of exposure due to these new sources will be estimated.
Ongoing work concerns: i) modelling of extremely low frequency magnetic field time series obtained from the EU project ARIMMORA [START_REF] Struchen | Analysis of children's personal and bedroom exposure to ELF-MF in Italy and Switzerland[END_REF] and the EXPERS database [START_REF] Bedja | Methodology of a study on the French population exposure to 50 Hz magnetic fields[END_REF][START_REF] Magne | Exposure of children to extremely low frequency magnetic fields in France: Results of the EXPERS study[END_REF] by segmenting them into blocks modelled by locally stationary processes [START_REF] Killick | Optimal detection of changepoints with a linear computational cost[END_REF]; ii) the development of stochastic models of the induced electric field in foetal tissues and in children exposed to MF by means of polynomial chaos theory (a toy sketch of the surrogate-modelling idea is given below); iii) identification of the new technologies that could change the ELF EMF exposure scenario for the young population.
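As a toy illustration of the surrogate-modelling idea behind polynomial chaos (not the project's actual implementation), the sketch below fits a one-dimensional Hermite polynomial expansion to a handful of simulated responses by least squares and then evaluates it cheaply for many input samples; the input distribution, expansion order and synthetic responses are assumptions made for illustration.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def fit_pce_surrogate(samples, responses, degree=3):
    """Least-squares fit of a 1D Hermite (probabilists') polynomial surrogate.

    samples   : standard-normal input samples (e.g. a normalized source parameter)
    responses : corresponding induced-field values from a few full simulations
    """
    basis = hermevander(samples, degree)               # Hermite design matrix
    coeffs, *_ = np.linalg.lstsq(basis, responses, rcond=None)
    return coeffs

def evaluate_surrogate(coeffs, new_samples):
    return hermevander(new_samples, len(coeffs) - 1) @ coeffs

# Exposure statistics can then be estimated from many cheap surrogate evaluations
# instead of many full dosimetric simulations.
xi = np.random.standard_normal(20)
y = 1.0 + 0.3 * xi + 0.05 * xi**2                      # stand-in for simulated fields
coeffs = fit_pce_surrogate(xi, y)
estimate = evaluate_surrogate(coeffs, np.random.standard_normal(10000))
print(estimate.mean(), estimate.std())
```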
Acknowledgements
The ELFSTAT Project is supported by The French National Program for Environmental and Occupational Health of Anses (2015/1/202). The French data come from the EXPERS study database, subsidized by the French Ministry of Health, EDF and RTE, and carried out by Supélec, EDF and RTE. | 6,586 | [
"781259",
"781260",
"781261",
"21973",
"779197"
] | [
"467967",
"467967",
"467967",
"1289",
"1289",
"149966",
"528442",
"247965",
"467967"
] |
01475086 | en | [
"phys"
] | 2024/03/04 23:41:46 | 2016 | https://theses.hal.science/tel-01475086/file/TH2016RIVAFEDERICA.pdf | Université Claude Bernard - Lyon 1
Hignette, Iwan Cornelius, for all the hints about the simulations. I would also like to thank all the wonderful people I met in Grenoble which made the last years an amazing period of my life. Thank you to Leandro, the first friend I met in Grenoble, Leoncino, for all the dinner together (with or without invitation) and for the company during the "writing-weekends", Niccolo' for listening for the second time at my complaints while writing, Pasini for always washing the dishes, Flavia, for taking care of Pasini, Micheal, for the movie selection. Thank you to all the officemates from the office 01.1.06: Andrea, Sara, Ilaria, Genziana, Raphael (ok, you were there enough that I can thank you in this line). Thank you Stefan and Katharina, for all the time spent together in France, Germany and in the Netherlands! Thank you Claire, for these years of flat-sharing and the intensive French lessons. Thank you to all the friends in Italy and to the ones that moved as me to other countries. Unfortunately, hanging out with you is not so easy as before, but thank you for every time you find the time for a message, a call, or a drink together. Thank you to my family, my brother Matteo and his wife Francesca, for always being an example, and to my parents Paola and Edo, that supported me every single day. Aryan, out of one million reasons to
Scope of the thesis
In 1895 the German scientist W. C. Röntgen discovered X-rays and showed their potential as a tool to investigate matter. Due to the development of X-ray sources, experimental techniques and detectors, X-ray imaging today has been improved to the point that structures down to the nanoscale can be resolved. Firstly, the availability of modern X-ray generators and synchrotron sources improved the quality of the X-ray beam in terms of flux, coherence and divergence. Secondly, many new experimental techniques have been developed. For example, X-ray imaging can today exploit not only X-ray absorption contrast but also X-ray phase contrast. Lastly, X-ray detectors evolved from photographic films to modern semiconductor detectors, which allow fast recording of many digital images.
As a consequence of the requirements coming from many different X-ray applications, several kinds of detectors have been developed. At synchrotrons, the state-of-the-art detectors for high-resolution imaging (i.e. below 2 μm) are indirect detectors using a single crystal thin film scintillator, microscope optics and a pixelated semiconductor camera. Such detectors and single crystal thin film scintillators are the subject of this thesis. The aim was the study of the performance of indirect detectors and the development of new scintillators, in an attempt to improve the detectors' performance, especially at high energy (20-100 keV).
A general introduction summarizing advantages and limitations of different kinds of X-ray detectors and scintillators is presented in chapter one.
Afterwards, this thesis is divided into two main subjects. The first part (chapters 2 and 3) describes a model to calculate the spatial resolution of indirect detectors. The second part is focused on materials development. Aluminum perovskite and lutetium oxide have been developed as single crystal thin films using liquid phase epitaxy. The description of the optimization of the crystal growth process, material characterization and imaging performances is presented in chapters 4, 5 and 6.
1.2 Detectors for synchrotron imaging applications
X-ray imaging techniques are powerful tools to investigate 3D structures without using destructive analysis. Currently X-ray imaging techniques can resolve details down to the nanometer scale and allow the investigation of structures with variable absorptions through the combination of absorption and phase contrast. An example is shown in figure 1.1; details can be found in [START_REF] Moreau | Multiscale 3D virtual dissections of 100-million-year-old flowers using X-Ray synchrotron micro- and nanotomography[END_REF]. Fossil flowers were imaged using a combination of phase contrast X-ray imaging techniques, with a resolution in the range from 50 nm to 0.75 μm. The breakthrough shown in these results is the 3D investigation of individual pollen grains and their nanometer structures. To achieve this, the X-ray energy was increased up to 29.5 keV to reduce the sample's absorption, which is detrimental for the phase contrast. Applications such as the one shown in figure 1.1 require a detector with spatial resolution down to the micrometer or sub-micrometer scale. Moreover, the detector must be efficient at X-ray energies above 20 keV, since these high energies are often selected to increase the X-ray penetration in the object. Two-dimensional (pixelized) detectors are today preferred for many X-ray applications, not only imaging but also crystallography, absorption or scattering experiments. Not only are these state-of-the-art detectors in demand at large experimental facilities such as synchrotrons or X-ray free-electron lasers, but they are also often preferred for experiments using X-ray laboratory sources and widely used for medical or security applications. Depending on the mechanism of detection, X-ray area detectors can be classified into two groups: direct detectors using semiconductors, which convert X-ray photons directly into an electronic signal, and indirect detectors, which first convert the X-ray photons into photons of lower energy which are detected subsequently. The detection mode is another type of classification for detectors. There are two types, photon-counting or integrating detectors. In photon-counting detectors, the pulse generated from an individual X-ray photon is immediately processed and eventually counted. The pulse can give information on the arrival time of the single photon as well as its energy.
Integrating detectors accumulate the created charge over a set exposure time and the total signal is read at its completion. The information is the charge generated by the total amount of photons detected during the exposure time.
It is worth mentioning that the use of point spectroscopy detectors often remains the best choice when high energy-resolution is required, as is for instance the case of elemental imaging using X-ray fluorescence, where the spatial resolution is obtained by scanning the X-ray beam across the sample. The detectors based on indirect detection can be schematically divided into three elements which can be chosen to optimize the detector for a specific task. The first element is the converter screen, called the scintillator. Many converter screens, produced using different materials and technologies, are available today. For example, for X-ray imaging, single crystalline film (SCF) scintillators are normally preferred as converter screens when micrometer or sub-micrometer resolution is required [START_REF] Martin | Recent developments in X-ray imaging with micrometer spatial resolution[END_REF][START_REF] Douissard | A novel epitaxially grown LSO-based thin-film scintillator for micro-imaging using hard synchrotron radiation[END_REF], micro-structured crystalline scintillators are selected to improve efficiency at high X-ray energies, and powder or ceramic phosphors are often the most viable solution when a large field of view is required. The second element is an optical guide or projection system which couples the converter screen with the imaging camera. This part can be made using lenses or optical fiber bundles. The latter are normally more efficient, but resolution below a few micrometers can only be obtained using microscope optics [START_REF] Koch | X-ray imaging with submicrometer resolution employing transparent luminescent screens[END_REF][START_REF] Uesugi | Comparison of lens- and fiber-coupled CCD detectors for X-ray computed tomography[END_REF]. The last element is the imaging camera. Two main technologies are available: CCD (Charge-Coupled Device) sensors, as well as their derivatives such as EMCCDs (Electron Multiplying CCDs), and CMOS (Complementary Metal Oxide Semiconductor) sensors. In a CCD camera, each pixel is associated with a potential well, where the electrons are accumulated. At the end of the exposure the charge of each pixel is transferred from well to well in a sequence and finally amplified and converted into a digital signal. In a CMOS camera, additional electronics process the signal at each pixel. The reading can be done line by line without stopping the acquisition (rolling shutter mode), allowing higher frame rates as compared to a CCD. However, the additional electronics also limit the smallest achievable pixel size. Currently many different imaging cameras have been developed, and both CCD and CMOS detector types are mature and reliable technologies. A drawback of 2D indirect detectors is their noise, due to the camera but also due to the additional step in the detection chain. The maximum attainable dynamic range is limited on one side by the noise and on the other side by the full-well capacity. However, the only viable way to reach sub-micrometer scale resolution is through the use of indirect detectors, which as an added benefit can be used efficiently at very high energies, if a proper converter screen is selected, and in high synchrotron fluxes. Moreover, indirect detectors are cheaper than pixelized direct detectors (especially if compared to detectors based on high-Z semiconductor materials such as CdTe) and their configuration is more flexible, meaning they can be adapted to a broader range of experimental conditions.
Direct 2D detectors
Direct X-ray detectors convert X-rays directly into electronic charge. Two technologies have been developed for direct X-ray detection: hybrid pixel array detectors and monolithic detectors.
Hybrid pixel array detectors (HPADs) are made of two layers: a pixelized sensor layer, where the X-rays are absorbed and converted into electron-hole pairs, and a second layer responsible for the signal processing. Each pixel in the first layer is micro-soldered to a chip in the second one through a so-called bump. The advantage of hybrid pixels is the possibility to separately optimize the two layers. To enhance the absorption at high X-ray energies, CdTe or GaAs can be selected for the sensor layer while silicon can still be used for the electronic circuits. The drawback is the delicate and expensive operation of interconnection of the two layers, which also limits the smallest obtainable pixel size.
Most HPADs work only in photon-counting mode, which means that each time a photon is detected, the signal is immediately processed, compared with a threshold and counted or rejected. This mode allows noise-free performance and energy discrimination, but the main limitation is the maximum X-ray flux that these detectors can manage (10⁶-10⁸ ph/mm²/s). If the flux is higher, an easy feat at synchrotron sources, the arrival time between two photons is lower than the detector's dead time and the two photons get counted as one. Today some HPADs working in integration mode are under development, as for example the MÖNCH detector [START_REF] Dinapoli | MÖNCH, a small pitch, integrating hybrid pixel detector for X-ray applications[END_REF].
Monolithic detectors (MDs) have both the absorbing sensor and the readout circuits on the same chip. The advanced circuits needed for the readout are currently only made using silicon, which immediately limits MDs to applications using relatively low X-ray energies, as the absorbing sensor, also made of silicon, becomes transparent for energies above 20 keV. Monolithic detectors with pixels as small as 20 × 20 μm² can be fabricated, which is smaller than the limit attainable today for commercial HPADs, as for instance the MAXIPIX HPAD which has pixels of 55 × 55 μm² (7). However, HPADs with smaller pixel size are under development: the MÖNCH prototype has pixels of 25 × 25 μm². Two kinds of monolithic detectors have been developed and are currently investigated: passive-pixel MDs, which can be seen as direct-detection CCDs, and active-pixel MDs, which correspond to direct-detection CMOS chips [START_REF] Hatsui | X-ray imaging detectors for synchrotron and XFEL sources[END_REF].
1.2.3 Some 2D detectors at the ESRF
Compared to laboratory sources, the X-ray fluxes at modern synchrotrons are much higher. This can be seen in figure 1.2, where the brilliance of laboratory sources is compared with that of other sources. While the brilliance of X-ray laboratory sources is well below 10¹⁰ ph/s/mrad²/0.1%bw, for most modern synchrotrons it can reach values above 10²¹ ph/s/mrad²/0.1%bw, which is more than 10 orders of magnitude higher.
Figure 1.2: Brilliance of different X-ray sources [START_REF] Willmott | An introduction to synchrotron radiation: Techniques and applications[END_REF].
The photon flux available at the sample position depends on the X-ray energy and the beamline configuration, but fluxes above 10^9 ph/s/mm^2 (typically 10^12-10^13 ph/s/mm^2) are easily attained. Due to the required dead time in HPADs, the maximum photon flux is approximately 10^4-10^6 photons per pixel per second, which for a 100 × 100 μm^2 pixel corresponds to a maximum photon flux of 10^8 ph/mm^2/s. Consequently, HPADs can only be used for applications where the X-ray beam does not impinge directly on the detector, as is the case in diffraction or inelastic scattering experiments. However, even in this case the flux is often too high and still needs to be attenuated, for example close to intense Bragg reflection peaks.
The ESRF, as well as several other synchrotrons (see figure 1.3), can deliver high-energy X-ray photons, far above 20 keV. Such high energies are needed to increase the penetration of X-rays into matter and allow the investigation of thick and highly absorbing samples. Unfortunately, due to their longer penetration length, high-energy photons are also more challenging to detect. Since silicon sensors are not sufficiently absorbing above 20 keV, the solution has to be sought in indirect detectors or in HPADs with high-Z sensors (e.g. CdTe, CdZnTe and GaAs).
Figure 1.3: X-ray energy spectra of different synchrotrons and free electron lasers [START_REF] Desy | [END_REF].
HPADs are the state-of-the-art detectors for crystallography and inelastic scattering experiments. The newest HPAD is the EIGER detector, developed at PSI and commercialized by DECTRIS [START_REF] Dinapoli | Next generation single photon counting detector for X-ray applications[END_REF]. This detector was recently installed at two ESRF beamlines, ID30 and ID13, where it can only be used for low-energy experiments since it is currently only available with a silicon sensor layer. For these beamlines this is not problematic. The ID30 beamline is dedicated to structural biology applications, where crystals made of macromolecules are investigated using X-ray diffraction. Since the absorption of biological samples is low, the experiments are performed at X-ray energies below 20 keV. In fact, the EIGER detector is mounted in a setup for X-ray diffraction working at a fixed energy of 12.8 keV. Approximately the same energy (13 keV) is used at beamline ID13, which delivers a small focal spot used for diffraction and small angle X-ray scattering (SAXS). A different HPAD system is the MAXIPIX detector, developed at the ESRF and based on the Medipix chip [START_REF] Ponchut | MAXIPIX, a fast readout photon-counting X-ray area detector for synchrotron applications[END_REF]. MAXIPIX detectors are widely used at the ESRF, for example at beamline ID01 for nano-diffraction and at ID03 for surface diffraction.
Both beamlines are optimized to work below 25 keV; therefore, the MAXIPIX detector with a silicon sensor is still acceptable in terms of efficiency.
However, the same experiments can be performed at higher energies, for example at ID31, where a monochromatic X-ray beam of up to 140 keV can be delivered. Such high energies are used for material investigations where high penetration is required, for example in the study of deeply buried interfaces. Driven by the interest from medical imaging and homeland security, much progress in high-Z sensors made of CdTe, Cd(Zn)Te and GaAs has been made in recent years. Although the homogeneity, quality and radiation hardness of these high-Z materials are not yet as good as those of silicon, some HPADs based on these materials have already been developed. For example, some HPADs used at the ESRF are available with a CdTe sensor layer, e.g. the MAXIPIX [START_REF] Ruat | Characterization of a X-ray pixellated CdTe detector with TIMEPIX photon-counting readout chip[END_REF][START_REF] Aef De | Preparation of a smooth GaNGallium solid liquid interface[END_REF], the Dectris Pilatus (ID31) and the Pixirad (BM05) detectors. An alternative for the sensor layer is GaAs, which is being used in the LAMBDA detector [START_REF] Pennicard | The LAMBDA photon-counting pixel detector[END_REF].
In X-ray absorption and phase contrast imaging experiments, part of the direct X-ray beam impinges on the detector, leading to X-ray fluxes that are too high to be managed in photon-counting mode and can even damage the detector. Imaging experiments therefore often prefer indirect detection over HPAD technology, for several reasons. Firstly, radiation damage to the camera can be avoided by modifying the detector design. Secondly, in integration mode higher fluxes can be managed by adjusting the integration time. Thirdly, indirect detectors are more flexible than direct detectors, as the visible image emitted by the converter screen can be magnified to obtain smaller pixels or demagnified for a larger field of view. And lastly, indirect detectors are normally cheaper than HPADs. Typical prices for 4-megapixel commercial products are of the order of 100 k€ for indirect 2D detectors, 600 k€ for HPADs with a Si sensor and 1 M€ for those with a CdTe sensor.
A few examples of ESRF beamlines which use indirect detector technology are ID19, ID17 and ID16. ID19 is a beamline mainly dedicated to micro-tomography. Its detectors are based on indirect detection and can be configured to optimize the performance depending on the demands of the experiment. Different converter screens, optics and cameras are available, and the scientists can quickly change the detector configuration [START_REF] Douissard | A versatile indirect detector design for hard X-ray microimaging[END_REF]. For sub-micrometer spatial resolution, single crystal thin film scintillators are combined with high numerical aperture microscope optics and pixelized cameras, for example the FreLon camera [START_REF] Labiche | Invited article: The fast readout low noise camera as a versatile x-ray detector for time resolved dispersive extended x-ray absorption ne structure and diraction studies of dynamic problems in materials science, chemistry, and catalysis[END_REF]. Thicker scintillators doped with Ce are preferred to improve the efficiency at high X-ray energies or for time-resolved experiments. ID17 is a beamline for biomedical and paleontology applications. The peculiarity of imaging experiments performed on this beamline is the large field of view (up to 15 cm), while the resolution is normally limited to a few hundred micrometers. In this case, configurations using powder phosphors (Gadox) and fiber optic coupling have been implemented [START_REF] Nemoz | Synchrotron radiation computed tomography station at the ESRF biomedical beamline[END_REF]. ID16 is a beamline dedicated to nano-imaging and nano-analysis and is specialized in X-ray techniques to investigate materials down to the nanoscale. Varying detectors are used depending on the experiment. For absorption and phase contrast imaging, the detector is based on indirect detection with a configuration similar to ID19. For ptychography, a technique based on X-ray diffraction, a MAXIPIX detector is used in combination with a Frelon camera in indirect mode: the central part of the pattern is detected by the indirect detector and the ring by the MAXIPIX.
Spectroscopy detectors
X-ray imaging synchrotron techniques for elemental mapping, e.g. X-ray fluorescence (XRF) imaging, mainly use point detectors which are sensitive to the X-ray energy. The spatial resolution is obtained by focusing the X-ray beam down to the nanometer scale and moving the sample across the beam.
A first example at the ESRF is the nano-XRF setup at beamline ID16B, which can exploit X-ray energies up to 70 keV [START_REF] Martinez-Criado | ID16B: a hard X-ray nanoprobe beamline at the ESRF for nano-analysis[END_REF]. The element discrimination is based on energy dispersive (ED) detectors: silicon drift detectors (SDDs) are used up to 25 keV, while at higher energies they are replaced by germanium-based detectors.
A second example is the micro-XRF setup at beamline ID21 [START_REF] Szlachetko | Wavelengthdispersive spectrometer for X-ray microuorescence analysis at the Xray microscopy beamline ID21 (ESRF)[END_REF]. The detection system includes a wavelength dispersive (WD) spectrometer. The fluorescence X-ray photons are guided with polycapillary optics onto a monochromator and are detected using a gas-flow proportional counter. Compared to energy dispersive SDDs, which have an energy resolution limited to hundreds of eV, WD spectrometers enhance the energy resolution to tens of eV. Additionally, they are more efficient in the low X-ray energy range (1-10 keV), allowing a more precise and unequivocal elemental identification.
Detector characterization for high-resolution X-ray imaging
Many parameters need to be taken into account in the evaluation of a detector, and the optimization of one parameter often comes at the expense of another. The design of a detector is thus a compromise. A first example is the trade-off between spatial resolution and efficiency: high resolution requires thin film scintillators, leading to weak absorption; in addition, the acquisition speed is reduced due to the time needed to integrate the signal. A second example is the camera's frame rate (speed), which can be improved at the price of the dynamic range and the number of pixels.
Hybrid pixel detectors outperform indirect detectors in terms of sensitivity and low noise, but the flux which can be detected is lower. Consequently, the experiment to be performed needs to be carefully evaluated in order to understand which detector parameters have to be optimized for a successful measurement. In addition, it is important to keep the cost of the detector manageable. HPADs are much more expensive than indirect detectors, which are for this reason often the preferred choice in many fields. A brief introduction of some important detector parameters is presented in the following sections.
Detective quantum efficiency
Every detector used to record a signal inherently introduces an uncertainty in the measurement. One of the most widely accepted parameters to quantify this uncertainty is the detective quantum efficiency (DQE). The DQE value ranges from 0 for a detector which does not detect any signal, to 1 for an ideal detector which perfectly localizes the full energy of every incident X-ray photon. In reality a DQE equal to 1 cannot be obtained, since any statistical process, background noise or loss of events involved in the detection process lowers the DQE value. Moreover, a compromise has to be made between the DQE and other properties such as the readout speed and the dynamic range.
The DQE is defined as the square of the output signal-to-noise ratio divided by the square of the input signal-to-noise ratio:

DQE = (S_o/\sigma_o)^2 / (S_i/\sigma_i)^2 ,    (1.1)

where S_{o/i} and \sigma_{o/i} are the average value and the standard deviation of the output/input signal. If the input signal is described by a Poisson distribution, equation 1.1 becomes

DQE = 1/(N_i R) ,  with  R = (\sigma_o/S_o)^2 ,    (1.2)

where N_i is the number of incident X-ray photons and R is the relative variance of the output signal.
In the detection process, the signal generated by the detected X-ray photon propagates through the different elements of the detector, resulting in a signal at the output. Therefore, the elements and processes involved must be included if one wants to calculate the DQE. From the gain (or efficiency), statistical distribution and noise of each process involved, the relative variance of the entire system R is given by

R = R_0 + \frac{R_1}{m_0} + \frac{R_2}{m_0 m_1} + \dots + \frac{R_n}{\prod_{i=0}^{n-1} m_i} ,    (1.3)

where m_0 is the number of incident X-ray photons, R_0 is its relative variance, and m_i and R_i are respectively the gain and the relative variance of each of the processes involved in the detection cascade [START_REF] Arndt | X-ray television area detectors for macromolecular structural studies with synchrotron radiation sources[END_REF]. It is clear that the more processes are involved in the detection, the lower the DQE will be. Many models have been developed to estimate the DQE of the various kinds of detectors and configurations used for different applications. Following the approach reported in [START_REF] Arndt | X-ray television area detectors for macromolecular structural studies with synchrotron radiation sources[END_REF] and [START_REF] Nm Allinson | Development of non-intensied charge-coupled device area X-ray detectors[END_REF], we can estimate the DQE at low frequencies for indirect high-spatial-resolution X-ray detectors. A cascade of processes is involved in the detection, each one with a statistical distribution:
- X-ray absorption in the scintillator: m_0 = N_i \eta_{abs}, R_0 = 1/(N_i \eta_{abs})
- Scintillator light emission: m_1 = \eta_{LY}, R_1 = 1/\eta_{LY} + R_s
- Light transmission through the optics: m_2 = T_l, R_2 = 1/T_l - 1
- Camera quantum efficiency: m_3 = \eta_{QE}, R_3 = 1/\eta_{QE}
- Camera noise: R_4 = n_{eff}^2 / N_i

N_i is the incident photon flux, \eta_{abs} and \eta_{LY} are the scintillator absorption efficiency and light yield. R_s depends on the scintillator: it is approximately 0 for a transparent single crystal and is higher for a powder phosphor, because the scattered and re-absorbed light broadens the statistical distribution. Trapping of light due to total internal reflection is included in the evaluation of \eta_{LY}. T_l is the transmission of the optical path.
As a first approximation, the transmission of the optics can be assumed equal to its upper limit, which is given by the collection efficiency \eta_{col} = \frac{1}{4}(NA/n)^2. \eta_{QE} is the quantum efficiency of the sensor at the emission wavelength of the scintillator, while n_{eff} is the camera noise. The absorption of the X-ray window placed before the scintillator is neglected. Equation 1.3 can then be rewritten as:

R = \frac{1}{\eta_{abs} N_i}\left[ 1 + \frac{1}{\eta_{LY}} + \frac{1}{\eta_{LY}}\left(\frac{1}{T_l} - 1\right) + \frac{1}{\eta_{LY} T_l \eta_{QE}} + \frac{n_{eff}^2}{\eta_{LY} T_l \eta_{QE} N_i} \right] .    (1.4)
The last term is negligible for high fluxes, which is often the case for synchrotron radiation. Therefore, as reported in (4), the DQE for indirect X-ray detectors, configured with a thin film scintillator and microscope optics, can be estimated as:

DQE = \eta_{abs}\left[ 1 + \frac{1 + 1/\eta_{QE}}{\eta_{col}\,\eta_{LY}} \right]^{-1} .    (1.5)
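As a rough numerical sanity check of equations 1.4 and 1.5, the cascade can be evaluated directly. The sketch below is not taken from this work; all parameter values (absorption efficiency, light yield, numerical aperture, refractive index, camera noise) are illustrative assumptions only.

import math

def eta_col(na, n_index):
    # Upper limit of the optics transmission: eta_col = (1/4) * (NA/n)^2
    return 0.25 * (na / n_index) ** 2

def dqe_full(n_i, eta_abs, eta_ly, t_l, eta_qe, n_eff):
    # DQE from the cascaded relative variance of Eq. 1.4 (R_s ~ 0 assumed,
    # i.e. a transparent single-crystal film); DQE = 1 / (N_i * R).
    r = (1.0 / (eta_abs * n_i)) * (
        1.0
        + 1.0 / eta_ly
        + (1.0 / eta_ly) * (1.0 / t_l - 1.0)
        + 1.0 / (eta_ly * t_l * eta_qe)
        + n_eff ** 2 / (eta_ly * t_l * eta_qe * n_i)
    )
    return 1.0 / (n_i * r)

def dqe_simplified(eta_abs, eta_ly, eta_qe, na, n_index):
    # Eq. 1.5: camera noise neglected (valid for high synchrotron fluxes).
    return eta_abs / (1.0 + (1.0 + 1.0 / eta_qe) / (eta_col(na, n_index) * eta_ly))

if __name__ == "__main__":
    # Hypothetical values: 10 % absorption, ~500 photons emitted per absorbed
    # X-ray, NA = 0.75 objective, film refractive index ~ 1.8, QE = 0.6.
    pars = dict(eta_abs=0.10, eta_ly=500.0, eta_qe=0.6, na=0.75, n_index=1.8)
    t_l = eta_col(pars["na"], pars["n_index"])
    print("eta_col   =", round(t_l, 4))
    print("DQE (1.5) =", round(dqe_simplified(**pars), 4))
    print("DQE (1.4) =", round(dqe_full(1e4, pars["eta_abs"], pars["eta_ly"],
                                         t_l, pars["eta_qe"], n_eff=5.0), 4))

With these assumed numbers both expressions give a DQE of roughly 0.09, i.e. dominated by the low absorption of the thin film, which is the behaviour the text describes.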
Dynamic Range
The dynamic range (DR) of a detector is generally defined as the saturation level of the detector divided by its noise level.
Imaging sensors in indirect detectors work in integration mode, and the DR is limited by the full-well capacity and the noise. In the case of the Frelon camera, a CCD widely used for X-ray imaging at the ESRF, the readout noise is approximately 20 electrons/pixel and the full-well capacity is 3·10^5 electrons/pixel, leading to a dynamic range of 15000 gray levels, or 83.5 dB [START_REF] Labiche | Invited article: The fast readout low noise camera as a versatile x-ray detector for time resolved dispersive extended x-ray absorption ne structure and diraction studies of dynamic problems in materials science, chemistry, and catalysis[END_REF]. The scientific CMOS pco.edge has a noise of 1.6 electrons/pixel and a full-well capacity of 3·10^4 electrons/pixel; hence the DR is 85.4 dB. However, the dynamic range can be further reduced depending on the experimental conditions. For example, fast imaging increases the noise, because the camera readout noise grows when the sensor is used at high speed, and thus reduces the DR. Additionally, the dynamic range of an indirect detector depends heavily on the uniformity of the scintillator and on its optical quality. For instance, in a region of the scintillator where the light emission is significantly higher than the average, the exposure time needs to be reduced to avoid saturation of the camera, leading to a reduction of the DR. Since HPADs count every X-ray photon individually, they can be practically noise free if the energy threshold is properly set, and the saturation is only limited by the readout dead time. Their DR is therefore higher than that of sensors working in integration mode and does not depend on the experimental conditions.
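The dB figures above follow directly from the ratio of full-well capacity to noise; a minimal check using the values quoted in the text:

import math

def dynamic_range_db(full_well_e, noise_e):
    # DR = saturation level / noise level, as a ratio and in dB (20*log10).
    ratio = full_well_e / noise_e
    return ratio, 20.0 * math.log10(ratio)

for name, fw, noise in [("Frelon CCD", 3e5, 20.0), ("pco.edge sCMOS", 3e4, 1.6)]:
    ratio, db = dynamic_range_db(fw, noise)
    print(f"{name:15s}  DR = {ratio:8.0f} : 1  ({db:.1f} dB)")

This reproduces the 83.5 dB and 85.4 dB values given for the two cameras.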
Spatial resolution and Modulation Transfer Function
The response of an ideal pixelized detector to a point-like object is a Dirac delta function; two separate objects are then always discernible in the image. In a real detector the response to a point-like object is a broader distribution, known as the Point Spread Function (PSF). For two PSFs to be discernible, a minimum distance between the two point objects has to exist. This minimum distance defines the spatial resolution limit, but there is a certain ambiguity in the degree of separation accepted as sufficient to distinguish two separate PSFs. The concept of contrast removes this ambiguity. Considering two objects with the same intensity, the contrast or modulation M is defined as

M = \frac{I_{Max} - I_{Min}}{I_{Max} + I_{Min}} ,    (1.6)

where I_{Max} is the maximum intensity and I_{Min} the minimum intensity measured in between the two objects [START_REF] Glenn D Boreman | Modulation transfer function in optical and electro-optical systems[END_REF]. For example, when we state that a system has 1 μm spatial resolution, we should also specify the contrast value at which the spatial resolution is defined, and whether the limit is determined by the camera pixel size.
The Modulation Transfer Function (MTF) describes the spatial response of a system completely, since it includes both the concept of resolution and that of contrast. It is defined as the ratio between the modulation of the image, M_{image}, and the modulation of the object, M_{object}, at different spatial frequencies ν:

MTF(ν) = \frac{M_{image}(ν)}{M_{object}(ν)} .    (1.7)
Evaluation of the MTF
Three methods to evaluate the MTF will be described.
A first way to determine the MTF is to calculate the contrast in the image of a periodic grating made of X-ray absorbing and non-absorbing lines, as displayed in figure 1.4. When the spatial frequency is high compared to the resolution of the detector, the intensity distributions of the images of different lines overlap, thereby reducing the contrast. The MTF is the curve describing the measured contrast as a function of the spatial frequency of the periodic grating.
From a mathematical point of view, we can describe the MTF by first looking at the irradiance distribution g(x,y) of an image obtained with an optical system, which is the convolution of the source distribution f(x,y) with the impulse response h(x,y):

g(x, y) = f(x, y) ⊗ h(x, y) .    (1.8)

The impulse response h(x,y) is the smallest image detail that the system can form. When the source f(x,y) is an ideal point-source distribution, i.e. a two-dimensional Dirac delta function, the impulse response corresponds to the PSF of the system:

g(x, y) = h(x, y) ≡ PSF(x, y) .    (1.9)

The images start to overlap when the distance between two objects is close to the spatial resolution limit, reducing the contrast.
Equation 1.8 describes the image formation in the spatial domain. Applying the Fourier transform F and using the convolution theorem, the convolution in the spatial domain becomes a multiplication in the frequency domain:

F[g(x, y)] = F[f(x, y) ⊗ h(x, y)] ,    (1.10)
G(ξ, η) = F(ξ, η) · H(ξ, η) .    (1.11)

F(ξ, η), G(ξ, η) and H(ξ, η) are the Fourier transforms of f(x,y), g(x,y) and h(x,y), respectively. H(ξ, η) is the optical transfer function (OTF), a complex function whose modulus is the modulation transfer function, MTF = |H(ξ, η)|, and whose phase is the phase transfer function, PTF = θ(ξ, η):

OTF ≡ H(ξ, η) = |H(ξ, η)| e^{-jθ(ξ,η)} .    (1.12)

Mathematically, the MTF corresponds to the modulus of the Fourier transform of the PSF, which is obtained by applying the Fourier transform to equation 1.9:

MTF(ξ, η) = |F[PSF(x, y)]| .    (1.13)

Since the width of a function is inversely proportional to the width of its Fourier transform, a sharper PSF results in a broader MTF, which means that a larger range of spatial frequencies can be imaged with high contrast. The ideal detector MTF is a flat curve equal to 1 at every spatial frequency. The second way to evaluate the MTF is hence from the Fourier transform of the PSF. The PSF can be measured by acquiring the image of a point object, i.e. an object with dimensions much smaller than the resolution of the system.
Alternatively, the MTF can be calculated from the Line Spread Function (LSF), obtained from a line object which has one dimension much smaller than the PSF of the system while the other dimension is much bigger. The line source is defined as a delta function along x and a constant along y:

f(x, y) = δ(x) C(y) .    (1.14)

Following equations 1.8 and 1.9, the image g(x,y), i.e. the LSF, is a two-dimensional convolution with the PSF:

g(x, y) ≡ LSF(x) = [δ(x) C(y)] ⊗ PSF(x, y) = ∫ PSF(x, y') dy' .    (1.15)

The LSF hence depends only on the variable x. From its one-dimensional Fourier transform we obtain the MTF:

MTF(ξ, 0) = |F[LSF(x)]| .    (1.16)

Compared to the PSF, the limitation of the LSF is that it provides information about the spatial resolution along only one direction, perpendicular to the length of the line object. If the spatial resolution does not vary with the direction, the PSF is equal to the LSF; otherwise the LSF has to be measured along multiple directions.
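As an illustration of equation 1.16, the sketch below computes the MTF of a synthetic Gaussian LSF by a discrete Fourier transform and compares it with the known analytic result. The sampling step and the LSF width are arbitrary assumptions, not values used in this work.

import numpy as np

pixel = 0.1                                   # sampling step in micrometers (assumed)
sigma = 0.5                                   # LSF standard deviation in micrometers (assumed)
x = (np.arange(2048) - 1024) * pixel
lsf = np.exp(-0.5 * (x / sigma) ** 2)
lsf /= lsf.sum()                              # normalize so that MTF(0) = 1

mtf = np.abs(np.fft.rfft(np.fft.ifftshift(lsf)))   # Eq. 1.16
freq = np.fft.rfftfreq(x.size, d=pixel)            # spatial frequency in cycles/um

# Analytic MTF of a Gaussian LSF: exp(-2 pi^2 sigma^2 nu^2)
nu = 0.5  # 0.5 cycles/um corresponds to 500 lp/mm
print("MTF at 500 lp/mm:", np.interp(nu, freq, mtf),
      "expected:", np.exp(-2 * np.pi ** 2 * sigma ** 2 * nu ** 2))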
Finally, a third way to evaluate the MTF is the slanted edge method [START_REF] Yue | Modulation transfer function evaluation of linear solid-state x-raysensitive detectors using edge techniques[END_REF][START_REF] Samei | A method for measuring the presampled MTF of digital radiographic systems using an edge test device[END_REF]. The Edge Spread Function (ESF) is acquired as the image of a knife-edge object. The ESF is described by a step function s along x and a constant along y:

f(x, y) = s(x) C(y) .    (1.17)

Mathematically, we can obtain the LSF as

LSF(x) = \frac{d}{dx} ESF(x) ,    (1.18)

and from it calculate the MTF. For more mathematical details, see [START_REF] Glenn D Boreman | Modulation transfer function in optical and electro-optical systems[END_REF].
The slanted edge method is widely used to characterize the response of imaging systems to hard X-rays, since the fabrication of high-frequency gratings with sufficient absorption is not trivial.
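A simplified numerical version of the edge method of equations 1.17-1.18 (without the sub-pixel projection and binning of the full slanted-edge procedure) might look as follows; the edge profile is synthetic and the blur width is an assumption, not a measured quantity.

import numpy as np
from scipy.special import erf

pixel = 0.1                                   # um, assumed sampling step
sigma = 0.5                                   # um, assumed detector blur
x = (np.arange(1024) - 512) * pixel
# Synthetic ESF: an ideal step blurred by a Gaussian PSF (an error function).
esf = 0.5 * (1 + erf(x / (np.sqrt(2) * sigma)))

lsf = np.gradient(esf, pixel)                 # Eq. 1.18: LSF = d(ESF)/dx
lsf /= lsf.sum() * pixel                      # normalize the LSF area to 1

mtf = np.abs(np.fft.rfft(np.fft.ifftshift(lsf))) * pixel   # Eq. 1.16, MTF(0) = 1
freq = np.fft.rfftfreq(x.size, d=pixel)
print("MTF at 500 lp/mm (0.5 cycles/um):", np.interp(0.5, freq, mtf))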
Frame rate
The detector's frame rate is the frequency at which consecutive images can be taken. It is defined as the inverse of the time needed to acquire an image and read out the data, and is expressed in frames per second (fps) or hertz. Considering today's commercial products, CMOS sensors can work at higher frame rates than HPADs for the same number of pixels and dynamic range. For example, the PCO.dimax CMOS camera can record up to 7039 frames per second (1 megapixel, 12-bit dynamic range), while the EIGER HPAD, under the same conditions, is limited to a few hundred Hz. To exploit such a fast frame rate, a fast scintillator has to be selected. Because of their short decay time, Ce-doped scintillators are normally preferred over Eu- or Tb-doped ones when fast imaging is required. Additionally, the integration time needed to acquire an image with sufficient signal has to be taken into account when evaluating the frame rate. The integration time can be reduced by using a thicker scintillator, but this comes at the cost of a reduced spatial resolution.
1.4 Scintillators for X-ray area detectors
The scintillation process in wide band gap materials can be divided into three steps: conversion, transport and luminescence (figure 1.5).
Firstly, in the conversion step, the X-ray photon interacts with the crystal lattice and transfers energy via the photoelectric effect and inelastic Compton scattering. This energy transfer creates a hot primary electron and a deep hole, which are subsequently multiplied through a cascade of ionization processes (electron-electron inelastic scattering and Auger emission) that continues until their energy is too low to create further excitations. When the energy is below the forbidden gap E_g, electrons and holes interact with phonons; this stage is called thermalization. The overall process leads to low-energy electrons and holes located at the bottom of the conduction band and at the top of the valence band. Secondly, the thermalized electrons and holes are transferred to the luminescence centers. During this transport, electrons and holes migrate through the material and, due to the presence of defects, they may recombine through non-radiative processes or be trapped and detrapped, leading to delayed luminescence (afterglow). Finally, the emission center is excited by the capture of a hole and an electron and ideally returns to the ground state through a radiative process (luminescence). Alternatively, the emission center can return to the ground state through non-radiative processes.
1.4.2 Performance of the scintillators for X-ray area detectors
Some important parameters often considered in the characterization of scintillators for area detectors are:
- the X-ray absorption efficiency,
- the light yield (LY),
- the timing performance, defined by the decay time and the afterglow,
- the emission wavelength, which has to match the camera's quantum efficiency,
- the linearity of the response with the X-ray energy and flux,
- the optical quality,
- the homogeneity of the response,
- the stability of the response,
- the properties of the substrate.
Figure 1.6 shows that the performance of the detector is affected by several scintillator properties.
The overall efficiency of the detector and its DQE depend on the absorption efficiency, the light yield, and the matching between the scintillator emission spectrum and the camera's quantum efficiency, as seen in equation 1.5.
The spatial resolution obtained using high-resolution detectors is ultimately limited by light diffraction through the detector's optics and, therefore, depends on the emission wavelength of the scintillator. Additionally, the spatial resolution can be further degraded by the spread of the energy deposited in the scintillator and by the diffusion of light. As a consequence, the optical quality of the scintillator and its stopping power also play a role in the spatial resolution. The type of scintillator (powder, single crystal, micro-structured) as well as the material therefore have a significant effect on the spatial resolution.
The speed of the detector is limited by the speed of the conversion process in the scintillator, i.e. the decay time. In addition, the speed of the detector is affected by the afterglow of the scintillator, since a new image can only be taken when the afterglow of the previous image has dropped below the noise level.
The afterglow as well as the optical quality also limit the dynamic range, reducing the number of exploitable signal levels of the camera sensor (see section 1.3.2).
Recently, the scintillator's linearity and stability have become a concern in X-ray imaging due to the demand for quantitative measurements. A linear dependence of the detector's response on the X-ray photon flux and energy, as well as its stability after long exposures, are therefore required. These performances depend in the first place on the linearity and stability of the scintillator. Variations of the light yield with the X-ray energy (non-proportionality) or during the exposure (radiation damage and memory effect) are therefore becoming an important subject of research in the scintillator field [START_REF] Dorenbos | Non-proportionality in the scintillation response and the energy resolution obtainable with scintillation crystals[END_REF][START_REF] Moretti | Radioluminescence sensitization in scintillators and phosphors: trap engineering and modeling[END_REF].
As a last example, in the case of thin film scintillators on a substrate, the substrate can influence the performance of the scintillator. Firstly, the optical and crystalline quality of the substrate affects the quality of the film and, therefore, its imaging and scintillating properties. Secondly, any optical absorption in the substrate of the photons emitted by the film reduces the scintillator's efficiency. Thirdly, light emission from the substrate reduces the image quality that can be obtained. Lastly, X-ray fluorescence in the substrate degrades the spatial resolution. This final aspect is introduced in chapters 2 and 3.
Scintillators: materials and forms
The investigation of materials able to enhance the efficiency of X-ray detectors based on photographic films started immediately after the discovery of X-rays in 1895. The first optimized converter screens were made of CdWO_4 powder phosphors, which were already available at the beginning of the 20th century. In the seventies, more efficient oxysulfide materials were discovered [START_REF] Ka Wickersheim | Rare earth oxysulde x-ray phosphors[END_REF]. In particular, Tb-doped Gd_2O_2S, known as GOS, Gadox or P43, stood out for its high stopping power and light yield [START_REF] Zych | Spectroscopic properties of Lu2O3/Eu3+ nanocrystalline powders and sintered ceramics[END_REF][START_REF] Dujardin | Synthesis and scintillation properties of some dense X-ray phosphors[END_REF]. Today, different scintillator forms (single crystals, transparent ceramics and structured scintillators) and different materials which outperform Gadox powder phosphors in many respects have been developed. However, Gadox screens are still widely used in medical and security applications, mainly because they can be produced as large-area sheets at relatively low cost. Since powder screens are made of a grained phosphor mixed with a binding agent, the emitted light spreads in every direction due to scattering at the grain surfaces (see figure 1.7(a)). If the screen thickness increases, the spatial resolution decreases, since the light is scattered by more grains before exiting the screen. In fact, the spatial resolution is approximately equal to the thickness of the scintillator (31). Single crystal scintillators have higher densities and, therefore, higher absorption efficiencies than powder phosphors, for which the filling factor is approximately 50 %. Additionally, since the light is not scattered inside the scintillator, a better spatial resolution and contrast can be obtained than with a powder phosphor of the same film thickness (figure 1.7). A comparison between an image obtained with a powder phosphor and one obtained with a single crystal film is shown in figure 1.8.
A disadvantage of single crystal scintillators is total internal reflection, which lowers the fraction of light able to exit at the surface and thus reduces the light that can be collected. The light collection can be enhanced by surface treatments which increase the roughness, but as a consequence they also degrade the resolution. The first materials to be developed as single crystals were NaI:Tl and CsI:Tl in the late nineteen-forties [START_REF] Van Sciver | Scintillations in Thallium-Activated Ca I 2 and CsI[END_REF], after which many other materials followed. For example, to improve the absorption efficiency, research focused on materials with high density and high effective Z-number. This resulted in, amongst others, Ce-doped LSO (Lu_2SiO_5) [START_REF] Lu | Neodymium doped yttrium aluminum garnet (Y 3 Al 5 O 12) nanocrystalline ceramicsa new generation of solid state laser and optical materials[END_REF][START_REF] Lempicki | A new lutetia-based ceramic scintillator for X-ray imaging[END_REF]. A disadvantage of ceramic scintillators is the degradation of the spatial resolution due to scattering at the grain boundaries, as is also the case for powder phosphors.
Grain boundaries may also contain excessive amounts of defects leading to traps and thus afterglow.
The so-called structured scintillators are made of pillars (figure 1.7(c)) that act as light guides. CsI:Tl and CsI:Na, for example, can be prepared with this structure. In medical applications, structured scintillators are coupled directly to the photodiode; many pillars are coupled to the same pixel and the ultimate spatial resolution limit is the photodiode pixel size. For high-resolution detectors, even if the camera's pixel size is reduced well below the light diffraction limit using microscope optics, the diameter of the pillar remains the detector's ultimate spatial resolution limit. Nevertheless, because of the optical waveguide properties, the resolution remains constant for increasing film thicknesses, while the absorption efficiency is higher. Additionally, light collection from structured scintillators is more efficient than from single crystals, because less light undergoes total internal reflection at the exit surface.
Today the minimum diameter of the pillars is a few micrometers, and hence they are not suitable for sub-micrometer spatial resolution imaging. Films made of sub-micrometer diameter Lu_2O_3 pillars are currently under development, but they are not yet sufficiently homogeneous [START_REF] Marton | Ecient high-resolution hard x-ray imaging with transparent Lu2O3: Eu scintillator thin lms[END_REF]. In figure 1.9, a comparison between an image obtained using a sub-micron structured Lu_2O_3:Eu film from RMD and one obtained with a GGG single crystal film is reported; the insets show the flat-field images. In the case of the micro-structured scintillator, some inhomogeneities result in bright spots, which saturate the sensor and thus reduce the dynamic range of the detector. In addition, even if the exposure time is chosen so that these bright spots are not saturated, they are not completely eliminated by a flat-field correction.
A summary of the resolution limits for X-ray imaging using different kinds of screens is reported in figure 1.10. Today, single crystals are still the only viable solution for sub-micrometer spatial resolution X-ray area detectors. Their thickness must match the depth of field of the microscope optics, otherwise the resolution is degraded.
Figure 1.10: Scintillator requirements for high-resolution detectors [START_REF] Graafsma | Detectors for synchrotron tomography[END_REF].
The use of structured scintillators is currently limited to applications that do not require resolutions below several micrometers. The main advantage of structured scintillators is the light-guide effect, which allows the use of a thicker screen without significantly reducing the spatial resolution. For this reason structured scintillators are good candidates for hard X-ray imaging.
Compared to single crystals, the use of powder phosphors and transparent ceramics reduces the obtainable resolution due to the light scattering on the grain boundaries.
Sub-micrometer resolution cannot be obtained with these technologies. The main advantages of powder phosphors are the low cost and the possibility to fabricate materials that cannot be grown as single crystals. Currently, however, transparent ceramics are sometimes superseding powder phosphors because of their higher absorption efficiency. Alternatively, thin single crystal scintillators can be produced by thinning a bulk crystal using the mechanical-chemical polishing method [START_REF] Nikl | Lu3Al5O12-based materials for high 2D-resolution scintillation detectors[END_REF]. The use of a bulk crystal presents some advantages: no contamination from the melt enters the film and polishing does not require a specific substrate. Some drawbacks are, however, present. Firstly, the minimum thickness that can be obtained is limited. Free-standing crystals can be thinned down to approximately 20-25 μm, while crystals glued on a substrate are limited to a thickness between 5 and 10 μm. The depth of field of a microscope objective with a numerical aperture higher than 0.6 is less than 1 μm. Consequently, combining a 10 μm thick SCF with high numerical aperture optics will degrade the spatial resolution because of the defocused image. Additionally, not every material can be polished down to 10-20 μm; today, the polishing process is well optimized only for YAG and LuAG crystals. Secondly, due to the high temperature, an oxygen-free atmosphere is required for bulk growth, and scintillators polished from bulk crystals often present anti-site defects and oxygen vacancies, which lead to the presence of a slow component in the luminescence (afterglow) [START_REF] Nikl | Shallow traps and radiative recombination processes in Lu 3 Al 5 O 12: Ce single crystal scintillator[END_REF].
Scintillating screens down to hundreds of nanometers thick can be produced using the LPE technique, which in addition is not limited to small sample areas. Moreover, for some materials, LPE films show fewer structural defects than bulk crystals. This is caused by the lower growth temperature and leads to a reduction of the afterglow. This effect has been reported, for example, for Lu_3Al_5O_12 (LuAG) and for some aluminum perovskites [START_REF] Yu | Growth and luminescence properties of single-crystalline lms of RAlO3 (R= Lu, Lu-Y, Y, Tb) perovskite[END_REF][START_REF] Ku£era | Growth and characterization of YAG and LuAG epitaxial lms for scintillation applications[END_REF]. Furthermore, with the LPE technique the dopant concentration can be precisely tuned to maximize the conversion efficiency, and the dopant concentration in the film is very homogeneous.
LPE also presents some drawbacks. Firstly, some unwanted impurities from the flux used for the LPE growth can enter the film. Depending on the nature of these impurities, the quality and the scintillation properties of the film can be degraded.
The detectors used for X-ray micro-imaging at synchrotrons are based on indirect detection and can schematically be divided into three parts:
A scintillator, which absorbs the X-rays and converts the energy into a visible image;
Microscope optics, possibly combined with an eyepiece, which magnify the visible image and project it onto the imaging camera;
A 2D imaging camera (i.e. a CCD or a CMOS) that converts the visible image into an electronic digital signal.
Depending on the configuration of the detector and on the conditions of the experiment, a combination of different phenomena can limit the spatial resolution and the contrast of the image. These phenomena, illustrated numerically in the sketch following this list, are:
Scintillator response. When an X-ray photon interacts with a material, it can be deflected (elastic or inelastic scattering) and generate secondary X-rays or electrons through atomic ionization. These electrons can relax through X-ray fluorescence and Auger emission. Consequently, a fraction of the incoming energy spreads away from the initial interaction position. In applications which demand micrometer and sub-micrometer spatial resolution, this energy spread is non-negligible.
Light diffraction. When a wave (i.e. the visible light emitted by the scintillator) goes through an aperture (i.e. the microscope optics), diffraction occurs. The best focal spot that can be obtained, and consequently the highest spatial resolution that can be achieved, depends on the size of the diffraction pattern after the aperture. The spatial resolution of a diffraction-limited system depends on the numerical aperture and on the wavelength of the light.
Out-of-focus light. If the thickness of the image source along the optical axis (here corresponding to the thickness of the scintillator) is larger than the depth of field (DoF) of the microscope optics, part of the light is projected as a defocused image on the camera and degrades the quality of the recorded image. Using a scintillator which is thicker than the DoF therefore results in a system that is not diffraction limited.
Camera resolution. According to the Nyquist-Shannon sampling theorem [START_REF] Elwood | Communication in the presence of noise[END_REF][START_REF] Nyquist | Certain topics in telegraph transmission theory[END_REF], the highest spatial resolution achievable with a 2D camera is approximately twice the pixel size. Since the visible image is magnified (or demagnified) in an indirect detector, an estimate of the spatial resolution limit due to the camera is obtained by dividing the camera's pixel size by the optical magnification or demagnification.
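A rough, hedged comparison of these contributions for one detector configuration can be sketched as follows. The formulas are textbook approximations (Rayleigh criterion, small-NA depth of field, Nyquist sampling), not the model developed in chapters 2 and 3, and every numerical value is an assumption chosen only for illustration.

wavelength = 0.55e-6      # m, green scintillator emission (assumed)
na = 0.75                 # objective numerical aperture (assumed)
pixel = 6.5e-6            # m, camera pixel size (assumed)
magnification = 20.0      # optical magnification (assumed)
film_thickness = 5e-6     # m, scintillator thickness (assumed)

r_diff = 0.61 * wavelength / na          # diffraction (Rayleigh) limit
dof = wavelength / na ** 2               # approximate depth of field in air
r_cam = 2 * pixel / magnification        # Nyquist-limited camera resolution

print(f"diffraction limit : {r_diff * 1e6:.2f} um")
print(f"depth of field    : {dof * 1e6:.2f} um "
      f"(film is {film_thickness * 1e6:.0f} um, so out-of-focus light matters)")
print(f"camera limit      : {r_cam * 1e6:.2f} um")

With these assumed numbers the diffraction limit is below half a micrometer, while the scintillator is several times thicker than the depth of field, which is exactly the regime the model of chapters 2 and 3 is meant to quantify.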
The main goal of the calculations presented in chapters 2 and 3 is to estimate the MTF of the detector as a function of the combination of the scintillator (composition and thickness) with the microscope optics (numerical aperture), in the case of optical magnification, i.e. when the camera pixel size is reduced below the light diffraction limit.
The model we developed combines Monte Carlo and analytical calculations. The former determines the scintillator response and the latter estimates the effects of diffraction and of out-of-focus light. We assume a configuration in which the camera does not influence the spatial resolution, which is the case when the pixel size is approximately half the diffraction limit or smaller.
Nevertheless, the bottleneck of an experiment is not always the spatial resolution. It could be, for example, the speed (e.g. time-resolved experiments) or the maximum allowed dose on the sample (e.g. biological samples). Sometimes it is more convenient to magnify the X-ray image and reduce the detector's spatial resolution, choosing a configuration which optimizes other properties. In that case the assumption regarding the pixel size of the camera may not be valid and the choice of the scintillator is based on different criteria (e.g. the best DQE, the shortest decay time, the lowest afterglow, the highest light yield).
Hence, we focus here on the configurations demanding micrometer to sub-micrometer resolution, and we neglect the imaging camera by assuming that its spatial resolution is always well below the diffraction limit and thus not a limiting factor.
An overall scheme of the model is reported in figure 2.1. The Monte Carlo code, based on the Geant4 (G4) Monte Carlo toolkit (56), simulates an X-ray pencil beam impinging on the scintillator. The scintillator is made of a thin scintillating film deposited on a non-scintillating substrate, and the energy deposited by the photon interactions in the film and in the substrate is scored by the code. Geant4 is now widely used for many different applications: not only nuclear physics but also astrophysics, medical physics, radio-protection, etc. The first Geant4 version was released in 1998; since then, many updates have been released and a team of around one hundred scientists from all over the world works on its development and maintenance.
Every user can freely access the whole code, but modifications of the core part of the software are not recommended.
Since Geant4 is written in the C++ programming language, its object-oriented nature allows users to customize and extend the tool by building their own application upon the existing framework. Additionally, the structure is modular and allows the user to load only the components needed for the application.
The application that is presented here has been developed using the version Geant4.9.6.
Our Geant4 application
Different classes have been implemented to develop the application: three mandatory classes describing the geometry and materials (G4VUserDetectorConstruction), the physical model (G4VPhysicsList) and the primary particle generator (G4VPrimaryGenerator). Note that several other classes were used to define the scorers needed to extract the energy distribution and the other quantities of interest.
The geometry of the simulation, as well as the axis convention that will be used in the rest of the discussion, is shown in figure 2.2. The scintillator is defined as a rectangular box of thickness t_S and a lateral size of 1.4 cm, either free standing or lying on a second, 150 μm thick box representing the substrate. The scintillator has its surface normal along the z-axis. A one-dimensional X-ray pencil beam distributed along the y direction hits the scintillator orthogonally to its surface. Every primary X-ray and the secondary particles generated in the cascade are tracked individually down to zero energy. Due to the broad range of applications covered by Geant4, different physical models have been developed and validated for different conditions. For the application described here, the low-energy Livermore model has been selected, which has been validated for electrons and X-ray or gamma photons in the energy range from 250 eV to 1 GeV (59, 60). The production threshold for the secondary particles was set to 250 eV. Note that this 250 eV limit is not critical for our model, since we are studying a diffraction-limited resolution, which is larger than the attenuation length of 250 eV electrons. The materials used for the scintillator and the substrate are defined by their density and elemental stoichiometry. Depending on these two parameters, the software assigns to a particle traveling in the material a certain probability to interact with a specific kind of atom, while the concepts of crystal and electronic band structure, as well as phonons, are not included. All the materials used for the calculations are summarized in table 2.1.
Once the geometry, the physical model and the primary particle generator have been defined, the Geant4 application is ready to run. However, the simulation runs silently, meaning that the software does not keep track of every single step. Integrated quantities need to be calculated while the simulation runs in order to obtain useful output. Hence a sensitive detector (i.e. a scorer implemented by inheriting from the class G4VSensitiveDetector) has been coupled to the scintillator, meaning that every time a particle makes a step in the scintillator this scorer is called by the software. The sensitive detector defines a three-dimensional matrix in the scintillator, calculates the bin associated with the position of the step and increments a counter associated with that bin. Different counters can be incremented simultaneously. A first one accumulates the energy deposited in every bin to obtain the energy distribution in the scintillator, which corresponds, considering the whole detector system, to the light source distribution projected through the optics.
At the same time, other counters can be coupled to the scintillator (or the substrate) to calculate, for example, the energy deposited by a single kind of particle (e.g. only electrons) or by a specific phenomenon (e.g. only Compton scattering), or to count the number of interactions, the number of secondary particles, etc. Due to the symmetry of the geometry, a single bin has been used in the y-direction. We can therefore describe the output of the simulation as a two-dimensional matrix M_G4 where every line is an LSF curve calculated at a different depth z_j in the scintillator (figure 2.3(a)). From M_G4 different results can be extracted: the matrix of MTF curves as a function of the z coordinate (figure 2.3(b)), the total LSF and the total MTF of the scintillator (without any consideration of the optical effects), the energy deposited in the scintillator as a function of z, etc. The size of the bins is 0.1 μm in the x direction and 0.2 μm in the z direction. The bin sizes have been selected as a compromise between resolution and noise: increasing the bin size degrades the MTF, while decreasing it requires more statistics (i.e. longer computational time).
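The post-processing of such a matrix can be sketched as follows. The matrix used here is a synthetic stand-in, not actual simulation output; the name M_G4 and the bin sizes mirror the description above, but everything else (the Gaussian shape, the depth dependence) is an assumption made only to have runnable code.

import numpy as np

def process_mg4(m_g4, dx=0.1):
    """Turn a depth-resolved energy-deposition matrix M_G4[z_bin, x_bin] into
    per-depth LSFs, the depth-integrated LSF, and the corresponding MTFs (Eq. 1.16).
    dx is the x bin size in micrometers; the LSFs are assumed centred in x."""
    lsf_z = m_g4 / m_g4.sum(axis=1, keepdims=True)   # one normalized LSF per depth
    lsf_tot = m_g4.sum(axis=0)
    lsf_tot = lsf_tot / lsf_tot.sum()
    freq = np.fft.rfftfreq(m_g4.shape[1], d=dx)      # cycles/um
    mtf_z = np.abs(np.fft.rfft(np.fft.ifftshift(lsf_z, axes=1), axis=1))
    mtf_tot = np.abs(np.fft.rfft(np.fft.ifftshift(lsf_tot)))
    e_dep_z = m_g4.sum(axis=1)                       # energy deposited vs depth
    return freq, mtf_z, mtf_tot, e_dep_z

# Toy stand-in: a Gaussian LSF that broadens with depth (purely illustrative).
x = (np.arange(512) - 256) * 0.1
z = np.arange(25) * 0.2
m_g4 = np.array([np.exp(-0.5 * (x / (0.3 + 0.02 * zi)) ** 2) for zi in z])
freq, mtf_z, mtf_tot, e_dep_z = process_mg4(m_g4)
print("total MTF at 500 lp/mm:", np.interp(0.5, freq, mtf_tot))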
Results
Material and X-ray energy dependence
The Modulation Transfer Function (MTF)
The effect of the X-ray energy (between 5 and 80 keV) on the energy distribution has been studied for various scintillators with a thickness of 5 μm. In figure 2.4 the PSF and MTF curves for different X-ray energies and various scintillators are reported. The results are shown for the state-of-the-art thin film scintillators (an LSO film on an YbSO substrate and a GGG film on a GGG substrate) and for two candidate materials to be developed (a Lu_2O_3 film on a Lu_2O_3 substrate and a GdAP film on a YAP substrate).
The simulated substrates correspond to the ones that are actually used for the SCFs. Undoped GGG is relatively easy to produce as a bulk single crystal (SC) and is commercially available as a substrate. It is therefore ideal for Eu-doped GGG film growth, since the film-substrate lattice mismatch is close to zero. A drawback is the luminescence of the substrate due to elemental contaminations, which can vary depending on the lot and on the supplier. LSO:Tb films are grown on YbSO or LYSO:Ce substrates. YbSO was developed specifically for LSO film growth and has no emission in the visible range. The drawback of YbSO is that it is produced only in small quantities and that it is expensive. Alternatively, LSO:Tb films are grown on LYSO:Ce bulk SCs, which are widely available since they are used themselves as scintillators. In this case, the cerium visible emission has to be suppressed using an optical filter. Undoped YAP single crystals are available and relatively cheap. The crystal structure is the same as that of GdAP, LuAP and GdLuAP, and the lattice mismatch can be reduced by optimizing the Gd/Lu ratio, as presented in chapter 4. Also in this case, an emission which varies with the supplier and with the lot is observed and has to be suppressed using optical filters. Lu_2O_3 bulk SCs are difficult to grow, but much progress has been made recently.
Substrates are starting to become available, although their crystalline and optical quality are not yet optimal.
No significant difference is observed in the width of the central peak of the PSF, but significantly higher tails appear in the PSF calculated for GdAP. These tails are not due to the scintillating film itself, but to the yttrium X-ray fluorescence produced in the substrate, which interacts with the scintillator and creates an offset in the PSF.
Similar tails are visible for a GGG film on a GGG substrate in figures 2.4(a) and 2.4(b), caused in this case by the gallium K-edge at 10.4 keV. To confirm that the reduction of the contrast is due to the substrate, the results for 5 μm free-standing GdAP and GGG are also plotted in figure 2.4(b), as blue and red dashed lines respectively. Compared with GdAP on YAP and GGG on GGG (blue and red continuous lines respectively), no offsets in the PSFs and no low-frequency drops of the MTFs are observed. The contrast degradation due to the substrate is smaller for a GGG substrate than for a YAP one, due to the lower fluorescence yield of gallium compared to yttrium. Although the atomic density of gallium in GGG is higher than that of Y in YAP (2.11·10^22 Ga atoms/cm^3 vs 1.97·10^22 Y atoms/cm^3), the fluorescence yield for the K-shell (ω_K) increases sharply with the atomic number Z in the range Z = 20 to Z = 40 (Ga ω_K ≈ 0.45 vs. Y ω_K ≈ 0.7).
Since the absorption efficiency is approximately the same at 20 keV, the number of X-ray fluorescence photons produced in YAP can thus be roughly estimated to be 1.5 times the number produced in GGG. By increasing the X-ray energy the PSFs become broader and the MTFs are degraded.
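The factor of roughly 1.5 follows directly from the atomic densities and fluorescence yields quoted above; a one-line check:

# Ratio of K fluorescence photons per unit volume in YAP vs. GGG at 20 keV,
# assuming comparable K-shell photo-absorption (values taken from the text).
n_ga, n_y = 2.11e22, 1.97e22          # atoms / cm^3
omega_k_ga, omega_k_y = 0.45, 0.70    # K-shell fluorescence yields
ratio = (n_y * omega_k_y) / (n_ga * omega_k_ga)
print(f"YAP / GGG fluorescence ratio ~ {ratio:.2f}")   # ~1.45, i.e. roughly 1.5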
The calculated contrast at 500 lp/mm decreases from 85 % at 20 keV to 25 % at 45 keV for Lu_2O_3 and from 57 % to 17 % for GdAP (figure 2.4(c)). At the same time, a significant broadening of the PSFs is observed. However, once the energy is above the K-edge of the high-Z element contained in the scintillator, a higher contrast is obtained. For example at 55 keV, approximately 5 keV above the gadolinium K-edge, the contrast of GdAP at 500 lp/mm goes up to 50 % and the PSF broadening is less significant (figure 2.4(d)). Similarly, the contrast calculated for the Lu-based scintillators improves above the lutetium K-edge.
To summarize the results obtained as a function of the X-ray energy and the material, the value of the MTF at 500 lp/mm (1 μm resolution) is reported in figure 2.5 for the different scintillators. Lutetium and mixed gadolinium-lutetium aluminum perovskites, as well as lutetium and gadolinium aluminum garnets, have been added for comparison in addition to the materials already discussed above. GdAP on GdAP is shown to illustrate the effect of a different substrate compared to GdAP on YAP. We can divide the considered energy range into a few intervals, in which different phenomena play the crucial role and different scintillators show the best spatial response. At low X-ray energy, below the yttrium K-edge (5-17 keV), the considered materials, which all have a density above 7 g/cm^3, show high contrast (more than 80 % at 500 lp/mm). Between 10 and 17 keV the GGG SCFs are slightly less performant due to the gallium fluorescence.
In the energy range between the Y and Gd K-edge (17-50 keV), the contrast calculated for materials on Y-based substrates (i.e. YAP and YAG substrates) is 15-35 % lower compared to the values obtained for scintillators on yttrium-free substrates (i.e.
Lu_2O_3 and LSO SCFs). The effect is caused by the fluorescence of the substrate, which is partly reabsorbed in the SCF. As a reference, the results are compared for GdAP on both YAP and GdAP substrates (continuous and dashed blue lines respectively). The contrast at 20 keV is 56 % for GdAP on YAP and 75 % for GdAP on GdAP.
In the energy range 50-80 keV the major role is played by the Gd and Lu K-edges. Gd-based materials show higher contrast in the 52-65 keV range, Lu-based ones in the 65-80 keV range, while GdLuAP shows a flatter response as a function of the energy and could compete in terms of contrast with both the GGG and LSO state-of-the-art thin film scintillators. Once again the substrate plays an important role due to the fluorescence photons, which reduce the contrast at low frequencies. YAP and YAG substrates have a lower absorption at high energy and a lower fluorescence rate compared to the Lu_2O_3, GGG and YbSO ones. Consequently, the scintillators on Y-based substrates show better contrast than the other investigated materials because of the lower number of fluorescence photons produced in the substrate. In this case the contrast for GdAP SCFs is higher when a YAP substrate is selected over a GdAP one (54 % vs. 48 % at 62 keV).
Considering these results, it should be kept in mind that the optical quality of the film, and therefore the spatial resolution of the detector, is affected by the lattice mismatch between film and substrate. The considerations made in this chapter about the substrate influence on the MTF are valid only if the optical quality of the films grown on different substrates is comparable.
Absorption efficiency
Up to this point we have only focused on the energy spread in the materials and the consequent limitations on the spatial resolution. An additional limitation of thin film scintillators is their low absorption, especially at high energy, which affects the efficiency of the whole detector.
By considering the different atomic cross sections, for example using the NIST database, the percentage of incoming photons that interact with a scintillator of a given thickness can be calculated. The result of this calculation is shown in figure 2.6(a) for different materials of 5 μm thickness. However, the interaction of an X-ray photon with the scintillator does not always lead to the deposition of energy and light emission. The photon can be deflected by Compton and Rayleigh scattering or generate secondary particles, which may be able to escape from the scintillator. Since the scintillator is only a few micrometers thick, the probability that the energy of the primary photon is not completely deposited is non-negligible. Therefore, the attenuation in the material may not be a good approximation for the absorption efficiency, especially at high X-ray energy.
We therefore used Monte Carlo calculations to evaluate the actual amount of energy deposited in the scintillator; the results are reported in figure 2.6(b). The attenuation is a good approximation of the absorption in the material at low energy, but it becomes less precise as the energy increases. In particular, above the K-edge, the gain in absorption can be significantly lower than expected from the attenuation, due to the creation of a large number of high-energy fluorescence photons which can easily escape the scintillator. The ratio E_dep/attenuation is always lower than 1 and decreases slowly with increasing energy. In fact, as the energy of the X-ray photons increases, secondary particles with higher energy, and therefore a higher probability to escape the film, are produced. When the energy rises above the K-edge of yttrium, the ratio sharply increases because part of the fluorescence photons produced in the substrate are re-absorbed in the film, which, as discussed above, degrades the resolution. When the energy is high enough to excite fluorescence in the scintillator itself, the ratio drops because of the fluorescence photons that can easily escape the thin film scintillator. The lowest ratio is obtained just above the scintillator K-edge, after which it starts to increase slowly again.
Figure of merit
To achieve the sub-micrometer spatial resolution demanded in some X-ray imaging experiments, thin film scintillators are selected. However, the low absorption of the selected thin films reduces the detector's efficiency. For example, 50 μm of LSO attenuates at least 8 % of the incoming radiation up to 80 keV, while for 5 μm of LSO the attenuation already reduces to 8 % at 25 keV. To evaluate the best compromise between a sharp image and an efficient detector, we defined a figure of merit (FoM):

\mathrm{FoM}(E) = \mathrm{MTF}^{\mathrm{G4}}_{500\,\mathrm{lp/mm}}(E)\; E_{\mathrm{dep}}(E) \quad (2.1)

It is important to keep in mind that here only the absorption in the matrix of the scintillator is considered. To get more precise results, other parameters should also be included.
Firstly, the light yield (LY), which may change the overall scintillator efficiency. However, this parameter is difficult to evaluate for the materials that have not been developed and carefully optimized yet. Therefore, the different scintillators are assumed to have the same LY. Additionally, the LY may depend on the X-ray energy, due to the non-proportionality phenomenon [START_REF] Dorenbos | Non-proportionality in the scintillation response and the energy resolution obtainable with scintillation crystals[END_REF], presented in section 3.3.2. Secondly, the effect of the microscope optics and of the emission wavelength of the scintillator, which depends on the choice of the dopant, should be included. The resolution is ultimately limited by the light diffraction: the smallest spot that the optics can focus depends both on the wavelength λ and on the numerical aperture NA. Moreover, both the wavelength λ and NA define the depth of field of the optics, meaning the maximum thickness of the scintillator which can be projected as a focused image. This part of the calculation will be described in detail in chapter 3.
A more precise evaluation of the figure of merit as a function of the energy E should therefore be:
\mathrm{FoM}(E) = \mathrm{MTF}^{\mathrm{G4+Optics}}_{500\,\mathrm{lp/mm}}(E)\; E_{\mathrm{dep}}(E)\; \mathrm{LY}(E) \quad (2.2)
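A minimal sketch of how the figure of merit of equations 2.1-2.2 could be evaluated over an energy grid is given below; the MTF, deposited-energy and light-yield arrays are dummy placeholders standing in for the simulation output, not values from this work.

# Sketch of the figure of merit of eqs. 2.1-2.2 over a small energy grid.
import numpy as np

energies_keV = np.array([20.0, 40.0, 60.0, 80.0])     # example grid
mtf_500 = np.array([0.80, 0.55, 0.40, 0.30])          # MTF at 500 lp/mm (dummy)
e_dep = np.array([0.30, 0.10, 0.06, 0.04])            # deposited fraction (dummy)
light_yield = np.ones_like(energies_keV)              # assumed constant LY

fom = mtf_500 * e_dep * light_yield                   # eq. 2.2; reduces to 2.1 for LY = 1
best_energy = energies_keV[np.argmax(fom)]            # energy with the best compromise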
What happens at the K-edge?
In this section the improvement of the MTF curve above the scintillator K-edge energy is investigated. To avoid confusion with effects that may come from the substrate, the results are reported for free-standing scintillators.
Energy distribution along the film thickness
The energy distribution and the MTF were studied as a function of the coordinate z along the thickness of the scintillator. Figure 2.9(a)-top shows the percentage of the incoming energy deposited in the scintillator (E dep ) as a function of z, for 5 and 50 μm thick GdAP free-standing films, below and above the Gd K-edge. In the case of 5 μm thick GdAP, the curve increases going from the surfaces (z = 0 and z = 5 μm) to a maximum located approximately half way into the scintillator. The shape is similar at 49 keV and 55 keV, although the values are higher above the K-edge. This trend is far from what we would expect from the Beer-Lambert law, which describes the energy attenuation along the thickness t of a material as an exponential decay depending on an attenuation coefficient μ, which depends on the energy E:
I = I_0\, e^{-\mu(E)\, t} \quad (2.3)
For a 50 μm thick scintillator, once again E dep (z) decreases close to the two surfaces, but in the central part it can be described as an exponential decay. (Figure 2.9(b): number of interactions for secondary electrons (electrons 2 ) and for primary and secondary X-ray photons (X-rays 1 , X-rays 2 ) as a function of the depth z in the scintillator, at 49 and 55 keV, for 5 μm and 50 μm free-standing GdAP scintillators.)
To understand the curve E dep (z) obtained from the Monte Carlo calculation, one has to remember that the Lambert-Beer law only takes the cross section for primary interactions of the incident X-ray photons into account. In the case of incident X-rays or gamma rays, the attenuation cannot always be considered a good approximation of the dose deposited in the material, as we discussed above, due to the secondary particle cascade.
If we imagine dividing the scintillator along z into slices of thickness dz, then to calculate the energy deposited in the j-th slice (E^j_dep) we have to sum the energy deposited by the primary X-ray photon interactions in the j-th slice (E^j_dep,1), the energy deposited by the secondary particles produced in the j-th slice (E^j_dep,2) and the energy deposited by the secondary particles produced in the other slices that can reach the j-th slice (E^{≠j}_dep,2):
E^{j}_{\mathrm{dep}} = E^{j}_{\mathrm{dep},1} + E^{j}_{\mathrm{dep},2} + E^{\neq j}_{\mathrm{dep},2} \quad (2.4)
In reality, the X-rays do not deposit the energy directly, but generate secondary electrons, which eventually deposit the energy. In the Monte Carlo model, an energy threshold is defined for the production of the secondary particles, meaning that secondary particles with an energy lower than the threshold are not generated. The remaining energy is hence counted as deposited by the X-ray at the position of the interaction. The energy threshold is user defined and was set at 250 eV. This approximation is sufficiently accurate since the attenuation length for electrons at 250 eV in GdAP is below the spatial resolution range which is studied. It is important to keep in mind that the energy deposited by X-rays depends on the production threshold. In our case, it corresponds to the amount of energy deposited by electrons with a diffusion length shorter than the size of the voxel defined in the simulation.
For a certain value of the production threshold, an incident X-ray photon interacting with GdAP deposits a fraction C 1 of its energy and transfers the remaining (1 - C 1 ) to secondary particles. A fraction C 2 of the secondary particles is re-absorbed in the thickness dz of the slice. The first and second components of equation 2.4 can simply be rewritten as a function of the primary X-ray photons able to reach the j-th slice (located at z = d):
E^{j}_{\mathrm{dep},1} \propto C_1\, e^{-d/\ell_{X_1}} \quad (2.5)
E^{j}_{\mathrm{dep},2} \propto (1 - C_1)\, C_2\, e^{-d/\ell_{X_1}} \quad (2.6)

\ell_{X_1} is the attenuation length of the incident X-ray photons in the material and it is inversely proportional to the attenuation coefficient μ. The last component in equation 2.4 is the sum of the secondary particles produced in every i-th slice (where i ≠ j) and not re-absorbed in the same slice, attenuated as a function of the distance |d_i - d| between the i-th and j-th slice. Defining \ell_{X_2} and \ell_{el_2} as the attenuation lengths of the secondary X-ray photons and electrons respectively, we can write:
E^{\neq j}_{\mathrm{dep},2} \propto \sum_{i=1,\; i \neq j}^{N} e^{-\mu\, d_i} \left( C^{el}_{3}\, e^{-|d_i - d|/\ell_{el_2}} + C^{X}_{3}\, e^{-|d_i - d|/\ell_{X_2}} \right) \quad (2.7)
For the primary X-ray photons in GdAP, μ = 21.3 cm -1 at 49 keV and μ = 75.4 cm -1 at 55 keV, corresponding to attenuation lengths of 469 μm and 133 μm respectively. For electrons in GdAP, the attenuation length calculated from the CSDA (continuous slowing down approximation) range in the energy range 1-55 keV is 1-10 μm [START_REF]CSDA database[END_REF].
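The attenuation lengths quoted above follow directly from the attenuation coefficients; a two-line check, using the μ values given in the text:

# Attenuation length is simply 1/mu for the primary X-rays.
mu_49keV = 21.3   # [1/cm] in GdAP, value from the text
mu_55keV = 75.4   # [1/cm]
for mu in (mu_49keV, mu_55keV):
    print(1.0 / mu * 1e4, "um")   # ~469 um and ~133 um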
The secondary X-ray photons will mostly have the energies of the L and K-shells of Gd, corresponding to attenuation lengths of 5 and 100 μm respectively. Additionally, the fluorescence rate for the K-shell in Gd is approximately six times the fluorescence rate of the L-shell. The first two components of equation 2.4 are therefore almost constant along the depth of the scintillator. This is also the case for the decay describing the high energy secondary X-ray photons. Defining \ell_2 as the average attenuation length of the secondary electrons and secondary low energy X-ray photons, we finally rewrite equation 2.4 as:
E^{j}_{\mathrm{dep}} = K + C_4 \sum_{i=1,\; i \neq j}^{N} e^{-|d_i - d|/\ell_2} \;\approx\; K + C_4 \int_{0}^{t} e^{-|z - d|/\ell_2}\, dz \quad (2.8)
This function has a maximum at z = t/2 and decreases going toward 0 or t, in agreement with the results for the 5 μm thick scintillator in figure 2.9(a)-top. The shape of E dep (z) therefore describes the flux of secondary particles at different depths in the scintillator, which is lower at the surfaces. However, when the thickness is larger than \ell_2, all the slices at the center, located at a distance larger than \ell_2 from the surfaces, are reached by about the same flux of secondary particles; the particles generated beyond a certain distance from the considered slice, in fact, do not contribute to the flux. In figure 2.9(a)-top, E dep (z) is reported for a 50 μm thick scintillator. The variation of the secondary particle flux is observed close to the surfaces of the scintillator, while in the central region the curve is well described by the Beer-Lambert law.
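The depth profile of equation 2.8 can be evaluated in closed form, since the integral of the exponential kernel over the film thickness is analytic; the short sketch below assumes illustrative values for K, C4 and ℓ2 rather than fitted ones.

# Sketch of the depth profile of eq. 2.8: a constant term plus the
# re-absorbed secondary-particle flux integrated over the film thickness.
import numpy as np

def e_dep_profile(z_um, t_um, l2_um, K=1.0, C4=1.0):
    z = np.asarray(z_um, dtype=float)
    # closed form of the integral of exp(-|z'-z|/l2) for z' in [0, t]
    secondary = l2_um * (2.0 - np.exp(-z / l2_um) - np.exp(-(t_um - z) / l2_um))
    return K + C4 * secondary

z = np.linspace(0.0, 5.0, 51)                 # 5 um thick film
profile = e_dep_profile(z, t_um=5.0, l2_um=3.0)
# maximum at z = t/2, lower values at the two surfaces, as discussed above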
The curves describing the number of interactions of the different particles as a function of z, reported in figure 2.9(b), confirm what is described above. Firstly, the number of interactions of the primary X-rays (X-rays 1 ) is reduced by a value corresponding to the attenuation calculated using the NIST database, reported in table 2.2.
The number of interactions of the primary X-ray photons therefore simply follows the exponential decay e^{-μ(E) z} defined in equation 2.3.
Secondly, the number of interactions of the secondary electrons (electrons 2 ) describes the same curve as seen in figure 2.9(a), showing that the electron flux follows the distribution described above. Additionally, it confirms that most of the energy is deposited by electrons. Lastly, the curve describing the interactions of the secondary X-rays (X-rays 2 ) is similar to the one of the electrons, but the slopes close to the surfaces are less steep, due to the longer attenuation length of the X-ray photons.
In addition to its contribution to the effective energy deposition, the diffusion of the secondary particles plays a crucial role for the MTF. Indeed it degrades the MTF, since energy is deposited far from the position of the first interaction. In figure 2.9(a)-bottom the MTF is reported as a function of z. It can be seen that the contrast is higher close to the surfaces, and decreases going to the center of the scintillator where the secondary particle flux is higher.
For the 5 μm thick GdAP, the complete PSFs and MTFs calculated at different z are reported in figure 2.10. All the MTF curves at 49 keV show lower contrast than the MTFs calculated at 55 keV, and all PSFs are less sharp. For the 50 μm thick scintillator, the value of the MTF at 500 lp/mm is approximately constant in the central part of the scintillator (z ≈ 3 μm to z ≈ 47 μm) both at 49 and 55 keV (figure 2.9(a)-bottom).
We can therefore conclude that the MTF improvement at the K-edge is not just caused by a different energy distribution and PSF broadening along the depth of the scintillator.
Contribution of the different interactions
The contribution of the different particles to the energy deposition was studied below and above the K-edge. Although the MTF is calculated from the energy deposited, for simplicity only the number of events for every kind of interaction was considered. In fact, to really compute the energy deposited by every different interaction, the energy deposited by all the secondary particles produced should be taken into account. However, even doing so, the result is not trivial to interpret as a consequence of the fact that different kinds of interactions take place in the same cascade.
The spatial distribution of the number of events for the X-ray and electronic interactions is reported in figure 2.11(a-e) for a 5 μm thick free-standing GdAP film at 49 and 55 keV. For X-rays, the distributions of the photoelectric effect, Rayleigh and Compton scattering are considered separately. The PSFs evaluated from the total energy deposited in the film are reported in figure 2.11(f) as a reference.
The distribution of the X-ray interactions, including both primary and secondary X-rays (X-rays 1+2 , fig. 2.11(a)), is almost completely defined by the photoelectric effect (X-rays 1+2 , fig. 2.11(b)). This result is not unexpected, since X-ray photons in the energy range 1-100 keV interacting with such a high-Z material do so mainly through photoelectric interaction (more than 80 % of the interactions) rather than Rayleigh or Compton scattering.
(Figure 2.11: Normalized distributions of the number of events for the different kinds of interactions: (a) all the X-ray interactions, (b) photoelectric effect, (c) Compton scattering, (d) Rayleigh scattering, (e) all the electronic interactions. (f) PSF calculated from the total deposited energy. In (a) and (b) the curves counting the primary or secondary X-rays (X-rays 1 , X-rays 2 ) are also reported. Scintillator: GdAP 5 μm free-standing. Energy: 49 and 55 keV.)
The scattering events define the tails of the distribution (figure 2.11(c,d)). Their contribution is more important at 55 keV than at 49 keV, in contradiction with both the MTF improvement and the increase of the photoelectric-effect cross sections above the K-edge. For example, for GdAP the photoelectric-effect probability increases from 84 % at 49 keV to 97 % at 55 keV. This contradiction can be explained by considering the primary X-ray interactions separately (X-rays 1 , fig. 2.11(a)). The spatial distribution resembles a Dirac function with tails due to the X-ray scattering. These tails are lower at 55 keV, in agreement with the lower scattering probability. When the X-rays 2 interactions are included, the tails are higher at 55 keV, due to the higher number of secondary X-rays produced by K-shell fluorescence. The improvement of the MTF above the K-edge can therefore be partially attributed to a different distribution of the interactions of the incident X-rays, due to the increase of the number of photoelectric interactions, which reduces the probability of the primary X-rays to diffuse into the tails. More electrons are generated in the central peak of the distribution, at X = 0. However, to completely explain the improvement of the MTF at 55 keV, the electron diffusion has to be taken into account. The distribution of the electron interactions is reported in figure 2.11(e). Although the electrons are produced where the X-rays interact with the atoms, the X-ray interaction distribution differs from that of the electrons. The latter resembles the PSF (fig. 2.11(f)) and is narrower at 55 keV, in agreement with the MTF improvement. This can be explained by considering the electron attenuation length below and above the K-edge. Most of the energy is transferred to electrons through a photoelectric interaction. The X-ray photon is completely absorbed and a core electron is ejected, leaving the atom in an excited state, which relaxes through Auger or fluorescence emission. The photoelectron is ejected with an energy E^{kin}_{el}:
E^{kin}_{el} = E_X - E_{binding}, \quad (2.9)

where E_X is the energy of the incoming X-ray and E_binding is the binding energy of the electron. For a given E_binding, the photoelectron energy and, therefore, the attenuation length increase with E_X. However, above the K-edge, the electrons can, in addition to the M and L shells, also be emitted from the Gd K-shell with a lower energy because of the stronger binding energy with the atom. The energy spectra of the secondary electrons at creation are reported in figure 2.12(b), for X-ray energies equal to 35, 49 and 55 keV. The peaks due to the L- and M-shell electrons are located in the energy range 25-55 keV. Their positions move to higher energy for increasing X-ray energy and can be calculated from equation 2.9. For gadolinium, the binding energies for the M and L shells are approximately 1.5 and 7.5 keV respectively. At 55 keV, the peak due to the K-shell appears at ≈ 5 keV (binding energy 50.2 keV). The average electron energy, reported in the legend, decreases above the K-edge. The additional smaller peaks visible in the spectra are due to Auger electrons or photoelectrons produced by secondary particles, as well as to interactions with aluminum atoms.
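A small sketch of equation 2.9 for the Gd shells, using the approximate binding energies quoted above; the helper function is purely illustrative.

# Photoelectron kinetic energies from eq. 2.9 for Gd shells, using the
# approximate binding energies from the text (M ~ 1.5 keV, L ~ 7.5 keV,
# K = 50.2 keV). Shells with E_X below the binding energy are skipped.
binding_keV = {"M": 1.5, "L": 7.5, "K": 50.2}

def photoelectron_energies(e_x_keV):
    return {shell: e_x_keV - eb for shell, eb in binding_keV.items() if e_x_keV > eb}

print(photoelectron_energies(49.0))   # only M and L shells accessible
print(photoelectron_energies(55.0))   # K-shell opens: ~4.8 keV electrons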
In the considered energy range the attenuation length in GdAP for the K-shell photoelectrons is a few hundred nanometers, while for the M- and L-shell photoelectrons it is 3-11 μm. The fraction of energy transferred to K-shell photoelectrons reduces the average electron diffusion length and hence the energy spread, which explains the sharper PSF above the K-edge. On the contrary, the energy of the secondary X-rays increases, due to the K-shell fluorescence. The energy spectra of the secondary X-rays are reported in figure 2.12(b). However, the K-shell fluorescence X-ray photons are in the energy range 43-50 keV, which corresponds to an attenuation length above 100 μm in GdAP. Consequently, the spatial resolution is not strongly degraded by these photons, since they mostly escape without interacting and thus without depositing energy in the film. Their influence is visible in the tails of the PSF, which are more intense at 55 keV (figure 2.11(f)).
Substrate effect
The choice of the substrate is critical from the point of view of the performance of the thin film scintillator. Firstly, the crystalline structure of the substrate has to be the same as that of the film and the lattice mismatch has to be sufficiently small to be able to grow a scintillator with good optical quality, a mandatory criterion to ensure a good image quality. Secondly, a substrate which is non-scintillating at the same emission wavelength as the scintillator is required. In the simulations these first two constraints are not considered, and all the scintillators are supposed to have the same optical quality and no visible luminescence from the substrate. However, the substrate fluorescence can also affect the image quality, since the fluorescence photons can interact with the film, creating an offset in the PSF which reduces the MTF at low spatial frequencies. This effect has been introduced in section 2.4.1. In figure 2.4(b), the MTF at 20 keV calculated for a GdAP film on a YAP substrate shows a 20 % reduction of the contrast at low spatial frequencies, which is not observed for the free-standing GdAP. In figure 2.13 the spatial distribution of the X-ray and electron interactions in the GdAP film at 15 and 20 keV is reported. From 15 to 20 keV, almost no difference is observed for the primary X-ray interactions (X-rays 1 ), while the secondary X-ray interaction distribution (X-rays 2 ) presents tails which are significantly higher at 20 keV than at 15 keV. The same tails are visible in the electron interaction distributions, due to the production of electrons by secondary X-ray interactions far from the central part of the distribution (x = 0). On the contrary, the central part of the distribution remains unchanged, mainly due to electron diffusion, as no significant variation is expected in the electron attenuation length. As a reference, the electron interaction distribution is reported at 49 keV. The central part of the distribution is broader, due to the higher energy photoelectrons produced in the film, while the tails are only slightly higher. The reduction of the MTF values at low frequencies is due to the tails in the interaction distribution and in the PSF. They originate from the secondary X-rays produced in the substrate which interact with the film, creating a cascade which deposits energy far from the position of the first interaction. Their number increases significantly above the substrate K-edge, causing the loss of contrast in the MTF. To confirm this hypothesis, the MTF was calculated removing the secondary electrons or X-ray photons generated in the substrate (figure 2.14). When the substrate's secondary electrons are removed, the obtained MTF is the same as the one obtained after tracking all the secondary particles. Moreover, when the substrate's secondary X-ray photons are removed, the low-frequency contrast reduction disappears.
Thickness dependency
For a high resolution experiment, a thin film scintillator is required. The resolution degrades when the film thickness increases, due to the distribution of the energy deposited in the scintillator and due to the out-of-focus light that is collected by the optics. The effect of the out-of-focus light is not included in this chapter's results. Therefore, the effect of the thickness on the MTF described in this chapter is only due to the energy distribution in the scintillator.
with an energy of 42-47 keV, corresponding to a 7-9 μm attenuation length. Therefore, when the thickness is increased from 3 to 10 μm the MTF is degraded, while from 10 to 50 μm it remains constant. At 55 keV, above the Gd K-edge, even though the MTF is degraded by the X-ray fluorescence photons, the attenuation length of the electrons decreases because of the lower energy electrons emitted from the Gd K-shell. For larger thicknesses, the probability that these fluorescence photons interact with the film and degrade the resolution becomes higher. A contrast reduction at low frequencies is in fact observed for the 50 μm GdAP film. Considering GdAP scintillators on a YAP substrate (figure 2.15, continuous lines), the behavior is similar to the free-standing GdAP. However, a thicker scintillator can improve the MTF by reducing the amount of fluorescence from the substrate. In fact, at 20 keV, a better MTF is predicted for the 50 μm scintillator as compared to the thinner ones. For increasing energy, this effect starts to compete with the longer attenuation length of the secondary electrons. At 49 keV, for example, better contrast is observed for the thickest investigated scintillator at low frequencies, due to the substrate fluorescence, while at high frequencies the contrast is approximately the same for the different considered thicknesses.
Conclusions
A Monte Carlo application based on the Geant4 toolkit has been developed to study the distribution of the energy deposited in free-standing or substrate-based, few-micrometer thick SCFs. The obtained distribution was used to evaluate the absorption efficiency and MTF response of the films. Different scintillating film compositions have been studied as a function of the X-ray energy, in the range 5-80 keV. The MTF decreases with the X-ray energy, but a significant improvement is predicted above the K-edges. The improvement is attributed to the increase of the probability of the photoelectric effect and to the reduction of the ejection energy and attenuation length of the photoelectrons. The substrate also plays a crucial role. The X-rays not absorbed in the film interact with the substrate and, depending on the energy, generate X-ray fluorescence. These secondary photons can deposit energy in the film and create an offset in the energy distribution, corresponding to a drop in the contrast at low spatial frequencies. The total amount of energy deposited in the scintillator was also evaluated. For a thin film, the attenuation calculated from the cross section of the interactions of the primary X-rays was found to be a good approximation of the absorption efficiency only at low energies. It becomes less precise at high energy and in particular above the K-edge, where the absorption efficiency is overestimated due to the escape of secondary particles from the thin film. A figure of merit based on the MTF response and the absorption efficiency has been evaluated to select the most promising materials. Lutetium oxide is, due to its high absorption efficiency, the most promising among the simulated materials at X-ray energies in the ranges 5-51 keV and 64-80 keV. Compared to the state-of-the-art LSO SCF, the MTF response of Lu 2 O 3 is higher in the 5-50 keV range and approximately the same in the 64-80 keV range. In the 51-64 keV range, the highest figure of merit was predicted for gadolinium perovskite. These results do not take into account the optical quality and the light yield of the SCFs, which cannot be precisely evaluated before the development of the materials. Lastly, the effect of the scintillator thickness was also evaluated. For free-standing scintillators, increasing the thickness was found to be detrimental for the MTF only up to a thickness which corresponds to the attenuation length of the secondary electrons. Above this value, the MTF remains constant. On a substrate, a thicker scintillator could be beneficial for the MTF response since fewer photons are able to reach the substrate and produce X-ray photons which degrade the MTF. However, the MTF variation with the film thickness presented in this chapter does not take into account the microscope optics used in high-spatial-resolution detectors. This aspect is introduced in the next chapter.
Chapter 3
The indirect detector model
Blurring of the microscope optics
The energy distribution calculated using Monte Carlo corresponds to the light source that is produced by the scintillator, which is not necessarily equal to the light distribution measured by the high-resolution detector. In fact the light source, while projected on the camera by the microscope optics, is also blurred. Further calculations are therefore needed to estimate the spatial resolution and the MTF of the detector system in a realistic way, comparable with the experimental data. The best achievable spatial resolution is related to the numerical aperture (NA) of the microscope objective and to the scintillator's emission wavelength (λ). Even in the case of an ideal aberration-free optical system, a perfect point source is always focused as an interference pattern, due to light diffraction. The central maximum of this interference pattern is called the Airy disk. If we define the spatial resolution limit according to the Rayleigh criterion, it can be estimated as the distance R Rayleigh between two point sources such that the maximum of the Airy disk of the first one occurs at the minimum of the second one. R Rayleigh and the corresponding spatial frequency f Rayleigh can be estimated as [START_REF]xikon mirosopy wesite[END_REF]:
R_{Rayleigh} \approx 0.61\, \frac{\lambda}{NA}; \qquad f_{Rayleigh} \approx 0.82\, \frac{NA}{\lambda}. \quad (3.1)
The cutoff frequency f_C, which is the spatial frequency where the MTF contrast value reduces to zero, can be calculated as [START_REF]xikon mirosopy wesite[END_REF]:
f_C \approx 2\, \frac{NA}{\lambda}. \quad (3.2)
The depth of field (DoF) is defined in optics as the distance between the nearest and farthest object that appears in focus. For a microscope objective, the DoF also depends on NA and λ (71):
\mathrm{DoF} = \frac{\lambda\, n}{NA^2} + \frac{n}{NA \cdot M}\, e, \quad (3.3)
where n is the index of refraction (n = 1 for a dry objective, n ≈ 1.5 for an immersion objective), e is the camera pixel size and M is the total magnification of the optical system, which considers both the microscope optics and the eyepiece.
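Equations 3.1-3.3 are straightforward to evaluate for a given objective; the helper functions below are a sketch, with the default pixel size and magnification being illustrative assumptions rather than values fixed by the text.

# Diffraction limit and depth of field from eqs. 3.1-3.3 (lengths in um).
def rayleigh_resolution_um(wavelength_um, na):
    return 0.61 * wavelength_um / na                    # eq. 3.1

def cutoff_frequency_lp_mm(wavelength_um, na):
    return 2.0 * na / wavelength_um * 1.0e3             # eq. 3.2, converted to lp/mm

def depth_of_field_um(wavelength_um, na, n=1.0, pixel_um=7.4, magnification=66.0):
    # pixel size and magnification defaults are illustrative assumptions
    return wavelength_um * n / na**2 + n * pixel_um / (na * magnification)  # eq. 3.3

# Example: NA = 0.45 and lambda = 0.55 um give f_Rayleigh ~ 670 lp/mm
# and f_C ~ 1640 lp/mm, consistent with the values quoted later in the text.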
In figure 3.1 the DoF (dashed lines) and f Rayleigh (continuous lines) are calculated as a function of NA, for different λ. Shorter wavelengths, as well as higher numerical apertures, increase the resolution limit. However, they also reduce the depth of field, making it harder to optimize the detector to be a diffraction-limited system. A thinner scintillator, as well as a sub-micrometer precision focusing system, is required to achieve that. If a 1 μm spatial resolution is required (i.e. a 500 lp/mm spatial frequency), the numerical aperture of the optics has to be higher than 0.3 for visible light. However, such a microscope objective has a DoF of 5-10 μm. If the scintillator is thicker than the DoF, only the light produced within a certain depth dz in the scintillator is projected as a focused image on the camera, while the light produced outside this region is projected as a defocused image. Since the focused and defocused images contribute to the total signal on the camera, the overall image quality is degraded.
An analytical model to calculate the response of an aberration-free optical system was described by Hopkins [START_REF] Hopkins | The frequency response of a defocused optical system[END_REF]. Taking into account the light diffraction and the defect of focus δz, the optical transfer function (OTF) of a defocused optical system is calculated as a convergent series of Bessel functions:
\mathrm{OTF}(f,\delta z) = \frac{4}{\pi a}\cos\!\left(\frac{a|\tilde f|}{2}\right)\left\{ \beta J_1(a) + \frac{\sin 2\beta}{2}\,[J_1(a)-J_3(a)] - \frac{\sin 4\beta}{4}\,[J_3(a)-J_5(a)] + \dots \right\}
- \frac{4}{\pi a}\sin\!\left(\frac{a|\tilde f|}{2}\right)\left\{ \sin\beta\,[J_0(a)-J_2(a)] - \frac{\sin 3\beta}{3}\,[J_2(a)-J_4(a)] + \frac{\sin 5\beta}{5}\,[J_4(a)-J_6(a)] - \dots \right\}

a = \frac{2\pi n}{\lambda}\,|\tilde f|\,\sin^2(\alpha)\,\delta z, \qquad \beta = \arccos\frac{|\tilde f|}{2}, \qquad \tilde f = f\,\frac{\lambda}{n\sin\alpha}, \quad (3.4)

where f is the spatial frequency in the object plane, n the refractive index of the scintillator and α the acceptance angle of the scintillator. For a symmetrical pupil function of the lens, the OTF is equal to its modulus, which is the MTF.
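The truncated series below is one possible numerical reading of equation 3.4 (the normalization of the reduced frequency and the in-focus limit are my assumptions); it is a sketch, not the code actually used in this work.

# Sketch of the Hopkins defocused OTF series of eq. 3.4, truncated to a
# finite number of Bessel terms. The reduced frequency s = f*lambda/(n*sin(alpha))
# must lie in [0, 2]; delta_z is the defect of focus.
import numpy as np
from scipy.special import jv

def hopkins_otf(s, delta_z_um, wavelength_um, n, sin_alpha, n_terms=12):
    beta = np.arccos(np.clip(s / 2.0, -1.0, 1.0))
    a = 2.0 * np.pi * n / wavelength_um * s * sin_alpha**2 * delta_z_um
    if abs(a) < 1e-9:            # in-focus limit: diffraction-limited OTF
        return (2.0 / np.pi) * (beta - np.sin(beta) * np.cos(beta))
    cos_sum = beta * jv(1, a)
    sign = 1.0
    for k in range(1, n_terms):  # + sin(2b)/2 [J1-J3] - sin(4b)/4 [J3-J5] ...
        m = 2 * k
        cos_sum += sign * np.sin(m * beta) / m * (jv(m - 1, a) - jv(m + 1, a))
        sign = -sign
    sin_sum = 0.0
    sign = 1.0
    for k in range(n_terms):     # sin(b)[J0-J2] - sin(3b)/3 [J2-J4] + ...
        m = 2 * k + 1
        sin_sum += sign * np.sin(m * beta) / m * (jv(m - 1, a) - jv(m + 1, a))
        sign = -sign
    return (4.0 / (np.pi * a)) * (np.cos(a * s / 2.0) * cos_sum
                                  - np.sin(a * s / 2.0) * sin_sum)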
The image on the camera is the sum of superimposing signals originating at different positions along the thickness of the scintillator. Koch et al. (4), therefore, using equation 3.4, approximate the system response by the response of a defocused optical system, taking the thickness of the scintillator into account:
\mathrm{MTF}(f) = |\mathrm{OTF}(f)| = \int_{-z_0}^{\,t - z_0} \mathrm{OTF}(f, \delta z)\, e^{-\mu(\delta z + z_0)}\, d\delta z \quad (3.5)
where z 0 is the distance of the plane where the system is focused from the scintillator surface.
Following this approach the resolution can be evaluated as a function of the scintillator thickness and the optics' numerical aperture. The resolution, evaluated from the spatial frequency where the MTF contrast is 50 %, is reported in figure 3.2 as a function of NA. For every thickness, there is a minimum in the curve corresponding to the numerical aperture which gives the best resolution, i.e. the numerical aperture with a DoF equal to the thickness of the scintillator. Increasing NA above that value reduces the resolution due to the contribution of the defocused signal. This approach does not take the X-ray energy and the scintillator composition into account. It is only a valid approximation of the response of the detector if the energy spread in the scintillator is negligible compared to the optics blurring.
The detector's response
To take both the scintillator and the microscope optics responses into account, each plane in the scintillator is considered as a light source. Their light distribution is described by the energy deposition calculated with the Monte Carlo simulations. The image of each plane is blurred by the optics as a function of the position of the plane along the thickness of the scintillator. Assuming the system is focused at a certain position z 0 , the planes within a certain thickness dz (equal to the DoF) around z 0 are projected as a focused image and thus only blurred by the light diffraction. The planes outside dz, however, are additionally blurred as a function of the distance from z 0 (δz). For the calculation, the scintillator has been divided along z in bins of size S z equal to 0.2 μm. S z is selected to be approximately half of the minimum DoF of the systems that have been investigated in this study. For a dry objective, in fact, the maximum NA is equal to 1 and the DoF is between 0.4 and 0.5 μm for UV light (350 nm). The total MTF, assuming that the system is focused on the j-th bin in z (MTF^{tot}_{z_0=j}), has been calculated as the average over every plane in the scintillator, weighted by the deposited energy:
\mathrm{MTF}^{tot}_{z_0=j}(f) = \frac{\sum_{i=1}^{N} \mathrm{MTF}^{scint}_{i}(f)\cdot \mathrm{MTF}^{opt}_{i}(\delta z, f)\cdot E_{dep\,i}}{\sum_{i=1}^{N} E_{dep\,i}}, \quad (3.6)
where N is the total number of bins along z, MTF^{scint}_i(f) is the MTF calculated from the energy deposition in the i-th slice and MTF^{opt}_i(f) is the optics response calculated using equation 3.4. The position of z 0 was selected by calculating the maximum total MTF as a function of the focus position along z:
\mathrm{MTF}^{tot} = \max_{j\,=\,0 \dots t}\left(\mathrm{MTF}^{tot}_{z_0=j}\right). \quad (3.7)
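Equations 3.6-3.7 amount to an energy-weighted average of per-slice responses, maximized over the focus position. The sketch below assumes the per-slice scintillator MTFs and deposited energies are available from the Monte Carlo output, and uses the area under the curve as one possible reading of the maximization in equation 3.7.

# Sketch of eqs. 3.6-3.7: energy-weighted average of the per-slice scintillator
# MTF times the defocused optics MTF, maximized over the focus position z0.
import numpy as np

def total_mtf(freqs, z_bins_um, mtf_scint, e_dep, mtf_optics):
    # mtf_scint: array (n_slices, n_freqs); e_dep: array (n_slices,)
    # mtf_optics(dz, f): callable, e.g. the hopkins_otf sketch above
    best = np.zeros_like(freqs)
    for z0 in z_bins_um:                              # try every focus position
        dz = np.abs(z_bins_um - z0)                   # defect of focus per slice
        opt = np.array([[mtf_optics(d, f) for f in freqs] for d in dz])
        mtf_j = (mtf_scint * opt * e_dep[:, None]).sum(axis=0) / e_dep.sum()
        if mtf_j.sum() > best.sum():                  # keep the best-focused curve
            best = mtf_j
    return best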
As a consequence of the out-of-focus light, the effect of the thickness is more important than what has been shown in chapter 2, even at low X-ray energy. In figure 3.3, the scintillator response at 15 keV, reported as a reference (MTF scint ), has been combined with microscope optics with numerical aperture 0.4. For λ = 0.6 μm, DoF ≈ 4 μm (equation 3.3) and f C ≈ 1200 lp/mm (equation 3.2). At 15 keV, due to the low average free path of the electrons, the scintillator response is approximately the same for a scintillator thickness between 3 and 50 μm, and the contrast is above 80 % up to 1000 lp/mm. By adding the microscope optics we observe, firstly, that the system resolution is limited by the light diffraction through the optics, even for a scintillator thinner than the DoF: the value of the MTF calculated using the full model for the 3 μm thick scintillator is reduced to zero at the cutoff frequency. Secondly, for a scintillator thicker than the DoF, the contrast, compared to a diffraction-limited system, is reduced due to the contribution of the defocused planes of the scintillator. The contrast at 500 lp/mm decreases from 50 % to 10 % while increasing the thickness of the scintillator from 3 to 50 μm. In figure 3.4 the MTF including scintillator and optics responses (MTF tot ), as well as the separate contributions of MTF scint and MTF opt , are reported for three different cases with numerical aperture 0.8. As a reference, the MTF of a diffraction-limited system is also reported. The first case (fig. 3.4(a)) is almost a diffraction-limited system. Since the scintillator is thinner than the DoF (approximately 1 μm at 550 nm), no degradation due to the defocus is observed (MTF opt = MTF diffraction ). At low X-ray energy (15 keV) the energy distribution in the scintillator is sharp. Therefore, the MTF tot is mainly defined by the light diffraction. However, due to the energy spread in the scintillator, a degradation of 10 % in the contrast is observed as compared to the diffraction-limited system. The second case (fig. 3.4(b)) is a strongly defocused system. The MTF is degraded by the defect of focus, due to the fact that the scintillator is much thicker than the DoF. The third case (fig. 3.4(c)) uses the same configuration as the first one. Hence, no defocus contribution is observed. However, the resolution is more degraded by the scintillator response, because at 49 keV the energy distribution in the scintillator is broad due to the diffusion of the high-energy secondary particles. The MTF is limited at low spatial frequencies by the scintillator response and at high spatial frequencies by the light diffraction.
In the three examples shown in figure 3.4, the MTF is mainly determined by one of the involved phenomena, i.e. the light diffraction, the defocus of the system and the scintillator response, respectively. However, considering only the most important phenomenon and approximating MTF tot ≈ MTF opt at low energy and MTF tot ≈ MTF scint at high energy leads to a wrong estimation of the MTF. Moreover, the system often has to be considered in an intermediate situation, where the different phenomena all contribute to the MTF, as for example in the presence of a small defect of focus at medium X-ray energies. In such cases, the evaluation of the full model is required: MTF opt correctly approximates MTF tot at 15 keV, but it overestimates the resolution at higher energies.
In figure 3.5, configurations using different scintillators are evaluated: 0.4 and 5 μm thick LSO:Tb films on YbSO substrates, a 5 μm thick GdAP:Eu film on a YAP substrate and a 25 μm thick LuAG:Ce free-standing scintillator. The MTF tot curves are compared for different X-ray energies and numerical apertures. At low energy (15 keV) light diffraction and defocus play the crucial role. Hence, as expected, no difference is observed among the scintillators which are thinner than the DoF. At high NA the most performant scintillator is simply the thinnest one. For the 25 μm thick LuAG:Ce scintillator the contrast using NA = 0.8 is lower than using NA = 0.15. Up to NA = 0.15 the optics response is the same for all the considered scintillators, since the DoF is larger than their thicknesses. Consequently, at low energy the response of the system is the same for all the considered scintillators, while at high energy it is determined by the scintillator response. For example at 49 keV, due to the YAP substrate fluorescence, a 5 μm thick LSO film outperforms a GdAP film with the same thickness, and a 25 μm thick LuAG film will do so too for low and medium numerical apertures (NA < 0.6). At high numerical aperture the thick LuAG degrades the resolution because of the out-of-focus light, making the thin film more performant, even considering the fluorescence of the substrate.
Due to the Gd K-edge, GdAP gives, among the investigated scintillators, the best resolution at 60 keV. A spatial resolution of 2 μm can be obtained by choosing a GdAP film, while it is limited to between 4.5 and 8 μm when choosing a 5 μm thick LSO or a 25 μm thick LuAG. The setup of the experiment, including the last set of slits, the edge with its alignment stage and the high-resolution detector, is visible in figure 3.8. The measurements were performed at the ESRF beamline BM05. The X-ray energy is selected in the optical hutch using a multilayer monochromator (ΔE/E ≈ 10⁻²). To reduce the divergence of the beam, two pairs of slits are located before and after the monochromator. A third pair is positioned a few centimeters before the setup (figure 3.8). The edge is positioned as close as possible to the scintillator, at a distance of approximately 2-3 mm, mounted on high precision motors that allow the alignment of the tilt angle and of the position of the edge in the X-ray beam. Ideally the edge should be in contact with the scintillator, firstly to reduce the effect of the remaining beam divergence that may degrade the spatial resolution and secondly to remove the phase contrast. However, this is not possible due to the alignment requirement of the edge tilt angle perpendicular to the beam direction. As a consequence, the effect of the phase contrast can enhance the image of the edge, which improves the experimental MTF compared to the calculated one, especially when the detector is configured to be a diffraction-limited system.
X-ray energy, scintillator composition and thickness
The measured MTFs at 15 and 30 keV are reported in figure 3.9 (continuous lines) for different scintillators with thicknesses between 1.6 and 25 μm, combined with microscope optics of numerical aperture 0.45 and 20X magnification. A 3.3X eyepiece is added in the optical path, therefore the final pixel size is 0.11 μm. For λ = 550 nm, the DoF is 2.3 μm, the resolution limit according to the Rayleigh criterion is 0.74 μm (671 lp/mm) and the cutoff frequency is 1636 lp/mm. The calculated MTFs using the detector's full model, which includes the scintillator response and the optics blurring, are also reported in figure 3.9 (dashed lines). In the case of the thinnest considered scintillator, the scintillator response of LSO at 15 keV shows much higher values than the MTF calculated by only including light diffraction and, therefore, the detector is almost diffraction-limited. This was shown for example in figure 3.4(a) in the case of NA = 0.8. In the case of NA = 0.45, the difference between the scintillator response and the optics response is higher than the difference observed at NA = 0.8, due to the lower cutoff frequency. Therefore, the MTF at 15 keV for the 1.6 μm LSO is mainly limited by the diffraction of light. In fact, the calculated MTF corresponds almost to a straight line approaching zero at the cutoff frequency (1636 lp/mm). However, the measured MTF shows higher values than the calculated one: the resolution limit (contrast equal to 50 %) should be at 0.74 μm (671 lp/mm) according to the Rayleigh criterion, while experimentally it is at 0.60 μm (825 lp/mm). This effect is due to the phase contrast, which is not included in the calculation. Due to this, the edges in the image are sharper and the contrast is enhanced by approximately 20 % at 500 lp/mm compared to the calculated one. By increasing the thickness of the scintillator above the DoF, the out-of-focus light significantly degrades the scintillator's MTF. Although a perfect matching between the measured MTFs and the simulated ones was not obtained because of the phase contrast, the MTF degradation for increasing thickness is correctly foreseen by the simulations. Additionally, the slight difference between the 8 μm thick LSO and GGG scintillators, due to the different emission wavelength and scintillator response, is predicted by the simulations and observed experimentally. At 30 keV (fig. 3.9(b)) all the evaluated MTFs are degraded by the scintillator response.
Once again, the phase contrast increases the high-frequency MTF values; therefore, the values obtained for the experimental MTFs are higher than for the simulated ones. However, considering the spatial resolution at 15 and 30 keV and calculating the relative degradation of the spatial resolution, $\Delta R = \frac{R_{15\,\mathrm{keV}} - R_{30\,\mathrm{keV}}}{R_{15\,\mathrm{keV}}}$, as reported in table 3.1, a good agreement between the experiments and the simulations can be observed. The difference between experimental and simulated ΔR is ≈ 2 %.

The X-ray fluorescence of the substrate
To further validate our results, the effect of the substrate X-ray fluorescence was experimentally investigated. The MTFs were measured at 16 and 18 keV to observe the differences between the scintillators grown on a Y-free substrate (GGG on GGG and LSO on YbSO) and the ones on substrates containing Y (LuAG on YAG and GdLuAP on YAP). The results are shown in figure 3.10, where the experimental and simulated results are reported using continuous and dashed lines respectively. The high-resolution detector was equipped with microscope optics of 10X magnification and NA 0.4, a 3.3X eyepiece and a PCO2000 camera. The final pixel size is 0.22 μm. As in the previous results, the experimental MTF is enhanced due to the phase contrast. However, as foreseen by the simulations, the MTF curves of all the considered scintillators are similar at 16 keV (fig. 3.10(a)), while at 18 keV (fig. 3.10(b)) a reduction of the contrast to 80 % in the low frequency range, i.e. below 50 lp/mm, is observed when a Y-based substrate is used. The values of the experimental MTF in this range agree well. For the scintillators that do not contain Y in the substrate, no significant difference can be observed between 16 and 18 keV.
The non-proportionality of the scintillators
An important parameter in the scintillator characterization is the non-proportionality, which is the nonlinear dependence of the light yield on the X-ray energy [START_REF] Dorenbos | Non-proportionality in the scintillation response and the energy resolution obtainable with scintillation crystals[END_REF]. Not only does the non-proportionality affect the energy resolution of the scintillator, but it also has to be taken into account for applications that require a quantitative measurement under polychromatic beam conditions, as for example encountered in some fluorescence imaging experiments. Moreover, even for monochromatic beam conditions, the scintillator efficiency should be properly evaluated at the energy that will be selected for the experiment, if the efficiency is strongly non-proportional.
To measure the non-proportionality, the dose deposited in the scintillator needs to be precisely estimated as a function of the X-ray energy. The attenuation coefficient may not be a good approximation of the dose for incident X- and gamma rays, especially when the scintillator size is reduced, as in the case of thin films. In fact, a fraction of the energy of the incoming photons that interact with the scintillator does not contribute to the deposited dose due to the escape of the secondary particles created by the interacting photons. Additionally, the presence of a substrate may increase the dose due to secondary particles that are generated in the substrate and subsequently reach the scintillator, where they can be absorbed. Using the developed MC code described in chapter 2 to track the incident X-ray photons and all the secondary particles interacting with the scintillator or with the substrate therefore gives a more precise estimation of the dose. An example of the difference between the attenuation and the deposited dose was already reported in figure 2.6 for a thickness of 5 μm.
The non-proportionality measurement was performed at the ESRF beamline BM05, where the X-ray energy was selected with a silicon (111) monochromator (ΔE/E ≈ 10⁻⁴).
The light yield was evaluated from the average signal of a flat field image (i.e. without objects in the field of view), which was recorded using a high-resolution detector equipped with 2X microscope optics (NA = 0.08) and a PCO2000 CCD camera. The 1×1 mm² beam size was controlled using a set of slits located a few centimeters before the detector and the X-ray flux was measured using a Canberra 500 μm silicon photodiode. In the energy range 16-64 keV, the photon flux measured on the diode was in the order of 10⁹ photons/s/mm². Since YAP substrates present a strong emission in the visible range, the GdLuAP samples were measured by placing a bandpass optical filter (central wavelength 634 nm, full width at half maximum 70 nm) in the optical path before the CCD. This ensures the selection of only a part of the Eu emission and removes most of the substrate luminescence. However, a fraction of the substrate emission, corresponding to approximately 10 % of the emission intensity of the film, is not filtered and adds to the scintillator emission.
The recorded data for GdLuAP:Eu, LSO:Tb and GGG:Eu thin film scintillators, as well as for a YAG:Ce 500 μm bulk scintillator, are reported in figure 3.11(a). The spectra are normalized to 1 at 16 keV and corrected by the total amount of incident energy, which is calculated from the measured X-ray flux. The signal intensity for the thin films decreases with the X-ray energy, due to the lower percentage of X-rays that interact with the film, and increases above the K-edges. The signal recorded for YAG:Ce increases from 17 keV up to 30 keV, due to the increasing thickness of the scintillator that contributes to the light emission. Above 30 keV, the thickness of the YAG sample is not sufficient to attenuate the X-rays completely. Therefore, the signal intensity decreases for increasing energy above 30 keV. The attenuation of the film is calculated using the NIST database [START_REF]Nist database website[END_REF] and it is used to correct the data reported in figure 3.11(b). The data corrected by the dose deposited in the scintillator, calculated using Monte Carlo, are reported in figure 3.11(c). Additionally, in the case of GdLuAP, the experimental data have been corrected by subtracting the signal originating from the substrate. This signal has been calculated (1) from the fraction of X-rays not attenuated in the scintillator that are attenuated by the substrate (fig. 3.11(b)) or (2) from the dose deposited in the substrate (fig. 3.11(c)). As a reference, the uncorrected data are also reported as dashed lines in figure 3.11(b,c). We can observe that when the data are corrected by the attenuation, the signal sharply increases after the substrate's K-edge (e.g. above 17 keV for GdLuAP and above 61 keV for LSO) and sharply decreases after the film's K-edge (e.g. above 17 keV for YAG, above 50.2 keV for GdLuAP and GGG and above 63 keV for LSO). (The GdLuAP signal was additionally corrected by the luminescence of the YAP substrate, LY YAP = 0.1 × LY GdLuAP:Eu .) These trends are caused, at least partially, either by an underestimation of the dose due to the secondary particles from the substrate, or by an overestimation due to the escape of secondary particles from the thin scintillator. In fact, when the data are corrected by the dose calculated using MC, the jumps in the curve almost completely disappear for YAG, GGG and LSO. In the case of GdLuAP the substrate signal is also subtracted from the data.
Compared to the approximation from the attenuation coefficient, these results confirm the higher accuracy of the dose calculation by Monte Carlo tracking.
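The correction applied to the measured light-yield curves can be summarized in a few lines; the arrays below are dummy placeholders for the measured signal, the NIST attenuation and the Monte Carlo dose, not the data of figure 3.11.

# Sketch of the light-yield correction: the flux-corrected flat-field signal
# is divided either by the attenuated fraction (NIST) or by the MC dose, and
# normalized to 1 at 16 keV.
import numpy as np

energies_keV = np.array([16.0, 20.0, 40.0, 64.0])
signal = np.array([1.00, 0.70, 0.20, 0.08])        # measured, flux-corrected (dummy)
attenuation = np.array([0.35, 0.25, 0.06, 0.05])   # interacting fraction (dummy)
dose_mc = np.array([0.33, 0.23, 0.05, 0.03])       # deposited-energy fraction (dummy)

relative_ly_att = signal / attenuation
relative_ly_mc = signal / dose_mc
relative_ly_att /= relative_ly_att[0]              # normalize to 1 at 16 keV
relative_ly_mc /= relative_ly_mc[0]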
Conclusions
The model presented in chapter 2 was combined with analytical equations describing the optics, taking the light diffraction and the defocus due to the scintillator thickness into account.
Compared to considering the scintillator response and the optics blurring separately, the new model allows a more precise evaluation of the most convenient detector configuration. This is especially true in the intermediate cases, where none of the involved phenomena prevails. For example, a scintillator thicker than the DoF of the optics can give the same contrast as a thinner scintillator, with the additional benefit of a higher efficiency, if the composition is carefully chosen. Among the considered materials this is the case for 1 to 10 μm thick scintillators on Y-based substrates compared to thicker free-standing scintillators (10-25 μm), in the energy range 17-50 keV, for low and intermediate numerical apertures (NA < 0.6). The same effect was observed while comparing a 5 μm thick GdAP with a 0.4 μm thick LSO scintillator in the 51-63 keV energy range, even for high NA. Moreover, reducing the scintillator thickness at high X-ray energy was observed to be beneficial not only to suppress the out-of-focus light, but also to improve the scintillator response. Consequently, the MTF improves while reducing the thickness of the scintillator even below the value of the DoF, which is not the case at low energy.
The detector model was successfully validated experimentally. The energy deposited in the scintillator calculated using MC was compared with the value of the emitted light at different energies. The model correctly predicts sharp increases or decreases of the LY above the film or substrate K-edges, due to the X-ray fluorescence.
Additionally, the simulated detector's MTF was compared with the experimental data.
A good match between experiment and simulations was observed. The experimental MTFs are enhanced by the phase contrast, which is not included in the simulations. However, the degradation of the MTF due to the increase of the X-ray energy, as well as the low frequency drop in the contrast due to the substrate, were correctly predicted.
The GdAP SCF shows an improvement of approximately 10 % at 500 lp/mm above the Gd K-edge. The contrast obtained for GdLuAP SCF on YAP is almost as high as for GGG on GGG in the range 52-63 keV, and significantly higher in the range 63-80 keV, while it outperforms the contrast obtained for the LSO scintillator on a YbSO substrate in the range 52-68 keV.
The light yield of the different scintillators was not taken into account in the model presented in chapter 2, mainly because this parameter strongly depends on the growth technology and a precise estimation is not possible before the development of the scintillator. However, an estimation can be made from data found in the literature. Rare-earth aluminum perovskites have been reported as good scintillators when doped with appropriate rare-earth ions [START_REF]gFtF hilpD e udgthur tyrmD xF hhnnjyD rF xghushnD FgC rshnthD hFF unithD FgF hrmD gF hivkumrD nd fFwF xghushn. GdAlO3:Eu3+:Bi3+ nanophosphor: Synthesis and enhancement of red emission for WLEDs[END_REF][START_REF]tin oung rkD rong ghe tungD qF eet m juD fyung uee woonD tung ryun teongD eEwo onD nd tung rwn uim. Enhanced green emission from Tb3+âBi3+ co-doped GdAlO3 nanophosphors[END_REF]. If a light yield comparable to GGG:Eu 3+ is obtained, an increased total efficiency (efficiency = E dep × light yield) is expected.
GdAP and GdLuAP have therefore been selected for development as thin film scintillators on YAP substrates. The bulk growth of YAP is well developed and YAP substrates with good crystalline quality are commercially available at a relatively low price. This condition is required if SCFs are to be used as part of X-ray detectors. Some results about the LPE growth of ReAlO 3 (Re = Y, Lu, Tb) on YAP substrates have already been reported [START_REF] Yu | Growth and luminescence properties of single-crystalline lms of RAlO3 (R= Lu, Lu-Y, Y, Tb) perovskite[END_REF]. In the frame of X-ray imaging applications, our group at the ESRF has presented results about LuAP SCFs on YAP substrates [START_REF]Ee houissrdD hierry wrtinD pederi ivD iri wthieuD uriy orenkoD olodymyr vhynD ny orenkoD nd elexnder peE dorov. Scintillating Screens for Micro-Imaging Based on the Ce-Tb Doped LuAP Single Crystal Films[END_REF]. Optically good GdAP was not successfully grown using bulk techniques (i.e. Czochralski or Bridgman), but the possibility of growing GdAP crystals by the flux method has been shown for purposes other than scintillators [START_REF]t endreet nd f tovni. Growth and optical properties of Cr3+ doped GdAlO3 single crystals[END_REF][START_REF]wzelskyD i urmerD nd r ropkins. Crystal growth of GdAlO 3[END_REF]. The addition of lutetium may play a role in stabilizing the crystal during the growth, as well as in tuning the absorption efficiency by exploiting the K-edges of Lu and Gd. Unlike GdAP, GdLuAP (Gd 1-x Lu x AlO 3 ) has been successfully grown using the Czochralski method [START_REF]tiri e wre²D wrtin xiklD etr wl yD urel frto²D urel xejezhE leD urel flzekD p he xotristefniD g h9emrosioD h uertolsD nd i osso. Growth and properties of Ce 3+-doped Lu x (RE 3+) 1x AP scintillators[END_REF][START_REF]t ghvlD h glementD t qiD t rylerD tEp voudeD te wresD i wiE hokovD ghristin worelD u xejezhleD nd w xikl. Development of new mixed Lu x (RE 3+) 1-x AP: Ce scintillators (RE 3+= Y 3+ or Gd 3+): comparison with other Ce-doped or intrinsic scintillating crystals[END_REF].
In the case of LPE growth, the lattice mismatch between the film and the substrate plays a critical role in the crystalline structure and in the luminescence properties of the film. For instance, Kucera et al. [START_REF] Ku£era | Defects in Ce-doped LuAG and YAG scintillation layers grown by liquid phase epitaxy[END_REF] report this effect for lutetium and yttrium aluminum garnets, while previously Stringfellow (80) has shown it in the case of Ga x In 1-x P on GaAs substrates. Since the strategy of this work was the development on YAP substrates, the mixed composition of GdLuAP was also exploited to reduce the mismatch and improve the crystal quality.
The LPE growth process for GdAP and GdLuAP on YAP substrates using a PbO-B 2 O 3 flux has been developed. The growth conditions and the crystal structure are presented in this chapter, while the scintillation and X-ray imaging properties will be introduced later. The melt was prepared from 5N pure starting powders (including Ce 2 O 3 ) and contained in a Pt crucible, and the growth was performed by the isothermal vertical dipping method [START_REF] Jm Robertson | Thin single crystalline phosphor layers grown by liquid phase epitaxy[END_REF]. The sample was attached to a Pt sample holder, which rotated during the growth at a speed of 70 rpm, with the direction of rotation alternated every 5 s. The thicknesses of the films were determined by weight measurement and ranged from 0.3 to 30 μm. The growth was performed at temperatures between 980 and 1080 °C, resulting in growth rates in the range 0.05 to 2.34 μm/min.
More explanation about the liquid phase epitaxy technique for optical materials can be found in [START_REF] Denis Pelenc | Elaboration par epitaxie en phase liquide et caracterisation de couches monocristallines de yag dope: realisation de lasers guide d'onde neodyme et ytterbium a faibles seuils[END_REF] and [START_REF] Dhanaraj | Springer Handbook of Crystal Growth[END_REF].
Results
Figure 4.1 shows the concentration triangle for the pseudo-ternary system of the melt.
The system composed of Pb, B, Al, Gd and Lu is reduced to a pseudo-ternary system, on the three axes of which the relative atomic concentrations of Pb+B (flux), Al and Gd+Lu are reported. The round marks represent the conditions in which the growth of an aluminum perovskite film covering the whole surface of the substrate was achieved, regardless of the quality of the film (figure 4.2a-4.2b). The color represents the ratio R Lu . The melt is stable when the atomic ratio Pb/B is kept between 5 and 6, meaning that the growth speed is linear with the temperature (and repeatable over different growth runs). The growth parameters and the relative Al, Gd and Lu concentrations in the melt are reported in table 4.2. When islands were crystallized together with the film, the thickness and the growth rate are not reported, due to the lack of a precise evaluation by the weighing method.
(Table 4.2: R Lu , R Al = Al/(Gd+Lu), obtained structure (Str.; f = film, i = islands), thickness (Th. [μm]), growth temperature T [°C] and growth rate (G.R. [μm/min]) for the LPE growth of Gd x Lu 1-x AlO 3 on YAP. When islands were crystallized, the thickness and the growth rate are not reported due to the impossibility of a precise evaluation by the weighing method.)
The best optical and structural morphology was obtained for R Lu between 0.55 and 0.6. The composition of the films was analyzed using a Castaing Cameca SX50 electron probe micro analysis (EPMA) system equipped with a tungsten cathode and 4 vertical spectrometers.
GdLuAP-YAP lattice mismatch minimization
The growth of GdLuAP single crystal films was performed by LPE on YAP substrates with different orientations. The lattice parameters of the film were tuned by a careful optimization of the film composition to reduce the mismatch with the substrate. X-ray diffraction techniques were used in combination with electron microprobe analysis and electron microscopy to improve the growth conditions and the crystallographic and optical quality of the films.
Experimental
The surface morphology was investigated using a LEO 1530 scanning electron microscope (SEM).
The crystallographic structure of the GdAP and GdLuAP films and the lattice mismatch with the YAP substrate were evaluated using X-ray diffraction (XRD) on a vertical reflectometer at the BM05 beamline of the ESRF (Grenoble). The X-ray energy was set to 15 keV using a double crystal Si(111) monochromator. The diffraction spectra were recorded using a silicon diode. The in-plane diffraction experiments were carried out using a six-circle z-axis diffractometer installed at the ID03 beamline of the ESRF (Grenoble) [START_REF] Balmes | The ID03 surface diffraction beamline for in-situ and real-time X-ray investigations of catalytic reactions at surfaces[END_REF]. The sample was kept in an argon flow during the experiment in order to prevent damage induced by oxygen and ozone. In order to penetrate the film and identify the crystallographic orientation of the substrate, the energy of the incident beam was set to 24 keV. The data were acquired using a Maxipix detector; data reduction and analysis were performed using BINoculars [START_REF] Roobol | BINoculars: data reduction and analysis software for two-dimensional detectors in surface X-ray diffraction[END_REF].
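As a small illustration of the diffraction geometry, the sketch below estimates the Bragg angles expected for the symmetric reflections probed in the omega-2theta scans at 15 keV; the YAP lattice parameters used are approximate literature values (an assumption, since table 4.4 is not reproduced here).

```python
# Sketch: expected Bragg angles for the symmetric omega-2theta scans at 15 keV.
# YAP lattice parameters are approximate literature values (assumption).
import math

E_KEV = 15.0
WL = 12.3984 / E_KEV            # X-ray wavelength [Angstrom]
A_YAP, C_YAP = 5.18, 7.37       # approximate YAP a and c parameters [Angstrom]

def bragg_theta(d_hkl, wavelength=WL):
    """Bragg angle in degrees for a lattice spacing d_hkl [Angstrom]."""
    return math.degrees(math.asin(wavelength / (2.0 * d_hkl)))

print("theta(002) =", round(bragg_theta(C_YAP / 2), 2), "deg")
print("theta(400) =", round(bragg_theta(A_YAP / 4), 2), "deg")
```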
Results
By varying the Lu percentage in the melt composition, and therefore in the film, different surface morphologies have been observed (figure 4.2). Depending on the substrate orientation, an optimal concentration of Lu and Gd in the melt leads to a homogeneous film surface (figure 4.2a), while for a different melt composition the film surface is wavy (figure 4.2c) and the optical quality of the film is not good enough for imaging. The best results were obtained for R Lu between 0.5 and 0.6. The optical quality of the film, which is strictly connected to the crystalline quality and to the surface morphology, depends on the lattice mismatch between the substrate and the film. In table 4.4, the lattice parameter values for GdAP [START_REF] Nl Ross | High-pressure structural behavior of GdAlO 3 and GdFeO 3 perovskites[END_REF], LuAP [START_REF] Vasylechko | Anomalous low-temperature structural properties of orthorhombic REAlO3 perovskites[END_REF] and YAP [START_REF] Nl Ross | High-pressure single-crystal X-ray diffraction study of YAlO 3 perovskite[END_REF] single crystals are reported: the calculated mismatch between GdAP (or LuAP) and YAP is different in the three crystallographic directions due to the orthorhombic structure. A significant mismatch reduction can be achieved for Gd x Lu 1-x AlO 3 with x ≈ 0.5. Figure 4.4 shows the omega-2theta scans at 15 keV around the (400) symmetric reflection for the (100)-oriented samples (a) and around the (002) symmetric reflection for the (001)-oriented samples (b). The ratio between the diffracted intensities of the substrate and the film is not constant among the different samples, due to differences in film thickness, composition and crystal structure. The lattice mismatch has been evaluated from the distance between the GdLuAP diffraction peak and the YAP diffraction peak:
the measured value of the lattice mismatch for different samples is reported in figure 4.5 as a function of R film Lu . Since the composition was not measured for every sample, the composition of the films grown at the same melt concentration was approximated by the composition of the measured samples. However, a slight difference in the R Lu ratio between different samples can be observed, mainly due to differences in temperature and growth rate. This effect has to be taken into account as a source of error for the results reported in this plot.
As expected, the distance between the two peaks decreases going towards Gd 0.45 Lu 0.55 AlO 3 . However, the minimum mismatch in the two directions does not occur at the same film composition.
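A minimal way to see why the optimum composition lies close to x ≈ 0.5, and why the minimum mismatch differs between axes, is a Vegard-law interpolation of the film lattice parameters; the sketch below uses approximate literature lattice parameters (assumptions), not the table 4.4 values.

```python
# Sketch: Vegard-law estimate of the Gd_x Lu_(1-x) AlO3 / YAP lattice mismatch
# along the three orthorhombic axes. Lattice parameters are approximate
# literature values (assumption).
import numpy as np

A_GDAP = np.array([5.25, 5.30, 7.44])   # a, b, c of GdAlO3 [Angstrom]
A_LUAP = np.array([5.10, 5.33, 7.30])   # a, b, c of LuAlO3 [Angstrom]
A_YAP  = np.array([5.18, 5.33, 7.37])   # a, b, c of YAlO3  [Angstrom]

def mismatch(x_gd):
    """Relative mismatch (film - substrate)/substrate for Gd_x Lu_(1-x) AlO3."""
    a_film = x_gd * A_GDAP + (1.0 - x_gd) * A_LUAP   # linear (Vegard) mixing
    return (a_film - A_YAP) / A_YAP

for x in (0.10, 0.45, 0.90):
    da, db, dc = 100.0 * mismatch(x)
    print(f"x = {x:.2f}: da = {da:+.2f}%  db = {db:+.2f}%  dc = {dc:+.2f}%")
```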
We can observe in figure 4.4 the broad and asymmetric peaks related to the films. Such a peak shape (asymmetric, broader) is typical for quasi-heteroepitaxial growth with a relatively large lattice mismatch (above 1%). It indicates a worse structural quality of the films due to deviations in their content, plane orientation, and the formation of a film/substrate transition layer. Thus, the SCF is still single crystalline but possesses a worse structural quality than in the homo-epitaxy case (figure 4.4, left column, graphs for the Gd 0.45 Lu 0.55 AlO 3 :Eu sample). In the right column, for this R Lu the peaks from the substrate and the film strongly overlap and appear as a single broader peak. Together with the omega-2theta scans, rocking curves around the (002) reflection were recorded (figure 4.6) for GdLuAP films grown on the same kind of substrate and at the same conditions, except for the different R Lu ratios. In the case of Gd 0.9 Lu 0.1 AlO 3 , R Lu is equal to 0.2 in the melt, Δc ≈ +0.8% along the (001) direction and the observed peak of the film is much larger than that of the substrate, indicating that the crystallinity is deteriorated with respect to that of the substrate.
On the contrary, for the rocking curve of Gd 0.45 Lu 0.55 AlO 3 , R Lu is equal to 0.55 (Δc ≈ +0.25%) and the diffraction peak width is similar to that of the substrate, indicating a similar crystallinity. The setup mounted on the beamline BM05 only allows the study of the symmetric Bragg reflections, i.e. the families of planes parallel to the crystal surface.
To confirm that the film is a single crystal and not a polycrystal with a preferred grain orientation perpendicular to the surface, in-plane Bragg reflections were also studied, using the diffractometer at the beamline ID03. The X-ray beam impinges on the sample at a small angle with respect to the sample surface (grazing incidence geometry). To separate the potential contributions of the substrate and the film to the X-ray diffraction response, out-of-plane diffraction experiments were repeated at different incidence angles. The results are presented in figure 4.7. Since lower incidence angles favor the film response, the diffraction peak originating from the film can be clearly identified. Diffraction rings or additional peaks were not observed, demonstrating that the film is a single crystal and is oriented as the substrate.
Film thickness evaluation
In the LPE process many films are grown from the same melt. Between the growths of two samples, the melt is homogenized using a Pt stirrer for at least two hours. Afterwards, the melt requires approximately 1 hour to stabilize in temperature. The lifetime of the melt, the number of films grown and the delay between two growth processes depend on many parameters such as the temperature, the melt composition, the crucible size and the setup of the furnace. In the process developed at the ESRF, the melt is kept at a temperature in the range 950-1150 °C for a few weeks.
Typically two or three samples are grown every day with a few hours waiting time between them.
The film thickness needs to be determined just after the growth for every sample. Techniques often used to determine the thickness of a film are based on optical interference or X-ray reflectivity. However, in the case of thin film scintillators these techniques cannot be easily applied: the first because the refractive indices of the film and the substrate are extremely close, the second because the film is highly absorbing at small incidence angles. Other techniques damage or destroy the sample; for example, the thickness can be measured by SEM imaging of the sample side, but the sample needs to be cleaved.
The weighing method, i.e. the thickness determination from the weight gain during the growth process, is a quick and cheap way to measure the film thickness. The measurement uncertainty of this method was estimated to be approximately 5 % for 10 μm thick films [START_REF] Denis Pelenc | Elaboration par epitaxie en phase liquide et caracterisation de couches monocristallines de yag dope: realisation de lasers guide d'onde neodyme et ytterbium a faibles seuils[END_REF]. However, this estimate is valid only if the two films grown on the two largest surfaces of the substrate have the same thickness and the growth on the edges is negligible.
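A minimal sketch of the weighing method under these assumptions (equal films on both faces, negligible edge growth, an assumed film density) is given below.

```python
# Sketch of the weighing method: film thickness from the mass gained during
# growth, assuming two films of equal thickness on the two polished faces and
# negligible growth on the edges. The density value is an assumption.
def thickness_um(mass_gain_mg, density_g_cm3=7.9, face_area_mm2=100.0, n_faces=2):
    """Return the film thickness in micrometres."""
    volume_mm3 = mass_gain_mg * 1e-3 / density_g_cm3 * 1e3   # mg -> g -> cm3 -> mm3
    return volume_mm3 / (n_faces * face_area_mm2) * 1e3      # mm -> um

# Example: ~16 mg gained on a 10 x 10 mm2 substrate -> ~10 um per face
print(round(thickness_um(15.8), 2), "um")
```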
Figure 4.8: Cross-sectional SEM image of a GdLuAP film on a (001)-oriented YAP substrate. The thickness was found to be homogeneous; the estimate from the SEM image is 5.5 μm, while the thickness determined using the weighing method is 9.6 μm.
In the case of the GdLuAP films, a few samples were cleaved and the film thickness was evaluated from cross-sectional SEM microscopy (figure 4.8) and compared with the values obtained from the weighing method. Among the considered orientations, the film thickness evaluated by the weighing method was found to be correct for the (011)- and (100)-oriented samples, while it was overestimated by approximately a factor of 2 for the (001)-oriented ones, for film thicknesses in the order of 10 μm.
This difference is due to the growth on the edges of the substrate. The GdLuAP films were grown on square YAP substrates. The geometry and crystallographic orientation of the surfaces for the (001)-oriented samples are reported in figure 4.9. The two surfaces where the two SCFs are grown have an area of 10 × 10 mm 2 and are polished down to a roughness of 5 Å. The four substrate edges have an area of 10 × 0.5 mm 2 each and are not polished. Since the total surface where the SCFs are grown (top and bottom) is equal to 10 times the lateral surface, a lateral growth rate equal to 10 times the SCF growth rate results in an estimate of the film thickness by the weighing method of twice the real value. The surfaces of the edges are oriented (100) and (010) for a (001)-oriented substrate.
In figure 4.3(a) the growth rate for GdLuAP SCFs on different substrate orientations is reported as a function of the temperature. The growth rate on the (010)-oriented surfaces was not evaluated in this work. We can observe that, for a given temperature in the supersaturation range, the growth rate on the (100)-oriented surfaces is significantly higher than on the (001)-oriented ones. At 1012 °C, the growth rate is ≈ 0.25 μm/min on the (001) orientation and 1.5 μm/min on the (100). The lateral growth rate cannot be precisely estimated from these data, due to the differences in the growth rate on a polished surface compared to a rough one, and to the missing information about the (010)-oriented surface. However, a non-negligible lateral growth is expected for the (001)-oriented substrates, which explains the overestimation of the film thickness by the weighing method. On the contrary, when the substrate is (100)-oriented, the lateral growth on the (001) lateral surfaces is expected to be close to zero, leading to a correct estimate of the film thickness by the weighing method.
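The following sketch quantifies how fast edge growth inflates the weighing-method estimate for the (001)-oriented 10 × 10 × 0.5 mm3 substrates; the edge-to-face growth-rate ratio is a free parameter (assumption), since it could not be measured directly.

```python
# Sketch: how fast edge growth inflates the weighing-method thickness for the
# 10 x 10 x 0.5 mm3 substrates. The edge growth rate is a free parameter
# (assumption), expressed relative to the face growth rate.
def overestimation_factor(edge_to_face_rate,
                          face_area=2 * 10 * 10,    # top + bottom faces [mm2]
                          edge_area=4 * 10 * 0.5):  # four edges [mm2]
    """Ratio between the apparent (weighing) and the real film thickness."""
    extra_mass_fraction = edge_to_face_rate * edge_area / face_area
    return 1.0 + extra_mass_fraction

print(overestimation_factor(0.0))   # 1.0 -> no edge growth, correct thickness
print(overestimation_factor(10.0))  # 2.0 -> edge rate 10x faster, factor-2 error
```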
Conclusions
An LPE process to grow GdLuAP:Eu SCFs on YAP bulk substrates has been developed, using a PbO-B 2 O 3 based flux. The improvement of the film crystallographic structure and surface quality with the reduction of the film-substrate mismatch has been demonstrated using X-ray diffraction techniques. A non-negligible contamination of the SCFs by Pt from the crucible has been detected, while the Pb contamination from the flux is less significant.
Chapter 5
Gd and Lu perovskites X-ray imaging properties
The scintillation properties of the newly developed GdLuAP thin films, doped with various rare earth ions, are presented in this chapter. The Eu-doped GdLuAP SCFs have been optimized for imaging and their performance as scintillators for high spatial resolution detectors has been compared with the state-of-the-art Eu-doped GGG SCFs. The effect of the birefringence of the aluminum perovskite crystals on the image quality is also presented.
Scintillation properties
Experimental
To evaluate the light yield (LY), the scintillator was irradiated by 8 keV X-rays and the signal was recorded by a PCO Sensicam camera combined with 2X optics. The signal intensity was corrected by the calculated absorption of the X-rays in the scintillator and by the sensor quantum efficiency, and compared to the signal obtained with a bulk YAG:Ce sample chosen as reference (produced by Crytur). The photoluminescence spectra were measured at room temperature (RT) using a Horiba/Jobin-Yvon Fluorolog-3 spectrofluorimeter with a 450 W xenon lamp and a Hamamatsu R928P photomultiplier. The photoluminescence excitation (PLE) spectra were corrected for the xenon lamp emission spectrum.
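A minimal sketch of this normalization is given below; the numerical inputs are placeholders (assumptions), not the measured values.

```python
# Sketch of the relative light-yield evaluation: the camera signal is corrected
# for the X-ray energy deposited in each scintillator and for the sensor
# quantum efficiency at the emission wavelength, then normalised to the
# bulk YAG:Ce reference. Input numbers below are placeholders (assumption).
def relative_ly(signal, deposited_fraction, qe, ref_signal, ref_deposited, ref_qe):
    """Light yield of the sample as a fraction of the YAG:Ce reference."""
    sample = signal / (deposited_fraction * qe)
    reference = ref_signal / (ref_deposited * ref_qe)
    return sample / reference

# e.g. a thin film absorbing 12% of the beam vs. a bulk reference absorbing 95%
print(f"{relative_ly(5.0e3, 0.12, 0.45, 6.0e4, 0.95, 0.60):.2f}")
```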
Results
The perovskite SCFs can be doped with various rare earth ions providing the scintillation properties. In this work europium, terbium and cerium were tested as activators. In figure 5.1 the emission spectra under UV excitation of GdAP:Tb 3+ , GdAP:Ce 3+ and GdLuAP:Eu 3+ are reported. The dopant concentrations R melt X in the melt, defined as the atomic ratio R melt X = X/(X+Gd+Lu) (where X = Tb, Ce, Eu), were 6.3%, 0.5% and 2.0% for the GdAP:Tb 3+ , GdAP:Ce 3+ and GdLuAP:Eu 3+ samples, respectively. The Ce 3+ doped sample shows a broad UV band due to the electric-dipole-allowed d-f radiative recombination. The emission maximum at 360 nm is typical for cerium in perovskite phases; consistent with the XRD results, no residual garnet phase is observed optically. Since the transmission of the optical components of high spatial resolution detectors is generally close to zero for wavelengths below 400 nm, this dopant was not selected for further optimization in the frame of this work. Eu 3+ and Tb 3+ exhibit the expected emission lines from the f-f recombination, in the red and green ranges respectively, which are well transmitted by most of the available optics. Note that the divalent europium emission, normally located in the UV-blue region, was not observed. While various activators can be incorporated in the film, our final aim is to obtain the best light yield, combined with an appropriate emission wavelength and a good optical quality for X-ray micro-imaging. We therefore focused on the optimization of Eu 3+ doped GdLuAP SCFs with R Lu ≈ 0.55. As described above, this composition shows the smallest lattice mismatch with YAP substrates and thus leads to SCFs of the best optical quality, which is crucial for imaging and for a proper evaluation of the scintillation yield. Using a standard experimental set-up including a pulsed excitation source operating at 404 nm, we measured the fluorescence decay time of GdLuAP:Eu 3+ at the 614 nm emission wavelength and found a value of 1.49 ms. This means that Eu doped perovskite SCFs are suitable for imaging experiments at acquisition frame rates lower than 500 Hz. In figure 5.2b the LY of different Eu-doped GdLuAP SCFs is reported as a percentage of the reference bulk YAG:Ce LY. Note that the measurement was corrected for the absorption in the film, the light emission of the substrate and the quantum efficiency of the camera. The densities being of the same order of magnitude, a similar penetration depth of the X-rays in the samples is expected, which allows a similar light collection efficiency to be assumed from sample to sample. On the (011)-oriented substrates, the optimized light yield is about 90 % of the bulk YAG:Ce scintillator used as reference. The LY of the GGG:Eu 3+ SCFs is around 70 %, while the currently used LSO:Tb SCF shows a scintillation yield of 100 %. In terms of efficiency, the GdLuAP:Eu SCFs can therefore compete with the existing SCFs, especially in the energy range 52-63 keV, where the absorption of the Lu-based materials is lower than that of the Gd-based ones. The figure of merit for 5 μm thick GGG, LSO and GdLuAP is plotted in figure 5.3. The FoM is calculated as in equation 2.2, including the scintillator MTF response and the deposited energy from the MC model described in chapter 2, and the light yield experimentally evaluated at 8 keV, assuming the scintillators to be perfectly proportional with the X-ray energy and flux.
The GdLuAP:Eu FoM is 1.3 times higher than that of GGG:Eu and up to 3.4 times higher than that of LSO:Tb in the energy range 52-63 keV, while in the energy range 64-80 keV it decreases down to 0.7 times the LSO:Tb value and increases up to 4 times the GGG:Eu value. Another important aspect to underline in figure 5.2 is the dependence of the GdLuAP:Eu scintillation yield on the crystallographic orientation: it is in fact only 20 % for the (100)-oriented samples and 40-60 % for the (001)-oriented ones. For the (001)-oriented samples, taking into account the overestimation of the film thickness due to the fast lateral growth on the sample edges, the light yield is approximately 30-40 %, still lower than for the (011)-oriented samples. The reason for this difference is not yet clear and requires more detailed investigations. So far, we can exclude that it is due to a different segregation coefficient of Eu and therefore to a different Eu concentration in the film. The Eu concentration in the films was measured by EPMA and the obtained values are similar for samples of different orientations grown at the same melt concentration (figure 5.2-a). Moreover, the Eu concentration in the melt was varied from R melt Eu = 0.5 % to 5 %: in this range, no significant variation of the scintillation yield was observed. Therefore, this difference is most likely due to the different strain and defects that could lead to a different efficiency of the energy transfer between the perovskite crystal and the Eu atoms. The Pb and Pt contents were evaluated by XRF on three samples; the results are reported in table 4.3. The Pb content is close to the detection limit and does not seem correlated with the light yield. For the (011)-oriented sample (LY ≈ 90 %) the Pt and Pb contents are comparable. In comparison to the (011)-oriented sample, the Pt content is twice as high for the (001)-oriented sample (LY ≈ 40-60 %) and ten times higher for the (100)-oriented sample (LY ≈ 20 %). A different segregation coefficient of Pt in the film could explain the different light yields. However, more experiments to improve the statistics are needed to confirm this result.
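As an illustration of how such a figure of merit can be compared across scintillators, the sketch below assumes, as a simplification of equation 2.2, a multiplicative combination of deposited energy, light yield and MTF contrast at 500 lp/mm; the input numbers are placeholders, not the thesis data.

```python
# Sketch of a figure-of-merit comparison, assuming (as a simplification of
# equation 2.2) that the FoM is the product of the energy deposited in the
# film, the measured light yield and the MTF contrast at 500 lp/mm.
# The numbers below are illustrative placeholders, not the thesis data.
def fom(e_dep, light_yield, mtf_500):
    return e_dep * light_yield * mtf_500

scintillators = {
    #                 e_dep   LY (vs YAG:Ce)  MTF @ 500 lp/mm
    "GdLuAP:Eu": dict(e_dep=0.10, light_yield=0.90, mtf_500=0.55),
    "GGG:Eu":    dict(e_dep=0.09, light_yield=0.70, mtf_500=0.55),
    "LSO:Tb":    dict(e_dep=0.12, light_yield=1.00, mtf_500=0.55),
}
ref = fom(**scintillators["LSO:Tb"])
for name, params in scintillators.items():
    print(f"{name:10s} relative FoM = {fom(**params) / ref:.2f}")
```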
High-resolution X-ray imaging
The imaging properties of the perovskite SCFs were tested at the ESRF on the beamline BM05. The scintillators were mounted in a high spatial resolution detector, equipped with microscope optics and a PCO2000 CCD camera. The scintillators were polished down to 170 μm (total thickness of film plus substrate) to match the standard correction for the glass coverslip implemented in most commercial microscope objectives. In figure 5.4, the flat field images recorded using (a) GdAP:Tb, (b) Gd 0.9 Lu 0.1 AlO 3 :Tb, (c) Gd 0.45 Lu 0.55 AlO 3 :Eu and (d) GGG:Eu scintillators are compared. When the lattice parameter of the film is not optimized to reduce the mismatch with the substrate, the flat field image is inhomogeneous (a, b) due to the presence of regions where the film is thicker, or of surface structures leading to light scattering which enhances the light collection from the scintillator. This effect reduces the dynamic range of the detector and the image quality. In the case of the Gd 0.45 Lu 0.55 AlO 3 :Eu SCF, the lattice mismatch is not reduced to zero but it is significantly reduced, and the flat field (c) is as homogeneous as the one obtained with the homoepitaxially grown GGG:Eu (d), demonstrating that the perovskite film possesses the optical quality required for high spatial resolution imaging. The X-ray radiography of a fly recorded at 12 keV using different scintillators is shown as an illustration in figure 5.5. The flat field correction has been applied to all the images. When the optical quality of the film is degraded by the lattice mismatch, as in the case of the GdAP SCF on a YAP substrate (fig. 5.5a), the image is distorted and the small details cannot be clearly identified, in contrast to the case of SCFs with higher optical quality (fig. 5.5b). For the GdLuAP:Eu SCFs (fig. 5.5c), the image quality is at least as high as the one obtained using a state-of-the-art GGG:Eu SCF. To quantify the image quality, the modulation transfer function was calculated for the different scintillators. The optical quality obtained for the GdLuAP films on YAP is at least comparable to that of the state-of-the-art GGG SCF scintillator. The possibility of obtaining the optical quality required for high spatial resolution imaging by reducing the mismatch of the film with the substrate through tuning of the film composition was thus demonstrated.
5.3 Effect of the scintillator birefringence on the MTF
Many transparent solids, e.g. glasses, many polymers, or crystals with a cubic structure, are optically isotropic. This means that the index of refraction is the same along every direction in the material, because the arrangement of atoms, and therefore the electronic structure of the material, is the same along the three axis directions. The interaction of light with an isotropic material does not depend on the angle between the propagation direction of the light and the material axes: the light is refracted at a constant angle, travels at a single velocity and is not polarized by the interaction with the electronic structure. Crystals with a non-symmetric structure are often optically anisotropic: the index of refraction depends on the propagation direction of the light and on its polarization. These anisotropic materials, defined as birefringent, can be uniaxial or biaxial. In the first case an axis can be found around which a rotation of the crystal does not change its optical behavior, because all the directions perpendicular to this axis are optically equivalent. This axis is called the optic axis, and light propagating along it behaves as light passing through an isotropic material. A uniaxial material can be described by two indices of refraction: an ordinary index n o , governing the light polarized perpendicularly to the optic axis, and an extraordinary one n e , governing the light polarized along it. Biaxial materials are characterized by three indices of refraction n α , n β , n γ and have two optic axes. When the light propagates along a direction different from the optic axes, both polarizations are considered as extraordinary, with two different indices of refraction. The dependence of the refractive index on the propagation direction of light in an anisotropic crystal is represented by a geometrical figure called the optical indicatrix [START_REF] Robert E Newnham | Properties of materials: anisotropy, symmetry, structure[END_REF]. The birefringence phenomenon was already observed in 1669 in calcium carbonate [START_REF] August | Nova Experimenta Crystalli Islandici Disdiaclastici1[END_REF], with n o = 1.658 and n e = 1.486 at 590 nm, one of the crystals presenting the strongest birefringence. Looking at an object through this crystal results in a double image due to the double refraction of the light reflected by the object (figure 5.8(a)). However, the phenomenon was not understood until A. J. Fresnel described light in terms of waves, including its polarization, more than one century later. Today the phenomenon is widely exploited in various applications, from optical microscopy to medical diagnostics and liquid crystal displays. In some fields, however, birefringence can create problems: it is critical, for example, for the fabrication of high-resolution UV optics used for semiconductor lithography, for the transparency of ceramic materials and for the spatial resolution obtained when birefringent scintillators are employed for imaging applications. In figure 5.8(b), line patterns along two perpendicular directions are observed through a calcite crystal:
depending on its orientation, the image of the chart is doubled along a certain direction. For a certain position of the crystal, which we take as the 0° reference, the doubling of the image is along the horizontal direction; the resolution is therefore strongly degraded along this direction and not affected along the vertical one. When the crystal is rotated by 180°, the situation is opposite. For every position between 0° and 180°, the doubling and the resolution degradation affect both directions to some extent. However, an object is more complex than a resolution chart and contains details in every direction, as for example the fly in figure 5.5. The birefringence will affect some directions more than others, but the overall image quality will be degraded. The example of calcite is well known and often shown, since the difference in optical path between the ordinary and the extraordinary rays is visible by eye for a sufficiently thick crystal.
To compare this example with the case of the GdLuAP scintillators considered in this work, we should first compare the birefringence B, evaluated as
B = n max − n min (5.1)
where n min and n max are, respectively, the lowest and highest refractive indices of the material. B is approximately 0.2 for calcite and 0.02 for YAP crystals [START_REF] Marvin | Handbook of optical materials[END_REF]. The difference in optical path D depends not only on B but also on the distance to traverse, i.e. the thickness t of the crystal:
D = B · t . (5.2)
D is typically a few mm for a cm-thick calcite crystal; a double image is therefore easily visible by eye (see for example fig. 5.8). The thin film scintillators considered in this work are only 10-20 μm thick, but the visible light image produced in the scintillator also traverses the substrate, so a total thickness of approximately 170 μm should be considered. YAP is a biaxial crystal: it therefore has three refractive indices n α , n β and n γ along three optical directions, corresponding to the three crystallographic axes in the case of an orthorhombic crystal structure. By convention, the three refractive indices n α , n β and n γ are named from the lowest to the highest value to avoid confusion in the case of monoclinic and triclinic crystal structures, where the optical directions are not parallel to the crystallographic ones. For orthorhombic YAP, each of the indices n α , n β and n γ can be associated with one of the lattice parameters a, b and c. The three refractive indices at 600 nm for the YAP crystal structure [START_REF] Kw Martin | Indices of Refraction of the Biaxial Crystal YAIO 3[END_REF] are reported in table 5.1, after associating them with the three crystallographic directions a, b and c. The birefringence B varies from 0.009 to 0.024 depending on the crystallographic orientation. D is therefore of the order of a few micrometers, non-negligible when compared to the spatial resolution we are aiming for.
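A quick order-of-magnitude check of equation 5.2 for the two cases discussed above is sketched below; the refractive-index values are the approximate literature numbers quoted in the text.

```python
# Sketch: optical path difference D = B * t (equation 5.2) for calcite and for
# the YAP-based scintillator stack. Refractive-index values are approximate
# literature numbers (assumption).
def path_difference_um(birefringence, thickness_um):
    return birefringence * thickness_um

print(path_difference_um(1.658 - 1.486, 10_000))  # ~1720 um for 1 cm of calcite
print(path_difference_um(0.024, 170))             # ~4 um for film + YAP substrate
print(path_difference_um(0.009, 170))             # ~1.5 um for the least birefringent cut
```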
To study the effect of the scintillator's birefringence on the quality of the images obtained using high-resolution X-ray detectors, the contrast and the spatial resolution were measured for different rotation angles of the scintillator around its surface normal. These measurements were performed using both the resolution chart and the slanted edge method. The experiments were performed on the ESRF beamline BM05, at an X-ray energy of 16 keV. The setup of the experiment is the same as the one presented in section 3.3. The high-resolution detector is equipped with microscope optics of numerical aperture 0.4 and 10X magnification, a 3.3X eyepiece and a PCO2000 camera. The pixel size of the whole setup is 0.22 μm. The measurement performed using the JIMA-C006-R:2006 resolution chart is similar to the example reported in figure 5.8(b). However, for X-ray imaging, the image is not formed by the light reflected by the object: the X-ray flux partially absorbed in the object irradiates the scintillator and produces a visible light image which traverses the scintillator and the substrate and is projected onto the CCD camera. Hence, the image quality does not only depend on the crystal's birefringence, but also on the X-ray beam divergence, the geometry of the investigated object and the spread of the energy deposited in the scintillator. However, by evaluating the contrast along two perpendicular directions while varying the scintillator's rotation angle around its surface normal, we can determine whether the birefringence plays a role in the detector's performance.
Figure 5.9: X-ray images of the 1.5 μm horizontal and vertical line patterns of the JIMA-C006-R:2006 resolution chart and extracted profiles in the vertical (V) and horizontal (H) direction, for three different angles of the scintillator around its normal. The calculated contrast C = (I max - I min )/(I max + I min ) is reported in the legend. The X-ray energy is 16 keV, the scintillator is a (110)-oriented 11.5 μm thick GdLuAP:Eu SCF on YAP, the microscope optics numerical aperture is 0.4 and the final pixel size is 0.22 μm (10X/0.4 + 3.3X + PCO2000).
In figure 5.9, the images of the 1.5 μm line patterns, as well as the extracted profiles along the vertical (V) and horizontal (H) directions in the image, are reported for three different positions (0°, 45° and 90°) of the scintillator around its surface normal. The scintillator is a (011)-oriented GdLuAP:Eu film on a YAP substrate. The in-plane orientation of the scintillator was not measured; the reported angles are therefore relative to the 0° reference and should not be associated with a specific crystallographic orientation. It is important to underline that the X-ray beam presents a larger divergence along the horizontal direction than along the vertical one; higher contrast and higher resolution are therefore expected along the vertical direction. However, for a certain scintillator angle (0°), the contrast in the H direction is higher than in the V direction. When the scintillator is rotated by 90°, the highest V contrast is obtained. At 45°, an intermediate situation is observed: the contrast measured along V is still higher than along H due to the beam divergence, but the difference between the two profiles H and V is reduced with respect to the 90° case. In figure 5.10, the CTF measured along the H and V directions is reported for different scintillators as a function of the rotation angle. GGG:Eu, having a cubic structure, is not birefringent, hence its CTF does not vary with the angle and the difference between the CTF measured along V and H remains constant, because it is only due to the different beam divergence. The GdLuAP:Eu (100)-oriented scintillator (a) behaves similarly to GGG:Eu, although a slightly higher dispersion of the CTF values is observed. The CTF values obtained along the H or V direction for the (011)-oriented GdLuAP:Eu strongly depend on the rotation of the scintillator. Additionally, as already shown in figure 5.9, at 0° the contrast measured along the H direction is higher than along the V one, while when the scintillator is rotated by 90° the highest CTF along the vertical direction is measured. The same effect was observed for the (110)-oriented GdLuAP scintillator: at 45° the CTF along H is higher than along V, and the maximum CTF along V was measured at 135°. A smaller spread of the CTF values is observed compared to the (011)-oriented scintillator. Finally, in the case of the (001)-oriented GdLuAP:Eu, the measured contrast is influenced by the angle, but no position where the H contrast is higher than the V contrast was found.
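The contrast values quoted in the legend of figure 5.9 follow the usual Michelson definition; the sketch below shows the calculation on a synthetic profile (the profile itself is an assumption, used only for illustration).

```python
# Sketch: Michelson contrast C = (Imax - Imin)/(Imax + Imin) extracted from a
# line profile across the 1.5 um pattern, as used for the values quoted in the
# legend of figure 5.9. The profile below is synthetic (assumption).
import numpy as np

def contrast(profile):
    profile = np.asarray(profile, dtype=float)
    i_max, i_min = profile.max(), profile.min()
    return (i_max - i_min) / (i_max + i_min)

x = np.linspace(0.0, 6.0, 200)                            # position [um]
synthetic = 1000.0 + 300.0 * np.sin(2 * np.pi * x / 3.0)  # 1.5 um lines, 3 um period
print(f"C = {contrast(synthetic):.2f}")                   # -> 0.30
```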
To quantify the results obtained with the CTF measurements, figure 5.10(f) reports the standard deviation of the CTF with respect to the average contrast of the measurements at the three different scintillator angles. In the same plot, the birefringence values calculated from the literature (table 5.1) for the different YAP orientations are also reported. The results obtained from the CTF measurements are in good agreement with the birefringence values. Among the considered orientations, the (011) is expected to show the highest birefringence and the (100) the lowest one. The (010) orientation should in principle present an even higher birefringence than the (011), but no scintillators were obtained on this orientation in this work. When the spread of the deposited energy at higher X-ray energy, or the out-of-focus light when a thicker scintillator is selected, dominates the resolution, the effect of the birefringence may become negligible. On the contrary, by selecting a lower X-ray energy and using a detector configuration for higher spatial resolution, the effect may become even more important.
Conclusions
The feasibility of sub-micrometer resolution X-ray imaging using GdLuAP:Eu has been demonstrated. The light yield depends on the substrate and film crystallographic orientation. For the (011)-oriented samples a light yield higher than that of the state-of-the-art GGG:Eu SCF scintillators was obtained. The figure of merit, obtained from the efficiency of the scintillator and its MTF response, shows that the new GdLuAP:Eu SCFs can compete with the existing SCFs, especially in the range 52-64 keV. However, the birefringence of the (011)-oriented aluminum perovskite crystals is non-negligible when sub-micrometer spatial resolution is required. Consequently, further investigations are required to optimize the LPE process on the (100)-oriented YAP substrates and increase the light yield up to the value obtained for the (011)-oriented ones. A further reason why Lu 2 O 3 is a good candidate for high-resolution imaging is its cubic crystal structure and its optically isotropic properties: the resolution is therefore not expected to be degraded by birefringence (see section 5.3 for a detailed explanation).
Lutetium oxide liquid phase epitaxy growth
For the LPE growth of lutetium oxide SCFs, a bulk SC substrate with the same structure and lattice parameters close to those of the film is required. Many sesquioxide materials such as Lu 2 O 3 , Y 2 O 3 or Gd 2 O 3 are difficult to grow as bulk single crystals due to their high melting point, which is above 2400 °C. However, much progress has been made recently, and promising results have been obtained using techniques which lower the growth temperature by means of solvents, as for example the hydrothermal or the flux methods [START_REF] Veber | Flux growth and physical properties characterizations of Y 1.866 Eu 0.134 O 3 and Lu 1.56 Gd 0.41 Eu 0.03 O 3 single crystals[END_REF][START_REF] Mcmillen | Hydrothermal single-crystal growth of Lu2O3 and lanthanide-doped Lu2O3[END_REF]. Up to now, however, the production of optically good crystals with a volume of a few cubic centimeters has only been reported using a modified version of the Bridgman technique, the so-called heat exchanger method (HEM) [START_REF] Veber | Flux growth and physical properties characterizations of Y 1.866 Eu 0.134 O 3 and Lu 1.56 Gd 0.41 Eu 0.03 O 3 single crystals[END_REF], which is still in a development stage. In this work the Lu 2 O 3 films were grown on SC Lu 2 O 3 substrates produced by FEE GmbH using the HEM technique. Due to the development state of the bulk growth, the substrates were not oriented along a preferential direction, and in some of them the presence of grains with a different orientation than the rest of the substrate was observed by X-ray Laue diffraction.
The solvent used for the growth of the films was composed of PbO and B 2 O 3 5N pure powders with an atomic ratio Pb/B of 5.1-5.5.
Growth parameters for the Lu 2 O 3 :Eu SCFs (table columns: R melt s [%], R melt Eu [%], T [°C], G.R. [μm/min], Th. [μm], L.Y.).
The solute/solvent ratio in the first melt (A) was varied around the value R melt s = 4.2.
Good quality films were obtained, but a growth rate below 1 μm/min could not be reached, because the saturation temperature was approximately 1250 °C, higher than the maximum working conditions of the LPE furnace (maximum growth temperature ≈ 1100 °C). Low light yields, about 5 % of that obtained using the reference YAG:Ce SC, were measured. The europium concentration in the melt R melt Eu was gradually increased up to 5.6 %; no variation of the L.Y. was observed in this range. In the second melt (B) the R melt s ratio was reduced in order to study lower growth rates. Compared to the YAG:Ce reference, the maximum light yield obtained was 20 %. For R melt Eu in the range 3.5-22.3 %, no dependence of the L.Y. on R melt Eu was observed.
Secondly, the platinum content is comparable with that of the (011)-oriented GdLuAP SCF, which shows the lowest Pt content of all studied orientations and the highest light yield. Thirdly, the europium segregation coefficient in the Lu 2 O 3 films is only 0.11-0.15, while it is approximately 1 for the GdLuAP SCFs. The Eu content in the film increases linearly with the Eu content in the melt. The maximum value of R film Eu was found to be 3.4 %, for R melt Eu = 22.3 %. The supersaturation temperature range of the melt was observed to shrink with increasing R melt Eu , which meant that Lu 2 O 3 films could not be grown for R melt Eu above 22.3 %. However, no significant variations of the light yield were observed while the europium content in the melt was increased. Therefore, the low europium segregation in the film is probably not responsible for the low light yield. Lastly, the most surprising result is the high content of zirconium measured in the film. The Zr originates from the Pt crucible, in which it is used as a reinforcement with a concentration of approximately 200 ppm. The ratio R film Zr increases with the Eu content. This can be explained by the fact that the samples with higher Eu content were produced later, so that more of the crucible material had been incorporated into the melt. The Zr content cannot be compared with the GdLuAP SCFs, since those were grown using a Y-reinforced (Zr-free) crucible, which is not produced anymore. The lead and zirconium contaminations may be responsible for the low light yield of the lutetium oxide films. A different solvent, with a lower lead content, needs to be studied to clarify this point. XRD measurements were performed to confirm the growth of Lu 2 O 3 single crystal films with the cubic phase. The experiment was performed at 15 keV on the reflectometer of the ESRF beamline BM05 (see the experimental section in chapter 4). The XRD pattern is reported in figure 6.2 for a 3.5 μm thick (111)-oriented Lu 2 O 3 :Eu SCF grown from melt B. The europium ratio R melt Eu was 10 %, corresponding to approximately 1 % in the film. Omega-2theta scans were performed around the (222), (444) and (666) symmetric Bragg reflections (figure 6.2 a, b, c). The substrate peak is located at higher angles than the film peak. For the (222) Bragg reflection, the substrate peak is not completely separated from the film peak, due to the smaller angular separation and to the higher absorption in the film at low angles. The mismatch between the film and substrate lattice parameters, Δ = (a film − a substrate )/a substrate , was found to be 0.04 %. The complete omega-2theta diffraction pattern is reported in figure 6.2d; additional peaks were not observed. The rocking curves around the (444) Bragg reflections of the film and of the substrate are reported in figure 6.3. To compare the FWHM, the two curves are normalized and shifted to zero. Similar rocking curves are obtained for the film and the substrate, indicating a comparable crystalline quality. The low light yield requires further investigation, but it is probably linked to the high content of lead and zirconium in the film, the former coming from the solvent and the latter from the Pt crucible. The low solute concentration which is needed to keep the growth temperature below 1100 °C results in a melt which strongly corrodes the platinum crucible and sample holder and contaminates the films.
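The effective segregation coefficient mentioned above is simply the ratio of the dopant concentration in the film to that in the melt; a minimal sketch, using values consistent with the orders of magnitude quoted in the text, is given below.

```python
# Sketch: effective segregation coefficient of the dopant between melt and
# film, k = R_film / R_melt, using EPMA-measured atomic ratios. The values
# below reproduce the orders of magnitude quoted in the text.
def segregation_coefficient(r_film, r_melt):
    return r_film / r_melt

print(f"Eu in Lu2O3 : k = {segregation_coefficient(0.034, 0.223):.2f}")   # ~0.15
print(f"Eu in GdLuAP: k = {segregation_coefficient(0.020, 0.020):.2f}")   # ~1
```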
In our case, the next steps are the investigation of different solvent compositions with a reduced percentage of lead and the test of different Pt crucibles, possibly Zr-free. Additionally, the scintillation properties of the Lu 2 O 3 SCFs, activated using different dopants and co-dopants, should be investigated. Which phenomenon ultimately limits the spatial resolution depends, of course, on the balance between defocus and scintillator response and, therefore, on the numerical aperture of the optics, the composition of the scintillator and the X-ray energy.
Based on the results of the simulations, lutetium oxide (Lu 2 O 3 ) and gadolinium- or lutetium-based aluminum perovskites (GdAP, GdLuAP) have been selected as candidate materials for the liquid phase epitaxy based development of thin SCF scintillators.
Perspectives
The developed Geant4 application can now be used both to choose the detector configuration and to guide the development of new scintillators. However, to run a simulation the user should have a basic knowledge of the Matlab and C++ programming languages. If a user-friendly interface is created, the application could be released to the beamlines to help in the choice of the detector configuration.
Moreover, Geant4 includes the possibility to simulate the scintillation and track the optical photons. A few preliminary tests have been performed; the results are not included in this work. By additionally tracking the optical photons, more configurations could be evaluated. For example, the MTF degradation and the light collection improvement could be evaluated for optical coatings at the surfaces of the scintillator or for modified geometries (curved substrates, structured scintillators).
GdAP and GdLuAP SCF scintillators were grown on YAP SC substrates. The optical quality of the films needed for high-resolution imaging was obtained after optimizing the GdLuAP film composition to reduce the lattice mismatch with the substrate. The Eu-doped GdLuAP films show a scintillation LY which is higher than that of the state-of-the-art GGG scintillators. The LY, however, depends on the substrate orientation, probably due to a different amount of platinum impurities incorporated in the film. X-ray images obtained using the newly developed films show a slightly better contrast at low energy (15 keV). It was observed that the image quality is also affected by the crystallographic orientation. This became apparent due to the birefringence of the perovskite crystals, since this phenomenon degrades the resolution in the sub-micrometer range. The orientation presenting the highest LY is also the one most strongly affected by the birefringence.
Perspectives
GdLuAP:Eu SCFs can compete with the state-of-the-art SCFs as scintillators for high-spatial resolution detectors. However, more investigations are required to clarify the role of the substrate orientation on the light yield. The growth process needs to be modified to improve the LY of the films grown with the orientations least affected by the birefringence. Moreover, the YAP substrates present a luminescence in the UV and visible range that can degrade the resolution at high energies; this luminescence can only be partially suppressed using an optical filter. Therefore, a YAP growth process developed in collaboration with companies or laboratories that produce bulk SC YAP should be foreseen in order to reduce or suppress the unwanted luminescence. Finally, the scintillation properties of dopants other than europium in the GdLuAP host, such as cerium or terbium, should be investigated.
Depending on the combination of the different parts of the detector, i.e. the scintillator, the microscope optics and the camera, the spatial resolution can ultimately be limited by different phenomena. Firstly, the broadening of the region in the scintillator where the energy of the incident X-ray photon is deposited: the energy is not localized in a single point, but spreads due to Rayleigh and Compton scattering and to the diffusion of secondary X-ray photons and electrons. Secondly, the microscope optics act as a circular aperture with respect to the emitted light; as a consequence, the best image of a point source that can be projected is limited by the width of the first diffraction fringe, which depends on the wavelength of the light and on the numerical aperture of the optics. Thirdly, the microscope optics have a depth of field, which corresponds to the maximum thickness of the source along the optical axis (i.e. the thickness of the scintillator) that can be projected in focus; the light produced outside this depth degrades the spatial resolution. Finally, the pixel size of the camera can limit the spatial resolution. In figure 7.2, the figure of merit FoM, calculated from the absorption efficiency of the film E dep and from the value of the MTF at 500 lp/mm, is plotted as a function of the energy. In principle the light yield should also be taken into account in order to calculate the true efficiency of each scintillator, but this parameter cannot be predicted precisely before the material is developed and was therefore not included in the calculation. Depending on the photon energy, the FoM varies with the composition of the scintillators. The most important role is played by the K absorption edges of the elements present in the scintillator and in the substrate. If the energy exceeds the fluorescence production threshold of the substrate, the value of the MTF curve at low frequency is reduced, due to the fluorescence photons produced in the substrate which come back into the film. This effect is best illustrated for a film deposited on an yttrium aluminum perovskite substrate: above 17 keV, i.e. above the K absorption edge of yttrium, the contrast drops to 80 % at low spatial frequencies. If the secondary electrons produced in the substrate are not taken into account, the result is identical, but when the secondary X-ray photons produced in the substrate are removed from the simulation, the sharp drop of the MTF at low frequency disappears.
The MTF curves were compared with measurements performed on the BM05 beamline at the ESRF; some results are presented in figure 7.4. The contrast in the MTF curve is increased because of the phase contrast, but the variation with the different scintillators and with the X-ray energy is clearly visible. Analytical calculations were added to take into account the diffraction of the light and the depth of field of the objective. The MTF curves calculated at different positions within the thickness of the scintillator (MTF j ) are modified by the variations of the numerical aperture, the emission wavelength of the scintillator and the distance from the focal plane of the optics (equation 7.1). The total MTF is given by the weighted average of the MTF j . The diffraction peak corresponding to the film approaches the peak corresponding to the substrate as lutetium is added, up to an optimum for a ratio R Lu = Lu/(Lu+Gd) ≈ 0.5. At the same time, the width of the film peak is reduced for a smaller lattice mismatch. The improvement of the surface quality between the GdAP and GdLuAP films is illustrated in figure 7.7 by scanning electron microscopy (SEM) images of a Gd 0.10 Lu 0.90 AlO 3 film (large lattice mismatch with the substrate, Δ = 1.12 %) and a Gd 0.45 Lu 0.55 AlO 3 film (small mismatch, Δ = -0.04 %). The GdLuAP:Eu films obtained in this way are very promising for high-resolution imaging.
The films were doped with different europium concentrations in order to optimize the scintillation yield. A light yield of ≈90 % with respect to that of a YAG:Ce single crystal, used as reference, was measured. The yield does not depend strongly on the europium concentration (within the measured range), but it depends, more surprisingly, on the orientation of the YAP substrate. A possible explanation is the segregation of platinum, which is incorporated differently in the films, but the origin of this difference is not yet completely clear. The GdLuAP:Eu films were tested as scintillators for high-resolution imaging and compared with GGG:Eu thin films; a higher contrast was measured for GdLuAP:Eu (figure 7.8).
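The depth-weighted MTF average mentioned in this summary can be sketched as follows; taking the energy deposited at each depth as the weight is an assumption consistent with the Monte Carlo output described earlier, not necessarily the exact weighting used.

```python
# Sketch: total detector MTF as the weighted average of the depth-resolved
# MTF_j curves, with weights taken as the energy deposited at each depth z
# (assumption consistent with the Monte Carlo output described above).
import numpy as np

def total_mtf(mtf_j, energy_j):
    """mtf_j: (n_depths, n_frequencies) array; energy_j: (n_depths,) weights."""
    w = np.asarray(energy_j, dtype=float)
    w = w / w.sum()
    return np.average(np.asarray(mtf_j, dtype=float), axis=0, weights=w)

# toy example: 3 depth slices, 4 spatial frequencies
mtf_j = [[1.0, 0.8, 0.5, 0.2],
         [1.0, 0.7, 0.4, 0.1],
         [1.0, 0.6, 0.3, 0.05]]
print(total_mtf(mtf_j, energy_j=[0.5, 0.3, 0.2]))
```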
Conclusion
A model to evaluate the spatial resolution of high-resolution detectors for X-ray imaging has been set up. The model has been validated against experimental measurements and can now be used to predict the spatial resolution of new films to be developed, as well as to help in the choice of the best X-ray detector configuration. Two new types of single crystal thin film scintillators have been developed and characterized. The scintillators based on a combination of gadolinium and lutetium perovskite, doped with europium, have a good light yield and a good optical quality, which depends strongly on the crystallographic orientation. The orientation presenting the highest yield is not suited for very high resolution imaging because of the birefringence, which degrades the image quality. Consequently, the LPE growth process must be improved for the orientations presenting a reduced birefringence, with the aim of reducing the contaminations in the film and improving the light yield. The lutetium oxide based scintillators have a very high absorption efficiency and a very good optical quality, but a very low yield compared with the other thin film scintillators. The large quantity of lead needed to lower the growth temperature sufficiently produces a melt which is very corrosive for the platinum crucible, and therefore strong contaminations in the film, which are probably the reason for the low scintillation yield observed. A new type of melt, less rich in lead, must therefore be studied to reduce or suppress this contamination problem.
Figure 1 . 1 :
11 Figure 1.1: Multi-scale 3D X-ray imaging of fossil owers. (a) X-ray synchrotron microtomography showing the spatial organization of the inorescence. Gray, sediment; purple, inorescence receptacle and perianth units; orange, pollen sacs; green, staminal laments. Data recorded at the ESRF beamline BM05, 25 keV. (b, c, d) X-ray nano-tomography of (b) a pollen sac, (c) virtual dissection of a pollen grain (d) sub-micrometer structures inside a pollen grain. Data recorded at the ESRF beamline ID22-NI (today ID16A-NI), 29.5 keV. For more details see (1).
1. 2
2 Detectors for synchrotron imaging applications 1.2.1 Indirect 2D detectors
Figure 1 . 4 :
14 Figure 1.4: A Periodic grating is a series of objects separated by a nite distance. Their images are larger than the real object sizes due to the nite resolution of the detector.The images start to overlap when the distance between two objects is close to the spatial resolution limit, reducing the contrast.
Figure 1 . 5 :
15 Figure 1.5: Scheme of the scintillation process in a wide band-gap material. The process is divided in three step: conversion, transport and luminescence (25).
Figure 1 . 6 :
16 Figure 1.6: Dependence of the detector performances on the scintillator properties.
Figure 1 . 7 :
17 Figure 1.7: Light is transmitted dierently depending on the structure of the scintillator. (a) In powder scintillators it is scattered at the grains boundaries, resulting in a spreading of the light distribution and a degradation of the spatial resolution. Transparent ceramic scintillators are aected by a similar phenomenon. (b) In single crystals, the light travels up to the surface without being scattered. Only the fraction of light below the critical angle can exit the scintillator. (c) Structured scintillators act as a light guide, enhancing the light collection outside the scintillator (32).
Figure 1 . 8 :
18 Figure 1.8: Comparison between the image of an X-radia resolution chart obtained using a 5 μm Gadox powder scintillator and a thicker single crystal LSO:Tb scintillator (38).
Figure 1 . 9 :
19 Figure 1.9: Comparison between the image of a grid pattern (detail size 0.9 μm) obtained with (left) a sub-micrometer structured 6 μm thick Lu 2 O 3 :Eu scintillator and (right) a single crystal 8 μm thick GGG:Eu scintillator. In the inserts the at eld images are reported.
Figure 2 . 2 :
22 Figure 2.2: Geometry and axis convention in the Geant4 application.
Figure 2 . 3 :
23 Figure 2.3: Geant4 output: the energy distribution in the scintillator is a matrix where each line corresponds to the LSF calculated at a specic z in the depth of the scintillator (a). The matrix of the MTFs calculated at every depth z can be calculated by the Fourier transform of the LSF(b).
not fully optimized.At low energy (15 keV, gure 2.4(a)) no signicant dierences among the response of LSO, GGG, GdAP and Lu 2 O 3 scintillators are observed, but these materials all present high density, from GGG with a density of 7.1 g/cm 3 up to Lu 2 O 3 with a density of 9.4 g/cm 3 . As comparison the CsI scintillator which has a density of 4.5 g/cm 3 , has also been reported in gure 2.4(a). It shows a broader PSF and a contrast in the MTF at least 10% worse than the other denser considered materials, for spatial frequencies above 500 lp/mm. At 20 keV (gure 2.4(b)), the MTFs are lower than at 15 keV. Additionally, the curve obtained for GdAP presents a sharp decrease at the low spatial frequencies which, compared with the other investigated materials, leads to a contrast reduction of ≈ 20%.
Figure 2 . 4 :
24 Figure 2.4: PSFs and MTFs calculated from the simulation of the deposited energy in a
Figure 2 . 5 :
25 Figure 2.5: Contrast at 500 lp/mm as a function of the incident X-ray energy, for dierent scintillators. The values are extracted from the MTFs calculated using the MC model. The thickness of the scintillating lm and substrate are 5 and 150 μm, respectively.
Figure 2 . 6 :
26 Figure 2.6: (a) Percentage of the incident energy attenuated by 5 μm thick scintillators, calculated using the NIST database (68). (b) Percentage of the incident energy deposited in 5 μm thick scintillators on 150 μm thick substrates, calculated tracking all the secondary
Figure 2 . 7 :
27 Figure 2.7: Attenuation of the scintillator calculated using NIST database and compared with the energy deposited in the scintillator which was calculated using Monte Carlo calculations, for a 11.4 μm thick Gd-LuAP lm on a YAP substrate.
Figure 2 . 9 :
29 Figure 2.9: (a)-top Energy deposited, (a)-bottom MTF and (b) number of interactions
Figure 2 .
2 Figure 2.10: PSF and MTF curves at dierent z in the depth of the scintillator, for a free-standing 5 μm thick GdAP scintillator at 49 and 55 keV.
Photoelectric
Figure 2 . 12 :
212 Figure2.12: Energy spectra of the (a) secondary electrons and (b) secondary X-rays generated per incident X-ray in 25 μm of free-standing GdAP. The average energy of the secondary electrons and X-rays is reported in the legend.
Figure 2 . 13 :
213 Figure 2.13: Spatial distribution of the (a) X-ray and (b) electron interactions, calculated for 5 μm of GdAP on a YAP substrate at 15 and 20 keV.
Figure 2 . 14 :
214 Figure2.14: MTF calculated for 5 μm of GdAP on YAP at 18 keV, tracking all the particles (black) or killing the secondary electrons (dashed red) or X-rays photons (dashed green) produced in the substrate.
Figure 3 . 1 :
31 Figure 3.1: Depth of eld and Rayleigh frequency as a function of the numerical aperture and of the wavelength. The magnication M associated with the NA is reported in the top Xaxis.
Figure 3 . 2 :
32 Figure 3.2: Spatial resolution limit as a function of NA calculated including light diraction and defect of focus due to the scintillator thickness, for λ = 550 nm. A similar result was already published by Koch et al. in (4).
Figure 3 . 3 :
33 Figure 3.3: Scintillator MTF calculated using Monte Carlo (MTF scint ), and total MTF with NA = 0.4 (MTF tot ), calculated at 15 keV for dierent thicknesses of free-standing GdAP scintillators.
Figure 3 . 4 :
34 Figure 3.4: Contributions to MTF tot (scintillator + optics) of the dierent phenomena: energy spread in the scintillator (MTF scint ), light diraction (MTF diffraction ) and defocus (MTF opt ). NA = 0.8, scintillator and energy indicated in the plots.
3. 2 Figure 3 . 5 :
235 Figure 3.5: MTF tot of 0.4 and 5 μm thick LSO:Tb lms on YbSO substrate, a 5 μm thick GdAP:Eu lm on YAP substrate and a 25 μm thick LuAG:Ce free-standing crystal, evaluated for dierent NA and X-ray energies.
Figure 3 . 8 :
38 Figure 3.8: Setup for the high-spatial resolution measurement using the slanted edge method.
Figure 3 . 9 :
39 Figure 3.9: Experimentally measured (continuous lines) and simulated (dashed lines) MTFs at (a) 15 keV and (b) 30 keV. Dierent scintillators are combined with microscope optics of NA 0.45 and the PCO2000 camera. The total magnication is 66X, corresponding to a pixel size of 0.11 μm.
Figure 3 . 10 :
310 Figure 3.10: Experimentally measured (continuous lines) and simulated (dashed lines) MTFs at (a) 16 keV and (b) 18 keV, for a high-resolution detector equipped with optics of NA 0.4, total magnication 33X, pixel size 0.22 μm.
Figure 3 .
3 Figure 3.11: (a) LY as function of the X-ray energy, corrected by the X-ray ux and (b) the attenuation of the scintillator (NIST) and (c) the energy deposited (MC, G4).
samples) and no spontaneous crystallization at the surface of the melt or on the stirrer is observed. The black crosses indicate the melt concentration where the crystallization of islands (with a dierent composition with respect to the lm) was observed (gure 4.2e-4.2f ). Along the vertical orange dashed line the atomic concentration of Al and Gd+Lu in the melt agreed with the stoichiometry of the perovskite phase (i.e. R Al = 1).
Figure 4.1: Concentration triangle of the pseudo-ternary system Gd + Lu, Al, Pb + B studied for the LPE growth of GdxLu1-xAlO3 on YAlO3. The colour of the round marks indicates the Lu/(Gd+Lu) ratio. The black crosses indicate when the crystallization of islands is preferred to the film growth.
Figure 4.2: SEM images of different surface morphologies obtained for different conditions. (a),(b): Gd0.45Lu0.55AlO3 film (low lattice mismatch with the substrate) for substrates from different suppliers. (c),(d): Gd0.10Lu0.90AlO3 film (high lattice mismatch with the substrate) for substrates from different suppliers. (e),(f): Island growth in the case of excess Al concentration in the melt.
Figure 4.4 shows the omega-2theta scans at 15 keV around the (400) symmetric reflection for the (100) oriented samples (a) and around the (002) symmetric reflection for the (001) oriented samples (b). The ratio between the diffracted intensities of the substrate and the film is not constant among the different samples, due to differences in film thickness, composition and crystal structure. The lattice mismatch has been evaluated
Figure 4.6: Rocking curves around the (002) reflection for GdLuAP films and YAP substrates, (001)-oriented. The spectra have been shifted to set the maximum of the peak at 0°. Red triangles correspond to the case of the low lattice mismatch (Gd0.45Lu0.55AlO3) and the blue squares to the high lattice mismatch case (Gd0.9Lu0.1AlO3).
Figure 4.7: Reciprocal space map around the (212) reflection for a Gd0.45Lu0.55AlO3 film on a (001)-oriented YAP substrate, recorded at 24 keV. To enhance the substrate and the film contribution the maps have been recorded at incident angle 0.2° (left) and at incident angle 0.05° (right).
Figure 4.9: Scheme of the geometry and crystallographic orientations for a (001)-oriented GdLuAP on a YAP substrate. Two single crystal films are grown on the top and bottom polished surfaces (area = 10 × 10 mm2). Lateral growth can be observed on the edges, not polished (area = 10 × 0.5 mm2).
Figure 5.1: Emission spectra of GdAP:Ce, GdAP:Tb and GdLuAP:Eu SCFs under UV excitation (excitation wavelength reported in the plot).
Figure 5.2: (a) Eu concentration in the film R_Eu^film as a function of the Eu concen-
Figure 5.3: Figure of merit including the total efficiency (Deposited energy × LY) and the scintillator spatial resolution (contrast at 500 lp/mm), calculated as in equation 2.2 for 5 μm thick GGG, LSO and GdLuAP scintillators.
Figure 5.4: Flat field images at 15 keV using 20X/0.45 optics (field of view 0.7 mm): (a) GdAP on YAP, (b) Gd0.9Lu0.1AlO3 on YAP, (c) Gd0.45Lu0.55AlO3 on YAP and (d) GGG on GGG.
Figure 5.5: Image of a fly with a 2X/0.08 microscope objective and (a) GdAP:Tb 17.1 μm, (b) LSO:Tb 5.6 μm, (c) GdLuAP 11.4 μm, (d) GGG:Eu 11.2 μm.
Figure 5.8: (a) Example of the double image of an object when observed through a calcium carbonate (calcite) crystal. (b) Qualitative examples of the effect of the rotation of the crystal on the image, obtained using the Nikon online tutorial (89).
Figure 5.10: (a-e) CTF curves measured using a JIMA-C006-R:2006 resolution chart at 16 keV along two perpendicular directions H and V, varying the angle of the scintillator around its surface normal. (f) Birefringence value and standard deviation calculated for the CTF measurements reported in (a-e). The detector was equipped with 10X NA=0.4 microscope optics, 3.3X eyepiece and CCD camera PCO2000. The final pixel size is 0.22 μm.
Both the compositions A and B lead to a stable melt and to the formation of a film with homogeneous thickness, good optical quality and a homogeneous surface. A SEM image of the surface of a Lu2O3 film and the cross-sectional image of that film are shown in figure 6.1. The thicknesses measured from cross-sectional SEM images agree with the values calculated from the weight gain.
Figure 6.4: Emission spectra of a Lu2O3:Eu SCF scintillator under X-ray irradiation at 8 keV.
Figure 6.5: Figure of merit including the total efficiency (Deposited energy × LY) and the scintillator response (contrast at 500 lp/mm), calculated as in equation 2.2 for 5 μm thick GGG, LSO and GdLuAP scintillators.
Figure 6.5 compares the figure of merit (absorption × light yield × MTF at 500 lp/mm), calculated using the simulation results from chapter 2, for Lu2O3:Eu films as compared to GGG:Eu, LSO:Tb and GdLuAP:Eu SCFs. Although the absorption of Lu2O3 is higher, its efficiency is reduced because of the low
Chapter 7
Conclusions
771 Modelling of the high-resolution detectorIndirect detectors are today often preferred for absorption and phase contrast imaging experiments at synchrotrons. Some of the main advantages over direct semiconductor detectors are the possibility of managing high X-ray uxes, the lower price and the resistance to radiation damage. Indirect detectors using thin SCF scintillators and microscope optics are capable of sub-micrometer spatial resolution. Additionally, the detector's resolution and eld of view can be adapted to suit the demands of the experiment. The thickness of the scintillator and its composition play a crucial role in the delicate compromise between spatial resolution and eciency of the detector, especially at high X-ray energy.In the rst part of this work a model to simulate the MTF of high-spatial resolution detectors was presented. Using Monte Carlo simulations, the contribution of the scintillator to the MTF of the detector was evaluated from the distribution of the energy deposited in the scintillator. To take the microscope optics into account, the distribution of the deposited energy was corrected for the light diraction and for the defocus, as a function of the distance between the focal and any other parallel plane in the scintillator. The total MTF response of the detector was evaluated as the sum of superimposing images produced from the dierent planes in the scintillator.The model was experimentally validated. It showed good capability to predict both the detector's MTF and the amount of energy deposited in the lm, as a function of the scintillator material, the microscope optics and the X-ray energy.Dierent compositions of scintillating lm and substrate were simulated for energies ranging from 5 to 80 keV. The MTF response was found to depend mainly on the K-7.1 Modelling of the high-resolution detector edge uorescence of the lm and the substrate. The MTF values decrease with the X-ray energy, but a signicant improvement was observed above the K-edge of the lm, due to the higher cross-section of the photoelectric eect and due to the lower energy of the photoelectrons. Therefore, Gd-based lms outperform Lu-based lms for energies ranging from 51 to 64 keV, while the Lu-based lms are more performant between 64 and 80 keV. On the contrary, the K-edge of the substrate degrades the MTF since the created uorescence X-rays interact with the lm and reduce the contrast at low spatial frequencies. Compared to scintillators on a Y-based substrate, the ones on a Gd or Lu-based substrate are more performant in the 17 to 50 keV range and less performant in the range from 50 to 80 keV.Without taking the scintillator response into account, the best scintillator thickness equals the depth of eld. The model introduced in this work does take the scintillator response into account and allows one to nd the detector conguration needed to obtain the best MTF and higher absorption eciency. In fact the model shows that a thicker scintillator can outperform a thinner one, if the energy distribution is sharper.
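The per-plane combination described above can be illustrated with a minimal numerical sketch: the scintillator is sliced along its depth, each slice contributes an MTF degraded by diffraction and by defocus relative to the focal plane, and the contributions are weighted by the energy deposited in that slice. The function names, the Gaussian defocus/diffraction models and any numerical values below are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def detector_mtf(freqs_lpmm, z_planes_um, e_dep_weights, mtf_scint_planes,
                 wavelength_um=0.55, na=0.4, z_focus_um=0.0):
    """Weighted sum of per-plane MTFs (illustrative model, not the thesis code).

    freqs_lpmm       : spatial frequencies [lp/mm]
    z_planes_um      : depth of each slice in the scintillator [um]
    e_dep_weights    : energy deposited in each slice (used as weights)
    mtf_scint_planes : array-like (n_planes, n_freqs), scintillator MTF per slice
    """
    f_per_um = np.asarray(freqs_lpmm, dtype=float) / 1000.0        # lp/mm -> lp/um
    # Diffraction: crude Gaussian roll-off with cutoff ~ NA/lambda (assumption)
    mtf_diffraction = np.exp(-(f_per_um * wavelength_um / na) ** 2)
    total = np.zeros_like(f_per_um)
    for z, w, mtf_s in zip(z_planes_um, e_dep_weights, mtf_scint_planes):
        blur_um = abs(z - z_focus_um) * na                          # geometric defocus blur (assumption)
        mtf_defocus = np.exp(-2.0 * (np.pi * f_per_um * blur_um) ** 2)
        total += w * np.asarray(mtf_s, dtype=float) * mtf_diffraction * mtf_defocus
    return total / float(np.sum(e_dep_weights))
```

The focal-plane position z_focus_um can then be scanned to maximise, for instance, the returned MTF at a reference frequency, mirroring the selection criterion described in the text.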
7.2 Gadolinium and lutetium aluminum perovskite SCF scintillators
7. 3
3 Lutetium oxide SCF scintillatorsUndoped and Eu-doped Lu 2 O 3 SCFs were grown on SC Lu 2 O 3 substrates. Homogeneous lms were obtained, showing high crystalline and optical quality. The imaging performance is comparable with the state-of-the-art SCF scintillators. However, the 7.3 Lutetium oxide SCF scintillators conversion eciency is unexpectedly low when compared to Lu 2 O 3 :Eu scintillators developed using other techniques than LPE. The scintillation in LPE based crystals is probably quenched by lead and zirconium impurities in the lm which originate from the solvent and the platinum crucible, respectively. The amount of lead used to keep the growing temperature within the limitation of the furnace was in fact signicantly higher than for other materials (e.g. GdLuAP, LSO and GGG), resulting in a melt that corroded the crucible. If the melt composition can be modied to reduce the luminescence quenching, Lu 2 O 3 could become a very welcome addition to the SCF scintillator family. Due to its absorption eciency being higher than most other known scintillators, and due to the high-optical quality that can be obtained through LPE growth, Lu 2 O 3 remains one of the best candidates for high-resolution imaging at high X-ray energies.7.3.1 PerspectivesThe reason of the low light yield observed in Lu 2 O 3 :Eu SCFs should be investigated. On the one hand, the role of the traps could be claried for example using thermo stimulated luminescence experiments. On the other hand, the solvent used for the LPE should be modied to reduce the corrosion of the crucible and thus the melt contamination. Other kind of Zr-free platinum crucibles have to be tested to clarify the role of the zirconium in the luminescence quenching. Moreover, dierent dopants, as for example Tb, are known as good activators in the Lu 2 O 3 host, and should be investigated in the case of SCFs grown by LPE.Résumé IntroductionLes détecteurs de rayon-X utilisés pour l'imagerie à haute résolution spatiale (micromètrique ou sub-micromètrique) utilisés aux synchrotrons sont pour la plupart basés sur un système de détection indirect. Les rayons X ne sont pas directement convertis en signal électrique, mais ils sont absorbés par un scintillateur, un matériau qui émet de la lumière à la suite de l'absorption d'un rayonnement ionisant. L'image émise sous forme de lumière visible est ensuite projetée par des optiques de microscopie sur une caméra 2D, de type CCD ou CMOS. Diérents types des scintillateurs sont disponibles aujourd'hui: en poudre compactée, micro structuré, sous forme céramique polycristalline et monocristalline. Pour obtenir une résolution spatiale au dessous d'un micromètre avec une très bonne qualité d'image, une couche mince (1-10 μm) monocristalline doit être privilégiée.
tiale. La taille physique du pixel est réduite grâce au grossissement de l'image visible produit par les optiques. Pour un grossissement susant et un scintillateur plus mince que la profondeur de champ des optiques, le système est limité soit par la diraction de la lumière, soit par la diusion de l'énergie déposée dans le scintillateur.Pourtant, la profondeur de champ est inférieur à 10 μm pour une ouverture numérique supérieure à 0.3, donc l'ecacité du détecteur est limitée par l'absorption dans la couche, surtout pour des énergies au dessus de 20 keV.Le travail qui est présenté dans cette thèse est centré sur l'évaluation de la résolution spatiale des détecteurs et sur le développement de nouveaux matériaux monocristallin en couche mince, déposées par épitaxie en phase liquide sur un substrat.Calcul de la résolution spatialeLa première partie de la thèse décrit le modèle qui a été développé pour pouvoir prédire la résolution spatiale du détecteur selon l'énergie des rayons X, les paramètres du scintillateur (épaisseur, matériau, longueur d'onde d'émission) et l'ouverture numérique des optiques. Ce modèle est basé sur une combinaison de calculs Monte Carlo et d'équations analytiques. Le schéma du model est présenté sur la gure 7.1.
Figure 7.1: Scheme of the model developed for the simulation of the spatial resolution. The model includes the scintillator response and the effect of the microscope optics.
Figure 7.2: Figure of merit (contrast at 500 lp/mm × deposited energy) calculated for 5 μm thick films, as a function of the X-ray energy and of the film composition.
Figure 7.3: MTF calculated for a 5 μm GdAP film on a YAP substrate at 18 keV, considering all electrons and photons or suppressing the secondary particles created in the substrate.
Figure 7.6: Diffraction (omega-2theta) measurements for GdxLu1-xAlO3 films on a YAP substrate. (a) (100)-oriented substrates, 400 Bragg reflection, substrate peak at 18.56°; (b) (001)-oriented substrates, 002 Bragg reflection, substrate peak at 6.43°.
Figure 7.7: SEM images of the surface morphology for (a) a film of
Figure 7.8: Images of a tungsten resolution chart and the contrast values deduced as a function of the spatial frequency. 20X/0.4 optics and PCO2000 camera, 15 keV.
Figure 7.10: Figure of merit (MTF at 500 lp/mm × E_dep × LY) of 5 μm thick scintillators, as a function of the incident X-ray energy.
SiO5), LYSO (LuxY1-xSiO5) and GSO (Gd2SiO5) [START_REF] Szupryczynski | Thermoluminescence and scintillation properties of rare earth oxyorthosilicate scintillators[END_REF][START_REF] Ishibashi | Cerium doped GSO scintillators and its application to position sensitive detectors[END_REF]. It is worth noting that many efforts have been made in the development of the technologies to produce lutetium oxide (Lu2O3), one of the most dense known phosphors [START_REF] Dujardin | Synthesis and scintillation properties of some dense X-ray phosphors[END_REF]. This material shows good luminescence properties when doped with Eu or Tb activators, but the growth of single crystal Lu2O3 presents many problems due to its high melting point, above 2400 °C. However, progress has been reported (36, 37).
Transparent ceramic scintillators are polycrystalline materials made of tight, randomly oriented micro grains. Compared with their single crystal scintillator counterparts, the density is almost as high, the cost is inferior and larger samples can be produced. Many cubic materials can be prepared as transparent ceramics and they show good homogeneity, as is reported for Ce- or Nd-doped YAG (Y3Al5O12) or Lu2O3:Eu
Al5O12:Eu) and LuAG:Eu (Lu3Al5O12:Eu) on undoped YAG substrates. There-
have been recently under investigation: Ce-doped materials to exploit the faster decay 2.2 A mixed approach to simulate indirect detection
time (52), UV-emitting materials, to increase the resolution limit due to light dirac-
tion, aluminum perovskite (53) and lutetium oxide scintillators, of which the results are
presented in chapters 4, 5 and 6 of this work, to improve the stopping power at high energies. Chapter 2
Modelling of the scintillator's spatial response
Tous et
al. (50) as well as Zorenko et al. (51) have studied the eect of dierent uxes on garnet 2.1 Introduction
SCFs. The lms obtained with a BaO-based ux show better conversion eciency with High spatial resolution detectors used at synchrotrons exploit single crystal thin lms. respect to the lms obtained using a PbO-based ux. However, when a BaO-based ux Few scintillating materials are today available in this form, mainly because of the high is used, the optical quality and surface morphology are not as good as compared to a development and production cost as well as the small market. LuAG:Ce and YAG:Ce PbO-based ux. Secondly, LPE requires the availability of a non luminescent substrate bulk scintillators polished down to a few micrometers are produced by Crytur, while with the same crystalline structure and low lattice mismatch compared to the lm. LSO:Tb, GGG:Tb and GGG:Eu are grown on a substrate by liquid phase epitaxy The rst commercially available single crystal thin lms for imaging were YAG:Ce at the ESRF. New scintillators optimized for various applications are required, as for example fast scintillators with low afterglow for time resolved micro-tomography or (Y 3 after, the technology to produce GGG:Eu (Gd 3 Ga 5 O 12 : Eu) on GGG substrates was denser materials to improve the spatial resolution at high X-ray energies. The light developed at the CEA (Commissariat à l'énergie atomique et aux énergies alternatives) yield and the afterglow are the most dicult parameters to predict when developing a which was followed by the development of GGG:Tb (Gd 3 Ga 5 O 12 : Tb) at the ESRF(2). new scintillator, since these parameters often depends on the technique which is used LSO:Tb (Lu 2 SiO 5 : Tb) was developed during the ScintTax project, an european col-to produce the scintillator. On the contrary, the distribution of the energy deposited by laboration for the development of new thin lm scintillators (3). LSO:Tb SCFs are X-ray photons in the scintillator, which limits the spatial resolution at high energy, can grown on YbSO or LYSO:Ce substrates. In the case of LYSO:Ce, an optical lter has be accurately predicted down to a sub-micrometer scale thanks to the advancements of to be used to cut the Ce luminescence from the substrate. Monte Carlo (MC) techniques. A model to evaluate the detector's Modulation Transfer LSO:Tb and GGG:Eu are today the state-of-the-art scintillators used at synchrotrons Function (MTF) and guide the development of new scintillating materials is presented for sub-micrometer spatial resolution detectors. At the ESRF, a laboratory for the LPE based production of LSO:Tb, GGG:Eu and GGG:Tb SCFs scintillators has been in the next two chapters.
operational since 2010. The customers are mainly the ESRF imaging beamlines, other
synchrotrons and a few companies. Next to the production activity, other materials
[START_REF] Douissard | A novel epitaxially grown LSO-based thin-lm scintillator for micro-imaging using hard synchrotron radiation[END_REF] selected by maximizing the MTF of the full detector.
2.3 Monte Carlo Geant4 toolkit
Monte Carlo refers to a broad class of algorithms that use random sampling to find a quantitative solution to a problem. The method is widely applied in many fields (physics, finance, engineering, etc.) to solve problems that are not trivial to study with other techniques. The development of Monte Carlo methods started in the 1940s as part of the Manhattan project at the Los Alamos National Laboratory and has since developed rapidly thanks to the increase in computing power.
Geant4 [START_REF] Agostinelli | Geant4a simulation toolkit[END_REF][START_REF]Geant4 website[END_REF] is a Monte Carlo toolkit developed at the European Organization for Nuclear Research (CERN) to simulate the tracking of particles generated in high-energy experiments. Afterwards, it was extended to include low energy physics (down to 250
Table 2.1: Materials and their used names, chemical formula and single crystal densities used in the Geant4 simulations (AP = aluminum perovskite, G = garnet).
name                       short name   chemical formula    density [g/cm3]
Yttrium AP                 YAP          YAlO3               5.35 (61)
Gadolinium AP              GdAP         GdAlO3              7.50 (62)
Lutetium AP                LuAP         LuAlO3              8.40 (61)
Gadolinium lutetium AP     GdLuAP       Gd0.5Lu0.5AlO3      8.00
Lutetium orthosilicate     LSO          Lu2SiO5             7.40 (63)
Ytterbium orthosilicate    YbSO         Yb2SiO5             7.40
Gadolinium gallium G       GGG          Gd3Ga5O12           7.10 (64)
Lutetium oxide             Lu2O3        Lu2O3               9.50 (65)
Yttrium aluminum G         YAG          Y3Al5O12            4.55 (66)
Gadolinium aluminum G      GdAG         Gd3Al5O12           5.97 (66)
Lutetium aluminum G        LuAG         Lu3Al5O12           6.73 (66)
Cesium iodide              CsI          CsI                 4.51
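As a toy illustration of the random-sampling idea behind the Geant4 simulations described in Section 2.3, the sketch below samples photon interaction depths from an exponential attenuation law and counts the fraction of normally incident photons interacting within a thin film. The attenuation-coefficient value, the function names and the neglect of secondary electrons and fluorescence escape are simplifying assumptions; this is not a substitute for the full Geant4 model.

```python
import numpy as np

def fraction_interacting(thickness_um, mu_per_um, n_photons=1_000_000, seed=0):
    """Toy Monte Carlo: fraction of incident photons interacting within a film,
    assuming a single linear attenuation coefficient (assumption) and ignoring
    secondary particles and fluorescence escape (assumption)."""
    rng = np.random.default_rng(seed)
    # Interaction depths follow an exponential distribution with mean 1/mu.
    depths = rng.exponential(scale=1.0 / mu_per_um, size=n_photons)
    return np.mean(depths < thickness_um)

# Example: mu chosen so that about 1% of photons interact in 5 um of GdAP,
# loosely mimicking the 49 keV column of Table 2.2 (the value of mu is illustrative).
mu = 0.0022  # 1/um, assumed
print(fraction_interacting(5.0, mu), fraction_interacting(50.0, mu))
```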
Lu2O3 at 500 lp/mm increases from 20% at 55 keV up to 40% at 68 keV (fig. 2.4(e)).
[Figure: contrast at 500 lp/mm versus X-ray energy (0-80 keV) for 5 μm thick scintillators: GGG on GGG, GdAP on YAP, GdAG on YAG, LSO on YbSO, Lu2O3 on Lu2O3, LuAP on YAP, LuAG on YAG, GdLuAP on YAP and GdAP on GdAP substrates.]
While the Lu2O3 attenuation increases by approximately a factor of 4 from 61 keV to 64 keV, the simulated deposited energy only increases by a factor of 2.5. The substrate can also play a role. In figure 2.7, the percentage of attenuated energy calculated by NIST (black line), the energy deposited simulated by Geant4 (black dotted line) and the ratio E_dep/attenuation (blue line) are reported for an 11.4 μm thick GdLuAP scintillator on a YAP substrate.
The results are plotted in figure 2.8. This figure allows us to evaluate the best material to use in every energy range, while considering the spatial resolution, contrast and efficiency. The assumption that LuAG and LuAP are comparable above 63 keV, arising from figure 2.5, is wrong, as we now know that LuAP is a better compromise due to its higher absorption. LSO and Lu2O3 have a slightly lower MTF due to the substrate fluorescence background, but their FoM is better than those of the other materials considered. We can therefore conclude that below 50 keV and above 64 keV Lu2O3 is the most performant material, while between 50 keV and 64 keV GdAP is the best. A mixed composition such as GdLuAP can compete with both the existing GGG and LSO in the 50-75 keV range.
Figure 2.8: Figure of merit, defined as the value of the MTF at 500 lp/mm multiplied by the energy deposited in the scintillator, calculated for different 5 μm scintillators on a substrate. [Plotted on a log scale (1E-3 to 0.1) versus X-ray energy (0-80 keV) for GGG on GGG, GdAP on YAP, GdAG on YAG, LSO on YbSO, Lu2O3 on Lu2O3, LuAP on YAP, LuAG on YAG and GdLuAP on YAP.]
Table 2.2: Attenuation in GdAP calculated using the NIST database.
material       E_att at 49 keV   E_att at 55 keV
GdAP 5 μm      1.1%              3.7%
GdAP 50 μm     10.1%             31.4%
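The thickness dependence in Table 2.2 is consistent with a simple Beer-Lambert estimate: the 5 μm value at a given energy can be used to back-calculate an attenuation coefficient, which then predicts the 50 μm value. The short check below is only an internal-consistency sketch; it ignores the energy dependence within the film and any substrate contribution, and its function names are placeholders.

```python
import math

def mu_from_attenuation(att_fraction, thickness_um):
    """Back out a linear attenuation coefficient [1/um] from an attenuated fraction."""
    return -math.log(1.0 - att_fraction) / thickness_um

def attenuation(mu_per_um, thickness_um):
    """Beer-Lambert attenuated fraction for a given thickness."""
    return 1.0 - math.exp(-mu_per_um * thickness_um)

# 49 keV column of Table 2.2: 1.1% in 5 um predicts roughly 10% in 50 um.
mu_49 = mu_from_attenuation(0.011, 5.0)
print(round(100 * attenuation(mu_49, 50.0), 1))  # ~10.5%, close to the 10.1% in the table
```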
Table 3.1: Degradation of the spatial resolution R from 15 keV to 30 keV. R is evaluated from the spatial frequency where the contrast is 50%, ΔR = (R_15 keV - R_30 keV) / R_15 keV.
in the next one.
4.2 GdAP and GdLuAP liquid phase epitaxy
Experimental
GdAP and GdLuAP epitaxial single crystalline films were grown using LPE on YAP substrates of crystallographic orientation (001), (100) and (011) (defined in the Pbnm space group), produced by the Czochralsky method by MaTeck GmbH, Neyco and Scientific Materials Corp. Several series of samples of undoped and Ce, Tb or Eu doped films were grown from a PbO-B2O3 flux using Gd2O3, Lu2O3, Al2O3, Eu2O3, Tb4O7
Table 4.2: Atomic ratios between Gd, Lu and Al in the melt (R_Lu = Lu/(Gd+Lu) and
Table 4.4: Lattice parameters of GdAP, LuAP and YAP single crystals from literature (84, 85, 86) and calculated lattice mismatch. Lattice parameters in Å.
a b c cell vol.
GdAP (84) 5.2537 5.3049 7.4485 207.5923
LuAP (85) 5.0967 5.3294 7.2931 198.0957
YAP (86) 5.1803 5.3295 7.3706 203.4895
GdAP-YAP +1.417 % -0.462 % +1.057 % +2.016 %
LuAP-YAP -1.614 % -0.002 % -1.051 % -2.650 %
Gd .5 Lu .5 AP -YAP -0.098 % -0.232 % 0.003 % -0.317 %
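The mismatch values in Table 4.4 follow directly from the listed lattice parameters, with the mixed Gd0.5Lu0.5AlO3 row obtained by linear interpolation (Vegard-type average) between GdAP and LuAP. The short script below reproduces them; the function and variable names are only illustrative.

```python
# Lattice parameters from Table 4.4 (angstrom; cell volume in angstrom^3)
GDAP = {"a": 5.2537, "b": 5.3049, "c": 7.4485, "vol": 207.5923}
LUAP = {"a": 5.0967, "b": 5.3294, "c": 7.2931, "vol": 198.0957}
YAP  = {"a": 5.1803, "b": 5.3295, "c": 7.3706, "vol": 203.4895}

def mismatch(film, substrate):
    """Relative film-substrate mismatch in percent for each tabulated quantity."""
    return {k: 100.0 * (film[k] - substrate[k]) / substrate[k] for k in substrate}

def vegard(x, phase1, phase2):
    """Linear interpolation of lattice parameters for a mixed composition."""
    return {k: x * phase1[k] + (1.0 - x) * phase2[k] for k in phase1}

print(mismatch(GDAP, YAP))                     # a +1.417 %, b -0.462 %, c +1.057 %, vol +2.016 %
print(mismatch(vegard(0.5, GDAP, LUAP), YAP))  # about -0.10 %, -0.23 %, +0.00 %, -0.32 %
```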
measurement. The same measurement was
[Figure 5.10 panels: CTF versus spatial frequency (250-500 lp/mm) along the H and V directions for scintillator rotation angles of 0°, 45° and 90° (0°, 45° and 135° for panel (d)): (a) GdLuAP 8.2 μm (100), (b) GdLuAP 11.5 μm (011), (c) GdLuAP 8.5 μm (001), (d) GdLuAP 5.2 μm (110), (e) GGG 8 μm (111); (f) birefringence and CTF standard deviation versus orientation (GGG, (100), (001), (110), (011)).]
Table 5.1: Lattice parameters and refractive indexes of YAP at 600 nm as reported in [START_REF] Kw Martin | Indices of Refraction of the Biaxial Crystal YAIO 3[END_REF]. Since the crystal structure is orthorhombic, the three optical directions correspond to the crystallographic axes. The birefringence (B) is calculated for different crystal orientations (c.o.). GdLuAP:Eu SCFs on the (010) and (110) oriented substrates, reported in gray, were not obtained in this work. The last column contains a quantitative evaluation of the effect of the birefringence as observed from the MTF measurement.
lattice parameter n c.o. B Impact on the MTF
a = 5.1803 Å n γ =1.9505 (100) 0.009 negligible
b = 5.3295 Å n β =1.9413 (010) 0.024 not evaluated
c = 7.3706 Å n α =1.9268 (001) 0.014 weak
(011) 0.019 strong
(110) 0.016 not evaluated
(d) (110) oriented GdLuAP:Eu thin lm scintillators. As reference in gure 5.10(e) the
result for a GGG:Eu scintillator is also shown. GGG has a cubic crystal structure and
Table 6.1: Total energy deposited (E_dep) in 5 μm thick scintillators calculated for different X-ray energies using the Monte Carlo simulations described in chapter 2.
Murillo et al. (99) reported a light yield of 22 photons/keV for Lu2O3:Eu sol-gel polycrystalline films. All these results show that Lu2O3:Eu can surely compete in terms of light yield with many currently commercially available scintillators. However, not many results are available for single crystal Lu2O3:Eu scintillators. Recently, Veber et al. (37) reported a light yield for Lu1.56Gd0.41O
GGG LSO GdAP GdLuAP LuAG Lu 2 O 3
on GGG on YbSO on YAP on YAP on YAG on Lu 2 O 3
E dep at 15keV 21.11 % 27.85 % 20.13 % 24.73 % 21.02 % 38.15%
E dep at 52keV 1.28 % 0.94 % 1.40 % 1.38 % 0.76 % 1.47%
E dep at 64keV 0.87 % 1.14 % 1.02 % 1.09 % 0.75 % 1.83%
which have a conversion eciency of approximately 9-10 ph/keV, while Seeley at al.(92)
reported an eciency 3 times higher than commercially available scintillating glasses
(IQI-301). Garcia-
3 : Eu single crystals up to 2 times higher than that of YAG:Ce.
Lu 2 O 3 and Eu 2 O 3 5N pure powders were dissolved in the solvent. The experimental details are the same as described in chapter 4.So far two dierent melt compositions, here called A and B, were studied, and approximately[START_REF] Lempicki | A new lutetia-based ceramic scintillator for X-ray imaging[END_REF] Lu 2 O 3 lms with thicknesses in the range 0.5-22 μm were grown. The parameters for the growth are reported in table 6.2. As comparison, the parameters for the optimized growth of GdLuAP SCFs on YAP substrates are also reported.
Table 6 .
6 2: Growth parameters studied for the liquid phase epitaxy development of Lu 2 O 3 thin lms on Lu 2 O 3 : R melt Eu =(Eu)/(Eu+Lu) and R melt s =(Eu+Lu)/(Lu + Eu + Pb + Eu), range of growth temperature (T), thickness (Th.), growth rate (G.R.) and average light yield (L.Y.) compared to the LY of a bulk YAG:Ce S.C.. As reference, R melt s, T and G.R. are also reported for the optimized melt composition for the GdLuAP lm growth.
melt R melt s
Figure 6.2: Omega-2Theta scans for a (111)-oriented 3 μm thick Lu2O3 film on a Lu2O3 substrate. The scan was recorded at 15 keV around the (a) (222), (b) (444) and (c) (666) Bragg reflections. The film-substrate mismatch measured from (b) and (c) is 0.04. In (a) the substrate peak is not visible due to the complete X-ray absorption in the film. In (d) the full Omega-2Theta scan from the (222) to the (666) Bragg reflection is reported.
Figure 6.3: Rocking curves around the (444) Bragg reflection for a 3 μm thick Lu2O3 film on a Lu2O3 substrate. The film and the substrate are (111)-oriented. The curves were recorded at 15 keV and are shifted to 0° for comparison: the FWHM of the substrate is 0.0027° and the one of the film is 0.0029°, which indicates a similar crystallinity.
The Lu2O3:Eu emission spectrum under X-ray irradiation at 8 keV is reported in figure 6.4. The emission spectrum, typical of the Lu2O3:Eu cubic phase, confirms the XRD results. Several peaks between 575 and 725 nm were observed, corresponding to the 5D0 → 7Fj (j=0,1,2,3,4) transitions. The strongest emission, located at 611 nm, corresponds to the 5D0 → 7F2 transition.
6.3 X-ray imaging using lutetium oxide SCFs
Lu 2 O 3 : Eu lms as compared to GGG:Eu, LSO:Tb and GdLuAP:Eu SCFs. Although the absorption of Lu 2 O 3 is higher, its eciency is reduced because of the low yield of the grown lms. To compete with LSO:Tb SCFs, a light yield of 60% compared to YAG:Ce SC is required. For a light yield higher than 60%, Lu 2 O 3 would outperform the other SCFs, except in the 51-64 keV range, where the Gd-based materials are still more performant even if a LY equal to the one of LSO:Tb is obtained. For example, if the LY for Lu 2 O 3 scintillator is equal to the one of LSO:Tb (LY=1), the image quality obtained using the new Lu 2 O 3 : Eu SCFs for high resolution detectors was also tested. In gure 6.6, the MTF is calculated from the image of a sharp edge, using dierent SCFs combined with 20X/0.45 microscope optics, 3.3X eyepiece and PCO2000 camera. The results obtained with GGG:Eu, LSO:Tb and Lu 2 O 3 :Eu are comparable. In the inset, the at eld image for the Lu 2 O 3 :Eu is reported. It is shown that the light emission from the Lu 2 O 3 :Eu lm does not show inhomogeneities which may reduce the light yield. Some radiographies of the JIMA resolution chart and a styrofoam obtained using the Lu 2 O 3 :Eu SCFs are shown in gure 6.7. The details can be clearly identied and distortions are not observed in the images, conrming the good optical quality of the lms.Lu 2 O 3 :Eu SCFs with high optical quality were successfully grown on Lu 2 O 3 SC substrates. The quality of the images obtained using these new SCFs are already comparable to the existing LSO and GGG SCFs, but the Lu 2 O 3 :Eu SCFs are less ecient due to the low light yield, even if their absorption is higher.
6.4 Conclusions
Figure 6.6: MTF curves recorded at 15 keV using the slanted edge method. 8 μm thick LSO:Tb, GGG:Eu and Lu2O3:Eu films were combined in the high resolution detector with 20X/0.45 microscope optics, 3.3X eyepiece and PCO2000 camera.
FoM is 1.5 times higher at 68 keV.
Figure 6.7: Radiography images of (a, b) a JIMA resolution chart, for several detail sizes, and of (c) a plastic foam, obtained with an 8 μm thick Lu2O3:Eu SCF combined with 20X/0.45 microscope optics, 3.3X eyepiece and PCO2000 camera.
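The slanted-edge measurement mentioned above can be summarised numerically: an edge-spread function is extracted across the edge image, differentiated to obtain the line-spread function, and Fourier-transformed to yield the MTF. The sketch below is a simplified one-dimensional version (it omits the sub-pixel oversampling obtained from the edge slant and any windowing), with illustrative function names.

```python
import numpy as np

def mtf_from_edge_profile(esf, pixel_size_um):
    """Simplified slanted-edge MTF: differentiate the edge-spread function (ESF)
    to get the line-spread function (LSF), then take the Fourier modulus.
    Returns (spatial frequencies in lp/mm, MTF normalised to 1 at zero frequency)."""
    lsf = np.gradient(np.asarray(esf, dtype=float))
    spectrum = np.abs(np.fft.rfft(lsf))
    mtf = spectrum / spectrum[0]
    freqs_lpmm = np.fft.rfftfreq(len(lsf), d=pixel_size_um) * 1000.0  # cycles/um -> lp/mm
    return freqs_lpmm, mtf

# Example with a synthetic, slightly blurred edge sampled at 0.11 um/pixel
x = np.arange(-64, 64)
esf = 0.5 * (1.0 + np.tanh(x / 3.0))
freqs, mtf = mtf_from_edge_profile(esf, pixel_size_um=0.11)
```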
Figure 7.4: MTF curves calculated and experimentally measured at (a) 15 keV and (b) 30 keV, for different scintillators combined with microscope optics (numerical aperture 0.45) and a PCO2000 camera. Total magnification 66X, pixel size 0.11 μm.
by the energy deposited at each depth. The position of the focal plane of the optics is chosen by evaluating the best total MTF. The substrate effect predicted by the simulations has thus been confirmed experimentally. The calculated and measured MTF curves are compared at 18 keV in figure 7.5. If the substrate contains yttrium, as is the case for GdLuAP:Eu and LuAG:Ce, the contrast reduction at low frequency predicted by the simulations is indeed observed experimentally. On the other hand, for the LSO:Tb and GGG:Eu scintillators, which are deposited on a substrate without yttrium, this effect is not observed.
Thin films of gadolinium and lutetium perovskite
A process for the liquid phase epitaxy growth of GdAP and GdLuAP films on YAP single crystal substrates has been developed. A solvent composed of B2O3 and PbO was used to lower the temperature of the liquid bath down to ≈1000 °C. Because of the difference in lattice parameters between the film and the substrate (mismatch), the crystalline and optical quality of the GdAP films is not sufficient for high-resolution imaging, compared to GGG:Eu or LSO:Tb films. The lattice mismatch was therefore reduced by introducing lutetium into the bath and hence into the film. In figure 7.6, the diffraction (omega-2theta) curves around the (400) and (002) Bragg reflections are plotted for (100)- and (001)-oriented samples. The peak
[Figure 7.5 panels: measured (Exp) and simulated (Sim) MTF versus spatial frequency for LSO:Tb 1.6 μm, LSO:Tb 8 μm, GGG:Eu 8 μm and LuAG:Ce 25 μm.]
Figure 7.5: MTF curves calculated and experimentally measured at 18 keV, for different scintillators, containing or not containing yttrium in the substrate, combined with microscope optics (numerical aperture 0.4) and a PCO2000 camera. Total magnification 33X, pixel size 0.22 μm.
[Figure 7.6 panels: normalised diffracted intensity versus Theta [deg] for (a) and (b), with film compositions GdAlO3, Gd0.35Lu0.65AlO3, Gd0.45Lu0.55AlO3 and Gd0.7Lu0.3AlO3.]
1.2 Detectors for synchrotron imaging applications
1.4 Scintillators for X-ray area detectors
μm thick scintillator by a 1D X-ray source. The scintillator is supported by a 150 μm thick substrate. The scintillator and substrate materials, as well as the X-ray energies, are indicated in the figures.
Acknowledgements
my research. Thank you Paul-Antoine, for teaching me everything about the liquid phase epitaxy. And thank you both, for the enjoyable company and great help during the often-too-long days (and nights)
Results
The MTF calculated from the energy distribution in dierent thicknesses of free-standing GdAP and GdAP on a YAP substrate is reported in gure 2.15 for dierent X-ray energies. Increasing the thickness has only a negligible eect on the MTF if we consider freestanding scintillators. For the free-standing scintillator, at low energy (15-20 keV) no signicant dierences can be observed for thicknesses in the range 3-50 μm. At 49 keV, the MTF is degraded if the thickness is increased from 3 to 10 μm, but remains almost unchanged from 10 to 50 μm. These results can be explained using the discussion in section 2.4.2 about the MTF variation as a function of the depth z along the thickness in the scintillator: the MTF is degraded by the ux of secondary electrons. However, only the electrons produced at a distance smaller than the attenuation length contribute to the ux. For X-ray photons at 15-20 keV, the electrons ejected from the gadolinium M and L shells have an energy in the range 8-18 keV, corresponding to an attenuation length shorter than 2 μm. Therefore, the MTF is not degraded when the thickness is larger than 2 μm. On the other hand, at 49 keV, the L and M electrons will be ejected for spatial frequencies below 400 lp/mm, due to the strong defocus.
The role of the material becomes crucial, when the energy is increased. At 30 keV the Y K-edge degrades the response of the GdAP scintillator. It is important to note, for NA=0.45, that the contrast calculated for the 25 μm thick LuAG:Ce and the 5 μm thick GdAP lms are comparable.
At 49 keV all the MTFs are strongly degraded by the energy spread in the material.
Since at high energy MTF scint is degraded by the thickness of the scintillator (see for example gure 2.15), the thinnest scintillator is signicantly more performant, even comparing thicknesses lower than the DoF. This is true especially at high NA, where both the defocus and the energy spread degrades the MTF of the thicker scintillators.
As discussed in chapter 2, at high-energy the main role in MTF scint is played by the K-edge of the scintillator. For example, at the 60 keV the MTF obtained for the 5 μm GdAP:Eu scintillator is higher than both the MTFs of the 5 and 0.4 μm thick LSO:Tb lms.
To summarize, the plot in gure 3.2 was recalculated including the response of the scintillators. The curves are reported for the 5 μm thick LSO:Tb lm on YbSO and GdAP:Eu lm on YAP (g. Rare-earth aluminum perovskites are good candidates to improve the eciency of the scintillators while keeping the same spatial resolution because of the high densities and the high eective Z number. In particular, GdAlO 3 (GdAP) and LuAlO 3 (LuAP) are good scintillator candidates for imaging experiments at relatively high X-ray energies (50-75 keV) due to the position of their absorption K-edges. In chapter 2, Monte Carlo calculations to estimate the absorption eciency and the MTF response of dierent SCF scintillators as function of the X-ray energy have been presented. The percentage of the energy deposited by incident X-rays into thin lm scintillators has been shown in gure 2.6. The values at 15, 52 and 64 keV are also reported in table 4.1 for perovskites SCFs as well as GGG, LSO and LuAG, highlighting the potential improvement of GdAP based lm detectors in the energy range of Gd K-edge. The MTF response has been summarized in gure 2.5. As compared to GGG, GdAP
The acceleration voltage of the cathode was 22 kV. In gure 4.3(b), the R film Lu (in the lm) with respect to R melt Lu (in the melt) is reported for dierent samples; to highlight deviations, the line corresponds to the case R melt Lu equal to R film Lu . A dependence of the
R film
Lu ratio on the substrate orientation has been observed. For example, when R melt Lu is equal to 0.55, the Lu concentration in the lm is considerably lower for the (001)oriented samples than for (100) and (011) oriented samples, respectively. We assign this eect to the growth temperature: in order to have a growth rate of 0.3 μm/min, the required temperature for the (001)-oriented substrates is ≈ 12 • C lower than for the (011)-oriented samples. In addition, for all the studied samples, R film Lu is always lower than R melt Lu . This eect need to be taken into account in order to control the lattice mismatch and therefore, the lm optical quality and crystal morphology. The concentration of Pb and Pt impurities is close to the EPMA sensitivity, therefore was measured using the X-ray uorescence technique (XRF), for three dierent GdLuAP samples grown on YAP substrates (011), ( 100) and (001) oriented. The three samples were grown from the optimized melt composition (i.e. R Lu = 0.55, Pb/B = 5.20-5.30). The measure was performed using a Rigaku Primus II wavelength dispersive XRF system equipped with Rhodium X-ray tube. The results are reported in table 4.3. The Pb content was found to be lower than the XRF sensitivity for the (100) oriented sample, while for the other two samples was 0.01 and 0.04%. The Pt contamination is comparable with the Pb content for the (011)-oriented sample and signicantly higher for the other two. The highest Pt contamination was found for the (001)-oriented sample, which also corresponds to the orientation showing the lowest light yield eciency (see next chapter). This eect is conrmed by the evaluation of the rocking curve (RC): in gure 4.6, the RC for the substrate and the lm is reported for two dierent samples, grown on the on this orientation. The (110) and the (001) oriented scintillator show an intermediate eect, in agreement with the calculated B values. The general trend described by the standard deviation well reproduces the increase of the birefringence values from the (001) to the (011) orientation. To conrm the results obtained with the CTF we also measured the full MTF using the slanted edge method described in section 3.3 varying the angle of the scintillator around the surface normal, for the (011) and (100)-oriented GdLuAP:Eu and for the GGG:Eu SCFs. The condition of the experiment and the detector conguration are the same as used for the measurement using the JIMA resolution chart. The results are reported in gure 5.11. The MTF curves obtained obtained for three measurement at 0 • , 45 • and 90 • using a GGG:Eu SCF are reported to show the uncertainty of the measurement. At 500 lp/mm the average contrast is 0.26 ± 0.02. In the case of the (011)-oriented GdLuAP:Eu scintillator, the MTF curve strongly depends on the rotation of the scintillator. The contrast at 500 lp/mm varies from 0.1 to 0.3 and the average value is 0.19 ± 0.12, lower than the one measured using GGG. However, the best MTF obtained for this orientation shows higher contrast than GGG. For the (100)-orientated SCF the MTF slightly depends on the angle, but the average contrast (0.34 ± 0.05) is higher than the value obtained using the GGG:Eu scintillator. The eects of the birefringence on the image quality has been experimentally measured and correlated with the YAP birefringence values from literature. The scintillator crystal structure and orientation have to be taken into account for the estimation of the MTF of the detector. 
It is, however, important to underline that these results are strictly connected with the conditions of the experiment. When the spatial resolution is already limited by other phenomena, as for example the energy distribution in the scintillator Chapter 6
Single crystal lutetium oxide scintillating lms
The rst results about the development of Eu-doped lutetium oxide (Lu 2 O 3 ) single crystal lms are reported in this chapter.
Introduction
In the last twenty years, Lu 2 O 3 has been studied extensively since it showed promising properties as a laser material and as a scintillator for radiation detection. Over the years the development has focused on many dierent crystalline forms, e.g. transparent ceramic [START_REF] Seeley | Transparent Lu 2 O 3: Eu ceramics by sinter and HIP optimization[END_REF], bulk single crystal [START_REF] Peters | Growth of high-melting sesquioxides by the heat exchanger method[END_REF][START_REF] Veber | Flux growth and physical properties characterizations of Y 1.866 Eu 0.134 O 3 and Lu 1.56 Gd 0.41 Eu 0.03 O 3 single crystals[END_REF][START_REF] Mcmillen | Hydrothermal single-crystal growth of Lu2O3 and lanthanide-doped Lu2O3[END_REF], polycrystalline thin lm [START_REF] De | Structural and Luminescence Properties of Lu2O3: Eu3+ F127 Tri-Block Copolymer Modied Thin Films Prepared by Sol-Gel Method[END_REF], microstructured material [START_REF] Marton | Ecient high-resolution hard x-ray imaging with transparent Lu2O3: Eu scintillator thin lms[END_REF] and micro-and nano-particles [START_REF] Yang | Sm, Er, Ho, Tm) microarchitectures: ethylene glycol-mediated hydrothermal synthesis and luminescent properties[END_REF][START_REF] Yang | Uniform hollow Lu2O3: Ln (Ln= Eu3+, Tb3+) spheres: facile synthesis and luminescent properties[END_REF]. Lutetium oxide is a good candidate for high-resolution imaging for three main reasons. Firstly, the remarkable high density (9.5 g/cm 3 ) and high eective Z number of To keep the supersaturation temperature range below the furnace limit of 1100 , it was needed to lower percentage of solute dissolved in the PbO-B 2 O 3 solvent (as compared, for example, to the GdLuAP lm growth). As a consequence, the platinum crucible became strongly corroded by the lead based solvent and contaminated the melt. The light yield obtained for Lu 2 O 3 : Eu was unexpectedly low, despite the good light yield reported for this material. Hence, to understand the possible origin of the low light yield, the amount of unwanted impurities in the lms, as well as the amount of Eu dopant, were investigated using XRF.
The results are reported in table 6.3 for three dierent samples, which are compared to the values obtained for GdLuAP. Several remarks about these data should be made. Firstly, the lead content is approximately one order of magnitude higher than in the perovskite lms. author = Nikon microscopy website, title = http://www.microscopyu.com/tutorials/java/polarized/calcite/ . 106 Pourtant, la biréfringence des cristaux de YAP et GdLuAP peut dégrader la qualité de l'image (gure 7.8). Cet eet a été évalué pour les diérentes orientations. Sur la gure 7.9, la MTF est évaluée pour diérents angles du scintillateur autour de la normale à la surface. L'eet est très fort pour les scintillateurs orientés (011), mais beaucoup moins important pour l'orientation (100). Pour référence, la mème mesure a été eectuée aussi avec un scintillateur GGG :Eu, qui ne présente pas de biréfringence. To obtain detectors with micrometer and sub-micrometer spatial resolution, thin (1-20 μm) single crystal lm (SCF) scintillators are required. These scintillators are layers grown on a substrate by liquid phase epitaxy (LPE). The critical point for these layers is their weak absorption, especially at energies exceeding 20 keV. At the European Synchrotron radiation Facility (ESRF), X-ray imaging applications can exploit energies up to 120 keV. Therefore, the development of new scintillating materials is currently investigated. The aim is to improve the contradictory compromise between absorption and spatial resolution, to increase the detection eciency while keeping a good image contrast even at high energy.
Couches minces de
The rst part of this work presents a model describing high-resolution detectors, which was developed to calculate the modulation transfer function (MTF) of the system as a function of the X-ray energy. The model can be used to nd the optimal combination of scintillator and visible light optics for dierent energy ranges and guide the choice of the materials to be developed as SCF scintillators. In the second part, two new kinds of scintillators for high-resolution are presented: the gadolinium-lutetium aluminum perovskite (Gd 0.5 Lu 0.5 AlO 3 : Eu) and the lutetium oxide (Lu 2 O 3 : Eu) SCFs. | 231,729 | [
"760235"
] | [
"224716",
"2568"
] |
01461730 | en | [
"phys",
"spi",
"math"
] | 2024/03/04 23:41:46 | 2017 | https://hal.science/hal-01461730/file/Issanchou_contactmodalJSV.pdf | Clara Issanchou
email: [email protected]
Stefan Bilbao
Jean-Loïc Le Carrou
Cyril Touzé
Olivier Doaré
A modal-based approach to the nonlinear vibration of strings against a unilateral obstacle: simulations and experiments in the pointwise case
Keywords: Numerical methods, 3D string vibration, experimental study, unilateral contact, sound synthesis, tanpura
This article is concerned with the vibration of a stiff linear string in the presence of a rigid obstacle. A numerical method for unilateral and arbitrarily shaped obstacles is developed, based on a modal approach in order to take into account the frequency dependence of losses in strings. The contact force of the barrier interaction is treated using a penalty approach, while a conservative scheme is derived for time integration, in order to ensure long-term numerical stability. In this way, the linear behaviour of the string when not in contact with the barrier can be controlled via a mode-by-mode fitting, so that the model is particularly well suited for comparisons with experiments. An experimental configuration is used with a point obstacle either centered or near an extremity of the string. In this latter case, such a pointwise obstruction approximates the end condition found in the tanpura, an Indian stringed instrument. The second polarisation of the string is also analysed and included in the model. Numerical results are compared against experiments, showing good accuracy over a long time scale.
Introduction
The problem of vibrating media constrained by a unilateral obstacle is a longstanding problem which has been under study for more than a century [START_REF] Brogliato | Nonsmooth Mechanics[END_REF][START_REF] Pfeier | Multibody Dynamics with Unilateral Contacts[END_REF]. Indeed, the rst important developments can be attributed to Hertz with his formulation of a general law for the contact between elastic solids in 1881 [START_REF] Hertz | On the contact of elastic solids[END_REF]. Since then, applications of contact mechanics can be found in such diverse elds as e.g. computer graphics [START_REF] Bara | Fast contact force computation for nonpenetrating rigid bodies[END_REF], for instance for simulating the motion of hair [START_REF] Bertails-Descoubes | A nonsmooth Newton solver for capturing exact Coulomb friction in ber assemblies[END_REF]; to human joints in biomechanics [START_REF] Donahue | A nite element model of the human knee joint for the study of tibio-femoral contact[END_REF] or component interactions in turbines [START_REF] Pfeier | Contact in multibody systems[END_REF][START_REF] Batailly | Numerical-experimental comparison in the simulation of rotor/stator interaction through blade-tip/abradable coating contact[END_REF]. A particular set of applications is found in musical acoustics, where collisions are of prime importance in order to fully understand and analyse the timbre of musical instruments [START_REF] Boutillon | Model for piano hammers: Experimental determination and digital simulation[END_REF][START_REF] Chaigne | Numerical modeling of the timpani[END_REF][START_REF] Bilbao | Numerical modeling of collisions in musical instruments[END_REF]. Within this framework, the problem of a vibrating string with a unilateral constraint, as a key feature of numerous instruments, is central and is particularly important to the sound of Indian instruments [START_REF] Raman | On some indian stringed instruments[END_REF][START_REF] Valette | The tampura bridge as a precursive wave generator[END_REF][START_REF] Siddiq | A physical model of the nonlinear sitar string[END_REF], and also in the string/fret contact in fretted instruments [START_REF] Trautmann | Multirate simulations of string vibrations including nonlinear fretstring interactions using the functional transformation method[END_REF][START_REF] Bilbao | Numerical simulation of string/barrier collisions: the fretboard[END_REF].
The rst studies on a vibrating string with a unilateral constraint were restricted to the case of an ideal string with a rigid obstacle in order to derive analytical and existence results [START_REF] Amerio | Continuous solutions of the problem of a string vibrating against an obstacle[END_REF][START_REF] Cabannes | Cordes vibrantes avec obstacles (Vibrating strings with obstacles)[END_REF][START_REF] Cabannes | Presentation of software for movies of vibrating strings with obstacles[END_REF][START_REF] Schatzman | A hyperbolic problem of second order with unilateral constraints: the vibrating string with a concave obstacle[END_REF][START_REF] Burridge | The sitar string, a vibrating string with a one-sided inelastic constraint[END_REF]. In particular, solutions to the cases of a centered point obstacle, a plane obstacle and a few continuous obstacles have been obtained explicitly. Existence and uniqueness of the solution to the non-regularised problem has been shown in the case of a string vibrating against point obstacles [START_REF] Schatzman | Un problème hyperbolique du 2ème ordre avec contrainte unilatérale : La corde vibrante avec obstacle ponctuel (A hyperbolic problem of second order with unilateral constraints: the vibrating string with a point obstacle)[END_REF] and of a concave obstacle if conservation of energy is imposed [START_REF] Schatzman | A hyperbolic problem of second order with unilateral constraints: the vibrating string with a concave obstacle[END_REF]. There are no general results when the obstacle is convex. Moreover, Schatzman proved that the penalised problem with a point obstacle converges to the non-regularised problem [START_REF] Schatzman | Un problème hyperbolique du 2ème ordre avec contrainte unilatérale : La corde vibrante avec obstacle ponctuel (A hyperbolic problem of second order with unilateral constraints: the vibrating string with a point obstacle)[END_REF]. The pointwise case is thus well-understood theoretically, and various interesting properties have been demonstrated as mentioned above.
In addition, numerical studies have been undertaken to simulate collisions for more realistic string models by including various eects, such as dispersion. Existing numerical methods include digital waveguides [START_REF] Rank | A waveguide model for slapbass synthesis[END_REF][START_REF] Evangelista | Player-instrument interaction models for digital waveguide synthesis of guitar: Touch and collisions[END_REF][START_REF] Kartofelev | Modeling a vibrating string terminated against a bridge with arbitrary geometry[END_REF], sometimes coupled with nite dierences [START_REF] Krishnaswamy | Methods for simulating string collisions with rigid spatial objects[END_REF] in the case of an ideal string, and for a sti damped string interacting with an obstacle located at one end of the string [START_REF] Siddiq | A physical model of the nonlinear sitar string[END_REF]. Other models are based on a modal description of the string motion, as in [START_REF] Vyasarayani | Modeling the dynamics of a vibrating string with a nite distributed unilateral constraint: Application to the sitar[END_REF] where an ideal string vibrating against a parabolic obstacle at one boundary is considered, under the assumption of perfect wrapping of the string on the bridge as in [START_REF] Mandal | Natural frequencies, modeshapes and modal interactions fot strings vibrating against an obstacle: Relevance to sitar and veena[END_REF]. However, the existence of multiple contacts as a necessary condition for simulating the sound of sitar has been established in [START_REF] Vyasarayani | Modeling the dynamics of a vibrating string with a nite distributed unilateral constraint: Application to the sitar[END_REF][START_REF] Vyasarayani | Transient dynamics of continuous systems with impact and friction, with applications to musical instruments[END_REF]. Contacts between a string and point obstacles are modelled with a modal approach in [START_REF] Valette | Mécanique de la corde vibrante (Mechanics of the vibrating string)[END_REF] for a dispersive lossy string against a tanpura-like bridge. The functional transformation method (FTM) is used in [START_REF] Trautmann | Multirate simulations of string vibrations including nonlinear fretstring interactions using the functional transformation method[END_REF] for a string interacting with frets. In the latter study, damping model is controlled by a few parameters only. Interaction between a continuous system and a point obstacle is also modelled in [START_REF] Vyasarayani | Transient dynamics of continuous systems with impact and friction, with applications to musical instruments[END_REF], using a modal coecient of restitution (CoR) method [START_REF] Wagg | A note on coecient of restitution models including the eects of impact induced vibration[END_REF][START_REF] Vyasarayani | Modeling impacts between a continuous system and a rigid obstacle using coecient of restitution[END_REF], assuming innitesimal contact times.
More recently, energy-based methods have been developed, allowing the simulation of sti lossy strings against an arbitrarily shaped obstacle. Hamilton's equations of motion are discretised in [START_REF] Chatziioannou | Energy conserving schemes for the simulation of musical instrument contact dynamics[END_REF], and the case of the tanpura bridge is derived in [START_REF] Van Walstijn | Numerical simulation of tanpura string vibrations[END_REF]. Finite dierence methods are used in [START_REF] Bilbao | Numerical modeling of collisions in musical instruments[END_REF] and the special case of the interaction between a string and a fretboard is detailed in [START_REF] Bilbao | Numerical simulation of string/barrier collisions: the fretboard[END_REF]. In these latter models, eigenfrequencies and damping parameters cannot be arbitrary, but follow a distribution tuned through a small number of parameters. In addition, these studies consider only one transverse motion of the string, and numerical dispersion eects appear due to the use of nite dierence approximations.
The inclusion of the two transverse polarisations in the modeling of vibrating strings with contact is also seldom seen in the literature. A rst attempt has been proposed in [START_REF] Desvages | Two-polarisation nite dierence model of bowed strings with nonlinear contact and friction forces[END_REF] for the case of the violin, where nite dierences are employed to model a linear bowed string motion, including interactions between the string and ngers as well as the ngerboard. Early developments are also shown in [START_REF] Bridges | Investigation of tanpura string vibrations using a two-dimensional time-domain model incorporating coupling and bridge friction[END_REF], extending the study presented in [START_REF] Chatziioannou | Energy conserving schemes for the simulation of musical instrument contact dynamics[END_REF]. However, numerical results are not compared to experimental measurements of the string motion.
Whereas an abundant literature exists on numerical simulations of a string vibrating against an obstacle, only a few experimental studies have been carried out. Research on isolated strings is detailed in [START_REF] Astashev | Experimental investigation of vibrations of strings interaction with point obstacles[END_REF][START_REF] Valette | Mécanique de la corde vibrante (Mechanics of the vibrating string)[END_REF], and measurements on complete instruments are presented in [START_REF] Taguti | Acoustical analysis on the sawari tone of chikuzen biwa[END_REF][START_REF] Weisser | Investigating non-western musical timbre: a need for joint approaches[END_REF][START_REF] Siddiq | A physical model of the nonlinear sitar string[END_REF], highlighting the influence of the obstacle shape and position on the timbral richness of sounds. However, a detailed comparison of experiments with numerical results, aimed at understanding the relative importance of modeling features such as dispersion, nonlinearity and damping due to contact, has not been carried out.
The aim of this paper is twofold. First, an accurate and flexible numerical method is developed in Section 2. The distinctive feature of the approach is that it relies on a modal description, in order to take into account any frequency dependence of the losses, and also in order to eliminate any effect of numerical dispersion. The contact law is formulated in terms of a penalty potential, and an energy-conserving scheme adapted to the modal-based approach is derived. The convergence of the outcomes of the model is then thoroughly studied in Section 3 for a pointwise obstacle, with a comparison to an analytical solution. The second main objective of the study is to compare simulations with experiments. For that purpose, the experimental protocol is presented in Section 4. The versatility of the numerical method is illustrated with a mode-by-mode fitting of the measured linear characteristics (eigenfrequencies and modal damping factors). Comparisons with experiments are conducted in Section 5 for two different point obstacles, located either at the string centre or near one extremity of the string. The second polarisation is also measured and compared to the outcomes of a simple model incorporating the horizontal vibration in Section 5.2.3.
Theoretical model and numerical implementation
Continuous model system
The vibrating structure considered here is a stiff string of length L (m), tension T (N), and linear mass density µ (kg·m⁻¹). The stiffness is described by the Young's modulus E (Pa) of the material and the moment of inertia associated with a circular cross-section, I = πr⁴/4, where r is the string radius (m). The string is assumed to vibrate in the presence of an obstacle described by a fixed profile g(x), x ∈ [0, L], located under the string at rest (see Fig. 1). The obstacle is assumed to be of constant height along (Oy). In the remainder of the paper, it is said to be a point obstacle when it reduces to a point along (Ox); it still has a constant height along (Oy).
In this section we restrict ourselves to the vertical (Oz)-polarisation. The second, horizontal polarisation is taken into account in Section 2.7.
The transverse displacement u(x, t) of the string along (Oz) is governed by the following equation, under the assumption of small displacements:
$$\mu u_{tt} - T u_{xx} + EI\,u_{xxxx} = f, \qquad (1)$$
where the subscript t (respectively x) refers to a partial derivative with respect to time (respectively space). The right-hand side term f(x, t) refers to the external contact force per unit length exerted by the barrier on the string. Simply supported boundary conditions are assumed, which are commonly used for musical strings having a weak stiffness [START_REF] Valette | Mécanique de la corde vibrante (Mechanics of the vibrating string)[END_REF][START_REF] Cuesta | Théorie de la corde pincée en approximation linéaire (Theory of the plucked string using the linear approximation)[END_REF]:
$$u(0,t) = u(L,t) = u_{xx}(0,t) = u_{xx}(L,t) = 0, \qquad \forall t \in \mathbb{R}^+. \qquad (2)$$
No damping is included so far; a detailed model of loss will be introduced once modal analysis has been performed, see Section 2.3.1.
(Figure 1 about here.)
The contact force density f(x, t) vanishes as long as the string does not collide with the barrier. The model to be employed here relies on a penalty approach, where a small amount of interpenetration is modeled. Penalisation methods are to be viewed in contrast with nonsmooth methods, for which no penetration is allowed [START_REF] Brogliato | Numerical Methods for Nonsmooth Dynamical Systems[END_REF][START_REF] Studer | Numerics of unilateral contacts and friction[END_REF]. In order to derive a general framework, a two-parameter family of power-law expressions for the contact force is used:
$$f(x,t) = f(\eta(x,t)) = K\,[\eta(x,t)]_+^{\alpha}, \qquad (3)$$
where $\eta(x,t) = g(x) - u(x,t)$ is a measure of the interpenetration of the string into the barrier, and $[\eta]_+ = \frac{1}{2}(\eta + |\eta|)$ is the positive part of $\eta$. This formulation allows the representation of a Hertz-like contact force, where the coefficients K and α can be tuned depending on the materials in contact [START_REF] Goldsmith | Impact[END_REF][START_REF] Machado | Compliant contact force models in multibody dynamics: Evolution of the Hertz contact theory[END_REF][START_REF] Banerjee | Historical origin and recent development on normal directional impact models for rigid body contact simulation: A critical review[END_REF]. This interaction model has already been used in the realm of musical acoustics for various interactions, see e.g. [START_REF] Bilbao | Numerical modeling of collisions in musical instruments[END_REF][START_REF] Bilbao | Numerical Sound Synthesis: Finite Difference Schemes and Simulation in Musical Acoustics[END_REF][START_REF] Chaigne | Numerical simulations of piano strings. I. a physical model for a struck string using finite difference methods[END_REF][START_REF] Chaigne | Numerical modeling of the timpani[END_REF][START_REF] Chatziioannou | Energy conserving schemes for the simulation of musical instrument contact dynamics[END_REF]. In [START_REF] Chaigne | Numerical simulations of piano strings. I. a physical model for a struck string using finite difference methods[END_REF] for example, it is used to model the hammer-string interaction in the piano, where the contact model describes the compression of the hammer felt. In the present case of the string colliding with a rigid body, the force expression represents a penalisation of the interpenetration, which should remain small; as such, large values of K as compared to the tension and shear restoring forces have to be selected. In the literature, current values used in numerical simulations for this problem are in the range of 10⁷ to 10¹⁵, see e.g. [START_REF] Chatziioannou | Energy conserving schemes for the simulation of musical instrument contact dynamics[END_REF][START_REF] Bilbao | Numerical simulation of string/barrier collisions: the fretboard[END_REF][START_REF] Bilbao | Numerical modeling of collisions in musical instruments[END_REF].
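As a concrete illustration, a minimal NumPy sketch of this penalty force and of the associated potential introduced in the next section is given below. It is not the authors' implementation; the parameter values are only indicative and would have to be tuned as discussed above.

```python
import numpy as np

def contact_force(eta, K=1e13, alpha=1.5):
    """Penalty contact force density f = K [eta]_+^alpha (Eq. (3))."""
    eta_plus = 0.5 * (eta + np.abs(eta))          # positive part [eta]_+
    return K * eta_plus**alpha

def contact_potential(eta, K=1e13, alpha=1.5):
    """Associated potential psi = K/(alpha+1) [eta]_+^(alpha+1) (Eq. (4))."""
    eta_plus = 0.5 * (eta + np.abs(eta))
    return K / (alpha + 1.0) * eta_plus**(alpha + 1.0)

# Example: no contact, grazing contact, and a 1e-7 m penetration into the barrier
eta = np.array([-1e-6, 0.0, 1e-7])                # eta = g - u
print(contact_force(eta), contact_potential(eta))
```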
Energy balance
The continuous total energy of the system is detailed here. Energy considerations will be useful in the remainder of the study in order to derive an energy-conserving and stable numerical scheme.
The contact force density (3) derives from a potential ψ:
$$f = \frac{d\psi}{d\eta}, \qquad \text{with} \quad \psi(\eta) = \frac{K}{\alpha+1}\,[\eta]_+^{\alpha+1}. \qquad (4)$$
The total energy of the system may be obtained by multiplying the equation of motion (1) by the velocity $u_t$. Employing integration by parts, one obtains the expression of the continuous energy:
$$H = \int_0^L \left[\frac{\mu}{2}(u_t)^2 + \frac{T}{2}(u_x)^2 + \frac{EI}{2}(u_{xx})^2 + \psi\right] dx. \qquad (5)$$
It satisfies $H \geq 0$ and the following equality:
$$\frac{dH}{dt} = 0, \qquad (6)$$
implying that energy is conserved. The first three terms in (5) correspond respectively to the stored energy due to inertia, tension and stiffness. The final term denotes the energy stored in the contact mechanism under compression.
Modal approach
The eigenproblem related to Eq. (1) without the contact force f consists of finding the functions $\phi_j(x)$ which are the solutions of:
$$-T\phi_j'' + EI\,\phi_j'''' - \mu\omega_j^2\phi_j = 0, \qquad (7)$$
where $'$ designates the spatial derivative, together with the boundary conditions given in Eq. (2). The normal modes are thus:
$$\phi_j(x) = \sqrt{\frac{2}{L}}\,\sin\left(\frac{j\pi x}{L}\right) \quad \text{for } j \geq 1, \qquad (8)$$
and are orthogonal and normalised such that $\int_0^L \phi_j(x)\phi_k(x)\,dx = \delta_{jk}$. The unknown displacement u(x, t), when truncated to $N_m$ modes, may be written as $\hat{u}(x,t)$, defined as
$$\hat{u}(x,t) = \sum_{j=1}^{N_m} q_j(t)\,\phi_j(x), \qquad (9)$$
where $q_j(t)$ is the j-th modal amplitude. For simplicity, the hat notation is dropped in the remainder of the paper. Writing u as its expansion in (1) and using the orthogonality, one obtains:
$$\mu\left(\ddot{\mathbf{q}} + \Omega^2\mathbf{q}\right) = \mathbf{F}, \qquad (10)$$
where the vector $\mathbf{q} = [q_1, ..., q_{N_m}]^T$ contains the modal coefficients, $\ddot{\mathbf{q}}$ is its second time derivative and $\Omega$ is a diagonal matrix such that $\Omega_{jj} = \omega_j = 2\pi\nu_j$.
The eigenfrequencies are given by $\nu_j = j\,\frac{c_0}{2L}\sqrt{1 + Bj^2}$, where $c_0 = \sqrt{T/\mu}$ is the wave velocity and $B = \frac{\pi^2 EI}{TL^2}$ describes the inharmonicity created by taking into account the stiffness of the string. Finally, the right-hand side vector $\mathbf{F}$ represents the modal projection of the contact force, with $F_j = \int_0^L f(x,t)\,\phi_j(x)\,dx$.
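For concreteness, the modal quantities above can be tabulated as in the following sketch. The string length L and Young's modulus E used here are assumptions (chosen so as to be roughly consistent with the string of Table 1), not values taken from the paper.

```python
import numpy as np

def modal_data(L, T, mu, E, I, Nm, x):
    """Eigenfrequencies nu_j (Hz) and mode shapes phi_j(x) of the simply
    supported stiff string, following Eq. (8) and the expression of nu_j."""
    j = np.arange(1, Nm + 1)
    c0 = np.sqrt(T / mu)                          # wave velocity
    B = np.pi**2 * E * I / (T * L**2)             # inharmonicity factor
    nu = j * c0 / (2 * L) * np.sqrt(1 + B * j**2)
    phi = np.sqrt(2 / L) * np.sin(np.outer(x, j) * np.pi / L)   # shape (len(x), Nm)
    return nu, phi

# Indicative values: T, mu, diameter from Table 1; L and E are assumed
L, T, mu = 1.002, 180.5, 1.17e-3                  # L assumed from sensor position / N = 1002
r = 0.43e-3 / 2                                   # radius from the 0.43 mm diameter
E, I = 2e11, np.pi * r**4 / 4                     # steel-like Young's modulus (assumption)
x = np.linspace(0, L, 11)
nu, phi = modal_data(L, T, mu, E, I, Nm=50, x=x)
print(nu[:3])                                     # first eigenfrequencies, ~196 Hz fundamental
```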
Losses
In this section, a standard model of string damping mechanisms is reviewed. Damping due to air friction and internal losses is first presented; then losses due to contact phenomena are modeled.
Air friction and internal losses
One advantage of using a modal approach is that damping parameters can be tuned at ease, as recently used in [START_REF] Ducceschi | Modal approach for nonlinear vibrations of damped impacted plates: Application to sound synthesis of gongs and cymbals[END_REF] for the nonlinear vibrations of plates with the purpose of synthesising the sound of gongs. A lossless string is described by Eq. (10), where the linear part corresponds to the description of a lossless oscillator for each mode. Damping can therefore be introduced by generalising each mode to a lossy oscillator. Eq. (10) thus becomes:
$$\mu\left(\ddot{\mathbf{q}} + \Omega^2\mathbf{q} + 2\Upsilon\dot{\mathbf{q}}\right) = \mathbf{F}, \qquad (11)$$
where $\Upsilon$ is a diagonal matrix such that $\Upsilon_{jj} = \sigma_j \geq 0$. A damping parameter $\sigma_j$ is thus associated with each modal equation.
In this contribution, the frequency dependence of the losses is taken into account using the theoretical model proposed by Valette and Cuesta [30]. This model is especially designed for strings, and shows a strong frequency dependence that cannot be expressed easily in the time domain. It describes the three main effects accounting for dissipation mechanisms in strings, namely friction with the surrounding acoustic field, viscoelasticity and thermoelasticity of the material. The following expression of the quality factor $Q_j = \pi\nu_j/\sigma_j$ is introduced:
$$Q_j^{-1} = Q_{j,\mathrm{air}}^{-1} + Q_{j,\mathrm{ve}}^{-1} + Q_{\mathrm{te}}^{-1}, \qquad (12)$$
where the subscripts air, ve and te refer, respectively, to frictional, viscoelastic and thermoelastic losses.
The first two terms are defined as [START_REF] Valette | Mécanique de la corde vibrante (Mechanics of the vibrating string)[END_REF]:
$$Q_{j,\mathrm{air}}^{-1} = \frac{R}{2\pi\mu\nu_j}, \qquad R = 2\pi\eta_{\mathrm{air}} + 2\pi d\sqrt{\pi\eta_{\mathrm{air}}\rho_{\mathrm{air}}\nu_j}, \qquad Q_{j,\mathrm{ve}}^{-1} = \frac{4\pi^2\mu EI\delta_{\mathrm{ve}}}{T^2}\,\nu_j^2.$$
In these expressions, $\eta_{\mathrm{air}}$ and $\rho_{\mathrm{air}}$ are, respectively, the air dynamic viscosity coefficient and the air density.
In the rest of the paper, usual values are chosen [START_REF] Paté | Predicting the decay time of solid body electric guitar tones[END_REF]: $\eta_{\mathrm{air}} = 1.8 \times 10^{-5}$ kg·m⁻¹·s⁻¹ and $\rho_{\mathrm{air}} = 1.2$ kg·m⁻³. To complete the model, two parameters remain to be defined: the viscoelastic loss angle $\delta_{\mathrm{ve}}$, and the constant value $Q_{\mathrm{te}}^{-1}$ characterising the thermoelastic behaviour. As shown later in Section 4.2 (see also [START_REF] Valette | Mécanique de la corde vibrante (Mechanics of the vibrating string)[END_REF][START_REF] Paté | Predicting the decay time of solid body electric guitar tones[END_REF]), these values can be fitted from experiments in order to finely model the frequency dependence of a real isolated string.
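The loss model translates directly into per-mode decay rates. The short sketch below evaluates it; d denotes the string diameter, and the default values of δ_ve and Q_te⁻¹ are those reported later in Section 4.2.

```python
import numpy as np

def quality_factors(nu, mu, T, E, I, d,
                    delta_ve=4.5e-3, Qte_inv=2.03e-4,
                    eta_air=1.8e-5, rho_air=1.2):
    """Quality factors Q_j and decay rates sigma_j = pi * nu_j / Q_j from the
    Valette-Cuesta loss model of Eq. (12), for an array of eigenfrequencies nu (Hz)."""
    R = 2 * np.pi * eta_air + 2 * np.pi * d * np.sqrt(np.pi * eta_air * rho_air * nu)
    Q_air_inv = R / (2 * np.pi * mu * nu)
    Q_ve_inv = 4 * np.pi**2 * mu * E * I * delta_ve / T**2 * nu**2
    Q_inv = Q_air_inv + Q_ve_inv + Qte_inv
    sigma = np.pi * nu * Q_inv                    # modal damping factors (1/s)
    return 1.0 / Q_inv, sigma
```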
Damping in the contact
The model presented here may be complemented by nonlinear losses due to the contact, as described e.g. in [START_REF] Hunt | Coecient of restitution interpreted as damping in vibroimpact[END_REF][START_REF] Machado | Compliant contact force models in multibody dynamics: Evolution of the Hertz contact theory[END_REF][START_REF] Bilbao | Numerical modeling of collisions in musical instruments[END_REF][START_REF] Banerjee | Historical origin and recent development on normal directional impact models for rigid body contact simulation: A critical review[END_REF]. To this end, the contact force given by (3) may be augmented according to the Hunt and Crossley model [START_REF] Hunt | Coecient of restitution interpreted as damping in vibroimpact[END_REF]:
$$f = \frac{d\psi}{d\eta} - \frac{\partial u}{\partial t}\,K\beta\,[\eta]_+^{\alpha}, \qquad (13)$$
with β ≥ 0.
Spatial discretisation
To circumvent the difficulty associated with the expression of the contact force in modal coordinates, a spatial grid is introduced, together with a linear relationship between the modal coordinates and the displacement at the points of the grid. The grid is defined as $x_i = i\Delta x$, where $\Delta x = L/N$ is the spatial step and $i \in \{0, ..., N\}$. Since $u(x_0, t) = 0$ and $u(x_N, t) = 0$ $\forall t \in \mathbb{R}^+$, only the values of u on the grid with $i \in \{1, 2, ..., N-1\}$ are examined in the following. In the remainder of the paper we select $N_m = N - 1$, such that the number of interior grid points matches the number of modes in the truncation. Then the modal expansion of u(x, t) can be written at each point $i \in \{1, 2, ..., N-1\}$ of the selected grid as:
$$u(x_i, t) = u_i(t) = \sum_{j=1}^{N-1} q_j(t)\,\phi_j(x_i) = \sum_{j=1}^{N-1} q_j(t)\,\sqrt{\frac{2}{L}}\sin\left(\frac{j\pi i}{N}\right). \qquad (14)$$
In matrix form, these relationships may be written as $\mathbf{u} = \mathbf{S}\mathbf{q}$, where the vectors $\mathbf{u} = [u_1, ..., u_{N-1}]^T$ and $\mathbf{q} = [q_1, ..., q_{N-1}]^T$ have been introduced. The matrix $\mathbf{S}$ has entries $S_{ij} = \phi_j(x_i)$, $\forall (i,j) \in \{1, ..., N-1\}^2$, and its inverse satisfies the following relationship: $\mathbf{S}^{-1} = \frac{L}{N}\,\mathbf{S}^T$.
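A possible construction of the modal-to-spatial transform, together with a numerical check of the stated inverse relation, is sketched below (illustrative code, not the authors' implementation).

```python
import numpy as np

def modal_matrix(L, N):
    """Matrix S with S_ij = phi_j(x_i) on the interior grid x_i = i*L/N,
    i, j = 1..N-1, together with its inverse S^{-1} = (L/N) S^T."""
    idx = np.arange(1, N)                         # interior indices 1..N-1
    S = np.sqrt(2 / L) * np.sin(np.pi * np.outer(idx, idx) / N)
    S_inv = (L / N) * S.T
    return S, S_inv

S, S_inv = modal_matrix(L=1.0, N=16)
print(np.allclose(S @ S_inv, np.eye(15)))         # True: discrete sine orthogonality
```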
Time discretisation
Let $\mathbf{u}^n$ represent an approximation to $\mathbf{u}(t)$ at $t = n\Delta t$, for integer n, where $\Delta t$ is a time step, assumed constant.
Difference operators may be defined as follows:
$$\delta_{t-}u^n = \frac{u^n - u^{n-1}}{\Delta t}, \quad \delta_{t+}u^n = \frac{u^{n+1} - u^n}{\Delta t}, \quad \delta_{t\cdot}u^n = \frac{u^{n+1} - u^{n-1}}{2\Delta t}, \quad \delta_{tt}u^n = \frac{u^{n+1} - 2u^n + u^{n-1}}{\Delta t^2}.$$
The temporal scheme for discretising the equations of motion considers separately the modal linear part and the nonlinear contact force. For the linear part (the left-hand side of Eq. (11)), an exact scheme is proposed in [START_REF] Bilbao | Numerical Sound Synthesis: Finite Difference Schemes and Simulation in Musical Acoustics[END_REF] for a single oscillator equation. This scheme is here generalised to an arbitrary number N − 1 of oscillator equations. This choice is justified by the accurate description of the frequency content and the stability property of this exact scheme, which perfectly recovers the oscillation frequencies irrespective of the time step. For the contact force, in order to circumvent the difficulty linked to the modal couplings in the right-hand side of Eq. (11), the relationship between u and q given in the previous section is used in order to treat the contact in the space domain.
The temporal scheme for the oscillatory part of Eq. ( 11) may be written as:
$$\frac{\mu}{\Delta t^2}\left(\mathbf{q}^{n+1} - \mathbf{C}\mathbf{q}^n + \tilde{\mathbf{C}}\mathbf{q}^{n-1}\right) = \mathbf{0}, \qquad (15)$$
where the right-hand side has been neglected for the moment. $\mathbf{C}$ and $\tilde{\mathbf{C}}$ are diagonal matrices with entries
$$C_{ii} = e^{-\sigma_i\Delta t}\left(e^{\sqrt{\sigma_i^2 - \omega_i^2}\,\Delta t} + e^{-\sqrt{\sigma_i^2 - \omega_i^2}\,\Delta t}\right), \qquad \tilde{C}_{ii} = e^{-2\sigma_i\Delta t}.$$
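The following sketch shows one way of building these matrices, and, anticipating Eq. (16) below, their spatial counterparts. The complex square root is a convenience that covers both the underdamped and overdamped cases; this is illustrative code rather than the reference implementation.

```python
import numpy as np

def scheme_matrices(omega, sigma, dt, S, S_inv):
    """Diagonal matrices C, C~ of the exact oscillator scheme (Eq. (15)) and
    their spatial counterparts D = S C S^{-1}, D~ = S C~ S^{-1} (Eq. (16)).
    omega, sigma: arrays of modal angular frequencies and damping factors."""
    root = np.sqrt((sigma**2 - omega**2).astype(complex))
    C_diag = np.real(np.exp(-sigma * dt) * (np.exp(root * dt) + np.exp(-root * dt)))
    Ct_diag = np.exp(-2 * sigma * dt)
    D = S @ np.diag(C_diag) @ S_inv
    Dt = S @ np.diag(Ct_diag) @ S_inv
    return np.diag(C_diag), np.diag(Ct_diag), D, Dt
```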
If there is no collision, and thus no contact force, so that the right-hand side equals zero, then the scheme is known to be exact [START_REF] Bilbao | Numerical Sound Synthesis: Finite Dierence Schemes and Simulation in Musical Acoustics[END_REF], thus ensuring the most accurate discrete evaluation of the linear part. To determine the contact force, Eq. ( 11) is rewritten for the vector u thanks to the relationship u = Sq:
$$\frac{\mu}{\Delta t^2}\left(\mathbf{u}^{n+1} - \mathbf{D}\mathbf{u}^n + \tilde{\mathbf{D}}\mathbf{u}^{n-1}\right) = \mathbf{f}^n, \qquad (16)$$
with $\mathbf{D} = \mathbf{S}\mathbf{C}\mathbf{S}^{-1}$ and $\tilde{\mathbf{D}} = \mathbf{S}\tilde{\mathbf{C}}\mathbf{S}^{-1}$. Following [START_REF] Bilbao | Numerical modeling of collisions in musical instruments[END_REF], the discrete approximation of the contact force is chosen as:
$$f^n = \frac{\delta_{t-}\psi^{n+\frac{1}{2}}}{\delta_{t\cdot}\eta^n}, \qquad \text{where} \quad \psi^{n+\frac{1}{2}} = \frac{1}{2}\left(\psi^{n+1} + \psi^n\right) \quad \text{and} \quad \psi^n = \psi(\eta^n).$$
The resulting scheme is conservative if there is no loss, and dissipative otherwise (see Section 2.6).
The nonlinear equation to be solved at each time step is thus:
$$\mathbf{r} + \mathbf{b} + m\,\frac{\psi(\mathbf{r} + \mathbf{a}) - \psi(\mathbf{a})}{\mathbf{r}} = \mathbf{0}, \qquad (17)$$
where $\mathbf{r} = \boldsymbol{\eta}^{n+1} - \boldsymbol{\eta}^{n-1}$ is the unknown (with $\boldsymbol{\eta}^n = \mathbf{g} - \mathbf{u}^n$), $\mathbf{a} = \boldsymbol{\eta}^{n-1}$, $m = \frac{\Delta t^2}{\mu}$ and $\mathbf{b} = \mathbf{D}\mathbf{u}^n - (\tilde{\mathbf{D}} + \mathbf{I}_{N-1})\mathbf{u}^{n-1}$, the division by $\mathbf{r}$ being understood componentwise.
The Newton-Raphson algorithm may be used to this end. This equation has a unique solution [START_REF] Bilbao | Numerical modeling of collisions in musical instruments[END_REF]; note, however, that the convergence of the Newton-Raphson algorithm is not guaranteed and depends on the initial guess. Note also that in the specific case of a linear restoring force (i.e. α = 1), an analytical solution is available, as detailed in [START_REF] Bilbao | Numerical modeling of string/barrier collisions[END_REF].
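A minimal sketch of one time step of scheme (16)-(17), with a Newton-Raphson solve, is given below (for β = 0 and α > 1). Since ψ acts pointwise, equation (17) decouples componentwise and the solve can be vectorised over the grid. The zero initial guess and the handling of the r → 0 limit are assumptions of this sketch, not necessarily the choices made in the reference implementations.

```python
import numpy as np

def solve_contact_step(u_prev, u_curr, g, D, Dt, mu, dt, K, alpha,
                       tol=1e-12, max_iter=50):
    """Advance scheme (16) by one step: solve (17) for r = eta^{n+1} - eta^{n-1},
    then return u^{n+1} = u^{n-1} - r."""
    psi = lambda e: K / (alpha + 1.0) * np.maximum(e, 0.0)**(alpha + 1.0)
    dpsi = lambda e: K * np.maximum(e, 0.0)**alpha
    a = g - u_prev                                    # eta^{n-1}
    m = dt**2 / mu
    b = D @ u_curr - (Dt + np.eye(len(u_curr))) @ u_prev
    r = np.zeros_like(u_curr)                         # initial guess (assumption)
    for _ in range(max_iter):
        small = np.abs(r) < 1e-14
        safe_r = np.where(small, 1.0, r)
        # q(r) = (psi(r+a)-psi(a))/r, with limit dpsi(a) as r -> 0
        q = np.where(small, dpsi(a), (psi(r + a) - psi(a)) / safe_r)
        G = r + b + m * q
        # derivative of q(r); the r -> 0 limit is psi''(a)/2 (valid for alpha > 1)
        dq = np.where(small, 0.5 * K * alpha * np.maximum(a, 0.0)**(alpha - 1.0),
                      (dpsi(r + a) * r - (psi(r + a) - psi(a))) / safe_r**2)
        step = G / (1.0 + m * dq)
        r -= step
        if np.max(np.abs(step)) < tol:
            break
    return u_prev - r                                 # u^{n+1}
```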
The additional damping term $\frac{\partial u}{\partial t}K\beta[\eta]_+^{\alpha}$ due to collisions (see Section 2.3.2) may be discretised as $\delta_{t\cdot}u^n\,K\beta[\eta^n]_+^{\alpha}$ [START_REF] Bilbao | Numerical modeling of collisions in musical instruments[END_REF]. Instead of (17), the equation to be solved at each time step is then [START_REF] Bilbao | Numerical modeling of collisions in musical instruments[END_REF]:
$$(\mathbf{I}_{N-1} + \mathbf{L})\,\mathbf{r} + \mathbf{b} + m\,\frac{\psi(\mathbf{r} + \mathbf{a}) - \psi(\mathbf{a})}{\mathbf{r}} = \mathbf{0}, \qquad (18)$$
where $\mathbf{L}$ is a diagonal matrix such that $L_{ii} = \frac{\Delta t}{2\mu}\,K\beta\,[\eta_i^n]_+^{\alpha}$.
Stability analysis
This section is devoted to the stability analysis of the numerical scheme. To this end, it is more convenient to rewrite (16) with an explicit use of the temporal discrete operators.
The equivalent representation may be written as:
$$\mu\left(\check{\mathbf{C}}_1\,\delta_{tt}\mathbf{q}^n + \check{\mathbf{C}}_2\,\mathbf{q}^n + \check{\mathbf{C}}_3\,\delta_{t\cdot}\mathbf{q}^n\right) = \mathbf{F}^n, \qquad (19)$$
with diagonal matrices $\check{\mathbf{C}}_1$, $\check{\mathbf{C}}_2$ and $\check{\mathbf{C}}_3$ having the following entries:
$$\check{C}_{1,ii} = \frac{1 + (1-\gamma_i)\frac{\omega_i^2\Delta t^2}{2}}{1 + (1-\gamma_i)\frac{\omega_i^2\Delta t^2}{2} + \sigma_i^*\Delta t}, \qquad \check{C}_{2,ii} = \frac{\omega_i^2}{1 + (1-\gamma_i)\frac{\omega_i^2\Delta t^2}{2} + \sigma_i^*\Delta t}, \qquad \check{C}_{3,ii} = \frac{2\sigma_i^*}{1 + (1-\gamma_i)\frac{\omega_i^2\Delta t^2}{2} + \sigma_i^*\Delta t}.$$
The coefficients $\gamma_i$ and $\sigma_i^*$ may be written as:
$$\gamma_i = \frac{2}{\omega_i^2\Delta t^2} - \frac{A_i}{1 + e_i - A_i}, \qquad \sigma_i^* = \left(\frac{1}{\Delta t} + \frac{\omega_i^2\Delta t}{2} - \gamma_i\,\frac{\omega_i^2\Delta t}{2}\right)\frac{1 - e_i}{1 + e_i},$$
where
$$A_i = e^{-\sigma_i\Delta t}\left(e^{\sqrt{\sigma_i^2 - \omega_i^2}\,\Delta t} + e^{-\sqrt{\sigma_i^2 - \omega_i^2}\,\Delta t}\right) \qquad \text{and} \qquad e_i = e^{-2\sigma_i\Delta t}. \qquad (20)$$
The equivalent scheme for the displacement u may thus be written as:
$$\mu\left(\check{\mathbf{D}}_1\,\delta_{tt}\mathbf{u}^n + \check{\mathbf{D}}_2\,\mathbf{u}^n + \check{\mathbf{D}}_3\,\delta_{t\cdot}\mathbf{u}^n\right) = \mathbf{f}^n, \qquad (21)$$
where $\check{\mathbf{D}}_1 = \mathbf{S}\check{\mathbf{C}}_1\mathbf{S}^{-1}$, $\check{\mathbf{D}}_2 = \mathbf{S}\check{\mathbf{C}}_2\mathbf{S}^{-1}$ and $\check{\mathbf{D}}_3 = \mathbf{S}\check{\mathbf{C}}_3\mathbf{S}^{-1}$ are symmetric matrices. The force term is expressed as in Section 2.5.
Let us introduce the inner product:
$$\langle\mathbf{u}, \mathbf{v}\rangle = \Delta x\sum_{j=1}^{N-1} u_j v_j,$$
where $\Delta x$ is the spatial step.
Taking the inner product between equation (21) and $\delta_{t\cdot}\mathbf{u}^n$, the following discrete energy balance is obtained:
$$\delta_{t-}H^{n+\frac{1}{2}} = -\mu\left\langle\delta_{t\cdot}\mathbf{u}^n,\ \check{\mathbf{D}}_3\,\delta_{t\cdot}\mathbf{u}^n\right\rangle, \qquad (22)$$
where
$$H^{n+\frac{1}{2}} = \frac{\mu}{2}\left\langle\delta_{t+}\mathbf{u}^n,\ \check{\mathbf{D}}_1\,\delta_{t+}\mathbf{u}^n\right\rangle + \frac{\mu}{2}\left\langle\mathbf{u}^{n+1},\ \check{\mathbf{D}}_2\,\mathbf{u}^n\right\rangle + \left\langle\boldsymbol{\psi}^{n+\frac{1}{2}},\ \mathbf{1}\right\rangle. \qquad (23)$$
Because $\check{\mathbf{D}}_3$ is positive semi-definite (see the proof in Appendix A, Property 3), the scheme is strictly dissipative. It is therefore stable if the energy is positive.
The force potential being positive, and given Properties 1 and 3 demonstrated in Appendix A, the stability condition can be rewritten as:
$$\left\langle\delta_{t+}\mathbf{u}^n,\ \left(\check{\mathbf{D}}_1 - \frac{\Delta t^2}{4}\check{\mathbf{D}}_2\right)\delta_{t+}\mathbf{u}^n\right\rangle \geq 0. \qquad (24)$$
It is therefore sufficient to show that $\left(\check{\mathbf{D}}_1 - \frac{\Delta t^2}{4}\check{\mathbf{D}}_2\right)$ is positive semi-definite to obtain stability, which is true if $\left(\check{\mathbf{C}}_1 - \frac{\Delta t^2}{4}\check{\mathbf{C}}_2\right)$ is positive semi-definite. Consequently, the sufficient condition reads, $\forall i \in \{1, ..., N-1\}$:
$$\gamma_i \leq \frac{1}{2} + \frac{2}{\omega_i^2\Delta t^2}. \qquad (25)$$
After a straightforward manipulation, the condition is easily expressed as:
$$\frac{1 + e_i + A_i}{1 + e_i - A_i} \geq 0, \qquad \forall i \in \{1, ..., N-1\}. \qquad (26)$$
Eq. (26) is satisfied if $1 + e_i \pm A_i > 0$, which is always true (see Property 2 in Appendix A for the proof); hence the scheme is unconditionally stable. The limiting case $\sigma_i = 0$ corresponds to the lossless string. Then $\gamma_i$ reduces to:
$$\gamma_i = \frac{2}{\omega_i^2\Delta t^2} - \frac{\cos(\omega_i\Delta t)}{1 - \cos(\omega_i\Delta t)},$$
and unconditional stability is obtained, as in the lossy case.
Considering contact losses (see Eq. ( 18)) leads to the following discrete energy balance:
$$\delta_{t-}H^{n+\frac{1}{2}} = -\mu\left\langle\delta_{t\cdot}\mathbf{u}^n,\ \check{\mathbf{D}}_3\,\delta_{t\cdot}\mathbf{u}^n\right\rangle - \left\langle\delta_{t\cdot}\mathbf{u}^n,\ \delta_{t\cdot}\mathbf{u}^n\,K\beta[\boldsymbol{\eta}^n]_+^{\alpha}\right\rangle. \qquad (27)$$
Since $\left\langle\delta_{t\cdot}\mathbf{u}^n,\ \delta_{t\cdot}\mathbf{u}^n\,K\beta[\boldsymbol{\eta}^n]_+^{\alpha}\right\rangle \geq 0$, the dissipation in the system is increased, and the stability condition is not affected.
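In practice, the discrete energy of Eq. (23) is a convenient sanity check for an implementation: in the lossless, collision-free case it should remain constant to machine accuracy. A possible monitoring routine is sketched below; D1 and D2 denote the matrices of Eq. (21), and the function names are illustrative.

```python
import numpy as np

def discrete_energy(u_next, u_curr, D1, D2, psi_next, psi_curr, mu, dt, dx):
    """Discrete energy H^{n+1/2} of Eq. (23), with <u, v> = dx * sum(u_j v_j).
    psi_next and psi_curr are the arrays psi(eta^{n+1}) and psi(eta^n)."""
    inner = lambda a, b: dx * np.dot(a, b)
    dudt = (u_next - u_curr) / dt                     # delta_{t+} u^n
    kinetic = 0.5 * mu * inner(dudt, D1 @ dudt)
    potential = 0.5 * mu * inner(u_next, D2 @ u_curr)
    contact = inner(0.5 * (psi_next + psi_curr), np.ones_like(u_curr))
    return kinetic + potential + contact
```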
Second polarisation
(Figure 2: Regularised friction force $f_f$ as a function of the velocity $v_t$; the force saturates at $\pm A$ for $|v_t| > s$.)
In this section, the model is extended to include motion in the second polarisation of the string. The two unknown displacements along (Oz) and (Oy) are respectively denoted u(x, t) and v(x, t). Assuming small displacements, the equations of motion for u and v are assumed uncoupled as long as no contact arises. In particular, no coupling is included at the boundaries. As soon as a contact point is detected for the vertical displacement u, it is assumed that the horizontal displacement v undergoes a friction force $f_f$. The continuous equation for the displacement v may be written as:
$$\mu v_{tt} - T v_{xx} + EI\,v_{xxxx} = \delta_{x_c}(x)\,f_f, \qquad (28)$$
where $\delta_{x_c}$ is a Dirac delta function centered at $x_c$. Since the obstacle is assumed to be located at a point along (Ox) in this study, contact is assumed to arise at the location of the point obstacle $x_c$ only; it could, however, be extended to a larger contact surface. $f_f$ is a simple regularised Coulomb friction law defined as (see Fig. 2):
$$f_f(v_t) = A\begin{cases} 1 & \text{if } v_t < -s \text{ and } u < g \\ -v_t/s & \text{if } |v_t| \leq s \text{ and } u < g \\ -1 & \text{if } v_t > s \text{ and } u < g \\ 0 & \text{if } u \geq g, \end{cases} \qquad (29)$$
where $v_t$ is the velocity of the string, and A (N) and s > 0 (m·s⁻¹) are the two constant parameters that define the friction law. In particular, as shown in Section 5.2.3, these values can be fitted from experiments.
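A direct transcription of this friction law is sketched below; the default values of A and s are the ones empirically determined in Section 5.2.3, and the sign conventions follow Fig. 2 and Eq. (29).

```python
def friction_force(v_t, u_c, g_c, A=0.12, s=1e-5):
    """Regularised Coulomb friction force of Eq. (29), applied only while the
    vertical displacement at the obstacle, u_c, is below the barrier height g_c."""
    if u_c >= g_c:
        return 0.0
    if v_t < -s:
        return A
    if v_t > s:
        return -A
    return -A * v_t / s          # linear regularisation for |v_t| <= s
```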
The expression for the stored energy associated with ( 28) is given by:
$$\tilde{H} = \frac{\mu}{2}\int_0^L (v_t)^2\,dx + \frac{T}{2}\int_0^L (v_x)^2\,dx + \frac{EI}{2}\int_0^L (v_{xx})^2\,dx \geq 0, \qquad (30)$$
and satisfies
$$\frac{d\tilde{H}}{dt} = Q, \qquad \text{where} \quad Q = v_t(x_c)\,f_f(v_t(x_c)). \qquad (31)$$
Applying the same method as for the contact between a string and a bow described in [START_REF] Bilbao | Numerical Sound Synthesis: Finite Difference Schemes and Simulation in Musical Acoustics[END_REF] and using a first-order interpolation operator, one obtains:
$$\mathbf{v}^{n+1} - \mathbf{D}\mathbf{v}^n + \tilde{\mathbf{D}}\mathbf{v}^{n-1} = \frac{\Delta t^2}{\mu}\,\mathbf{J}(x_c)\,f_f(\xi^n), \qquad (32)$$
where $\mathbf{J}(x_c)$ is a vector consisting of zeros except at the obstacle position $x_c$, where its value is $1/\Delta x$. $\xi^n = \delta_{t\cdot}v_c^n$ is the velocity of the string point interacting with the obstacle, which is the solution of the following equation:
$$\left(-\mathbf{D}\mathbf{v}^n + (\tilde{\mathbf{D}} + \mathbf{I}_{N-1})\mathbf{v}^{n-1}\right)_c + 2\Delta t\,\xi^n - \frac{\Delta t^2}{\mu\Delta x}\,f_f(\xi^n) = 0,$$
where the subscript c designates the element corresponding to the obstacle position. This equation depends on $\mathbf{u}^n$ through the force term $f_f$, see Eq. (29). The discrete energy may be written as:
$$\tilde{H}^{n+\frac{1}{2}} = \frac{\mu}{2}\left\langle\delta_{t+}\mathbf{v}^n,\ \check{\mathbf{D}}_1\,\delta_{t+}\mathbf{v}^n\right\rangle + \frac{\mu}{2}\left\langle\mathbf{v}^{n+1},\ \check{\mathbf{D}}_2\,\mathbf{v}^n\right\rangle. \qquad (33)$$
It satisfies:
$$\delta_{t-}\tilde{H}^{n+\frac{1}{2}} = -\mu\left\langle\delta_{t\cdot}\mathbf{v}^n,\ \check{\mathbf{D}}_3\,\delta_{t\cdot}\mathbf{v}^n\right\rangle + \delta_{t\cdot}v_c^n\,f_f(\delta_{t\cdot}v_c^n). \qquad (34)$$
Since $\delta_{t\cdot}v_c^n$ and $f_f(\delta_{t\cdot}v_c^n)$ are of opposite signs, the scheme is again strictly dissipative.
Validation test
Convergence study
The necessity of oversampling to avoid aliasing and obtain trustworthy results is mentioned in [START_REF] Chatziioannou | Energy conserving schemes for the simulation of musical instrument contact dynamics[END_REF], due to the nonlinear contact which generates high frequencies. In this part, a detailed study of convergence is presented in order to fix the sampling rate that will be used for simulations. The particular string under study here is an electric guitar string manufactured by D'Addario, the properties of which are detailed in Table 1. Under a tension of 180.5 N, it has a fundamental frequency of approximately 196 Hz (G3). The initial condition is set to a symmetric triangular shape of height $u_{0,\max}$ = 1.8 mm with a smooth corner obtained by considering the first 50 modes, without initial velocity. The observed signal is taken at 10 mm from the extremity x = L. Simulations are conducted over 3 s, corresponding to the duration used to compare experimental and numerical results in the following section. Convergence tests are thus stricter than for short durations. The relative L² error is defined as:
$$\left(\frac{\sum_{t\in\tau}\left(s_{\mathrm{ref}}(t) - s_{\mathrm{cur}}(t)\right)^2}{\sum_{t\in\tau}\left(s_{\mathrm{ref}}(t)\right)^2}\right)^{\frac{1}{2}}, \qquad (35)$$
where $s_{\mathrm{cur}}$ is the current signal with $F_s$ < 4 MHz and $s_{\mathrm{ref}}$ the reference signal with $F_s$ ≈ 4 MHz. Both are drawn from the string displacement at 10 mm from the boundary x = L. Sums are computed over the set τ of discrete times at which the signal having the lowest sampling rate (about 2 kHz) is evaluated. As is to be expected, the addition of losses leads to faster convergence. Also as expected, the smoother the contact (which corresponds to larger values of α and/or smaller values of K), the faster the convergence, since less high-frequency content is generated by the contact. The slope 2, corresponding to the order of the scheme, is visible after a threshold sampling rate is reached. For the first sampling rates, in most of the cases presented here, a plateau can be observed. This may be due to the fact that, at lower sampling rates, the physical spectral content of the signal is not yet fully represented, so that the expected convergence rate cannot appear. This would explain why the stiffer the contact, the larger the threshold sampling rate, since higher frequencies are generated. In the case N − 1 = 1, the first signals differ from the 4 MHz signal mostly because of a phase difference which increases at each contact, as illustrated in Fig. 4. Table 2 details the maximum penetration of the string into the obstacle for $F_s$ = 2 MHz. The smoother the contact, the larger the penetration. A correlation can be made between convergence behaviour and maximum penetration, since penetration is directly linked to the stiffness of the contact and therefore to the amount of generated high frequencies. When the penetration is greater than the string diameter, the relative error for $F_s$ = 2 MHz is smaller than 1 × 10⁻². For smaller penetrations, however, convergence is significantly slower.
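For reference, the error metric of Eq. (35) can be computed as in the short sketch below. Aligning the two signals by linear interpolation on a common coarse grid tau is an assumption of this sketch; the paper simply evaluates both signals at the discrete times of the coarsest sampling rate.

```python
import numpy as np

def relative_l2_error(s_ref, t_ref, s_cur, t_cur, tau):
    """Relative L2 error of Eq. (35), with both signals brought onto the
    common coarse time grid tau before summing."""
    ref = np.interp(tau, t_ref, s_ref)
    cur = np.interp(tau, t_cur, s_cur)
    return np.sqrt(np.sum((ref - cur)**2) / np.sum(ref**2))
```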
In the context of our study, a rigid contact is intended, so that the values α = 1.5 and K = 10¹³ are selected by empirical comparison to experiments. In the case of a point obstacle, it appears that smoothing the contact makes numerical signals differ more from experimental ones, whereas making it stiffer does not significantly improve the results. In the sequel, simulations are conducted with a sampling rate corresponding to a relative L² error smaller than about 1 × 10⁻¹. A sampling rate of at least $F_s$ = 1 MHz is then necessary. For an extra degree of safety, $F_s$ = 2 MHz is selected in the following.
Table 3: Numerical parameters — $F_s$ = 2 MHz, N = 1002, α = 1.5, K = 10¹³, β = 0.
Comparison to the analytical solution
The outcomes of the numerical scheme (16) are first compared to an analytical solution presented by H. Cabannes [START_REF] Cabannes | Cordes vibrantes avec obstacles (Vibrating strings with obstacles)[END_REF][START_REF] Cabannes | Mouvements presque-périodiques d'une corde vibrante en présence d'un obstacle fixe, rectiligne ou ponctuel (Almost periodic motion of a string vibrating against a straight or point fixed obstacle)[END_REF], for the case of an ideal string with a centered point obstacle in contact with the string at rest. The analytical solution assumes contact with no interpenetration, corresponding to a perfectly rigid point obstacle. Consequently, the numerical parameters for the contact law are selected as α = 1.5 and K = 10¹³. The initial condition in displacement has the shape of a triangle, with an initial velocity of zero everywhere. In order to facilitate comparisons with the analytical solution, results are dimensionless in the present section. To this end, dimensionless values are used in the simulations. N − 1 = 1001 interior points and a dimensionless sampling rate $F_{s,d}$ = 5000 (corresponding to a sampling rate of about 2 MHz, see Section 3.1) are selected. The subscript d stands for "dimensionless". Stiffness and damping parameters are chosen as $B_d$ = 2 × 10⁻⁵ and $\sigma_{d,j}$ = j × 5 × 10⁻³, ∀j ∈ {1, ..., N − 1}.
For a qualitative comparison, Fig. 5 shows successive snapshots of the profile of the ideal string during its vibration. As already noted in [START_REF] Cabannes | Cordes vibrantes avec obstacles (Vibrating strings with obstacles)[END_REF], the contact is persistent and occurs as long as the string is under its rest position. With this view, the numerical solution perfectly coincides with the analytical one.
A more quantitative comparison is shown in Fig. 6, by focusing on the very beginning of the motion. Also, in order to get a better understanding of the individual effects of the damping and stiffness terms in the model, they are incorporated step by step to investigate how the solution departs from that of the ideal string. The time series shown in Fig. 6 represents the output $u(x_m, t)$, where the point $x_m$ is located at 9L/100. Fig. 6(a) and (b), for an ideal string respectively without and with obstacle, show that the numerical solution closely matches the analytical one. In particular, one can observe that the fundamental frequency in the case with obstacle is equal to the one without obstacle multiplied by the ratio 4/3, as theoretically predicted [START_REF] Cabannes | Mouvements presque-périodiques d'une corde vibrante en présence d'un obstacle fixe, rectiligne ou ponctuel (Almost periodic motion of a string vibrating against a straight or point fixed obstacle)[END_REF]. Fig. 6(c) shows the effect of damping on the numerical simulation. Small unevennesses appear, indicated by arrows in Fig. 6(c). They are most probably due to the rounding of travelling corners. Finally, adding a small stiffness value in the string creates
dispersive waves, which in turn produce precursors, since high frequencies arrive before lower ones at the measurement point. The previously mentioned unevennesses also appear in this case, because stiffness causes rounding of the corners. Finally, energy variations of the numerical stiff string are presented in Fig. 7, with and without obstacle. As no damping is included in this numerical simulation, the energy is conserved: the normalised energy variations from one time step to the next are small, of the order of 10⁻¹⁰. One can also observe that during the time interval where the contact occurs (indicated with a bold blue line in Fig. 7b), small oscillations in the contact energy appear, which are due to very small oscillations of the string at the contact point. The behaviour of the string at this point will be further addressed in Section 5.2.2. The consistency of the numerical results with the analytical solution has thus been highlighted, as well as the effects of the string damping and stiffness. Energy considerations have also been presented. In the following, an experimental set up is presented, which will be exploited to compare numerical results against experiments.
Experimental study

Experimental set up

Measurement frame
The string to be considered here is described in Section 3.1. The vibration of this string, isolated from any surrounding structure, is studied on a measurement frame designed to this end [START_REF] Cuesta | Evolution temporelle de la vibration des cordes de clavecin (Temporal evolution of harpsichord strings vibration)[END_REF][START_REF] Paté | Predicting the decay time of solid body electric guitar tones[END_REF] (see Fig. 8, where two configurations are presented: the first with a centered obstacle, the second with an obstacle near a boundary). The string is plucked with a 0.05 mm diameter copper wire that breaks at the initial time [START_REF] Paté | Predicting the decay time of solid body electric guitar tones[END_REF] at the middle of the string. The maximal initial displacement is about $u_{0,\max}$ = 1.5 mm in the rest of the paper. The vertical and horizontal displacements are measured with optical sensors described in [START_REF] Le Carrou | A low-cost high-precision measurement method of string motion[END_REF]. They are located near the string end at x = L, respectively at 1 cm (vertical) and 2 cm (horizontal). In the present study, the obstacle touches the string at rest. The point obstacle is realised with a metal cuboid edge. It is mounted on a vertical displacement system with micron-scale sensitivity.
Contact detection
In order to detect contact between the string and the obstacle, an electrical circuit is installed on the measurement frame (see Fig. 9). The switch links the string and the obstacle, which are both conductive. The voltage at its terminals is measured as an indicator of contact, being null when the string touches the obstacle. In order to avoid electric arcs for small distances between the string and the obstacle, components inside the acquisition card ($R_{NI}$ = 300 kΩ and $C_{NI}$ = 10.4 pF) must be taken into account. A 10 kHz alternating current has been employed and R = 100 kΩ has been chosen.
Identification of linear characteristics
In order to identify the linear parameters of the string, i.e. eigenfrequencies and modal damping ratios, free vibrations of the string in the absence of the obstacle are measured and analysed with the ESPRIT method [START_REF] Roy | Esprit - a subspace rotation approach to estimation of parameters of cisoids in noise[END_REF]. This method is applied to 4 seconds of the signal, starting 0.2 seconds after the string is plucked in order to avoid the transitory regime. Modes are treated one by one, according to the procedure described in [START_REF] Le Carrou | Sympathetic string modes in the concert harp[END_REF][START_REF] Paté | Predicting the decay time of solid body electric guitar tones[END_REF]. The linear characteristics of 36 modes have been recovered with the method, which covers a frequency range up to 7200 Hz. Beyond this frequency, modes are not excited strongly enough in the measured signals and the signal-to-noise ratio becomes too small to enable identification. In order to determine the remaining values, theoretical models are employed. The eigenfrequencies are then given by $\nu_j = j\frac{c_0}{2L}\sqrt{1 + Bj^2}$ (see Section 2.2), where the inharmonicity factor B (see Table 1) is determined by fitting the model to measurements. Damping parameters are obtained from the damping model presented in Section 2.3.1. This representation depends on two parameters, $\delta_{\mathrm{ve}}$ and $Q_{\mathrm{te}}^{-1}$, which are determined from experimental fitting. The selected values are $\delta_{\mathrm{ve}}$ = 4.5 × 10⁻³ and $Q_{\mathrm{te}}^{-1}$ = 2.03 × 10⁻⁴. These parameters will be used in the rest of the paper. Measured values together with uncertainties (obtained over nine measurements, covering repeatability errors and ESPRIT method uncertainties), theoretical model results and the errors between them are shown in Figs. 10 and 11. One can observe that the inharmonicity of the string (and thus its stiffness) is very small. The damping model gives a very accurate representation of the measured losses. Uncertainties on the frequencies are around 0.1 %, and are therefore not visible in Fig. 10. Errors in the frequencies and quality factors are respectively smaller than 0.2 % and 25 %.
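One plausible way of obtaining δ_ve and Q_te⁻¹ from measured quality factors is a linear least-squares fit, as sketched below: after subtracting the (known) air-friction contribution, Eq. (12) is affine in ν² and a constant. The paper does not detail its exact fitting procedure, so this is only an illustrative approach.

```python
import numpy as np

def fit_damping_parameters(nu, Q_meas, mu, T, E, I, d,
                           eta_air=1.8e-5, rho_air=1.2):
    """Least-squares fit of delta_ve and Q_te^{-1} from measured quality
    factors Q_meas at eigenfrequencies nu (Hz), using the model of Eq. (12)."""
    R = 2 * np.pi * eta_air + 2 * np.pi * d * np.sqrt(np.pi * eta_air * rho_air * nu)
    Q_air_inv = R / (2 * np.pi * mu * nu)
    y = 1.0 / Q_meas - Q_air_inv                      # remaining losses
    X = np.column_stack([4 * np.pi**2 * mu * E * I / T**2 * nu**2,
                         np.ones_like(nu)])
    delta_ve, Qte_inv = np.linalg.lstsq(X, y, rcond=None)[0]
    return delta_ve, Qte_inv
```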
Finally, a highly controlled set up has been presented and linear parameters of the string have been accurately determined. Obtained parameters can therefore be employed in the numerical model described in Section 2 and a comparison between numerical and experimental results is possible.
Numerical vs experimental results
In this section, numerical and experimental signals are compared over long durations, in the time and frequency domains. Three cases are considered: the vibrating string without obstacle, or with a point obstacle either centered or near a boundary, the latter constituting a two point bridge.
Selected string and numerical parameters are presented in Tables 1 and 3. In all experimental results presented here, except in Section 5.2.3, the initial condition is located in the (xOz) plane, so that almost no initial energy is communicated to the horizontal polarisation. It has indeed been observed in all measurements that, with this type of initial condition, the horizontal oscillations were negligible. We thus focus on the vertical motion only in these cases. Associated
sounds are available on the companion web-page of the paper1 . They correspond to the displacement along (Oz) at x = 992 mm, resampled at 44.1 kHz.
No obstacle
Fig. 12 shows the comparison between experimental and numerical results, when there is no obstacle, at the location of the optical sensor, i.e. at 1 cm from the edge x = L. The parameters of the numerical simulation are specified in Tables 1 and 3. Dispersion effects are clearly visible in the first periods, where the waveform is close to a rectangular function; losses then make the waveform evolve with the same progression numerically and experimentally. A minor error in the amplitude of the response is noticeable. The spectrograms of the experimental and numerical signals are compared in Fig. 13, underlining the similarity of the frequency content of both time series. Due to the nature of the initial condition (an even function with respect to the centre point), even-numbered modes should not be excited. However, one can observe the trace of these modes in the experimental spectrogram, even though their amplitudes are more than 60 dB below the amplitude of the first mode. This should be attributed to small imperfections in the string or boundary conditions, or to a small deviation of the experimental initial condition from the perfect symmetric triangle. In the simulation, these modes are completely absent. Finally, one can also observe that the damping of the upper modes seems to be slightly underestimated in the numerical simulation, since their energy remains visible approximately 0.1 s longer on the spectrograms.
Centered point obstacle
In this section, the vibration of a string against a centered point obstacle is examined. First, the string is excited in the (xOz) plane. The contact is investigated in detail and the second polarisation is observed. Fig. 14 presents numerical and experimental signals in the case of a centered point obstacle and an initial excitation along (Oz). As in Section 5.1, similarities can be observed, in the global shape of the signal as well as in its detailed behaviour. The ratio between the numerical frequencies without obstacle, $f_1$, and with the centered obstacle, $f_2$, satisfies $\frac{f_1}{f_2} \approx \frac{195.7}{261.3} \approx \frac{3}{4}$, as expected from the theory (see Section 3.2). Fine features of the experimental signal are reproduced numerically, as can be seen in enlarged views of the results. The dynamics including the contact is well reproduced, and the numerical waveform evolves similarly to the experimental one. However, a significant error in amplitude appears, which may be due to uncertainty in the obstacle position and height, non-ideal experimental boundary and initial conditions, or dissipation as contact occurs. It could also be due to an imperfectly rigid obstacle. Note that adding losses in the contact, as described in Section 2.3.2, reduces the amplitude, so that the global shape better fits the experimental one. Nevertheless, this is at the cost of the local waveform shape, as illustrated in Fig. 15. Therefore no contact damping is included in the following (i.e. β = 0). Spectrograms of the experimental and numerical signals are presented in Fig. 16; they once again show strong similarities. Since modes are coupled through the contact, there is no missing mode, contrary to the case without obstacle. A peculiar feature is a spectral resurgence zone around 8 kHz, underlined by a brace in Fig. 16b, which clearly appears on both numerical and experimental spectrograms, showing that energy can be transferred through the contact up to these high frequencies. It is also a signature of the dispersion, since cancelling the stiffness term makes this zone disappear. A second distinctive feature of the spectrograms is the appearance of spectral peaks with larger amplitudes, around 1306 Hz, 2090 Hz, 2874 Hz, 3658 Hz, ..., see Fig. 16b where arrows indicate their presence.
The difference between two successive of these peaks is equal to 784 Hz, indicating that a rule governs their appearance. This value of 784 Hz is related to the ratio 3/4 observed previously between the fundamental frequency of the string (196 Hz) and the fundamental frequency of the oscillations in the case with contact (261 Hz), since 784 = 196 × 4 ≈ 261 × 3. Moreover, the dimensionless value of the period associated with 784 Hz is equal to 0.5. Returning to Fig. 6, one can observe that in Fig. 6(c) and 6(d) an event appears every 0.5 time units; the event can be either a change of sign or the appearance of the unevenness marked by an arrow. This could explain why this frequency is prominent in the spectrograms. Finally, one can also observe that the fundamental frequency of the discussed behaviour is equal to 522 Hz (1306 − 784), which corresponds to the second partial of the signal.
Contact times
In the centered-obstacle configuration and according to the theoretical solution (see Section 3.2), persistent contact arises while the string is below its rest position. This section aims at confronting this result with experimental and numerical ones. To this end, the set up described in Section 4.1.2 is employed. Fig. 17 shows experimental and numerical results. During the first periods, the contact is clear and persistent experimentally. However, as time progresses, it becomes more confused, certainly because of dispersion, which makes the waveform more complex. Numerical results have been obtained with the same string parameters as previously, K = 10¹³, α = 1.5 and $F_s$ = 2 MHz. According to the values of the contact indicator function, which equals 0.5 if contact arises and 0 otherwise, the contact is persistent. In fact, at the contact point the string oscillates; however, this does not make the contact indicator function switch, since the oscillations have an amplitude around 10⁻⁸ m, which remains smaller than the penetration of the string into the obstacle, around 10⁻⁷ m. The string thus oscillates below the rest position and contact detection remains positive. This behaviour is strongly related to the choice of α and K. Higher stiffness parameters affect the contact persistence by decreasing the allowed penetration. For instance, with α = 1.3 and K = 10¹³, the penetration is about 10⁻⁸ m and the first oscillations arise in the neighbourhood of 0, such that the indicator function oscillates at the beginning of each crenel.
3D string motion
So far, the initial condition is given in the (Oz) direction only. An initial condition combining the (Oz) and (Oy) polarisations is now considered in the case of a centered point obstacle. Numerical and experimental results are compared in Fig. 18, where the numerical friction force parameters (see Section 2.7) are empirically determined as s = 10⁻⁵ m·s⁻¹ and A = 0.12 N. Other parameters are unchanged.
The initial condition is similar to that used in the previous section except that approximately the same amplitude (about 1 mm) is imposed along (Oz) and (Oy). The oscillation plane resulting from this initial condition is thus at 45 degrees in (yOz).
Since the observation points are slightly displaced from one polarisation to the other, the displayed displacement v has a larger amplitude. The main observation from Fig. 18 is the very fast decay of the oscillations along (Oy): the motion vanishes after 0.025 s, while the motion along (Oz) continues for several seconds. The second comment is that the numerical scheme reproduces well the details of the decay of the displacement along (Oy), except for small disturbances (of a few µm) when the string touches the obstacle, which also slightly affect the displacement along (Oz) and may be due to asperities on the obstacle which are not included in the model. As expected from the displacement signals, the energy along (Oy) decreases rapidly. The largest energy decrease arises when u is negative, which corresponds to contact times, so that friction is applied on v.
Two point bridge
In this part, the bridge of a tanpura is modelled using a two point bridge constituted by a point obstacle near a boundary, as explained in [START_REF] Valette | Mécanique de la corde vibrante (Mechanics of the vibrating string)[END_REF][START_REF] Valette | The tampura bridge as a precursive wave generator[END_REF]. The distance between the point obstacle and the string boundary x = 0 is chosen as $x_b$ = 6 mm, according to the range of values given in [START_REF] Valette | Mécanique de la corde vibrante (Mechanics of the vibrating string)[END_REF] (5 to 7 mm for a string of length 1 m). Fig. 19 presents numerical and experimental signals in the case of a two point bridge. Again, the global shape of the signal as well as the detailed oscillations are finely reproduced numerically. The effect of dispersion is faithfully described, as can be seen in the extended views at 0 and 0.2 s in particular. A slight amplitude error appears, smaller than in the centered obstacle case, as well as a slight delay (20 degrees after 1.5 s). Possible reasons are the same as in the previous case (see Section 5.2). Besides, the total energy decreases faster than in the centered point obstacle case, which itself decreases faster than when there is no obstacle. This could be explained by an improved transfer of energy to the high-frequency range in the case of the two point bridge, where damping factors are larger.
Let us now focus on the spectrograms (see Fig. 20). Note that only frequencies up to 4.8 kHz are shown, contrary to the case of the centered point obstacle. In the present case, no particular behaviour can be seen at higher frequencies, and the presented spectrograms focus on the zone of interest. As in the centered obstacle case, no missing mode is observed, due to the coupling of modes at the contact point. A descending formant can be observed, which follows a time evolution as described in [START_REF] Valette | Mécanique de la corde vibrante (Mechanics of the vibrating string)[END_REF] (experimental study) and [START_REF] Van Walstijn | Numerical simulation of tanpura string vibrations[END_REF] (numerical study), where a string vibrating against a tanpura bridge is considered. Its evolution is accurately reproduced by the numerical result, although some differences appear after 2 s, when the signal amplitude has become very small. The essential role of dispersion highlighted in [START_REF] Valette | Mécanique de la corde vibrante (Mechanics of the vibrating string)[END_REF][START_REF] Siddiq | A physical model of the nonlinear sitar string[END_REF] is again demonstrated through the spectrogram in Fig. 21, where dispersion is cancelled. Comparing Fig. 20b and 21, one observes a similar ascending behaviour for the lowest frequencies during the first 1 s of the signal. However, the spectrograms substantially differ later in time as well as in the high-frequency range. This shows the essential role of dispersion in the rich and complex behaviour of the signal.
Conclusion
In this paper, the motion of a stiff damped string against an obstacle has been studied numerically and experimentally in both transverse polarisations. The present investigations focus on point obstacles, but the scheme allows the consideration of arbitrarily shaped obstacles along (Oz). It is based on a modal approach, allowing a flexible adjustment of the numerical behaviour in the linear regime (i.e., the eigenfrequencies and frequency-dependent damping coefficients). In particular, measured values can be employed so that very realistic results can be obtained, which constitutes a major advantage of the method. While having an intrinsically modal nature, the scheme operates in the spatial domain. It could therefore be interpreted as a spectral method [START_REF] Trefethen | Spectral Methods in MATLAB[END_REF] combined with a time-stepping method. It is unconditionally stable, so that no bound on the space and time steps is required for stability. Moreover, it is exact when the collision force is not present, contrary to other existing methods such as Hamiltonian methods [START_REF] Falaize | Guaranteed-passive simulation of an electro-mechanical piano: a port-Hamiltonian approach[END_REF][START_REF] Chatziioannou | Energy conserving schemes for the simulation of musical instrument contact dynamics[END_REF] and finite differences [START_REF] Bilbao | Numerical Sound Synthesis: Finite Difference Schemes and Simulation in Musical Acoustics[END_REF]. The necessity of a high sampling rate has been highlighted in order to obtain reliable results for simulations over a long duration. This affects the computation time, which could be improved by defining a variable spatial step, finer around the obstacle, and a variable time step, finer around contact events. Such refinements should however be handled carefully, since a variable space step would change the structure of the matrices involved, and a variable time step should be combined with a sampling rate conversion without introducing additional artefacts.
The relevance of numerical results with regards to experiments has been demonstrated in Section 5.
To this end, a highly controlled experimental set up has been presented, as well as a reliable measurement of the string linear features. A fine comparison between numerical and experimental results has then been carried out over a long duration, with an obstacle either at the middle of the string or near one boundary. In both cases, the comparisons show an almost perfect agreement, without adding losses in the contact law. The results thus demonstrate both the accuracy of the numerical method and its ability to recover the most important physical features of the experiment. To the knowledge of the authors, such a detailed comparison is absent from the literature.
In the present study, no realistic excitation mechanism in relation to the musical gesture is included. The next step may thus be to incorporate the dynamics of the musician's fingers [START_REF] Chadefaux | A model of harp plucking[END_REF]. Moreover, differently shaped obstacles may be considered, including distributed barriers, in order to simulate a wider range of musical instruments. In addition, the coupling between the transverse motions of the string is limited and unilateral; a more complex model may be considered. In order to complete the model, coupling to the structure could also be included, as well as possible sympathetic strings [START_REF] Weisser | Shaping the resonance. sympathetic strings in hindustani classical instruments[END_REF][START_REF] Le Carrou | Modelling of sympathetic string vibrations[END_REF].
Appendix A. Identities and inequalities for stability analysis
In this appendix, useful properties of the matrices involved in the numerical scheme are demonstrated. Three main properties, directly used in the proof of the stability of the scheme, are shown.
The following identities are recalled for an N × N positive semi-definite symmetric matrix $\mathbf{M}$ and vectors $\mathbf{u}^n \in \mathbb{R}^N$:
$$\langle\mathbf{M}\,\delta_{tt}\mathbf{u}^n,\ \delta_{t\cdot}\mathbf{u}^n\rangle = \frac{1}{2}\,\delta_{t-}\langle\delta_{t+}\mathbf{u}^n,\ \mathbf{M}\,\delta_{t+}\mathbf{u}^n\rangle, \qquad \text{(A.1)}$$
$$\langle\mathbf{M}\mathbf{u}^n,\ \delta_{t\cdot}\mathbf{u}^n\rangle = \frac{1}{2}\,\delta_{t-}\langle\mathbf{u}^{n+1},\ \mathbf{M}\mathbf{u}^n\rangle. \qquad \text{(A.2)}$$
These identities are useful in the course of the computations for demonstrating stability; their proof is straightforward. Let us give the first property.
Property 1. Let $\mathbf{M}$ be an N × N positive semi-definite symmetric matrix and $\mathbf{u}^n, \mathbf{u}^{n+1} \in \mathbb{R}^N$. Then one has:
$$\langle\mathbf{u}^{n+1},\ \mathbf{M}\mathbf{u}^n\rangle \geq -\frac{\Delta t^2}{4}\,\langle\delta_{t+}\mathbf{u}^n,\ \mathbf{M}\,\delta_{t+}\mathbf{u}^n\rangle. \qquad \text{(A.3)}$$
Proof. Because $\mathbf{M}$ is symmetric, we have the following equality for any two vectors $\mathbf{u}, \mathbf{v}$ in $\mathbb{R}^N$:
$$\langle\mathbf{u},\ \mathbf{M}\mathbf{v}\rangle = \Delta x\sum_{i,k} m_{ik}\,v_k\,u_i = \Delta x\sum_{i,k} m_{ik}\,u_k\,v_i = \langle\mathbf{v},\ \mathbf{M}\mathbf{u}\rangle.$$
The inequality then results from the following equality:
$$\langle\mathbf{u}^{n+1},\ \mathbf{M}\mathbf{u}^n\rangle = \frac{1}{4}\,\langle\mathbf{u}^{n+1} + \mathbf{u}^n,\ \mathbf{M}(\mathbf{u}^{n+1} + \mathbf{u}^n)\rangle - \frac{\Delta t^2}{4}\,\langle\delta_{t+}\mathbf{u}^n,\ \mathbf{M}\,\delta_{t+}\mathbf{u}^n\rangle.$$
Assuming that $\mathbf{M}$ is positive semi-definite gives the result.
Property 2. Using the notations defined in Section 2.6, $1 + e_i \pm A_i > 0$, $\forall i$.
Proof. Let us first consider the case $0 < \sigma_i < \omega_i$. Introducing $X = \sigma_i\Delta t$, $Y = \omega_i\Delta t$, and $Z = \sqrt{Y^2 - X^2}$, one has:
$$1 + e_i - A_i = 1 + e^{-2X} - 2e^{-X}\cos(Z) > 1 + e^{-2X} - 2e^{-X} = f_1(X).$$
$f_1(X)$ is positive since $f_1'(X) > 0$ for X > 0 and $f_1(0) = 0$; therefore $1 + e_i - A_i > 0$. The same reasoning leads to $1 + e_i + A_i > 0$.
Let us now study the case $0 < \omega_i < \sigma_i$. In this case, defining $Z = \sqrt{X^2 - Y^2}$:
$$1 + e_i - A_i = 1 + e^{-2X} - 2e^{-X}\cosh(Z) = f_2(X, Z).$$
By assumption, $0 < \omega_i < \sigma_i$, so that $0 < Y < X$, and therefore $0 < Z < X$. On the one hand, $f_{2,X} > 2(e^{-X} - e^{-2X}) > 0$ and $f_2(0, Z) = 0$ (since $0 < Z < X$, the limiting case is $X = Z = 0$). On the other hand, $f_{2,Z} = -2e^{-X}\sinh(Z) < 0$ and $f_2(X, X) = 0$. Finally, $1 + e_i - A_i > 0$. Obtaining $1 + e_i + A_i > 0$ is straightforward.
Property 3. $\check{\mathbf{D}}_1$, $\check{\mathbf{D}}_2$ and $\check{\mathbf{D}}_3$ (given in Section 2.6) are symmetric and positive semi-definite.
Proof. The proof is given for $\check{\mathbf{D}}_2$ and $\check{\mathbf{D}}_3$. Given those, the proof for $\check{\mathbf{D}}_1$ is straightforward. Let us first focus on $\check{\mathbf{D}}_2$.
Since $\check{\mathbf{C}}_2$ is diagonal and $\mathbf{S}^{-1} = \Delta x\,\mathbf{S}^T$, the symmetry of $\check{\mathbf{D}}_2$ is obtained by construction of the matrix: $\check{\mathbf{D}}_2 = \mathbf{S}\check{\mathbf{C}}_2\mathbf{S}^{-1} = \Delta x\,\mathbf{S}\check{\mathbf{C}}_2\mathbf{S}^T$, whose coefficients are given by:
$$\check{D}_{2,ij} = \Delta x\sum_{k=1}^{N-1} S_{ik}\,\check{C}_{2,kk}\,S^T_{kj} = \Delta x\sum_{k=1}^{N-1} S_{ik}\,\check{C}_{2,kk}\,S_{jk}, \qquad \text{with } i, j \in \{1, ..., N-1\}.$$
Let us now show that $\check{\mathbf{D}}_2$ is positive semi-definite. To do so, it is sufficient to show that $\check{\mathbf{C}}_2$ is positive semi-definite. Indeed, let $\bar{\mathbf{C}}$ be a square diagonal matrix and $\bar{\mathbf{D}} = \bar{\mathbf{S}}\bar{\mathbf{C}}\bar{\mathbf{S}}^{-1}$, with $\bar{\mathbf{S}} = \sqrt{\Delta x}\,\mathbf{S}$, where $\bar{\mathbf{S}}$ is such that $\bar{\mathbf{S}}^{-1} = \bar{\mathbf{S}}^T$. Consider $\mathbf{q}$ and $\mathbf{u}$ such that $\mathbf{u} = \bar{\mathbf{S}}\mathbf{q}$. Then $\langle\mathbf{q}, \bar{\mathbf{C}}\mathbf{q}\rangle = \Delta x\,\mathbf{q}^T\bar{\mathbf{C}}\mathbf{q} = \Delta x\,\mathbf{q}^T\bar{\mathbf{S}}^T\bar{\mathbf{S}}\,\bar{\mathbf{C}}\,\bar{\mathbf{S}}^T\bar{\mathbf{S}}\,\mathbf{q} = \langle\mathbf{u}, \bar{\mathbf{D}}\mathbf{u}\rangle$. Therefore, if $\bar{\mathbf{C}}$ is positive semi-definite, so is $\bar{\mathbf{D}}$.
If the diagonal coefficients of $\check{\mathbf{C}}_2$ are positive, then the proof is done. Since $\omega_i > 0$ $\forall i$, one has to show that $1 + (1-\gamma_i)\frac{\omega_i^2\Delta t^2}{2} + \sigma_i^*\Delta t > 0$. Developing and rearranging this expression, one obtains:
$$1 + (1-\gamma_i)\frac{\omega_i^2\Delta t^2}{2} + \sigma_i^*\Delta t = \frac{\omega_i^2\Delta t^2}{2}\left(1 + \frac{1 - e_i}{1 + e_i}\right)\frac{1 + e_i}{1 + e_i - A_i}. \qquad \text{(A.4)}$$
(A.4) is positive if $1 + e_i - A_i > 0$. This is satisfied (see Property 2), so that $1 + (1-\gamma_i)\frac{\omega_i^2\Delta t^2}{2} + \sigma_i^*\Delta t > 0$. Finally, $\check{\mathbf{D}}_2$ is positive semi-definite. In the lossless case, which is a limiting case of the lossy one, the demonstration is similar, starting from a reduced expression of the coefficients.
Let us now study Ď3 , the symmetry of which is obtained as previously.
As for $\check{\mathbf{D}}_2$, it is sufficient to show that $\check{\mathbf{C}}_3$ is positive semi-definite. The denominator of $\check{C}_{3,ii}$ is positive, as demonstrated above. Besides, one has:
$$\sigma_i^* = \frac{1 - e_i}{1 + e_i}\,\frac{\omega_i^2\Delta t}{2}\left(1 + \frac{A_i}{1 + e_i - A_i}\right). \qquad \text{(A.5)}$$
As previously, this quantity is positive. Finally, $\check{\mathbf{D}}_3$ is positive semi-definite. In the lossless case, this matrix does not appear in the problem since it is null.
Figure 1: A string of length L vibrating against an obstacle g(x).
Table 1: Physical properties of the string: 0.43 mm, 180.5 N, 1.17 × 10^-3 kg.m^-1, 1.78 × 10^-5.
Figure 3: Relative L^2 error versus F_s, over 3 seconds. Left: α = 1, centre: α = 1.5, right: α = 2. K = 10^7 (blue), K = 10^9 (red), K = 10^11 (black) and K = 10^13 (magenta). (a) N = 2, lossless (solid lines) and lossy (dashed lines) stiff string, B = 1.78 × 10^-5; (b) N = 1002, centered point obstacle, lossy stiff string, B = 1.78 × 10^-5.
Figure 4: Displacement with N - 1 = 1, K = 10^13, α = 1.5. F_s ≈ 8 kHz (dark dash-dot line), 64 kHz (red dashed line) and 4 MHz (blue line).
Figure 5: Snapshots of the motion of a dimensionless ideal string colliding with a point obstacle at centre (in black). Comparison at six different times of the first period between the analytical solution (blue circles) and the numerical one (modal approach, red line). Simulation conducted with F_{s,d} = 5000, K = 10^13, α = 1.5 and β = 0. Presented variables are dimensionless.
4. Experimental study
4.1. Experimental set up
Figure 6: Time signal of the dimensionless string at x_m = 9L/100, comparison of analytical solution for an ideal string (red dashed line), and numerical results (blue line). N = 1002, F_{s,d} = 5000. Variables are dimensionless. (a) ideal string without losses or dispersion, and without obstacle. (b) ideal string with obstacle. (c) losses added in the numerical simulation, with obstacle. (d) dispersive lossless numerical string with obstacle, B_d = 2 × 10^-5.
Figure 7: Energetic behaviour of the numerical ideal lossless vibrating string, F_{s,d} = 5000. Variables are dimensionless. Top: energy of the numerical signal; kinetic energy (red dashed line and circles); potential energy (dark line and diamonds); total energy (blue dashed line). Bottom: relative energy variation (H^{n+1/2} - H^{1/2})/H^{1/2}. (a) No obstacle. (b) Centered point obstacle. The contact energy (magenta line) is also presented. Bold blue lines indicate the time interval during which contact is persistent, resulting in an oscillatory pattern for the contact energy, shown in the upper inset.
Figure 8: Schematic representation of the measurement frame.
Figure 10: Experimental (red crosses) and theoretical (blue circles) eigenfrequencies, ν_m and ν_th respectively, expanded uncertainty (gray lines) and error indicator (orange diamonds) |ν_th - ν_m| / ν_m, with B = 1.78 × 10^-5.
Figure 12: Displacement of the string when vibrating without obstacle, B = 1.78 × 10^-5. Comparison between measurement (blue line) and numerical simulation (red line), vertical displacement at 1 cm near the edge x = L. Expanded uncertainty at 95 % (gray). Bottom shows the temporal decrease of the energy numerically computed.
Figure 14: Displacement of the string when vibrating with a centered point obstacle, B = 1.78 × 10^-5. Comparison between measurement (blue line) and numerical simulation (red line), vertical displacement at 1 cm near the edge x = L. Expanded uncertainty at 95 % (gray). Bottom shows the temporal decrease of the energy numerically computed.
Figure 15: Numerical (red line) and experimental (blue line) displacement signals with a centered point obstacle, with B = 1.78 × 10^-5, including contact losses with β = 500 (top) and β = 1000 (bottom).
Figure 16: Centered obstacle, B = 1.78 × 10^-5. Spectrograms: (a) experimental, (b) numerical.
Figure 17: Top: experimental (blue line) and numerical (red dashed line) string displacement with a centered point obstacle, B = 1.78 × 10^-5. Bottom: experimental tension between the string and the obstacle (blue line) and numerical contact indicator function (red dashed line).
Figure 18: Top and centre: experimental (blue line) and numerical (red line) string displacement along (Oz) (at 1 cm near the edge x = L) and (Oy) (at 2 cm near the edge x = L) with a centered point obstacle and B = 1.78 × 10^-5. Bottom: numerical energy of u (blue line) and v (dark dashed line).
Figure 19: Displacement of the string when vibrating with a two point bridge, B = 1.78 × 10^-5. Comparison between measurement (blue line) and numerical simulation (red line), vertical displacement at 1 cm near the edge x = L. Expanded uncertainty at 95 % (gray). Bottom shows the temporal decrease of the energy numerically computed.
Figure 21: Two point bridge. Numerical spectrogram, without dispersion.
This work was supported by the European Research Council, under grant number StG-2011-279068-NESS. The authors also thank Laurent Quartier for contributing to the realisation of the experimental set up, as well as Benoît Fabre and Patrick Ballard for advice and discussions.
1. ⟨M δ_tt u^n, δ_t· u^n⟩ = (1/2) δ_{t-} ⟨δ_{t+} u^n, M δ_{t+} u^n⟩    (A.1)
2. ⟨M u^n, δ_t· u^n⟩ = (1/2) δ_{t-} ⟨u^{n+1}, M u^n⟩    (A.2)
These identities are useful in the course of the computations for demonstrating stability. The proof is straightforward. Let us give the first property.
Property 1. Let M be a N × N positive semi-definite symmetric matrix and u^n, u^{n+1} ∈ R^N. Then one has:
⟨u^{n+1}, M u^n⟩ ≥ -(Δt^2/4) ⟨δ_{t+} u^n, M δ_{t+} u^n⟩.    (A.3)
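The inequality holds because the difference between the two sides equals (1/4)⟨u^{n+1} + u^n, M(u^{n+1} + u^n)⟩, which is non-negative for a positive semi-definite M. A quick numerical check in Python, with a random M built as A A^T:

import numpy as np

rng = np.random.default_rng(0)
N, dt = 8, 1e-3
A = rng.normal(size=(N, N))
M = A @ A.T                                  # random symmetric positive semi-definite matrix
u_n, u_np1 = rng.normal(size=N), rng.normal(size=N)
dtp_u = (u_np1 - u_n) / dt                   # forward difference delta_t+ u^n
lhs = u_np1 @ M @ u_n
rhs = -0.25 * dt**2 * (dtp_u @ M @ dtp_u)
assert lhs >= rhs                            # inequality (A.3)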
Sounds are available in the companion web-page of the paper hosted by Elsevier as well as at http://www.lam.jussieu.fr/Membres/Issanchou/Sounds_vibrating_string_point_obstacle.html.
| 71,703 | ["969692", "881908", "12002", "3131", "2478"] | ["541887", "227705", "541887", "421305", "135264", "421305", "135264"] |
01475674 | en | ["sdu", "info"] | 2024/03/04 23:41:46 | 2017 | https://hal.science/hal-01475674/file/Article_ChannelMigration_Rongier.pdf |
Guillaume Rongier
Pauline Collon
Philippe Renard
A geostatistical approach to the simulation of stacked channels
Turbiditic channels evolve continuously in relation to erosion-deposition events. They are often gathered into complexes and display various stacking patterns. These patterns have a direct impact on the connectivity of sand-rich deposits. Being able to reproduce them in stochastic simulations is thus of significant importance. We propose a geometrical and descriptive approach to stochastically control the channel stacking patterns. This approach relies on the simulation of an initial channel using a Lindenmayer system. The channel then migrates proportionally to a migration factor through either a forward or a backward migration process. The migration factor is simulated using a sequential Gaussian simulation or a multiple-point simulation. Avulsions are performed using a Lindenmayer system, similarly to the initial channel simulation. This method makes it possible to control the connectivity between the channels by adjusting the geometry of the migrating areas. It gives encouraging results with both forward and backward migration processes, even if some aspects such as data conditioning still need to be explored.
Introduction
Heterogeneities within turbiditic channel deposits can have a dramatic impact on fluid flow and reservoir production [e.g., [START_REF] Gainski | Turbidite reservoir compartmentalization and well targeting with 4D seismic and production data: Schiehallion Field[END_REF]. Mud-rich deposits such as margin drapes or slumps can obstruct fluid circulation and compartmentalize the reservoir depending on the stacking pattern [START_REF] Labourdette | Threedimensional modelling of stacked turbidite channels in West Africa: impact on dynamic reservoir simulations[END_REF], i.e., how channels position themselves in relation to each other. Significant changes in the stacking pattern can be observed even over short distances [START_REF] Mayall | Reservoir Prediction and Development Challenges in Turbidite Slope Channels[END_REF]. This represents a major source of uncertainty regarding the connectivity, and modeling the stacking can help to assess this uncertainty.
This stacking results from two main processes: channel migration and avulsion. Migration occurs either through the gradual erosion and accretion of sediments along the channel margins, called continuous migration [e.g., [START_REF] Abreu | Lateral accretion packages (LAPs): an important reservoir element in deep water sinuous channels[END_REF][START_REF] Arnott | Stratal architecture and origin of lateral accretion deposits (LADs) and conterminuous inner-bank levee deposits in a base-of-slope sinuous channel, lower Isaac Formation (Neoproterozoic), East-Central British Columbia, Canada[END_REF][START_REF] Nakajima | Outer-Bank Bars: A New Intra-Channel Architectural Element within Sinuous Submarine Slope Channels[END_REF], or through the incision and filling of a new channel, sometimes with a significant distance between the channels, called discrete or abrupt migration [e.g., [START_REF] Abreu | Lateral accretion packages (LAPs): an important reservoir element in deep water sinuous channels[END_REF][START_REF] Deptuck | Architecture and evolution of upper fan channel-belts on the Niger Delta slope and in the Arabian Sea[END_REF][START_REF] Maier | Punctuated Deep-Water Channel Migration: High-Resolution Subsurface Data from the Lucia Chica Channel System, Offshore California[END_REF]. Four patterns stand out (figure 1):
• A lateral channel bend migration or swing, which shifts the bend laterally and increases the channel sinuosity [START_REF] Peakall | A Process Model for the Evolution, Morphology, and Architecture of Sinuous Submarine Channels[END_REF][START_REF] Posamentier | Depositional elements associated with a basin floor channel-levee system: case study from the Gulf of Mexico[END_REF]].
• A downsystem channel bend migration or sweep, which shifts the bend downward [START_REF] Peakall | A Process Model for the Evolution, Morphology, and Architecture of Sinuous Submarine Channels[END_REF][START_REF] Posamentier | Depositional elements associated with a basin floor channel-levee system: case study from the Gulf of Mexico[END_REF]].
• A channel bend retro-migration, which decreases the channel sinuosity [START_REF] Nakajima | Outer-Bank Bars: A New Intra-Channel Architectural Element within Sinuous Submarine Slope Channels[END_REF].
• A vertical channel migration or aggradation, which shifts the channel upward [START_REF] Peakall | A Process Model for the Evolution, Morphology, and Architecture of Sinuous Submarine Channels[END_REF].
Avulsion occurs when the density currents exceed the channel capacity to contain them: the flow leaves the channel and forms a new course.
Simulating migration and avulsion is a central research subject to better model the channel stacking. In fluvial systems, the most widespread methods are two-dimensional physical simulations [START_REF] Lopez | Modélisation de réservoirs chenalisés méandriformes : une approche génétique et stochastique[END_REF], Pyrcz et al., 2009]. They link the migration to the asymmetry in the flow field induced by the channel curvature, which is responsible for bank erosion [START_REF] Ikeda | Bend theory of river meanders. Part 1. Linear development[END_REF]. These two-dimensional physical methods have been applied [START_REF] Mchargue | Architecture of turbidite channel systems on the continental slope: Patterns and predictions[END_REF] and adapted [START_REF] Imran | A nonlinear model of flow in meandering submarine and subaerial channels[END_REF] to turbiditic environments. But the physical processes behind submarine channels remain controversial. The main controversy concerns the rotation direction of the secondary flow and its controlling factors [e.g., [START_REF] Corney | The orientation of helical flow in curved channels[END_REF][START_REF] Imran | The orientation of helical flow in curved channels[END_REF][START_REF] Corney | Reply to discussion of Imran et al. on "The orientation of helical flow in curved channels[END_REF], which constrain channel migration. [START_REF] Dorrell | Superelevation and overspill control secondary flow dynamics in submarine channels[END_REF] argue that two-dimensional physical models are not accurate enough to capture the full three-dimensional structure of the flow field. This lack of accuracy was also pointed out in fluvial settings, especially with simplified physical models [START_REF] Camporeale | Hierarchy of models for meandering rivers and related morphodynamic processes[END_REF]. More complex two-dimensional models or three-dimensional models call for more parameters and a bigger computational effort, and their validity remains questioned [e.g., [START_REF] Sumner | Driven around the bend: Spatial evolution and controls on the orientation of helical bend flow in a natural submarine gravity current[END_REF]. Thus, their convenience in a stochastic framework is doubtful.
Another approach proposes to only reproduce some stratigraphic rules, for instance mimicking the migration without actually simulating the physical processes [START_REF] Pyrcz | Stratigraphic rule-based reservoir modeling[END_REF]. [START_REF] Viseur | Simulation stochastique basée-objet de chenaux[END_REF] and [START_REF] Ruiu | Modeling Channel Forms and Related Sedimentary Objects Using a Boundary Representation Based on Non-uniform Rational B-Splines[END_REF] defined migration vectors from a weighted linear combination of vectors for lateral migration, downsystem migration, and bend rotation. [START_REF] Teles | Sur une nouvelle approche de modélisation de la mise en place des sédiments dans une plaine alluviale pour en représenter l'hétérogénéité[END_REF], [START_REF] Labourdette | LOSCS' Lateral Offset Stacked Channel Simulations: Towards geometrical modelling of turbidite elementary channels[END_REF] and [START_REF] Labourdette | Element migration in turbidite systems: Random or systematic depositional processes[END_REF] Map view
went one step further by defining empirical laws controlling the spatial structure of the migration from modern channels or channels interpreted on seismic data. All such methods derive from object-based approaches, which simulate a channel object. On the other hand, cell-based approaches [e.g., Deutsch and Journel, 1992, Mariethoz et al., 2010] paint the channels inside a grid based on a prior model. This prior model describes spatial structures and their relationships. Such methods can simulate almost any structure with few parameters. However, they have difficulties reproducing continuous channelized bodies and are not designed to model channel migration. Here we propose a different approach to channel migration, combining object- and pixel-based approaches as done with other geological structures [e.g., [START_REF] Caumon | Elements for stochastic structural perturbation of stratigraphic models[END_REF][START_REF] Zhang | Stochastic surface modeling of deepwater depositional systems for improved reservoir models[END_REF][START_REF] Rongier | Simulation of 3D karst conduits with an object-distance based method integrating geological knowledge[END_REF].

Figure 1 Example of channel migration patterns interpreted on seismic data from the Benin-major channel-belt, near the Niger Delta (modified from [START_REF] Deptuck | Architecture and evolution of upper fan channel-belts on the Niger Delta slope and in the Arabian Sea[END_REF]). Map view: lateral migration, abrupt migration, downsystem migration; section view: aggradation, with channels and inner levees.
The proposed channel migration method uses a geostatistical simulation to reproduce the spatial structure resulting from the physical processes rather than modeling the physical processes themselves. We stochastically simulate the spatial evolution of a channel from one stage to the next using either a sequential Gaussian simulation (SGS) or a multiple-point simulation (MPS) method (section 2). Such a descriptive approach avoids the use of physical models that can be difficult to parameterize. After describing its capabilities (section 3), the method is applied to a synthetic case of confined turbiditic channels (section 4). This case includes a comparison of the connectivity from migrated channels and from randomly implanted channels. Finally, we discuss those results along with some perspectives for the method (section 5).
Stochastic simulation of channel migration
We divide channel migration into two elements:
• The horizontal component (hereafter referred to as migration), which includes the lateral, downsystem, and retro-migrations.
• The vertical component (hereafter referred to as aggradation), which includes the vertical migration.
[START_REF] Labourdette | LOSCS' Lateral Offset Stacked Channel Simulations: Towards geometrical modelling of turbidite elementary channels[END_REF] proposed to initiate the process from the last channel of a system, so from the youngest channel, and migrate backward. Indeed, this last channel is often observable on seismic data due to its argillaceous fill. Then the migration divides into two processes:
• A forward migration, which is the normal or classical migration. It starts from the oldest channel in geological time, which migrates to obtain the youngest channel.
• A backward migration, which is a reverse migration.
It starts from the youngest channel in geological time, which migrates to obtain the oldest channel.
We propose a process to handle both forward and backward channel migrations.
Channel initiation
The simulation calls for an already existing channel to initiate the evolution process. This initial channel can be interpreted from seismic data with a high enough resolution. Otherwise, it must be simulated. Here we use a formal grammar, the Lindenmayer system (L-system) [START_REF] Lindenmayer | Mathematical models for cellular interactions in development I. Filaments with one-sided inputs[END_REF], for this simulation. The L-system rewrites an initial string using rules, which replace a set of letters by another one. The resulting string is then interpreted into an object. [START_REF] Rongier | Connectivity of channelized sedimentary bodies: analysis and simulation strategies in subsurface modeling[END_REF] defined some rules in a framework able to stochastically simulate a channel from a L-system. These rules tie channel bends together, controlling the bend morphology and the orientation change between each bend. It results in a channel centerline, i.e., a set of locations through which the channel passes. Non-Uniform Rational B-Spline (NURBS) surfaces dress the L-system to obtain the final channel shape [START_REF] Ruiu | Modeling Channel Forms and Related Sedimentary Objects Using a Boundary Representation Based on Non-uniform Rational B-Splines[END_REF].
This method simulates various meandering patterns, from straight to highly sinuous channels. It is suitable for both forward migration, which classically requires starting with a quite straight channel, and backward migration, which requires an initial channel with a high sinuosity.
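The L-system rules and their parameters are given in the references above. As a loose stand-in for experimenting with the migration step (not the actual L-system), a sine-generated curve already produces a sinuous initial centerline with a tunable sinuosity; a hypothetical Python sketch:

import numpy as np

def sine_generated_centerline(n_nodes=400, spacing=10.0, omega=1.2, wavelength=1500.0):
    """Sinuous planar centerline: the local flow direction oscillates with a
    maximum deflection angle omega (radians); the larger omega, the higher the sinuosity."""
    s = np.arange(n_nodes) * spacing                      # curvilinear abscissa
    theta = omega * np.sin(2.0 * np.pi * s / wavelength)  # local direction angle
    x = np.cumsum(spacing * np.cos(theta))
    y = np.cumsum(spacing * np.sin(theta))
    return np.column_stack([x, y])

A low omega mimics the quite straight channel used to start a forward migration, while a high omega mimics the highly sinuous channel required by the backward process.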
Channel migration
Channel migration is deeply linked to the channel curvature. Other elements have to be considered, such as soil properties or flow fluctuations. However, the physical processes behind bend evolution are complex and still not completely understood. Moreover, they differ from turbiditic to fluvial environments. This is why we propose to rely on a more descriptive approach based on geostatistics.
General principle
In physical approaches, a migration factor is computed along the nodes of a channel centerline based on fluid flow equations. Then the nodes are moved based on that factor along the normal to the centerline. We rely on a similar approach based on moving the nodes along the normal to the centerline to migrate a channel (figure 2). Here the Euclidean distance d of displacement for a node is the length of a displacement vector v:
d = ‖v‖    (1)
The displacement vector divides into two components (figure 2):
• A vertical component for the aggradation. Aggradation is simply done by shifting the new channel vertically by an aggradation factor a , which is the same for all the channel nodes.
• A horizontal component defined by a migration factor m computed using a stochastic simulation method.
The stochastic simulation of the migration factor is done either with sequential Gaussian simulation (SGS) or multiple-point simulation (MPS). In both cases, the curvature becomes a secondary variable that influences the structuring of the migration factor. We detail some numerical aspects valid for both the SGS and MPS in the supplementary materials concerning:
• Curvature computation.
• Regridding to preserve a constant distance between the centerline nodes after migration (figure 2).
• Smoothing to eliminate undesired small-scale fluctuations of the migration factor, which can have a significant impact after some migration steps.
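Before detailing the two variants, the displacement step itself can be sketched as follows (hypothetical Python helpers, planar centerline, finite-difference curvature and normals, regridding and NURBS dressing left out):

import numpy as np

def curvature_and_normals(centerline):
    """Signed curvature and unit normals of an (n, 2) centerline by finite differences."""
    dx, dy = np.gradient(centerline[:, 0]), np.gradient(centerline[:, 1])
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    speed = np.maximum(np.hypot(dx, dy), 1e-12)
    curvature = (dx * ddy - dy * ddx) / speed**3
    normals = np.column_stack([-dy, dx]) / speed[:, None]
    return curvature, normals

def migrate(centerline, z, migration_factor, aggradation_factor):
    """Shift each node along its normal by the simulated migration factor and
    shift the whole channel vertically by the (constant) aggradation factor."""
    _, normals = curvature_and_normals(centerline)
    new_xy = centerline + migration_factor[:, None] * normals
    new_z = z + aggradation_factor
    return new_xy, new_z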
Migration through sequential Gaussian simulation
The SGS simulates a migration factor value for each node of the centerline in a sequential way [e.g., [START_REF] Deutsch | GSLIB: Geostatistical Software Library and User's Guide[END_REF]. Here we use an intrinsic collocated cokriging [START_REF] Babak | An intrinsic model of coregionalization that solves variance inflation in collocated cokriging[END_REF] to introduce the curvature as secondary variable:
1. A random path is defined to visit all the centerline nodes.
2. At a given node: a) If some nodes in a given neighborhood already have a value:
i. A kriging system determines the Gaussian conditional cumulative distribution function (ccdf) using the data, i.e., the nodes with a value given as input and the previously simulated nodes within the neighborhood.
ii. A simulated value for the given node is drawn within the ccdf.
b) Otherwise, the simulated value is drawn from an input distribution of migration factor.
3. Return to step 2 until all the nodes of the path have been visited.
The SGS requires the migration factor to be a Gaussian variable.
If not, a normal score transform of the input distribution and of the data is introduced before step 1. A back transform is done at the end of the simulation process. This simulation of the migration calls at least for four parameters (figure 3, see the supplementary materials for more details):
• Two distributions, one for the aggradation factor a and one for the migration factor m . They control the distance between two successive channels.
• A variogram range r. It controls the extension of the migrating area along a bend. The other variogram parameters get default values.
• A curvature weight γ_c. It represents the correlation between the primary variable, i.e., the migration factor, and the secondary variable, i.e., the curvature. When this weight is positive, the channel tends to migrate; when it is negative, the channel tends to retro-migrate. Thus, simply by changing the sign of the curvature weight, the same workflow achieves both forward and backward migration processes.
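A much simplified stand-in for this step is sketched below (hypothetical helpers; a plain sequential Gaussian simulation of a residual combined with the normal-scored curvature through the curvature weight, rather than the intrinsic collocated cokriging itself):

import numpy as np
from scipy.stats import norm, rankdata

def sgs_residual(n, range_, rng, n_neigh=8):
    """Unconditional SGS of a standard normal variable along a line of n
    regularly spaced nodes, with simple kriging and an exponential covariance."""
    cov = lambda h: np.exp(-3.0 * np.abs(h) / range_)   # practical range = range_
    values = np.full(n, np.nan)
    for idx in rng.permutation(n):
        known = np.where(~np.isnan(values))[0]
        neigh = known[np.argsort(np.abs(known - idx))][:n_neigh]
        if neigh.size == 0:
            mean, var = 0.0, 1.0
        else:
            C = cov(neigh[:, None] - neigh[None, :]) + 1e-10 * np.eye(neigh.size)
            c0 = cov(neigh - idx)
            w = np.linalg.solve(C, c0)
            mean, var = w @ values[neigh], max(1.0 - w @ c0, 1e-10)
        values[idx] = rng.normal(mean, np.sqrt(var))
    return values

def simulate_migration_factor(curvature, range_, curvature_weight, m_mean, m_std, rng):
    """Gaussian migration factor correlated with the curvature, rescaled to the target distribution."""
    n = curvature.size
    c_ns = norm.ppf((rankdata(curvature) - 0.5) / n)    # normal-score transform of the curvature
    gauss = (curvature_weight * c_ns
             + np.sqrt(1.0 - curvature_weight**2) * sgs_residual(n, range_, rng))
    return m_mean + m_std * gauss

With this construction, a negative curvature weight reproduces the retro-migration behaviour described above.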
Migration through multiple-point simulation
Simulation methods such as the SGS rely on a histogram and a variogram inferred from the data. Thus, they only catch the one-and two-point statistics and miss all the higher-order statistics. But higher-order statistics are difficult if not impossible to infer from data. Multiple-point simulation [START_REF] Guardiano | Multivariate Geostatistics: Beyond Bivariate Moments[END_REF] attempts to overcome such limitation by relying on an external representation of the structures of interest, the training image. Here the training image is a set of migrating channels, with the aggradation factor, migration factor and curvature values all known. Using MPS instead of SGS could lead to more realistic migrations by using real channels as training set.
The whole training set is not necessarily used to simulate the migration of a channel. The process relies on a training model, which can be (figure 4):
• The entire training set. In this case, each simulated migration step is influenced by all the migration steps within the training set.
• A single migration step within the training set:
-Drawn randomly among all the migration steps of the training set.
-That follows the migration order of the training set. In that case, each simulated migration step corresponds to a particular migration step within the training set. With that option, the number of migration steps in the training set limits the number of simulated migration steps.
The aggradation values are randomly drawn from the training set and attributed to all the nodes of the channel to migrate.
The migration factor is simulated using the Direct Sampling method (DS) [START_REF] Mariethoz | The Direct Sampling method to perform multiple-point geostatistical simulations[END_REF]:
1. A random path is defined to visit all the centerline nodes.
2. At a given node:
a) The data event N_x is built from the closest nodes that already have a known migration factor value.
b) The training model is scanned along a random path. At a given node of the training model:
A. Two distances d_{D,P} are computed between the current pattern N_y and the data event N_x, one for the migration factor and one for the curvature:
d_{D,P}(N_x, N_y) = (1/n) Σ_{i=1}^{n} |Z(x_i) - Z(y_i)| / (max_{y∈TM}(Z(y)) - min_{y∈TM}(Z(y)))    (2)
B. If these distances are the lowest found so far, the value at that node of the training model is saved.
C. If the distances are both lower than given thresholds, the scan stops.
c) The saved value becomes the simulated value for this node.
3. Return to step 2 until all the nodes of the path have been visited.
This method has the advantage of easily handling continuous properties and secondary data, here the curvature. The curvature ensures the link between the spatial variations of the migration factor in the training model and in the simulation.
Besides the training model, this simulation of the migration calls for the classical DS parameters (see the supplementary materials for more details):
• The maximal number of nodes in a data event.
• The maximal portion of the training model to scan.
• A threshold for the migration factor.
• A threshold for the curvature.
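The scan can be sketched as follows (a simplified one-dimensional illustration with hypothetical names, not the reference implementation): the migration factor is simulated node by node by searching a training line for a pattern compatible with both the already simulated neighbours and the collocated curvature.

import numpy as np

def ds_migration_factor(sim_curv, ti_mig, ti_curv, rng,
                        n_neigh=4, scan_frac=0.75, t_mig=0.01, t_curv=0.01):
    """Direct-Sampling-like simulation of the migration factor along a centerline,
    conditioned to the collocated curvature, from one training line."""
    n, m = sim_curv.size, ti_mig.size
    norm_mig = ti_mig.max() - ti_mig.min()
    norm_mig = norm_mig if norm_mig > 0 else 1.0
    norm_curv = max(ti_curv.max(), sim_curv.max()) - min(ti_curv.min(), sim_curv.min())
    norm_curv = norm_curv if norm_curv > 0 else 1.0
    sim_mig = np.full(n, np.nan)
    for idx in rng.permutation(n):
        known = np.where(~np.isnan(sim_mig))[0]
        neigh = known[np.argsort(np.abs(known - idx))][:n_neigh]
        lags = neigh - idx                                  # data event geometry
        best_val, best_dist = ti_mig[rng.integers(m)], np.inf
        for j in rng.permutation(m)[:int(scan_frac * m)]:
            tj = j + lags
            if lags.size and (tj.min() < 0 or tj.max() >= m):
                continue                                    # pattern does not fit in the training line
            d_mig = np.mean(np.abs(ti_mig[tj] - sim_mig[neigh])) / norm_mig if lags.size else 0.0
            d_curv = abs(ti_curv[j] - sim_curv[idx]) / norm_curv
            if max(d_mig, d_curv) < best_dist:              # keep the best pattern found so far
                best_dist, best_val = max(d_mig, d_curv), ti_mig[j]
            if d_mig <= t_mig and d_curv <= t_curv:
                break                                       # both thresholds met: stop the scan
        sim_mig[idx] = best_val
    return sim_mig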
Neck cut-off determination
As the channel sinuosity increases, the two extremities of a bend come closer to one another until the flow bypasses the bend. This is a neck cut-off, leading to the abandonment of the bypassed bend.
As done in physical simulations [e.g., [START_REF] Howard | Modeling channel migration and floodplain sedimentation in meandering streams[END_REF][START_REF] Camporeale | On the long-term behavior of meandering rivers[END_REF][START_REF] Schwenk | The life of a meander bend: connecting shape and dynamics via analysis of a numerical model[END_REF], neck cut-offs are simply identified when two non-successive nodes of the centerline are closer than a given threshold. The lowest threshold is the channel width, as the margins of the bend come in contact. However, this threshold is quite restrictive and not so realistic [START_REF] Camporeale | On the long-term behavior of meandering rivers[END_REF]. Here the threshold is set to 1.2 times the maximal channel width.
The search for cut-off starts upstream and continues to the most downstream part of the channel. The distance between a given node and another non-successive node of the centerline is compared with the threshold. When the distance is lower, these two nodes and all the nodes in-between are suppressed. The cutting path is then symbolized by two nodes. A new node is added along that path [START_REF] Schwenk | The life of a meander bend: connecting shape and dynamics via analysis of a numerical model[END_REF], using a cubic spline interpolation. This method of neck cut-off determination is simple but rather time-consuming. More efficient methods exist to reduce the computation time [e.g., [START_REF] Camporeale | On the long-term behavior of meandering rivers[END_REF][START_REF] Schwenk | The life of a meander bend: connecting shape and dynamics via analysis of a numerical model[END_REF].
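A simplified version of this search (hypothetical helper, planar centerline, the new node placed halfway instead of being interpolated by a cubic spline, successive nodes excluded by a fixed offset):

import numpy as np

def apply_neck_cutoffs(centerline, max_width, factor=1.2, skip=10):
    """Scan the centerline upstream to downstream and remove the bend between
    two non-successive nodes that come closer than factor * max_width."""
    threshold = factor * max_width
    pts = [np.asarray(p, dtype=float) for p in centerline]
    i = 0
    while i < len(pts) - skip:
        downstream = np.array(pts[i + skip:])
        d = np.linalg.norm(downstream - pts[i], axis=1)
        hits = np.where(d < threshold)[0]
        if hits.size:
            j = i + skip + hits[0]                  # first downstream contact
            new_node = 0.5 * (pts[i] + pts[j])      # stand-in for the spline-interpolated node
            pts = pts[:i + 1] + [new_node] + pts[j + 1:]
        i += 1
    return np.array(pts)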
For now only the forward migration process handles the formation of neck cut-offs. Indeed, the cut-offs appear naturally as the sinuosity increases. In the backward process, the sinuosity decreases: introducing neck cut-offs calls for a different method, which is a perspective of this work.
Avulsion
Avulsion is a key event widely observed in both fluvial and turbiditic systems. When an avulsion occurs, the channel is abruptly abandoned at a given location (figure 6). Upstream, the flow remains in the old channel, whereas downstream a new channel is formed. However, its triggering conditions remain poorly understood due to the complexity of this process. Avulsion is often statistically handled in simulation methods: a probability of avulsion controls the development of a new channel. This process can be influenced by the curvature, as a high curvature tends to favor an avulsion.
The approach for global avulsion is similar to that defined by [START_REF] Pyrcz | ALLUVSIM: A program for eventbased stochastic modeling of fluvial depositional systems[END_REF]. The avulsion starts by computing the sum of the curvatures over all channel sections. A threshold is randomly drawn between zero and the sum of curvatures. The channel is then scanned from its most upstream part to the downstream part. At each centerline node, the curvature is subtracted from the threshold. A section initiates an avulsion or not depending on two factors:
• An input probability of avulsion.
• The random curvature threshold, which should be lower than the section curvature to trigger an avulsion.
Thus, the avulsion initiation at a given section is a probabilistic choice influenced by the curvature at the section location.
Then, the upstream part of the channel is isolated. It becomes the initial string to simulate the new post-avulsion channel with a L-system (figure 6). This channel is based on the same parameters as the initial channel, but different parameter values may be used. A repulsion constraint [START_REF] Rongier | Connectivity of channelized sedimentary bodies: analysis and simulation strategies in subsurface modeling[END_REF] with the pre-avulsion channel can be set to avoid intersections between the two channels.
In the DS-based migration, the neighboring configuration contains the four nodes with a known value that are the closest to the node to simulate. When the distance d_{D,P} between the data event and a pattern is lower than a given threshold d_t, the process stops and the node to simulate gets the value at the same location within the pattern.
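One possible reading of the curvature-weighted avulsion initiation described above, sketched with hypothetical names:

import numpy as np

def draw_avulsion_section(curvature, p_avulsion, rng):
    """Return the index of the centerline section where an avulsion starts, or None."""
    curv = np.abs(np.asarray(curvature, dtype=float))
    remaining = rng.uniform(0.0, curv.sum())       # random curvature threshold
    for i, c in enumerate(curv):                   # scan upstream to downstream
        if remaining < c and rng.random() < p_avulsion:
            return i                               # avulsion initiated at this section
        remaining -= c
    return None

The nodes upstream of the returned index are then kept as the initial string from which the post-avulsion channel is regrown.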
Abilities of the method
The method was implemented in C++ in the Gocad plug-in ConnectO. The channel envelopes based on NURBS were implemented by Jérémy Ruiu in the Gocad plug-in GoNURBS [START_REF] Ruiu | Modeling Channel Forms and Related Sedimentary Objects Using a Boundary Representation Based on Non-uniform Rational B-Splines[END_REF].
The method was used to simulate two realizations: one following a forward migration process and one following a backward migration process (figure 7). Both processes are able to reproduce various migration patterns, from lateral to downsystem migration, and even areas of retro-migration (figure 8). Some bends also evolve to complex bends constituted by several bends: this leads to the formation of new meanders. These synthetic cases have been developed without any conditioning data. The range is chosen similar to the bend length. The other variogram parameters are those predefined (see the supplementary materials). The curvature weight is kept high, giving a dominant lateral migration. The forward process could keep migrating over more steps, with neck cut-offs keeping the channels within a restrained area. The backward process does not migrate much after a few steps, when the channel starts to miss significant bends. An avulsion and sometimes an abrupt migration can re-establish some migration.
Abrupt migrations are handled by introducing a second set of migration parameters. The channel centerline is scanned upstream to downstream. A probability of abrupt migration defines if an abrupt migration occurs. The appearance of an abrupt migration is also weighted by the channel curvature. When an abrupt migration occurs, an abrupt migration length is drawn from an input distribution. All the nodes along the drawn length migrate following the second set of migration parameters. Such abrupt migration process can introduce a spatial discontinuity from the previous channel (figure 8).
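One reading of this process, sketched with hypothetical names: scanning downstream, each node may start an abrupt reach with a probability weighted by its relative curvature, and the nodes along a drawn length are flagged to use the second parameter set.

import numpy as np

def abrupt_migration_mask(curvature, p_abrupt, draw_length, rng):
    """Boolean mask of the nodes that migrate with the abrupt parameter set.
    draw_length is a callable returning a reach length in nodes,
    e.g. lambda r: int(r.integers(5, 20))."""
    curv = np.abs(np.asarray(curvature, dtype=float))
    weight = curv / curv.max() if curv.max() > 0 else np.ones_like(curv)
    mask = np.zeros(curv.size, dtype=bool)
    i = 0
    while i < curv.size:
        if rng.random() < p_abrupt * weight[i]:
            length = draw_length(rng)
            mask[i:i + length] = True              # these nodes use the second parameter set
            i += length
        else:
            i += 1
    return mask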
Avulsion momentarily stops the migration process, which restarts from the new channel. The continuity between the part upstream of the avulsion location and the newly simulated channel is well preserved (figure 8). The use of the curvature to weight the abrupt migration and avulsion processes tends to decrease their occurrence in the backward process. This is due to the sinuosity decrease induced by such a process. In this case, higher probabilities are used.
Neck cut-offs naturally appear during the forward process as the sinuosity increases (figure 8, forward migration). The proposed backward process is unable to generate cut-offs. As the migration advances, the channel just gets straighter. It does not evolve to a complete straight line, but continuing the process does not lead to an increase of the sinuosity: the channel remains in a steady-state.
To test the method with MPS, a training set was simulated with a SGS-based process (figure 9). This training set has 9 migrating steps. Lateral migration dominates the system, and several abrupt migrations perturb the channel stacking. Each migration step simulates the migration factor from the corresponding step in the training set, and not from the entire training set. The parameters for the MPS favor quality over speed, with the two thresholds at 0.01 and maximal scanned fraction of the training model of 0.75. But the simulated migration factors are noisy, because the process can not always find a pattern that meets the thresholds. We added two smoothing iterations (see the supplementary materials) at the end of each migration step to limit the noise.
In the end, the lateral migration is still dominant in the simulations. Abrupt migrations are also reproduced by the simulation process. But they tend to be less frequent. They also tend to be smaller, both in length and in migration factor, than in the training set. This comes from both the inability to find the right pattern in the training set and from the smoothing.
Application to a synthetic case
This section aims at highlighting the impact of the migration process on the simulated channel connectivity. To do so, we rely on a synthetic case study including realizations with different stacking patterns.
Case study
The case study is inspired by turbiditic systems and their nested channelized bodies [e.g., [START_REF] Abreu | Lateral accretion packages (LAPs): an important reservoir element in deep water sinuous channels[END_REF][START_REF] Mayall | Turbidite channel reservoirs -Key elements in facies prediction and effective development[END_REF][START_REF] Janocko | The diversity of deepwater sinuous channel belts and slope valley-fill complexes[END_REF]. In such settings, channels migrate within and gradually fill a master channel -a large incision that confines the channels:
• Lateral migration dominates the first phase of the filling.
The channels migrate within the whole master channel width, with a low aggradation and some abrupt lateral migrations. Sand-rich channel deposits occupy the entire bottom of the master channel.
• Aggradation dominates the second phase of the filling. The lateral migration is less significant, and no abrupt migration arises. Sand-rich deposits occupy a limited area within the top of the master channel. The rest of the master channel is filled with inter-channel deposits, in particular inner levees whose development induces the limited lateral migration.
A hexahedral grid aligned along its margins represents the master channel (figure 10). Three sets of 100 realizations are simulated within this grid, with each realization containing 40 channels (see the supplementary materials for the input parameters).
The first two sets rely on a traditional object-based procedure: the channels are randomly placed inside the grid. Here, each channel is simulated using a L-system, similarly to the initial channel for the migration. L-systems condition data thanks to attractive and repulsive constraints [START_REF] Rongier | Connectivity of channelized sedimentary bodies: analysis and simulation strategies in subsurface modeling[END_REF]: in the proposed application, the master channel margins repulse the channels to keep them confined. The first set (figure 11,a) is limited to this setting: its channels are free to occupy the whole grid without any constraint on their relative location. These channels display a disorganized stacking. The second set (figure 11,b) further uses the L-system ability to condition data to reproduce the two-phase evolution of the channels. The random channel placement and the channel development are both influenced by a sand probability cube that defines the sand-rich deposit distribution inside the grid (figure 10). It influences the relative positions of the channels without directly controlling the channel relationships. These channels display a conditioned disorganized stacking.
The last set (figure 11,c) also attempts to reproduce the two-phase channel evolution, but without using the probability cube. Instead, it directly simulates this evolution with a forward SGS-based migration. The first 27 migration steps simulate a high lateral migration, some abrupt migrations and little aggradation. This first phase is initiated with a channel simulated with a L-system, whose initial position is randomly drawn at a fixed vertical coordinate along the bottom of the grid. The next 12 steps simulate a small lateral migration with a significant aggradation. This second phase is initiated with the last channel of the first phase. If a channel node should migrate outside the master channel, its migration factor value is decreased so that the channel remains within the master channel, along its margin. In this set, the migration dictates the channel relationships. These channels have an organized stacking.

Table 1 Set of indicators and associated weights used for the case study. Indicator descriptions are in [START_REF] Rongier | Comparing connected structures in ensemble of random fields[END_REF]. Three other indicators exist but are non-discriminant in this case, so not used: the facies adjacency proportions, because the realizations only contain two facies; the unit connected component proportion, because the rasterized objects do not lead to any connected component of one cell; the traversing connected component proportion, because all the channel objects go through the entire master channel and are all traversing.
Global indicators: facies proportion (weight 1), facies connection probability (1), connected component density (1).
Shape indicators: number of connected component cells (1), box ratio (1), faces/cells ratio (1), sphericity (1).
Skeleton indicators: node degree proportions (1), inverse branch tortuosity (1).
Connectivity analysis principle
The connectivity analysis helps to compare realizations by focusing on the connectivity of the sedimentary deposits [START_REF] Rongier | Comparing connected structures in ensemble of random fields[END_REF]. It relies on indicators based on the connected components of the different deposit types and their curveskeletons (table 1). These indicators give some information about the proportion of deposits, their connections or their shape. Here we only consider the channel deposits in the analysis, and not the inter-channel deposits within the master channel.
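For instance, the facies connection probability used below can be computed from the labelled connected components of the channel facies; a minimal sketch, assuming the usual definition (the probability that two cells drawn at random in the facies belong to the same component) and face connectivity:

import numpy as np
from scipy import ndimage

def facies_connection_probability(facies, code=1):
    """Connection probability of one facies in a (possibly 3D) cell array."""
    mask = (facies == code)
    labels, n_comp = ndimage.label(mask)           # face-connected components
    if n_comp == 0:
        return 0.0
    sizes = np.bincount(labels.ravel())[1:]        # number of cells per component
    return float(np.sum(sizes.astype(float) ** 2)) / float(mask.sum()) ** 2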
Dissimilarity values between the realizations facilitate the analysis. The dissimilarities compare and combine the indicators by means of a heterogeneous Euclidean/Jensen-Shannon metric. To analyze them, [START_REF] Rongier | Comparing connected structures in ensemble of random fields[END_REF] proposed to use the Scaling by MAjorizing a COmplicated Function (SMACOF) [De [START_REF] Leeuw | Multidimensional scaling using majorization: SMACOF in R[END_REF], a multidimensional scaling (MDS) method. The purpose is to represent the realizations as points, so that the distances between the points are as close as possible to the dissimilarities between the realizations. Because this representation is only approximate, the MDS may paint an erroneous picture of the dissimilarities. The Shepard diagram and the scree plot help to assess the dissimilarity reproduction: the lower the stress on the scree plot and the higher the linear regression coefficient on the Shepard diagram, the better the representation is.
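The projection itself can be reproduced with standard tools; for example, scikit-learn's MDS relies on SMACOF and accepts a precomputed dissimilarity matrix, returning both the point coordinates and the stress used in the scree plot. A brief sketch:

import numpy as np
from sklearn.manifold import MDS

def project_realizations(dissimilarities, n_dims=2, seed=0):
    """Point representation of the realizations from their pairwise dissimilarities."""
    mds = MDS(n_components=n_dims, dissimilarity="precomputed", random_state=seed)
    coords = mds.fit_transform(np.asarray(dissimilarities))
    return coords, mds.stress_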
Indicator results
The indicators and dissimilarities help to objectively analyze and compare the difference in connectivity between the three sets: organized stacking, conditioned disorganized stacking and disorganized stacking.
Global analysis on the dissimilarity values
The multidimensional scaling plots the dissimilarities in a twodimensional representation (figure 12). The Shepard diagram and the scree plot show that two-dimensions are sufficient to represent the dissimilarities without significant bias. Three dimensions would have been a bit better, but more difficult to analyze.
The dissimilarities clearly divide the realizations in two groups. The first group contains all the 100 organized stacking realizations, 47 conditioned disorganized stacking realizations and 58 disorganized realizations. The realizations of the different sets do not mix much, with three sub-groups, one per realization set. The conditioned disorganized stacking realizations are closer to the organized stacking realizations than the disorganized stacking realizations. The second group contains 53 conditioned disorganized stacking realizations and 42 disorganized stacking realizations. Compared to the first group, the realizations are a bit more mixed, with a significant variability between the realizations.
Visually, the difference between the realizations of the different sets is quite clear (figure 13). However, looking at realizations from the same set but in different groups does not show any significant difference.
Detailed analysis on the indicator values
Examining the indicators explains the separation into two groups (figure 14). The realizations of the first group all have a facies connection probability of one. These realizations have channel deposits that form a single connected component. The second group contains all the realizations with more than one connected component. This highlights the continuity of the migration process: having a complete non-connection between two successive channels requires an avulsion. Migration makes it easier to control the channel connectivity.
Within the first group, the disorganized stacking realizations are clearly different from the other realizations. This appears on the facies proportion and on the average number of component cells. With the same number of channels, the connected components of these realizations are larger than those of the other sets due to the disorganized stacking. On the other hand, the conditioned disorganized stacking and organized stacking realizations have similar facies proportions and average numbers of component cells. Their difference appears on the other indicators, such as the average faces/cells ratio or the average sphericity: even if the channels of the two sets occupy similar volumes within the grid, their shapes are different. The low faces/cells ratio of the organized stacking realizations highlights their structure: the channels are significantly stacked over long distances, which decreases more the number of faces of the components than their number of cells. The average sphericity of these realizations is higher than that of the conditioned disorganized realizations. This comes from their respect of the channel evolution: they occupy the whole width of the master channel bottom, and vertically they evolve to the top of the master channel. This also comes from the management of the channel margins: the migration is simply blocked by the channel margins, which is less constraining than the margin repulsion, especially at the bottom of the grid.
The difference between the realization sets within the first group is also visible on the skeletons (figure 15). The disorganized stacking realizations have higher proportions for the node degrees larger than 3 compared to the realizations of the other sets. This highlights channels that locally cross each other but are globally disconnected. This tends to generate many small branches all along the skeleton, with many loops (figures 16 and 13). The difference between the conditioned disorganized stacking and the organized stacking realizations of the first group is less significant than on the other indicators. However, the conditioned disorganized stacking realizations have higher proportions for the node degrees larger than 3. Again, this highlights the tendency of their channels to cross each other instead of stacking on each other (figure 16). This is visually striking on the skeletons (figure 13): the conditioned disorganized stacking realizations have many small branches forming loops, similarly to the disorganized stacking realizations. The organized stacking realizations have fewer small branches. The small branches also tend to be straight, with an inverse tortuosity close to one. From this perspective, the evolution from the disorganized stacking to the conditioned disorganized stacking and the organized stacking is clear on the average inverse tortuosity: as the stacking increases, the inverse tortuosity decreases due to less straight branches within small loops.
Thus, the difference in stacking directly impacts the shape of the connected components and their connectivity. Adding a sand probability cube helps to control the connectivity between the channel deposits. But the resulting channels do not stack as clearly as with the migration process, which prevents nonconnections if required.
Discussion
The previous applications highlight the relevance of the migration approach. The following section discusses some aspects of the method.
About the migration pattern simulation
As defined, the process based on SGS leads to a dominant lateral migration through the influence of the curvature. Using a low curvature weight leads to the random emergence of other patterns. Some asymmetric bends can also randomly appear, with a random orientation of their asymmetry. It is not possible to choose another dominant migration pattern, such as a downsystem migration. This is not an issue for turbiditic channels, which tend to have little downsystem migration [e.g., [START_REF] Nakajima | Outer-Bank Bars: A New Intra-Channel Architectural Element within Sinuous Submarine Slope Channels[END_REF]. However, if another dominant pattern is required, the method must be adapted. A simple solution is to modify the vector of migration by adding a downsystem component, such as done by [START_REF] Teles | Sur une nouvelle approche de modélisation de la mise en place des sédiments dans une plaine alluviale pour en représenter l'hétérogénéité[END_REF] or [START_REF] Viseur | Simulation stochastique basée-objet de chenaux[END_REF].
Another solution is to change the secondary data influencing the migration. The curvature could be modified, or a different property could have to be used. From this point of view, the MPS has the advantage that the training set controls the migration pattern: any pattern on the training set can appear in the realizations.
Globally, the SGS remains able to reproduce various migration patterns. Most of the time, no smoothing is required. The MPS still needs more work to improve the reproduction of the migration pattern from the training set. It usually calls for a training set larger than the realizations to increase the repeatability of the patterns and improve the realization quality. From this point of view, the training set used in figure 9 is not optimal, because the channels have roughly the same length than those in the realizations. It leads to small-scale perturbations in the migration factor that deform the meanders after a number of migration steps. The smoothing helps to limit these perturbations and to obtain more realistic results.
The bends should also be compared with their counterparts from real cases to assess the method's ability to simulate a realistic migration. MPS should perform better when using a real case as training set, but this needs to be further tested. Statistics such as those of [START_REF] Howard | Multivariate characterization of meandering[END_REF] can be used to compare the migrating channels. But they are not directly defined to analyze the migration. Histogram and variogram of the migration factor can give a first insight, but further indicators should be developed to objectively analyze and compare migration patterns.
A comparison could be done with physical simulation methods, especially the stochastic ones [START_REF] Lopez | Modélisation de réservoirs chenalisés méandriformes : une approche génétique et stochastique[END_REF], Pyrcz et al., 2009]. The main uncertainty comes from the ability of the physical model to explore all the possible migration patterns. For instance, our method is able to simulate some retro-migrating areas. These areas form outer-bank bars, which are potential reservoir areas [START_REF] Nakajima | Outer-Bank Bars: A New Intra-Channel Architectural Element within Sinuous Submarine Slope Channels[END_REF]. Such bars have no equivalent in fluvial processes, whereas all the physical methods for the migration are developed for the fluvial environment. Thus, they may not be able to develop such migration patterns.
Comparing the results of the forward and the backward migration processes would be also an interesting development.
About the discrete process simulation
For the SGS process, abrupt migrations are introduced by a second set of migration parameters. Discontinuities can then develop between channels. However, they tend to follow a similar migration pattern. Abrupt migrations and even local avulsions could also be simulated using L-systems. The initiation process would be the same as for avulsion. The newly simulated part would be attracted to a downstream location of the initial channel. This process is similar to the one developed by [START_REF] Anquez | Stochastic simulations of karst networks with Lindenmayer systems[END_REF] to simulate anastomotic karst networks. It would possibly simulate bends completely independent from the previous channel.
MPS has shown its ability to reproduce abrupt migrations from the training set. Again, it makes the simulation easier once a training set is available. The only drawback is when avulsions or cut-offs are present in the training set. For now they are not handled, but simulating them based on their appearance in the training set could improve the process.
The appearance of neck cut-offs is not a problem in a forward process with SGS. With SGS in a backward process, neck cutoffs should be simulated during the process. This would let the migration continue over any number of migration steps.
About the parameterization
Using SGS does not call for an intensive parameterization, with only four parameters required for a simulation. The aggradation and migration factors are directly related to the vertical and horizontal distances between two successive channels. They are thus pretty easy to define. The curvature weight is a bit harder to infer. A weight of 1 gives a significant influence to the curvature. By default, a weight around 0.8 gives a dominant lateral migration but lets other migration patterns appear. The variogram range should be close to the desired length of the bends that form during migration.
No use of conditioning data has been done yet to find the parameter values. This could be done from channels interpreted on seismic data. Even if all the channels are often not discernible, some of them could inform about the possible values for the factor distributions, the variogram parameters and even the curvature weight by comparing the channel curvature with the migration distance. Analogs from outcrops or seismic data of similar settings can also help to define these values.
One possibility to reduce the number of parameters is to use the bend length as range. The range then varies following the channel bends and the migration step. However, the channels often tend to develop small-scale variations that perturb the bend identification and thus the bend length computation. No significant migration can be obtained with such parameterization. One possibility is to smooth the simulated migration factors or the bend lengths. A better solution would be to better identify the bends and avoid small-scale variability. The work of O'Neill and Abrahams [1986] for instance could be a first lead.
Compared to physical methods [e.g., [START_REF] Ikeda | Bend theory of river meanders. Part 1. Linear development[END_REF][START_REF] Parker | A new framework for modeling the migration of meandering rivers[END_REF][START_REF] Lopez | Modélisation de réservoirs chenalisés méandriformes : une approche génétique et stochastique[END_REF], the parameterization is far simpler when the purpose is to model the current aspect of the geology. This requires working on old channels that have been deformed. Thus, the physical parameters that lead to the channel formation are difficult if not impossible to infer. [START_REF] Pyrcz | ALLUVSIM: A program for eventbased stochastic modeling of fluvial depositional systems[END_REF] manage to reduce the number of parameters to a single maximum distance to reach by a standardization process. The impact of such standardization on the migration process and thus on the stacking patterns is not discussed. This parameterization is easier to infer, but less flexible if the migration patterns are not those desired. The parameters used here give a finer control to the user on the migration patterns. Furthermore, they are mainly descriptive and can be inferred from the available data.
The more processes are introduced, e.g., abrupt migration, the heavier the parameterization tends to be. The MPS approach is then pretty useful. It requires few parameters that are more related to the ratio between the simulation quality and the simulation speed. The training set dictates the geological considerations, such as the presence of abrupt migrations or the dominant migration patterns. The main issue is to find a training set. The most interesting option is to find one from an analog, either seismic data such as done by [START_REF] Labourdette | LOSCS' Lateral Offset Stacked Channel Simulations: Towards geometrical modelling of turbidite elementary channels[END_REF] or possibly an outcrop. Satellite images are also interesting sources of training sets in fluvial settings.
About small-scale variability and smoothing
Realization statistics often fluctuate around those of the prior model [e.g., [START_REF] Deutsch | GSLIB: Geostatistical Software Library and User's Guide[END_REF]. This can lead to some noise or short-scale variability. Too high MPS thresholds can also lead to a higher small-scale variability than within the training model. Noise or short-scale variability can form inflexions. Such inflexions prevent from using the bend length as range for the SGS, as discussed in section 5.3. They also tend to grow during the migration, forming new bends at a smaller-scale than initially desired.
Smoothing the migration factor controls the small-scale variability by eliminating its influence on the migration. However, the smoothing impact is quite significant, as discussed by [START_REF] Crosato | Effects of smoothing and regridding in numerical meander migration models[END_REF] when smoothing the curvature. Four to five smoothing steps can be enough to completely modify the mi-gration structure. It should then be used carefully. Another option would be to post-process the realizations to improve the reproduction of the prior model. Simulated annealing, for instance, makes it possible to better reproduce the histogram and variogram through the minimization of an objective function [e.g., [START_REF] Deutsch | GSLIB: Geostatistical Software Library and User's Guide[END_REF].
About the usefulness of the migration process
The comparison with randomly placed channels highlights the difference of static connectivity. Simulating the migration gives more control on the stacking pattern. This is especially useful due to the significant influence of the stacking pattern on the connectivity. Influencing the channel locations by a probability cube reduces the gap with the migration results. But the difference in connectivity remains significant.
The analysis of the connectivity could be further developed by introducing the channel fill. In particular mud drapes have a significant impact on the connectivity. And in such case controlling the stacking pattern is even more important.
About the simulation process with migration
L-system are interesting to simulate the initial channel, especially for their ability to develop channels with different sinuosities. In the MPS case, methods such as that of [START_REF] Mariethoz | Analog-based meandering channel simulation[END_REF] could also be interesting. They simulate channel centerlines based on MPS in a process similar to that used for migration. The initial channel could then be simulated based on the first channel of the training set.
Both SGS and MPS are able to simulate a forward or a backward migration. This backward process is particularly useful, as the last channel of a migrating sequence is far more often interpretable on seismic data than the first one [START_REF] Labourdette | LOSCS' Lateral Offset Stacked Channel Simulations: Towards geometrical modelling of turbidite elementary channels[END_REF]. This allows initiating the process from the real data, instead starting from an unknown state and trying to condition the process to the last channel.
For now the channel width and thickness are simulated at the end of each migration step. As the width has a particular impact on the migration, it could be interesting to simulate them earlier. With MPS, the channel width and thickness could also be simulated from the training set instead of using SGS. Other geological elements could be integrated, such as the channel fill [e.g., [START_REF] Labourdette | Integrated three-dimensional modeling approach of stacked turbidite channels[END_REF][START_REF] Alpak | The impact of fine-scale turbidite channel architecture on deep-water reservoir performance[END_REF]. This is especially important due to its impact on the connectivity. Levees also need to be introduced [e.g., [START_REF] Pyrcz | ALLUVSIM: A program for eventbased stochastic modeling of fluvial depositional systems[END_REF][START_REF] Ruiu | Modeling Channel Forms and Related Sedimentary Objects Using a Boundary Representation Based on Non-uniform Rational B-Splines[END_REF]. When the channels migrate within a confinement such as a canyon, they can erode that confinement. Thus, they modify the confinement morphology, which should be taken into account.
About data conditioning
Data conditioning of the migration process has not been explored yet. Both SGS and MPS can be used for data conditioning. If a datum is within a conditioning distance from the current channel, the migration process can be conditioned to that information. The conditioning distance corresponds to the area in which a channel node can migrate. This area is determined by the maximal migration and aggradation factors. To preserve the conditioning, smoothing cannot be performed at the data locations.
However, the process is more difficult when the data are outside the conditioning distance. One solution is to introduce a constraint that attracts the migrating channel to the data, similarly to the L-system conditioning [START_REF] Rongier | Connectivity of channelized sedimentary bodies: analysis and simulation strategies in subsurface modeling[END_REF] or to the conditioning of process-based methods [Lopez, 2003, Pyrcz and[START_REF] Pyrcz | Conditioning Event-based Fluvial Models[END_REF]. This implies adjusting the appearance of discrete migrations and avulsions depending on the data and their location. Conditioning to a sand probability cube is also problematic, especially for handling avulsions. This may require identifying the large-scale trends within the cube. It is also important to note that the overall methodology requires a substantial effort of data interpretation and sorting to possibly pre-attribute the data to each migrating system.
Conclusion
This work provides a basis for a more descriptive approach to channel migration that focuses on the spatial structure of the migration. The same approach stochastically simulates either forward or backward channel migration, starting with an initial channel simulated by L-system or interpreted on seismic data. The migration process is based on simulating a migration factor using sequential Gaussian simulation or multiple-point simulation with the curvature as secondary data. Four parameters are required by the SGS approach to adjust the migration patterns. The MPS approach calls for four parameters related to the simulation speed and quality and a training set that controls the migration patterns. Avulsion is performed by L-system simulation, as for the initial channel.
The first results are encouraging: they show a significant difference in connectivity from a process with no direct control on the channel stacking. Further work is required on some points, such as using the bend length as variogram range. Both SGS and MPS offer some conditioning ability, but only if the data are close to the channel. Data management at further distance could be done with attractive constraints, as done for initial channel conditioning [START_REF] Rongier | Connectivity of channelized sedimentary bodies: analysis and simulation strategies in subsurface modeling[END_REF] or with physical methods [e.g., Lopez, 2003, Pyrcz and[START_REF] Pyrcz | Conditioning Event-based Fluvial Models[END_REF]. Neck cut-offs remain to be introduced in the backward process with SGS. The training set required by MPS could be better used to take into account cut-offs and avulsions. The channel fill should also be simulated to better assess the impact on the static connectivity. The method was developed for the simulation of turbiditic channels, but it could also be applied to fluvial systems.
Acknowledgments
This work was performed in the frame of the RING project at Université de Lorraine. We would like to thank the industrial and academic sponsors of the Gocad Research Consortium managed by ASGA for their support and Paradigm for providing the SKUA-GOCAD software and API. We would also like to thank the associate editor, John Tipper, and Michael Pyrcz for their constructive comments which helped improve this paper.
Appendix A
A.1 Numerical aspects
One key aspect of this method is shared with physical simulation methods: the horizontal migration factor has to be relatively smooth to avoid small-scale variability. Indeed, this variability tends to have a huge impact on the migration structure and can lead to inconsistencies.
A.1.1 Curvature computation
In our process, curvature values have no impact on the horizontal migration factor values themselves, only on their spatial structure. But having a curvature that evolves smoothly remains as important as in physical simulation methods. These methods usually smooth the curvature, either with a weighted average or based on cubic spline interpolation [START_REF] Crosato | Effects of smoothing and regridding in numerical meander migration models[END_REF]. [START_REF] Schwenk | The life of a meander bend: connecting shape and dynamics via analysis of a numerical model[END_REF] underline the inaccuracies in the curvature computation and propose a more stable curvature formula that avoids a smoothing phase. This formula is used here to compute the channel curvature κ at a centerline node i:
κ_i = 2(a_y b_x − a_x b_y) / √[(a_x² + a_y²)(b_x² + b_y²)(c_x² + c_y²)],
with a_x = x_i − x_{i−1}, b_x = x_{i+1} − x_{i−1}, c_x = x_{i+1} − x_i,
and equivalently for y.
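As a minimal illustration (not the authors' code; function and variable names are ours), the following Python sketch evaluates this discrete curvature along a centerline given by its node coordinates:

```python
import numpy as np

def centerline_curvature(x, y):
    # Three-point curvature at interior nodes, following the formula above:
    # a = P_i - P_{i-1}, b = P_{i+1} - P_{i-1}, c = P_{i+1} - P_i.
    ax, ay = x[1:-1] - x[:-2], y[1:-1] - y[:-2]
    bx, by = x[2:] - x[:-2], y[2:] - y[:-2]
    cx, cy = x[2:] - x[1:-1], y[2:] - y[1:-1]
    num = 2.0 * (ay * bx - ax * by)
    den = np.sqrt((ax**2 + ay**2) * (bx**2 + by**2) * (cx**2 + cy**2))
    return num / den

# Sanity check: nodes on a circle of radius 10 should give |kappa| close to 0.1.
t = np.linspace(0.0, 2.0 * np.pi, 200)
print(np.abs(centerline_curvature(10.0 * np.cos(t), 10.0 * np.sin(t))).mean())
```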
A.1.2 Regridding
The regridding is a key step of the channel migration. Indeed, as the bends migrate, the distance between two successive channel nodes can increase or decrease. These variations lead to instabilities in the resulting migration. A regridding step is required to limit the variations of the inter-node distance. This regridding step is the same as in physical methods [e.g., [START_REF] Schwenk | The life of a meander bend: connecting shape and dynamics via analysis of a numerical model[END_REF]:
• If the distance between two successive nodes is larger than 4/3 l_i, with l_i the initial inter-node distance, a new node is added. The position of this node is computed using a natural monotonic cubic spline interpolation of both x and y coordinates along the curvilinear coordinate.
• If the distance between two successive nodes is smaller than 1/3 l_i, with l_i the initial inter-node distance, the second node is removed.
During the migration, two successive migration vectors may cross each other, leading to an unwanted self-intersection of the channel. The migration vectors are therefore checked for intersection. When an intersection may occur, the two corresponding nodes are removed to eliminate the possible loop.
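The following Python sketch (our own simplification, not the authors' code) applies the two regridding rules in a single pass; added nodes are placed by a monotonic cubic interpolation of x and y along the curvilinear coordinate, which approximates but may not exactly match the spline described above:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def regrid(points, l_init):
    # points: (n, 2) array of centerline node coordinates; l_init: initial inter-node distance.
    pts = np.asarray(points, dtype=float)
    s = np.concatenate(([0.0], np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))))
    fx, fy = PchipInterpolator(s, pts[:, 0]), PchipInterpolator(s, pts[:, 1])
    out = [pts[0]]
    for i in range(1, len(pts)):
        d = np.linalg.norm(pts[i] - out[-1])
        if d > 4.0 / 3.0 * l_init:                  # too far apart: add a node in between
            s_mid = 0.5 * (s[i - 1] + s[i])
            out.append(np.array([fx(s_mid), fy(s_mid)]))
            out.append(pts[i])
        elif d < 1.0 / 3.0 * l_init:                # too close: drop the second node
            continue
        else:
            out.append(pts[i])
    return np.array(out)
```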
A.1.3 Smoothing
While the curvature computation does not require any smoothing step, the migration factor realizations can display small-scale fluctuations that have a huge, and sometimes undesired, impact on the migration. This is especially the case with a non-Gaussian variogram model and/or a curvature weight equal to ±1 in the sequential Gaussian simulation. The simulated horizontal migration factor is therefore smoothed right before the migration step. The smoothing procedure uses the weighted average defined by [START_REF] Crosato | Effects of smoothing and regridding in numerical meander migration models[END_REF]:
m̃_i = (m_{i−1} + 2 m_i + m_{i+1}) / 4,
with m_i the (retro-)migration factor of node i and m̃_i its smoothed value. The smoothing can be applied several times depending on the desired smoothness.
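A direct Python transcription of this smoothing pass (with the end nodes left unchanged, a boundary choice not specified above) could look like:

```python
import numpy as np

def smooth_migration_factor(m, n_passes=1):
    # Weighted average (1, 2, 1)/4 along the centerline, applied n_passes times.
    m = np.asarray(m, dtype=float).copy()
    for _ in range(n_passes):
        m[1:-1] = (m[:-2] + 2.0 * m[1:-1] + m[2:]) / 4.0
    return m
```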
A.2 Parameter set for migration with sequential Gaussian simulation
Four parameters are required to perform a migration with sequential Gaussian simulation (SGS): a migration factor distribution, an aggradation factor distribution, a variogram and a curvature weight.
A.2.1 Migration and aggradation factor distributions
Two migration factor distributions control the distances between two successive channels in the migration process: one for the horizontal component of the migration and one for the vertical component. These distributions can be obtained, for instance, by interpreting horizontal and vertical distances between channels or point bars on seismic data or field analogs. They are geometrical parameters. The horizontal factor should be chosen as small as possible, as a large factor tends to increase the impact of the small-scale distance variations on the horizontal migration. The vertical factor is unique for all the nodes of a given channel but may vary between channels, hence the need for a distribution.
A.2.2 Variogram
The variogram describes the spatial structure of a variable [e.g., [START_REF] Gringarten | Teacher's Aide Variogram Interpretation and Modeling[END_REF]. It is usually inferred from the data. Such data can come from the partial interpretation of migrating channels on seismic data, giving migration factor values along the interpreted channel parts. If no data are available, the migration factor is considered a Gaussian variable. The purpose is to have a migration factor that evolves as smoothly as possible to avoid small-scale variability during the migration. The variogram model is then chosen to be Gaussian. The nugget effect adds noise to the realizations and is kept at 0. The sill is fixed to 1.
This leaves one parameter: the variogram range. This parameter represents the horizontal extension of a migration area, which can stretch over several bends (figure A.1). It has a major impact on the migration. By default, it should be close to the desired bend length for the bends that grow during the process. A range smaller than the bend length leads to the development of smaller bends through the migration. A range larger than the bend length makes the migration occur over several bends: some bends then seem to migrate and others to retro-migrate.
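To make the role of the range concrete, the following sketch evaluates a Gaussian variogram with sill 1 and nugget 0 under the common convention that the practical range is the distance at which roughly 95% of the sill is reached (an illustration, not the authors' code):

```python
import numpy as np

def gaussian_variogram(h, r):
    # Gaussian variogram model, sill 1, nugget 0, practical range r.
    return 1.0 - np.exp(-3.0 * (np.asarray(h, dtype=float) / r) ** 2)

# With r equal to the targeted bend length, two nodes one bend length apart
# are nearly uncorrelated, while nodes within the same bend remain correlated:
print(gaussian_variogram([0.0, 100.0, 200.0], r=200.0))
```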
A.2.3 Curvature weight
The curvature weight adjusts the curvature influence on the spatial structure of the migration factor (figure A.1). When equal to ±1, the migration factor strictly follows the curvature spatial structure, favoring lateral migration. When equal to 0, the migration factor is independent of the curvature and more varied migration patterns appear: lateral migration, downsystem migration and even their counterparts in retro-migration. The curvature weight is thus related to the stability of the system: when a system is unstable, channel stacking patterns are highly variable as the influence of the previous channel over the next one is weaker.
This parameter is the hardest to adjust. It depends on the desired migration patterns, which can be deduced from seismic data or from analogs. However, lateral migration is the only pattern that can be favored in the current design of the method: having only lateral migration is possible, but having only downsystem migration is not.
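The excerpt does not detail how the curvature enters the SGS as secondary data; purely as an illustration of what a correlation-type weight does, the sketch below imposes a target correlation between a standard normal field and the standardized curvature, in the spirit of collocated co-simulation (it ignores the variogram and is not the authors' implementation):

```python
import numpy as np

def blend_with_curvature(gaussian_field, curvature, weight):
    # Impose a correlation of 'weight' between the field and the curvature.
    c = (curvature - curvature.mean()) / curvature.std()
    return weight * c + np.sqrt(1.0 - weight**2) * gaussian_field

rng = np.random.default_rng(0)
g, c = rng.standard_normal(500), rng.standard_normal(500)
for w in (1.0, 0.5, 0.0):
    m = blend_with_curvature(g, c, w)
    print(w, round(np.corrcoef(m, c)[0, 1], 2))   # correlation follows the weight
```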
A.3 Parameter set for migration with multiple-point simulation
Apart from the training set, the Direct Sampling (DS) method does not require many parameters. Most of these parameters balance the realization quality and the speed of the process.
For more details about those parameters and their effect, see [START_REF] Meerschman | A Practical Guide to Performing Multiple-Point Statistical Simulations with the Direct Sampling Algorithm[END_REF].
A.3.1 Size parameters
Two parameters have a direct impact on the simulation speed: the maximal number of nodes in a data event and the maximal proportion of the training model to scan. The maximal number of nodes in the data event is simply the maximal number n max of nodes with a value to consider in a data event. All these nodes are the closest to the node to simulate. A low number speeds up the simulation, but it may be at the cost of the realization quality. A high number does not necessarily mean a good quality. Indeed, the size of the data event limits the number of potential patterns in the training set. It is then more difficult to find a pattern similar enough to the data event.
The maximal proportion to scan determines how much of the training model to scan before stopping the process. The training model is either the entire training set, or only one migration step of that training set. This parameter stops the process when no satisfying pattern is found. It speeds up the simulation, but it may be at the cost of the realization quality.
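To make the role of these two size parameters concrete, here is a heavily simplified 1D sketch of the DS search for a single continuous property (the actual method works on the migration factor and the curvature jointly, uses its own distance normalization, and relies on the acceptance thresholds discussed in A.3.2; names and details below are ours):

```python
import numpy as np

def ds_search(ti, data_values, data_lags, n_max, scan_fraction, threshold, rng):
    # ti: 1D training series; the data event is formed by the n_max closest
    # informed nodes (values and integer lags relative to the node to simulate).
    vals = np.asarray(data_values[:n_max], dtype=float)
    lags = np.asarray(data_lags[:n_max], dtype=int)
    n_ti, start = len(ti), rng.integers(len(ti))
    best_val, best_dist = np.nan, np.inf
    for step in range(int(scan_fraction * n_ti)):   # scan at most this fraction
        j = (start + step) % n_ti
        idx = j + lags
        if np.any(idx < 0) or np.any(idx >= n_ti):
            continue
        dist = np.mean(np.abs(ti[idx] - vals))      # data event vs pattern distance
        if dist < best_dist:
            best_val, best_dist = ti[j], dist
        if dist <= threshold:                       # acceptable pattern: stop early
            return ti[j]
    return best_val                                 # otherwise keep the best pattern seen
```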
Figure 2: Migration principle: the centerline nodes are moved along the vectors v_a and v_m. v_a is the aggradation component along the vertical direction symbolized by the normalized vector ẑ. The aggradation factor a determines the vertical displacement. v_m is the migration component along the normal direction to the centerline symbolized by the normalized vector n̂. The migration factor m determines the horizontal displacement. ŝ is the normalized vector along the streamwise direction.
Figure 3: Main parameters used for horizontal bend migration with SGS. m is the migration factor, r the variogram range and γ_c the curvature weight. The latter perturbs the two other parameters by fitting more or less the migration spatial structure to the curvature spatial structure.
(Fragment of the MPS simulation steps:) 2. At a given node (figure 5): a) The n closest nodes that already have a value form two data events N_x, one for the migration factor and one for the curvature. b) Those data events are searched within the training model: i. A position is randomly chosen and the training model is then scanned linearly. [...] with d_{D,P} ∈ [0, 1], n the number of nodes in the data event, Z the compared property, x a node in the data event, y a node in the training set pattern and TM the training model. B. If the distances are both the lowest encountered, the value of the central node associated with the pattern is saved.
Figure 4: Framework for the simulation of the migration factor m with MPS.
Figure 5: DS simulation principle: the neighboring configuration around the node to simulate, the data event, is sought within the similar configurations in the training model, the patterns. Here the neighboring configuration contains the four nodes with a known value that are the closest to the node to simulate. When the distance D_{D,P} between the data event and a pattern is lower than a given threshold d_t, the process stops and the node to simulate gets the value at the same location within the pattern.
Figure 6: Principle of global avulsion based on L-system.
Figure 7: Examples of realizations for both the forward and backward SGS migration processes, initiated by channels simulated with an L-system. The input parameters are given in the supplementary materials.
Figure 8: Enlargements on some areas of the channels of figure 7 illustrating different aspects of channel evolution reproduced by forward and backward migration simulations.
Figure 10: Dataset of the application: a curvilinear grid representing a master channel with a sand probability cube.
Figure 11: Examples of realizations corresponding to different approaches for channel simulation. Each realization contains 40 channels within a master channel.
Figure 13: Realizations and their skeletons for each set within the two groups separated by the dissimilarities. Each realization is the closest to the mean MDS point of its set and group (see figure 12).
Figure 14: Box-plots comparing the range of indicators (except the node degree proportions) computed on three sets of realizations with different methods and parameters. OS1: organized stacking realizations within group 1; CDS1 and CDS2: conditioned disorganized stacking realizations within groups 1 and 2; DS1 and DS2: disorganized stacking realizations within groups 1 and 2.
Figure 15: Mean node degree proportions of the levee skeletons for each set and group. The error bars display the minimum and maximum proportions. The first 1 node degree corresponds to the nodes of degree one along a grid border. The second 1 node degree corresponds to the nodes of degree one inside the grid.
Figure 16: Two migrating channels with two local abrupt migrations and associated skeletons. An organized stacking of the two channels results in a single branch on the skeleton. The areas of abrupt migration, where the channels are not stacked anymore, result in a loop on the skeleton.
A.3.2 Threshold parameters
The migration process based on the DS calls for two thresholds: one for the horizontal migration factor, one for the curvature. When a threshold is close to 0, the retained pattern has to be highly similar to the data event. A threshold closer to 1 allows more dissimilar patterns, at the cost of the realization quality.
Those parameters have two roles. First, they have an impact on the simulation speed: the higher the threshold, the faster the simulation. The second role is similar to that of the curvature weight in the SGS: they control the impact of the curvature on the migration (figure A.2). When the curvature threshold is far higher than the migration factor threshold, the curvature has less impact on the process. When the two thresholds have similar values, the curvature influence is more noticeable. Contrary to the curvature weight of the SGS, a threshold is always positive. Thus, a threshold gives no control on migration or retro-migration trends: only the training set controls such trends.
A.4 Simulation parameters
A triangular distribution is specified by a minimum, a mode and a maximum; a uniform distribution by a minimum and a maximum.
"9825",
"9231",
"878434"
] | [
"247127",
"507460",
"247127",
"507460"
] |
01476161 | en | [ "info" ] | 2024/03/04 23:41:46 | 2017 | https://hal.science/hal-01476161/file/Meyer_ACC17_extended.pdf | Pierre-Jean Meyer
Dimos V Dimarogonas
Compositional abstraction refinement for control synthesis under lasso-shaped specifications
This paper presents a compositional approach to specification-guided abstraction refinement for control synthesis of a nonlinear system associated with a method to overapproximate its reachable sets. The control specification consists in following a lasso-shaped sequence of regions of the state space. The dynamics are decomposed into subsystems with partial control, partial state observation and possible overlaps between their respective observed state spaces. A finite abstraction is created for each subsystem through a refinement procedure, which starts from a coarse partition of the state space and then proceeds backwards on the lasso sequence to iteratively split the elements of the partition whose coarseness prevents the satisfaction of the specification. The composition of the local controllers obtained for each subsystem is proved to enforce the desired specification on the original system. This approach is illustrated in a nonlinear numerical example.
I. INTRODUCTION
For model checking and control synthesis problems on continuous systems under high-level specifications, a classical approach is to abstract the continuous dynamics into a finite transition system [START_REF] Tabuada | Verification and control of hybrid systems: a symbolic approach[END_REF]. Although both model checking and abstraction fields have received significant attention, the link between them is not as straightforward as it appears: due to over-approximations involved in the abstraction procedure, the unsatisfaction of the specification on an abstraction cannot be propagated to the original system. This led to the introduction of an interface layer named abstraction refinement aiming at iteratively refining an initial coarse abstraction until the specification is satisfied on the obtained refined abstraction. This topic has been extensively studied in the context of model checking for hardware design, thus primarily focused on verification problems (as opposed to control synthesis) for large but finite systems [START_REF] Lee | Tearing based automatic abstraction for CTL model checking[END_REF], [START_REF] Pardo | Incremental CTL model checking using BDD subsetting[END_REF], [START_REF] Lind-Nielsen | Stepwise CTL model checking of state/event systems[END_REF], with the most popular approach being based on CounterExample-Guided Abstraction Refinement (CEGAR) [START_REF] Clarke | Counterexampleguided abstraction refinement for symbolic model checking[END_REF], [START_REF] Barner | Symbolic localization reduction with reconstruction layering and backtracking[END_REF], [START_REF] Balarin | An iterative approach to language containment[END_REF], [START_REF] Govindaraju | Counterexample-guided choice of projections in approximate symbolic model checking[END_REF]. Later work then also considered control problems [START_REF] Henzinger | Counterexample-guided control[END_REF], [START_REF] Fu | Abstractions and sensor design in partial-information, reactive controller synthesis[END_REF] and infinite systems [START_REF] Clarke | Abstraction and counterexample-guided refinement in model checking of hybrid systems[END_REF], [START_REF] Chutinan | Verification of infinite-state dynamic systems using approximate quotient transition systems[END_REF], [START_REF] Yordanov | Formal analysis of piecewise affine systems through formula-guided refinement[END_REF], [START_REF] Zadeh Soudjani | Adaptive and sequential gridding procedures for the abstraction and verification of stochastic processes[END_REF].
In this paper, we present a method for specification-guided abstraction refinement for control synthesis of continuous systems. We consider a control specification consisting in following a lasso-shaped sequence of regions of the state space, which can be seen as a satisfying trace of a Linear Temporal Logic formula [START_REF] Baier | Principles of model checking[END_REF]. A coarse abstraction of the system is then initially considered and iteratively refined in This work was supported by the H2020 ERC Starting Grant BUCOPH-SYS, the EU H2020 AEROWORKS project, the EU H2020 Co4Robots project, the SSF COIN project and the KAW IPSYS project.
The authors are with ACCESS Linnaeus Center, School of Electrical Engineering, KTH Royal Institute of Technology, SE-100 44, Stockholm, Sweden. {pjmeyer, dimos}@kth.se its elements preventing the satisfaction of this specification. In most continuous systems, exact computation of the reachable sets as in [START_REF] Henzinger | Counterexample-guided control[END_REF], [START_REF] Yordanov | Formal analysis of piecewise affine systems through formula-guided refinement[END_REF] is not possible. We thus rely on methods to efficiently compute over-approximations of the reachable sets (for a given finite time), using for example polytopes [START_REF] Chutinan | Verification of polyhedral-invariant hybrid automata using polygonal flow pipe approximations[END_REF], oriented hyper-rectangles [START_REF] Stursberg | Efficient representation and computation of reachable sets for hybrid systems[END_REF], ellipsoids [START_REF] Kurzhanskiy | Ellipsoidal techniques for reachability analysis of discrete-time linear systems[END_REF], zonotopes [START_REF] Girard | Zonotope/hyperplane intersection for hybrid systems reachability analysis[END_REF], level sets [START_REF] Mitchell | Level set methods for computation in hybrid systems[END_REF] or the monotonicity property [START_REF] Smith | Monotone dynamical systems: an introduction to the theory of competitive and cooperative systems[END_REF], which is considered in the examples of this paper. Other relevant works with similar objectives include: [START_REF] Nilsson | Incremental synthesis of switching protocols via abstraction refinement[END_REF] which focuses on reach-avoid-stay control specifications and computes abstractions based on infinite-time reachability of neighbor states; and [START_REF] Moor | Learning by doing: systematic abstraction refinement for hybrid control synthesis[END_REF] which uses sets of finite prefixes to describe abstractions of infinite behaviors.
A novelty compared to the mentioned literature is that we combine the abstraction refinement approach with the compositional framework from [START_REF] Meyer | Invariance and symbolic control of cooperative systems for temperature regulation in intelligent buildings[END_REF], thus widening the range of applications to systems of larger dimensions. In this work, the global dynamics are decomposed into subsystems with partial control and partial observation of the state (with possible overlaps on their respective state spaces), then the abstraction refinement is applied to each subsystem and the obtained local controllers are combined to control the original system. A journal version of this approach was presented in [START_REF] Meyer | Compositional abstraction refinement for control synthesis[END_REF] with the main differences in the current submission being: i) the refinement algorithm considers lasso-shaped sequences as its specification (as opposed to finite sequences in [START_REF] Meyer | Compositional abstraction refinement for control synthesis[END_REF]); ii) the numerical application considers a nonlinear system (as opposed to a linear one in [START_REF] Meyer | Compositional abstraction refinement for control synthesis[END_REF]).
The structure of this paper is as follows. The problem is formulated in Section II. Section III describes the general method to obtain compositional abstractions. The abstraction refinement algorithm to be applied to each subsystem is presented in Section IV. Then, Section V provides the main result that the local controllers can be composed to control the original system. Finally, a numerical illustration of this method is presented in Section VI.
II. PROBLEM FORMULATION
A. Notations
Let N, Z + and R be the sets of positive integers, nonnegative integers and reals, respectively. For
a, b ∈ R n , the interval [a, b] ⊆ R n is defined as [a, b] = {x ∈ R n | a ≤
x ≤ b} using componentwise inequalities. In this paper, a decomposition of a system into subsystems is considered. As a result, both scalar and set variables are used as subscript of other variables, sets or functions:
• lower case letters and scalars give naming information relating a variable, set or function to the subsystem of corresponding index (e.g. x i and u i are the state and input of the i-th subsystem S i ); • index sets denoted by capital letters are used to represent the projection of a variable to the dimensions contained in this set. Alternatively, we also use the operator π I to denote the projection on the dimensions contained in I (e.g. for x ∈ R n and I ⊆ {1, . . . , n}, x I = π I (x)).
B. System description
We consider a discrete-time nonlinear control system subject to disturbances described by
x + = f (x, u, w), (1)
with state x ∈ X ⊆ R^n, bounded control and disturbance inputs u ∈ U ⊆ R^p and w ∈ W ⊆ R^q, respectively. The one-step reachable set of (1) from a set of initial states 𝒳 ⊆ X and for a subset of control inputs 𝒰 ⊆ U is defined as
RS(𝒳, 𝒰) = {f(x, u, w) | x ∈ 𝒳, u ∈ 𝒰, w ∈ W}.   (2)
Throughout this paper, we assume that we are able to compute over-approximations \overline{RS}(𝒳, 𝒰) of the reachable set defined in (2):
RS(𝒳, 𝒰) ⊆ \overline{RS}(𝒳, 𝒰).   (3)
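As one concrete possibility among the cited techniques, when f is monotone in all its arguments (which is the case for the example of Section VI), an interval over-approximation is obtained by evaluating f at the lower and upper corners; a minimal Python sketch under that assumption (names are ours):

```python
import numpy as np

def overapprox_reach(f, x_low, x_up, u_low, u_up, w_low, w_up):
    # Assuming f is componentwise non-decreasing in x, u and w, the box
    # [f(x_low, u_low, w_low), f(x_up, u_up, w_up)] contains RS([x_low, x_up],
    # [u_low, u_up]) and can serve as the operator \overline{RS} in (3).
    return f(x_low, u_low, w_low), f(x_up, u_up, w_up)

# e.g. a scalar monotone map x+ = 0.5*x + u + w:
f = lambda x, u, w: 0.5 * x + u + w
print(overapprox_reach(f, 0.0, 1.0, -0.1, 0.1, -0.05, 0.05))   # (-0.15, 0.65)
```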
Several methods exist for over-approximating reachable sets for fairly large classes of linear and nonlinear systems, see e.g. [START_REF] Chutinan | Verification of polyhedral-invariant hybrid automata using polygonal flow pipe approximations[END_REF], [START_REF] Stursberg | Efficient representation and computation of reachable sets for hybrid systems[END_REF], [START_REF] Kurzhanskiy | Ellipsoidal techniques for reachability analysis of discrete-time linear systems[END_REF], [START_REF] Girard | Zonotope/hyperplane intersection for hybrid systems reachability analysis[END_REF], [START_REF] Mitchell | Level set methods for computation in hybrid systems[END_REF], [START_REF] Smith | Monotone dynamical systems: an introduction to the theory of competitive and cooperative systems[END_REF]. System (1) can also be described as a non-deterministic infinite transition system S = (X, U, -→) where
• X ⊆ R n is the set of states, • U ⊆ R p is the set of inputs,
• a transition x --u--> x′, equivalently written as x′ ∈ Post(x, u), exists if x′ ∈ RS({x}, {u}).
C. Specification
We assume that the state space X ⊆ R n is an interval of R n and we consider a uniform partition P of X into smaller identical intervals. To ensure that P is a partition, all intervals (including X) are assumed to be half-closed. In what follows, the elements of P are called cells of the state space. In this paper, we focus on a control objective consisting in following a lasso-shaped path ψ = ψ pref .(ψ suf f ) ω composed of two strings of cells in P : a finite prefix path ψ pref , followed by a finite suffix path ψ suf f repeated infinitely often.
Problem 1: Find a controller C : X → U such that the system S follows the infinite path ψ = ψ(0)ψ(1)ψ(2) . . . , i.e. for any trajectory x : Z + → X of the controlled system, we have x(k) ∈ ψ(k) for all k ∈ Z + .
Although considering this particular type of control objectives may seem restrictive, a wider range of control problems can actually be covered from the observation that, given a Linear Temporal Logic formula, at least one of its satisfying traces takes the form of a lasso-shaped path as above [START_REF] Baier | Principles of model checking[END_REF]. Solving Problem 1 then also ensures that the controlled system satisfies the corresponding formula from which the lasso path ψ is derived.
III. COMPOSITIONAL ABSTRACTIONS
In this paper, Problem 1 is addressed with a compositional abstraction refinement approach, where the system is decomposed into subsystems before applying an abstraction refinement algorithm to each of them. In this section, we first present the general method adapted from [START_REF] Meyer | Invariance and symbolic control of cooperative systems for temperature regulation in intelligent buildings[END_REF] to obtain compositional abstractions.
A. System decomposition
We decompose the dynamics (1) into m ∈ N subsystems. Let (I c 1 , . . . , I c m ) be a partition of the state indices {1, . . . , n} and (J 1 , . . . , J m ) a partition of the control input indices {1, . . . , p}. Subsystem i ∈ {1, . . . , m} can then be described using the following sets of indices:
• I_i^c represents the state components to be controlled;
• I_i ⊇ I_i^c are all the state components whose dynamics are modeled in the subsystem;
• I_i^o = I_i \ I_i^c are the state components that are only observed but not controlled;
• K_i = {1, . . . , n} \ I_i are the unobserved state components considered as external inputs to subsystem i;
• J_i are the input components actually used for control;
• L_i = {1, . . . , p} \ J_i are the remaining control components considered as external inputs to subsystem i.
The role of all the index sets above can be summarized as follows: for subsystem i ∈ {1, . . . , m}, we model the states x_{I_i} = (x_{I_i^c}, x_{I_i^o}), where x_{I_i^c} are to be controlled using the inputs u_{J_i} and x_{I_i^o} are only observed to increase the precision of the subsystem, while x_{K_i} and u_{L_i} are considered as external disturbances. It is important to note that the subsystems may share common modeled state components (i.e. the sets I_i may overlap), though the sets of controlled state components I_i^c and modeled control input components J_i are assumed to be disjoint for two subsystems.
B. Subsystem's abstraction
For each subsystem i ∈ {1, . . . , m}, we want to create a finite abstraction S i of the original system S, which models only the state and input components x Ii and u Ji , respectively. S i will then be used to synthesize a local controller focusing on the satisfaction of the specification for the controlled state components x I c i using the modeled control inputs u Ji . The general structure of the abstraction
S_i = (X_i, U_i, -→_i) is as follows.
• X_i is a partition of π_{I_i}(X) into a finite set of intervals called symbols. It is initially taken equal to π_{I_i}(P) and will then be refined in Section IV.
• U_i is a finite subset of the projected control set π_{J_i}(U).
• A transition s_i --u_i-->_i s_i′, equivalently written as s_i′ ∈ Post_i(s_i, u_i), exists if s_i′ ∩ π_{I_i}(RS_i^{AG2}(s_i, u_i)) ≠ ∅.
The set RS_i^{AG2}(s_i, u_i) ⊆ X represents an over-approximation of the reachable set of (1) based on the partial knowledge available to subsystem i. The remainder of this section describes how this set is obtained.
The unmodeled inputs u Li are known to be bounded in π Li (U ). We also know that other subsystems will synthesize controllers satisfying the specification for the unobserved and uncontrolled state components (x Ki and x I o i , respectively) of subsystem i. This is formalized by the following assumeguarantee obligations [START_REF] Henzinger | You assume, we guarantee: Methodology and case studies[END_REF], which are assumptions that are taken internally in each subsystem but do not imply any additional constraints on the overall approach: the control synthesis achieved in each subsystem is exploited to guarantee that the obligations on other subsystems hold.
A/G Obligation 1: For all x ∈ X, i ∈ {1, . . . , m} and
k ∈ Z + , if x Ii ∈ π Ii (ψ(k)), then x Ki ∈ π Ki (ψ(k)).
A/G Obligation 2: For all i ∈ {1, . . . , m}, s i ∈ X i and
k ∈ Z + , if s i ⊆ π Ii (ψ(k)), then for all u i ∈ U i we have π I o i (RS AG2 i (s i , u i )) ⊆ π I o i (ψ(k + 1)
). Intuitively, if the state of subsystem i is in the projection π Ii (ψ(k)) of some cell ψ(k) ∈ P , then the unobserved states x Ki also start from the projection π Ki (ψ(k)) of this cell (A/G Obligation 1) and the uncontrolled states x I o i will reach the next step π
I o i (ψ(k + 1)) of ψ (A/G Obligation 2). Given a symbol s i ∈ X i of S i with s i ⊆ π Ii (ψ(k)) and a control value u i ∈ U i , the set RS AG2 i (s i , u i ) ⊆ X is obtained in the following two steps. We first compute an intermediate set RS AG1 i (s i , u i ) ⊆ X using A/G Obligation 1
and the operator \overline{RS} in (3) as follows:
RS_i^{AG1}(s_i, u_i) = \overline{RS}(ψ(k) ∩ π_{I_i}^{-1}(s_i), U ∩ π_{J_i}^{-1}({u_i})),   (4)
resulting in a larger over-approximation of the reachable set (2) where the unobserved variables x_{K_i} and u_{L_i} are considered as bounded disturbances: given s ⊆ X such that s ⊆ ψ(k) and a control input u ∈ U, (2), (3) and (4) give
RS(s, {u}) ⊆ \overline{RS}(s, {u}) ⊆ RS_i^{AG1}(π_{I_i}(s), π_{J_i}(u)).   (5)
Next, RS_i^{AG1}(s_i, u_i) is updated into RS_i^{AG2}(s_i, u_i) using A/G Obligation 2:
RS_i^{AG2}(s_i, u_i) = RS_i^{AG1}(s_i, u_i) ∩ π_{I_i^o}^{-1}(π_{I_i^o}(ψ(k + 1))).   (6)
The set RS AG2 i is thus the same set as the overapproximation RS AG1 i , but without the states that violate the specification on the uncontrolled state dimensions I o i , since they are known to be controlled by other subsystems. The particular case where RS AG2 i = ∅ means that despite the best control actions from other subsystems, the state of the system will always be driven out of the targeted cell ψ(k+1).
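Continuing the interval-based illustration from Section II-B (a sketch under the monotonicity assumption; names and interface are ours, not the paper's), the two sets (4) and (6) can be computed componentwise as follows:

```python
import numpy as np

def rs_ag2(f, w_low, w_up, u_low, u_up, cell, s_i, I_i, I_o_i, next_cell):
    # cell, next_cell: (low, up) bounds of psi(k) and psi(k+1); s_i: (low, up)
    # bounds of the symbol on the modeled dimensions I_i. The caller fixes
    # u_low[J_i] = u_up[J_i] = u_i; the other control components range over
    # their bounds, like w and x_{K_i} (A/G Obligation 1).
    x_low, x_up = cell[0].copy(), cell[1].copy()
    x_low[I_i], x_up[I_i] = s_i[0], s_i[1]        # psi(k) restricted to s_i
    ag1_low = f(x_low, u_low, w_low)              # monotone over-approximation:
    ag1_up = f(x_up, u_up, w_up)                  # RS^{AG1}_i as an interval
    # A/G Obligation 2: intersect with psi(k+1) on the observed-only dims I_o_i.
    ag2_low, ag2_up = ag1_low.copy(), ag1_up.copy()
    ag2_low[I_o_i] = np.maximum(ag1_low[I_o_i], next_cell[0][I_o_i])
    ag2_up[I_o_i] = np.minimum(ag1_up[I_o_i], next_cell[1][I_o_i])
    if np.any(ag2_low > ag2_up):
        return None                               # RS^{AG2}_i is empty
    return ag2_low, ag2_up
```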
IV. REFINEMENT ALGORITHM
For each subsystem i ∈ {1, . . . , m}, starting from the coarsest abstraction corresponding to the initial partition X i = π Ii (P ), the abstraction refinement method presented in this section aims at iteratively identifying elements of this abstraction preventing the satisfaction of the specification ψ for subsystem i and refining these elements to obtain a more precise abstraction. The advantages of this specification guided approach are thus to automatically refine the state partition if the specification is not initially satisfied and to avoid the computation of the whole abstraction when only a small part is actually relevant to the specification.
Assumption 2: ψ = ψ(0)ψ(1) . . . ψ(r) for some r ∈ N. For any k, l ∈ {0, . . . , r} such that k ≠ l and for any subsystem i ∈ {1, . . . , m} we have π_{I_i}(ψ(k)) ≠ π_{I_i}(ψ(l)).
For clarity of notations, this approach is presented in Algorithm 1 in the particular case of Assumption 2 where the desired lasso-shaped path ψ = ψ pref .(ψ suf f ) ω is finite (i.e. ψ suf f = ∅) and for each subsystem it does not visit the same cell twice. The straightforward modifications required to cover the general case without Assumption 2 are provided at the end of this section.
Input: Partition P of X, discrete control set U_i,
Input: Cell sequence ψ = ψ(0) . . . ψ(r) ∈ P^{r+1},
Input: Partition projection P_i : P → 2^{X_i} such that P_i(σ) = {s_i ∈ X_i | s_i ⊆ π_{I_i}(σ)}.
Initialization: X_i = π_{I_i}(P), V_i^r = {π_{I_i}(ψ(r))}, V_{iX}^r = π_{I_i}(ψ(r)), Queue = ∅.
for k from r − 1 to 0 do
    {V_i^k, V_{iX}^k, C_i} = ValidSets(k, V_{iX}^{k+1})
    Queue = AddToQueue(ψ(k))
    while V_i^k = ∅ do
        ψ(j) = FirstInQueue(Queue)
        forall s_i ∈ P_i(ψ(j)) \ V_i^j do
            X_i = Split(s_i)
        for l from j to k do
            {V_i^l, V_{iX}^l, C_i} = ValidSets(l, V_{iX}^{l+1})
return {X_i, ∪_{k=0}^{r−1} V_i^k ⊆ X_i, C_i : ∪_{k=0}^{r−1} V_i^k → U_i}
Algorithm 1: Refinement algorithm for subsystem i.
a) Inputs: Algorithm 1 is provided with the initial partition P of the state space X, a finite set U i of control values for subsystem i as in Section III-B, the finite sequence of cells ψ(0) . . . ψ(r) ∈ P r+1 defining the specification ψ from Section II-C as in Assumption 2 and an operator P i : P → 2 Xi giving the set of all symbols s i ∈ X i included in the projection π Ii (σ) of each cell σ ∈ P . For each cell ψ(k) in the sequence ψ = ψ(0) . . . ψ(r) the goal is to compute the subset V k i ⊆ P i (ψ(k)) of symbols that are valid with respect to the specification ψ, i.e., that can be controlled such that all successors are valid symbols of the next cell ψ(k + 1). The set V k iX then corresponds to the projection of V k i on the continuous state space π Ii (X). b) Initialization: The set of symbols X i is initially taken as the coarsest partition of the state space π Ii (X) (i.e. π Ii (P )) and will be refined during the algorithm when unsatisfaction of ψ is detected. We proceed backward on the finite sequence ψ = ψ(0) . . . ψ(r) and thus take the final cell ψ(r) as fully valid: V r i = P i (ψ(r)) = {π Ii (ψ(r))} and V r iX = π Ii (ψ(r)). We also initialize a priority queue (Queue = ∅) which will be used to determine which cell of P is to be refined at the next iteration of the algorithm. c) External functions: Algorithm 1 calls four external functions. The function ValidSets looks for the valid symbols and their associated control inputs for a particular step of the specification sequence. This function is detailed in Algorithm 2 and explained in the next paragraph. Func-tions AddToQueue and FirstInQueue deal with the management of the priority queue and Split represents the refinement of the partition. Although these 3 functions offer significant degrees of freedom towards maximizing the efficiency of the algorithm, this optimization problem is beyond the scope of this paper and is left as future research.
Input: P , U i , ψ and P i from Input to Algorithm 1,
Input: Index k ∈ {0, . . . , r − 1} of the considered cell ψ(k),
Input: Next cell's valid set V_{iX}^{k+1}.
V_i^k = {s_i ∈ P_i(ψ(k)) | ∃ u_i ∈ U_i such that ∅ ≠ π_{I_i}(RS_i^{AG2}(s_i, u_i)) ⊆ V_{iX}^{k+1}}
V_{iX}^k = {x_i ∈ π_{I_i}(X) | ∃ s_i ∈ V_i^k such that x_i ∈ s_i}
∀ s_i ∈ V_i^k, C_i(s_i) is chosen in {u_i ∈ U_i | ∅ ≠ π_{I_i}(RS_i^{AG2}(s_i, u_i)) ⊆ V_{iX}^{k+1}}
return {V_i^k, V_{iX}^k, C_i}
Algorithm 2: ValidSets. Computes the valid sets and controller for subsystem i at step k of the specification ψ.
d) Valid sets: In the main loop of Algorithm 1, assuming we have previously found non-empty valid sets (V_i^{k+1}, . . . , V_i^r), we call the function ValidSets for step k of the specification as in Algorithm 2. This function first computes the valid set V_i^k for step k by looking for the symbols in P_i(ψ(k)) for which the over-approximation RS_i^{AG2} of the reachable set is both non-empty and contained in the valid set V_{iX}^{k+1} of the next cell ψ(k + 1) for at least one value of the discrete control input. Note that in this particular call, where the cell ψ(k) is visited for the first time, P_i(ψ(k)) contains a single element: π_{I_i}(ψ(k)). The set V_{iX}^k is taken as the projection of V_i^k on the continuous state space π_{I_i}(X). Then, the controller C_i associates each valid symbol in V_i^k to the first such satisfying control value that has been found. Algorithm 2 finally outputs V_i^k, V_{iX}^k and C_i to Algorithm 1. Since the cell ψ(k) is considered here for the first time, we add it to the priority queue with the function AddToQueue.
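A compact Python transcription of this function is sketched below (illustrative names; symbols are assumed to be hashable, e.g. tuples of interval bounds, and post_ag2 is assumed to return the projection π_{I_i}(RS_i^{AG2}(s_i, u_i)) or None when it is empty):

```python
def valid_sets(symbols_in_cell, U_i, k, post_ag2, contained_in_next_valid):
    # Sketch of Algorithm 2: returns the valid symbols V_i^k of psi(k) and the
    # controller choices C_i on them, for step k of the specification.
    V_k, C_k = [], {}
    for s_i in symbols_in_cell:                  # symbols of P_i(psi(k))
        for u_i in U_i:
            box = post_ag2(s_i, u_i, k)
            if box is not None and contained_in_next_valid(box, k + 1):
                V_k.append(s_i)
                C_k[s_i] = u_i                   # keep the first satisfying input
                break
    return V_k, C_k
```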
e) Refinement and update: If the valid set V k i is empty, we select (with function FirstInQueue) the first cell ψ(j) of the priority queue and refine it. The refinement is achieved by the function Split and consists in uniformly splitting all the invalid symbols of P i (ψ(j)) into a number of identical subsymbols (e.g. 2 in each state dimension in I i ). After this, we need to update the valid sets V j i and V j iX and controller C i for the refined cell ψ(j) using the ValidSets function. The possibly larger valid set V j iX obtained after this refinement can then induce a larger valid set at step j -1, which in turns influences the following steps. The refinement and update of the valid set at step j thus requires an update (using function ValidSets) for all other cells from ψ(j -1) to ψ(k). The refined cell ψ(j) can then be moved to any other position in the priority queue (here assumed to be handled by the function FirstInQueue) and these steps are repeated until V k i = ∅. f) Outputs: The algorithm provides three outputs. The first one is the refined partition X i for subsystem i. The second one gathers the sets V k i ⊆ P i (ψ(k)) ⊆ X i of valid symbols for all k ∈ {0, . . . , r -1}. Finally, the controller C i associates a unique control value (since we stop looking as soon as a satisfying control is found) to each valid symbol. g) General case: The general case without Assumption 2 can be covered by modifying Algorithm 1 as follows. For duplicated cells π Ii (ψ(k)) = π Ii (ψ(l)) with k = l, we need a controller C i : X i × {0, . . . , r} → U i which now also depends on the current position k ∈ {0, . . . , r} in ψ in order to know which next cell ψ(k + 1) should be targeted.
When ψ = ψ pref .(ψ suf f ) ω is a lasso path with nonempty suffix ψ suf f = ψ(r + 1) . . . ψ(f ), Algorithm 1 is first called on ψ suf f . This call is then repeated with the new initialization
[V_i^f, V_{iX}^f, C_i] = ValidSets(f, V_{iX}^{r+1}) (i.e. the last suffix cell ψ(f) must be driven towards the first suffix cell ψ(r + 1)) until further calls of the ValidSets function have no more influence on the sets V_i^k for k ∈ {r + 1, . . . , f}. In this loop, the valid set of a refined suffix cell ψ(k) needs to be reset to fully valid (V_i^k = P_i(ψ(k))) to avoid propagation of empty valid sets. A final call of Algorithm 1 is then done for ψ_pref with the initialization [V_i^r, V_{iX}^r, C_i] = ValidSets(r, V_{iX}^{r+1}) (i.e. the last prefix cell ψ(r) must be driven towards the first suffix cell ψ(r + 1)).
V. COMPOSITION
Algorithm 1 in Section IV is applied to each subsystem i ∈ {1, . . . , m} separately. In this section, we then show that combining the controllers C i of all subsystems results in a global controller solving Problem 1 by ensuring that the original system S follows the lasso-shaped sequence ψ.
A. Operator for partition composition
Due to the possible overlap of the state space dimensions for two subsystems and the fact that the refined partitions do not necessarily match on these common dimensions, we first need to define an operator for the composition of sets of symbols (either the refined partition X_i or the valid sets V_i^k obtained in Algorithm 1). Given two refined sets X_i and X_j as obtained in Section IV, we first introduce an intermediate operator, denoted ⊓:
X_i ⊓ X_j = {s ∈ π_{I_i ∪ I_j}(2^X) | ∃ s_i ∈ X_i, π_{I_i}(s) ⊆ s_i and ∃ s_j ∈ X_j, π_{I_j}(s) ⊆ s_j},
followed by the main composition operator ∥ defined as:
X_i ∥ X_j = (X_i ⊓ X_j) \ {s ∈ X_i ⊓ X_j | ∃ s′ ∈ X_i ⊓ X_j, s ⊊ s′}.
Intuitively, we first ensure that the set X_i ⊓ X_j is at least as fine as both partitions X_i and X_j, thus providing a covering of π_{I_i ∪ I_j}(X): ∪_{s ∈ X_i ⊓ X_j} s = π_{I_i ∪ I_j}(X). Then, X_i ⊓ X_j is converted into a partition X_i ∥ X_j of π_{I_i ∪ I_j}(X) by removing all subsets strictly contained in another element of X_i ⊓ X_j.
Proposition 3: If X_i and X_j are partitions of π_{I_i}(X) and π_{I_j}(X), respectively, then X_i ∥ X_j is a partition of π_{I_i ∪ I_j}(X).
Proof: Let x ∈ π_{I_i ∪ I_j}(X). Since X_i and X_j are partitions, there exist s_i ∈ X_i and s_j ∈ X_j such that π_{I_i}(x) ∈ s_i and π_{I_j}(x) ∈ s_j, which implies that there exists s ∈ X_i ⊓ X_j such that x ∈ s. Then, the set X_i ∥ X_j is also a covering since it only removes elements of X_i ⊓ X_j which are strictly contained in other elements of X_i ⊓ X_j.
Let now s, s′ ∈ X_i ∥ X_j such that x ∈ s ∩ s′. Since X_i and X_j are partitions, we also know that s_i ∈ X_i and s_j ∈ X_j as defined above are unique. From X_i ∥ X_j ⊆ X_i ⊓ X_j, we thus have π_{I_i}(s), π_{I_i}(s′) ⊆ s_i and π_{I_j}(s), π_{I_j}(s′) ⊆ s_j, which implies that s ∪ s′ ∈ X_i ⊓ X_j. Therefore, s and s′ can only be in X_i ∥ X_j if s = s′ = s ∪ s′.
B. Composed transition system
We now define the transition system S c = (X c , U c , -→ c ) as the composition of the abstractions S i obtained in Algorithm 1 for each subsystem i ∈ {1, . . . , m}. S c contains the following elements:
• X_c = X_1 ∥ · · · ∥ X_m is the composition of the refined partitions for each subsystem. From Proposition 3, we know that X_c is a partition of X. From the definition of the operator ∥, the projection π_{I_i}(s) of s ∈ X_c does not necessarily correspond to a symbol of X_i. However, we know (see proof of Proposition 3) that there exists a unique symbol s_i ∈ X_i containing this projection. Therefore, for each i ∈ {1, . . . , m}, we define the decomposition function d_i : X_c → X_i such that d_i(s) = s_i is the unique symbol s_i ∈ X_i satisfying π_{I_i}(s) ⊆ s_i.
• U_c = U_1 × · · · × U_m is the composition of the discretized
control sets (which is a simple Cartesian product since they are defined on disjoint dimensions). We can then introduce the controller C c : X c → U c as the composition of the controllers C i : X i → U i obtained on the abstraction of each subsystem in Algorithm 1:
∀s ∈ X c , C c (s) = (C 1 (d 1 (s)), . . . , C m (d m (s))), (7)
which is then used to define the transition relation of S c .
• ∀ s, s′ ∈ X_c, u = C_c(s): s --u-->_c s′ ⟺ ∀ i ∈ {1, . . . , m}, d_i(s) --u_{J_i}-->_i d_i(s′).
Intuitively, the transition s --u-->_c s′, equivalently written as s′ ∈ Post_c(s, u), exists when the control input u ∈ U_c is allowed by the local controllers C_i for all i ∈ {1, . . . , m} and when the transition in S_c can be decomposed (using the decomposition functions d_i : X_c → X_i) into existing transitions for all subsystems. Finally, we define the set U_c(s) = {u ∈ U_c | Post_c(s, u) ≠ ∅}.
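As a small illustration of (7) and of the controller C_c^X(x) = C_c(H(x)) used in Theorem 7 below, the composition of the local controllers can be sketched as follows (names are ours; decompose[i] plays the role of d_i and symbol_of the role of the map H):

```python
def composed_control(x, symbol_of, decompose, local_controllers):
    # C_c^X(x) = C_c(H(x)) with C_c(s) = (C_1(d_1(s)), ..., C_m(d_m(s))).
    s = symbol_of(x)                              # composed symbol containing x
    return tuple(C_i(d_i(s)) for C_i, d_i in zip(local_controllers, decompose))
```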
C. Main result
To control S with the controller C c in [START_REF] Clarke | Abstraction and counterexample-guided refinement in model checking of hybrid systems[END_REF], the systems S = (X, U, -→) and S c = (X c , U c , -→ c ) must satisfy the following alternating simulation relation, adapted from [START_REF] Tabuada | Verification and control of hybrid systems: a symbolic approach[END_REF].
Definition 4 (Alternating simulation): A map H : X → X_c is an alternating simulation relation from S_c to S if it holds: ∀ x ∈ X, s = H(x), ∀ u_c ∈ U_c(s), ∃ u ∈ U such that ∀ x′ ∈ Post(x, u), H(x′) ∈ Post_c(s, u_c).
This definition means that for any pair (x, s) of matching state and symbol and any control u c of the abstraction S c , there exists an equivalent control for the original system S such that any behavior of S is matched by a behavior of S c . As a consequence, if a controller is synthesized so that S c satisfies some specification, then we know that there exists a controller ensuring that S also satisfies the same specification. We can show that such a relation can be found when both S c and S use the same controls u = C c (s).
Theorem 5: The map H : X → X c such that x ∈ s ⇔ H(x) = s is an alternating simulation relation from S c to S.
Proof: Let x ∈ X, s = H(x) ∈ X_c and u ∈ U_c(s). By definition of S_c, we have U_c(s) ⊆ {C_c(s)} for all s ∈ X_c. If U_c(s) = ∅, the condition in Definition 4 is trivially satisfied. Otherwise, we have u = C_c(s) defined as in (7), which implies that x ∈ ψ(k) for some k ∈ Z_+. Let x′ ∈ Post(x, u), s′ = H(x′) and denote the decompositions of s and s′ as s_i = d_i(s) and s_i′ = d_i(s′) for all i ∈ {1, . . . , m}. By definition of the over-approximation operator \overline{RS} in (3), we have x′ ∈ \overline{RS}(s, {u}). With the inclusion in (5) and the fact that π_{I_i}(s) ⊆ s_i, we obtain
x′ ∈ RS_i^{AG1}(s_i, u_{J_i}) for all i. If x′ ∈ ψ(k + 1), then x′_{I_i} ∈ s_i′ ∩ π_{I_i}(RS_i^{AG2}(s_i, u_{J_i}))
and this intersection is thus non-empty, which implies that s i ∈ P ost i (s i , u Ji ) for all i. Then, s ∈ P ost c (s, u) by definition of S c . On the other hand, if x / ∈ ψ(k + 1), then there exists l ∈ {1, . . . , n} such that x l / ∈ π l (ψ(k + 1)) and there exists a unique subsystem j ∈ {1, . . . , m} such that l ∈ I c j . Therefore we have x I c j / ∈ π I c j (ψ(k + 1)) and then π I c j (RS AG1 j (s j , u Jj )) π I c j (ψ(k + 1)). This implies that u Jj / ∈ U j (s j ) which contradicts the fact that u ∈ U c (s). Theorem 5 thus confirms that using A/G Obligations 1 and 2 is reasonable since it preserves the alternating simulation relation on the composition S c while reducing the conservatism of the over-approximations in each subsystem.
The next result immediately follows from the definition of S_c (U_c(s) ⊆ {C_c(s)}) and the proof of Theorem 5 (if C_c(s) exists, then Post_c(s, C_c(s)) ≠ ∅, i.e. C_c(s) ∈ U_c(s)).
Corollary 6: U_c(s) = {C_c(s)} for all s ∈ X_c.
These two results can then be exploited to solve Problem 1.
Theorem 7: Let x : Z_+ → X be any trajectory of S from an initial state x(0) ∈ X such that H(x(0)) ∈ V_1^0 ∥ · · · ∥ V_m^0 and subject to the controller C_c^X : X → U with C_c^X(x) = C_c(H(x)) for all x ∈ X. Then x(k) ∈ ψ(k) for all k ∈ Z_+.
Proof: From Theorem 5, it is sufficient to prove that the composed system S c controlled by C c in [START_REF] Clarke | Abstraction and counterexample-guided refinement in model checking of hybrid systems[END_REF] follows 7) is thus well defined since we have d i (s) ∈ V k i for all i and Corollary 6 implies that there exists s ∈ P ost c (s, C c (s)). By definition of S c , this implies that d i (s ) ∈ P ost i (d i (s), C i (d i (s))) for all i. Then Algorithm 2 gives that d i (s ) ∈ V k+1 i for all i and it follows that s ∈ V k+1
ψ if it starts at s 0 = H(x(0)) ∈ V 0 1 • • • V 0 m . Let k ∈ Z + and s ∈ X c such that s ∈ V k 1 • • • V k m . The control value C c (s) in (
1 • • • V k+1 m , therefore s ⊆ ψ(k + 1
). If Algorithm 1 terminates in finite time for all subsystems i, Theorem 7 thus defines a controller C X c ensuring that the continuous system S follows the desired path ψ. However, if S follows ψ, we cannot guarantee that Algorithm 1 will find partitions X i for all subsystems i where ψ can be followed.
VI. NUMERICAL ILLUSTRATION
The use of intervals as the elements of the state partition (required by the compositional approach in Section III) particularly suits the computation of over-approximations of the reachable set using the monotonicity property. The reader is referred to [START_REF] Smith | Monotone dynamical systems: an introduction to the theory of competitive and cooperative systems[END_REF], [START_REF] Angeli | Monotone control systems[END_REF] for a description of monotone (control) systems and to e.g. [START_REF] Meyer | Invariance and symbolic control of cooperative systems for temperature regulation in intelligent buildings[END_REF] for their use to over-approximate the reachable set and create abstractions. In this section, we thus consider the 8D nonlinear monotone system described by:
ẋ = Ax -βx 3 + u, (8)
with state x ∈ R 8 , bounded control input u ∈ [-5, 5] 8 , constant parameter β = 0.01 ∈ R and componentwise cubic power x 3 . The diagonal elements of the matrix A ∈ R 8×8 are equal to -0.8 and the remaining elements represent state interactions and are shown in the directed graph of Figure 1.
To match the description of (1) in Section II-B, the system (8) is then sampled with a period of 1.2 seconds.
In view of the state coupling shown in Figure 1, we decompose the system (8) into 5 subsystems defined as follows by their index sets I_i, I_i^c and J_i. We first take three pairs I_1 = I_1^c = J_1 = {1, 2}, I_2 = I_2^c = J_2 = {4, 5} and I_3 = I_3^c = J_3 = {6, 7} where all the observed states are also controlled (I_1^o = I_2^o = I_3^o = ∅). The last two subsystems only aim at controlling a single state each, but also observe an additional state: I_4 = {2, 3}, I_4^c = J_4 = {3}, I_4^o = {2} and I_5 = {7, 8}, I_5^c = J_5 = {8}, I_5^o = {7}. The considered state space X = [-9, 9]^8 is partitioned into 3 elements per dimension, thus resulting in a partition P of 6561 cells. The control interval U = [-5, 5]^8 is discretized into 5 values per dimension ({-5, -2.5, 0, 2.5, 5}).
We consider a control objective initially formulated as the Linear Temporal Logic formula □♦σ_2 ∧ □♦σ_3 representing the surveillance task of visiting infinitely often both partition cells σ_2 = [-3, 3]^8 and σ_3 = [3, 9]^8. Assuming that the initial state of the system is in the cell σ_0 = [-9, -3]^8, this can then be reformulated as a lasso-shaped sequence ψ = σ_0.σ_1.(σ_2.σ_3)^ω, where the second cell σ_1 of the prefix is [-3, 3] for the state dimensions 3 and 7 while it remains [-9, -3] (as in σ_0) on the other dimensions. Note that ψ does not satisfy Assumption 2 due to both its non-empty suffix and duplicated prefix cells (e.g. π_{I_1}(σ_0) = π_{I_1}(σ_1)).
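A sketch of this setup is given below; the off-diagonal couplings of A appear only in Figure 1 and are therefore left as placeholders, and the sampled map is obtained by numerical integration (the paper states that (8) is monotone, so the interval over-approximation of Section II-B applies to the sampled dynamics):

```python
import numpy as np
from scipy.integrate import solve_ivp

n, beta, tau = 8, 0.01, 1.2
A = -0.8 * np.eye(n)
# Off-diagonal couplings A[j, i] = d(xdot_j)/d(x_i) are read off Figure 1
# (not reproduced here); non-negative couplings keep the system monotone.

def f_sampled(x, u):
    # State reached after one sampling period tau with the input held constant.
    rhs = lambda t, z: A @ z - beta * z**3 + u
    return solve_ivp(rhs, (0.0, tau), x, rtol=1e-8, atol=1e-10).y[:, -1]

x0, u0 = -6.0 * np.ones(n), np.zeros(n)
print(f_sampled(x0, u0))
```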
Algorithm 1 is then applied to each subsystem, where the Split function uniformly splits a symbol into 2 subsymbols per dimension and the priority queue is handled as follows: we only refine a cell when no coarser candidate exists, and when more than one cell can be refined we prioritize the one whose last refinement is the oldest. In Figure 2, we display the resulting refined partitions and valid symbols for each subsystem. Below, we detail the refinement process in the case of subsystem 5 in Figure 2e. We start with the top right cell π I5 (σ 3 ) as fully valid. For π I5 (σ 2 ) (center), no satisfying control is found to drive the whole cell into π I5 (σ 3 ), so it is split into 4 identical subsymbols, two of which are valid. We loop back on the last cell π I5 (σ 3 ) of the suffix and find that the whole cell can be controlled towards the valid symbols of π I5 (σ 2 ). The valid set V 3 5 is thus unchanged by the last call of ValidSets and Algorithm 1 is done with the suffix.
Since no satisfying control is found to bring the last cell π I5 (σ 1 ) (bottom center) of the prefix to the valid symbols of π I5 (σ 2 ), we then split π I5 (σ 1 ) into 4 subsymbols, 3 of which are valid. Similarly, π I5 (σ 0 ) (bottom left) is split into 4 subsymbols since it cannot be controlled towards the valid set of π I5 (σ 1 ). None of the obtained subsymbols of π I5 (σ 0 ) are valid and we thus refine the next cell in the queue: π I5 (σ 1 ). This refinement only splits into 4 subsymbols the unique invalid symbol of π I5 (σ 1 ) (i.e. its top right symbol). All new subsymbols are valid (they can be controlled towards the valid set of π I5 (σ 2 )), and an update of π I5 (σ 0 ) gives that all 4 of its symbols are valid (V 0 5X = π I5 (σ 0 )), thus ending Algorithm 1.
Using Matlab on a laptop with a 2.6 GHz CPU and 8 GB of RAM, these results after applying Algorithm 1 to all subsystems were obtained in 11.1 seconds. As a comparison, the same abstraction refinement algorithm applied in a centralized way (no decomposition and a single abstraction representing the whole system) was still in its first suffix call of Algorithm 1 after more than 48 hours of computation.
VII. CONCLUSION
In this paper, we presented a novel approach to abstraction creation and control synthesis in the form of a compositional specification-guided abstraction refinement procedure. This approach applies to nonlinear systems associated with a method to over-approximate its reachable sets, and to lassoshaped specifications. The dynamics are decomposed into subsystems representing partial descriptions of the system and a finite abstraction is then created for each subsystem through a refinement procedure starting from a coarse partition of the state space. Each refined abstraction is associated with a local controller and the composition of these local controllers enforces the specification on the original system.
Current efforts aim at maximizing the algorithm efficiency using its degrees of freedom in the splitting strategy and the management of the priority queue. We also work towards combining this approach into a common framework with plan revision methods.
Fig. 1: State interactions in (8). A directed edge from node i to node j is labeled with the value of ∂ẋ_j/∂x_i.
(a) I1 = I c 1 =Fig. 2 :
12 Fig. 2: Refined partitions and valid symbols (in yellow) for all 5 subsystems. | 40,189 | [
"1231646"
] | [
"398719",
"398719"
] |
00147620 | en | ["math"] | 2024/03/04 23:41:46 | 2004 | https://hal.science/hal-00147620/file/polrev.pdf | Javier Hidalgo
Philippe Soulier
email: [email protected]
Estimation of the location and exponent of the spectral singularity of a long memory process
Keywords: Long Memory, Fractionally differenced processes, Gegenbauer processes, Periodogram, Semiparametric estimation
We consider the estimation of the location of the pole and memory parameter, ω0 and d, of a covariance stationary process with spectral density f. We investigate optimal rates of convergence for the estimators of ω0 and d, and the consequence that the lack of knowledge of ω0 has on the estimation of the memory parameter d. We present estimators which achieve the optimal rates. A small Monte-Carlo study is included to illustrate the finite sample performance of our estimators.
Introduction
Given a covariance stationary process X, the search for cyclical components is of undoubted interest. This is motivated by the observed periodic behaviour exhibited in many time series. A well known model capable of generating such a periodic behaviour is the regression model
x_t = µ + ρ_1 cos(ω_0 t) + ρ_2 sin(ω_0 t) + ε_t,  (1.1)
where ρ 1 and ρ 2 are zero mean uncorrelated random variables with the same variance and {ε t } is a stationary sequence of random variables independent of ρ 1 and ρ 2 . Model (1.1) has enjoyed extensive use and different techniques have been proposed for the estimation of the frequency, amplitude and phase; see [START_REF] Whittle | The simultaneous estimation of a time series harmonic components and covariance structure[END_REF], [START_REF] Grenander | Statistical Analysis of stationary time series[END_REF], [START_REF] Hannan | Non-linear time series regression[END_REF][START_REF] Hannan | The estimation of frequency[END_REF] and [START_REF] Chen | Consistent estimate for hidden frequencies in a linear process[END_REF]. A second model exhibiting peaks in its spectral density function is the autoregressive AR (2) process
(I − a_1 B − a_2 B²) X = ε,  (1.2)
where B is the backshift operator. When the roots of the polynomial 1 − a_1 z − a_2 z² are not real (which implies that a_2 < 0), the process X exhibits a periodic behaviour with frequency ω_0 = arccos(a_1/(2√(−a_2))). Models (1.1) and (1.2) represent two extreme situations explaining cyclical behaviour of the data. Model (1.2) possesses a continuous spectral density function whereas model (1.1) has a spectral distribution function with a jump at the frequency ω_0. Whereas the cyclical pattern of model (1.2) fades out with time fairly quickly (in the sense that the autocorrelation of the process decays exponentially fast), the cyclical component of the data remains constant in model (1.1).
Between these two extreme situations, there exists a class of intermediate models capable of generating a cyclical behaviour in the data, stronger and more persistent than that of ARMA models such as (1.2), but whose amplitude, unlike in model (1.1), does not remain constant over time. Such a model has been proposed by [START_REF] Andel | Long-memory time series models[END_REF] and extended by [START_REF] Gray | On generalized fractional processes[END_REF][START_REF] Gray | On generalized fractional processes -a correction[END_REF], who coined it the GARMA model. It is defined as
(I − e^{iω_0} B)^{d/2} (I − e^{−iω_0} B)^{d/2} X = ε,  (1.3)
where ε is an ARMA process. The spectral density function of the GARMA process is given by
f(x) = (σ²/(2π)) |1 − e^{i(x−ω_0)}|^{−d} |1 − e^{i(x+ω_0)}|^{−d} |P(e^{ix})/Q(e^{ix})|²,  (1.4)
where σ² > 0, and P and Q are polynomials without common roots and without roots inside the unit disk. As 1 − 2cos(ω_0)z + z² is known as the Gegenbauer polynomial, GARMA processes are also referred to as Gegenbauer processes. When ω_0 = 0, model (1.3) boils down to the more familiar FARIMA(p, d, q) model, originated by [START_REF] Adenstedt | On large-sample estimation for the mean of a stationary random sequence[END_REF] and examined by [START_REF] Granger | An introduction to long memory time series and fractional differencing[END_REF] and [START_REF] Hosking | Fractional differencing[END_REF]. The coefficient d is frequently referred to as the memory parameter, or the fractional differencing coefficient. One can also sometimes find reference to the coefficient g defined as g = d if ω_0 ∈ {0, π} and g = d/2 if ω_0 ∈ (0, π).
The spectral density has a singularity at ω_0 with a power law |x − ω_0|^{−α}, where α = 2d if ω_0 ∈ {0, π} and α = d if ω_0 ∈ (0, π). (Note that in both cases, α = 2g.) These models exhibit "long range dependence", meaning that the autocovariance function γ(k) := cov(X_0, X_k) is not absolutely summable. This makes their probabilistic properties and the asymptotic distribution of some relevant statistics very different from those of usual "weakly dependent" processes such as ARMA models.
Maximum likelihood and Whittle estimators for these models have been investigated and proved to be √ n-consistent and asymptotically normal when a parametric model is correctly specified. In the case of Gaussian or linear processes, this was shown by [START_REF] Fox | Large-sample properties of parameter estimates for strongly dependent stationary Gaussian time series[END_REF], [START_REF] Dahlhaus | Efficient parameter estimation for self-similar processes[END_REF], [START_REF] Giraitis | A central limit theorem for quadratic forms in strongly dependent linear variables and its application to asymptotic normality of Whittle's estimate[END_REF] in the case ω 0 = 0, [START_REF] Hosoya | A limit theory with long-range dependence and statistical inference on related models[END_REF] in the case ω 0 = 0 known, and more recently [START_REF] Giraitis | Gaussian estimation of parametric spectral density with unknown pole[END_REF] dealt with the case ω 0 unknown. However, misspecification of the model can lead to inconsistent estimates of the coefficient d and of the location of the pole ω 0 . This has drawn attention of researchers and practioners to semi-parametric methods. This means that the process ǫ in the fractional differencing equation (1.3) is not fully specified, but considered as an infinite dimensional nuisance parameter, while the parameters of interest are ω 0 and α. More precisely, we consider a covariance stationary linear process X with spectral density function
f(x) = |1 − e^{i(x−ω_0)}|^{−d} |1 − e^{i(x+ω_0)}|^{−d} f*(x),  x ∈ [−π, π] \ {±ω_0},  (1.5)
where 0 < d < 1/2 if ω_0 = 0 or π, 0 < d < 1 if ω_0 ∈ (0, π), and f* is a positive and continuous function on [−π, π]. The difference between (1.3) and (1.5) lies in the fact that in (1.5), the so-called smooth component f* of the spectral density f is neither constrained to be rational, as in (1.4), nor to be characterized by a finite dimensional parameter.
The main objectives of this paper are twofold. The first one is to provide, under mild conditions, a consistent semi-parametric estimator of ω 0 with the best possible rate of convergence. The method we have chosen is the method of [START_REF] Yajima | Estimation of the frequency of unbounded spectral densities[END_REF] which consists in maximizing the periodogram of the data. [START_REF] Yajima | Estimation of the frequency of unbounded spectral densities[END_REF] proved the consistency of this estimator under the assumption of Gaussianity. Theorem 1 below relaxes this assumption of Gaussianity and slightly improves the rate of convergence obtained by Yajima. The second objective is to investigate the consequences that the lack of knowledge of ω 0 might have on the estimation of α. Theorem 2 shows that a modified version of the GPH estimator of Geweke and Porter-Hudak (1983) (see also [START_REF] Robinson | Log-periodogram regression of time series with long range dependence[END_REF] has the same rate-optimality properties proved by [START_REF] Giraitis | Rate optimal semiparametric estimation of the memory parameter of the Gaussian time series with long range dependence[END_REF] where ω 0 was known, and it is asymptotically Gaussian with the same asymptotic variance obtained in [START_REF] Robinson | Log-periodogram regression of time series with long range dependence[END_REF]. That is, the statistical properties of this estimator of α are unaffected by the lack of knowledge of ω 0 . A short Monte-Carlo experiment confirms these theoretical results.
Definition and asymptotic properties of the estimators
Let X_1, …, X_n be n observations of the process X and let I_n(x) = (2πn)^{−1} |Σ_{t=1}^{n} X_t e^{itx}|² be the periodogram. Define ñ = [(n−1)/2] and let x_k = 2kπ/n, −ñ ≤ k ≤ ñ, be the Fourier frequencies. The estimator of ω_0 is defined as
ω̂_n = (2π/n) argmax_{1≤k≤ñ} I_n(x_k).  (2.1)
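As a small illustration of (2.1), the maximizer of the periodogram over the Fourier frequencies can be computed directly from the FFT. The sketch below is plain NumPy and is not taken from the paper; it simply implements the formula above for a real-valued sample.

```python
import numpy as np

def periodogram_pole_estimate(x):
    """Estimate the pole location omega_0 by maximizing the periodogram over
    the Fourier frequencies x_k = 2*pi*k/n, k = 1, ..., [(n-1)/2]  (eq. (2.1))."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    n_tilde = (n - 1) // 2
    # Periodogram I_n(x_k) = (2*pi*n)^{-1} |sum_t X_t e^{i t x_k}|^2, via the FFT
    # (the modulus is unchanged by conjugation and by the t-index offset).
    I = (np.abs(np.fft.fft(x)) ** 2) / (2.0 * np.pi * n)
    k_hat = 1 + np.argmax(I[1:n_tilde + 1])   # maximize over k = 1, ..., n_tilde
    return 2.0 * np.pi * k_hat / n
```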
Theorem 1. Let X be a strict sense linear process, i.e. there exists an i.i.d. sequence (Z_t)_{t∈Z} with zero mean and finite eighth moment and a square summable sequence (a_t)_{t∈Z} such that X_t = µ + Σ_{j∈Z} a_j Z_{t−j}, and define a(x) = Σ_{j∈Z} a_j e^{ijx}. Assume that a is differentiable on [−π, π] \ {ω_0} and that for all x ∈ [0, π] \ {ω_0},
K_1 |x − ω_0|^{−ν} ≤ |a(x)| ≤ K_2 |x − ω_0|^{−ν},  (2.2)
|(x − ω_0) a′(x)/a(x)| ≤ K,  (2.3)
for some positive constants K, K_1, K_2 and ν ∈ (0, 1/2). Let v_n be a non-decreasing sequence such that lim_{n→∞} v_n^{−2ν} log(n) = 0. Then (n/v_n)(ω̂_n − ω_0) converges in probability to zero.
Comments on assumptions (2.2) and (2.3) Assumption (2.2) ensures that there is a pole. Assumption (2.3) is an adaptation of Assumption A1 of [START_REF] Robinson | Log-periodogram regression of time series with long range dependence[END_REF] in the case of a pole outside zero; it is needed to obtain upper bounds for the covariance of the discrete Fourier transform ordinates of X.
We now define the modified GPH estimator of the exponent of the singularity. Since model (1.5) does not allow the singularity to be non symmetric, as in [START_REF] Arteche | Semiparametric inference in seasonal and cyclical long memory processes[END_REF], the definition of the estimator is symmetric around the pole. Recall that the exponent of the singularity is defined as
α = d if ω_0 ∈ (0, π) and α = 2d if ω_0 ∈ {0, π}. Denote g(x) = −log(|1 − e^{ix}|), ḡ_m = m^{−1} Σ_{k=1}^{m} g(x_k), s_m² = 2 Σ_{k=1}^{m} (g(x_k) − ḡ_m)², and for k = −m, …, −1, 1, …, m, γ_k = s_m^{−2} (g(x_k) − ḡ_m). The estimator α̂_n is defined as
α̂_n = Σ_{1≤|k|≤m} γ_k log{I_n(ω̂_n + x_k)}.  (2.4)
(A1) f* is an integrable function on [−π, π] and there exists a neighborhood V_0 = [ω_0 − ϑ, ω_0 + ϑ] of ω_0 such that for all x ∈ V_0,
(a) |f*(x) − f*(ω_0)| ≤ K f*(ω_0) |x − ω_0|^β, for some β ∈ (0, 1] and some positive constant K > 0, or
(b) f* is differentiable at ω_0 and |f*(x) − f*(ω_0) − (x − ω_0) f*′(ω_0)| ≤ K |x − ω_0|^β, for some β ∈ (1, 2] and some positive constant K > 0.
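The estimator (2.4) can be written in a few lines; the sketch below is an illustration of the definitions above (not the authors' code), computing the symmetrized log-periodogram regression around a previously estimated pole.

```python
import numpy as np

def modified_gph(x, omega_hat, m):
    """Sketch of the symmetrized log-periodogram estimator (2.4) of the
    singularity exponent alpha, evaluated at the estimated pole omega_hat."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    t = np.arange(1, n + 1)

    def I_n(freq):
        # Periodogram at an arbitrary frequency (direct sum, O(n) per evaluation).
        return np.abs(np.sum(x * np.exp(1j * t * freq))) ** 2 / (2.0 * np.pi * n)

    xk = 2.0 * np.pi * np.arange(1, m + 1) / n            # x_1, ..., x_m
    g = -np.log(np.abs(1.0 - np.exp(1j * xk)))            # g(x_k)
    g_bar = g.mean()
    s2 = 2.0 * np.sum((g - g_bar) ** 2)                   # s_m^2
    gamma = (g - g_bar) / s2                              # gamma_k (= gamma_{-k} by symmetry)
    log_right = np.array([np.log(I_n(omega_hat + f)) for f in xk])
    log_left = np.array([np.log(I_n(omega_hat - f)) for f in xk])
    return np.sum(gamma * (log_right + log_left))         # sum over 1 <= |k| <= m
```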
Theorem 2. Let X be a Gaussian process whose spectral density f can be expressed as
f(x) = |1 − e^{i(x−ω_0)}|^{−d} |1 − e^{i(x+ω_0)}|^{−d} f*(x),  x ∈ [−π, π] \ {±ω_0},
with d ∈ (0, 1) if ω_0 ∈ (0, π) and d ∈ (0, 1/2) if ω_0 ∈ {0, π}, and f* an even function that satisfies (A1) and
for all x ∈ [0, π] \ {ω_0},  |(x − ω_0) f*′(x)/f*(x)| ≤ K.  (2.5)
Let δ be a positive real number such that δ < 2β/(2β+1) and define m = m(n) = [n^δ]. Then m^{1/2}(α̂_n − α) converges weakly to N(0, π²/12) if ω_0 ∈ (0, π), and m^{1/2}(α̂_n − α) converges weakly to N(0, π²/6) if ω_0 ∈ {0, π}.
The first step in the proof of Theorem 2 is to obtain the distribution of the GPH estimator when the location of the pole is known. Let ω_n be the closest Fourier frequency to ω_0, with the convention that if x_{k_1} and x_{k_2} are equidistant from ω_0 we take ω_n as the smallest of the two. Let α̃_n be the (infeasible) GPH estimator based on the knowledge of ω_0:
α̃_n = Σ_{1≤|k|≤m} γ_k log{I_n(ω_n + x_k)}.  (2.6)
Proposition 1. Under the assumptions of Theorem 2, m^{1/2}(α̃_n − α) converges weakly to N(0, π²/12) if ω_0 ∈ (0, π) and to N(0, π²/6) if ω_0 ∈ {0, π}.
Comments
• Assumption (2.5) corresponds to Assumption A1 of [START_REF] Robinson | Log-periodogram regression of time series with long range dependence[END_REF] and is used to obtain covariance bounds for the discrete Fourier transform and log-periodogram ordinates of the process X (Lemmas 1 and 4 below). (A1) corresponds to A2 of [START_REF] Robinson | Log-periodogram regression of time series with long range dependence[END_REF] and is used to control the bias of the GPH estimator.
• An asymptotic distribution for the estimator of ω_0 would obviously be of great interest, especially if the rate of convergence n could be obtained. Unfortunately, this has not yet been achieved. Nevertheless, the rate of convergence of the present estimator is close to the parametric rate n obtained by [START_REF] Giraitis | Gaussian estimation of parametric spectral density with unknown pole[END_REF] and its empirical performance is quite good, as shown in Section 3. [START_REF] Hidalgo | Semiparametric estimation when the location of the pole is unknown[END_REF] proposes an alternative estimator for which he obtains an asymptotic distribution with a rate of convergence close to the parametric rate n, provided the process X has enough finite moments.
• If ω 0 ∈ {0, π}, note that αn is approximately equal to twice the GPH estimator dn of d whose asymptotic distribution under the same assumptions is N (0, π 2 /24), cf. [START_REF] Robinson | Log-periodogram regression of time series with long range dependence[END_REF]. Hence, there is no efficiency loss incurred by estimating ω 0 . The asymptotic variance in the case ω 0 ∈ {0, π} is twofold the asymptotic variance in the case ω 0 ∈ (0, π), because ǫ k (ω n ) = ǫ -k (ω n ) in the former case, while in the latter case, the m Fourier frequencies on the left and on the right of ω 0 can be considered "asymptotically i.i.d.".
• The symmetrization procedure is necessary in the case ω 0 ∈ (0, π), in order to obtain the same rate of convergence as in the case ω 0 ∈ {0, π}. Without the symmetrization, the maximum possible value of δ would be (2/3) ∧ (2β/(2β + 1)), (2/3 being the value corresponding to β ≥ 1), instead of 2β/(2β + 1) here. The reason is that by symmetry, the first derivative of the spectral density at ω 0 is 0 when ω 0 ∈ {0, π}, whereas it need not be so if ω 0 ∈ (0, π). For more details see [START_REF] Hidalgo | Semiparametric estimation when the location of the pole is unknown[END_REF].
• Theorem 2 is proved under the assumption of Gaussianity. Using the techniques of [START_REF] Velasco | Non-Gaussian log-periodogram regression[END_REF] and [START_REF] Hurvich | The FEXP estimator for non Gaussian, potentially non stationary processes[END_REF], it could be proved under the weaker assumption of linearity in the strict sense. The derivations would then be extremely involved and lengthy. We prefer to give a simple proof under the assumption of Gaussianity.
Case of multiple singularities
Model (1.5) can be extended to allow for more than one spectral singularity or pole, that is
f(x) = Π_{i=1}^{s} |1 − e^{i(x−ω_i)}|^{−d_i} |1 − e^{i(x+ω_i)}|^{−d_i} f*(x),
where ω_i ≠ ω_j if i ≠ j, 0 < d_i < 1 if ω_i ≠ 0 and 0 < d_i < 1/2 if ω_i = 0, and f* is a smooth function (C² over [−π, π], say). Then the poles ω_i, i = 1, …, s and the values d_i, i = 1, …, s, can be estimated sequentially due to the local character of our estimators, as we now illustrate. Suppose for expositional purposes that ω_1 ≠ 0, ω_2 ≠ 0, d_1 > d_2 and s = 2. Then ω_1 and ω_2 can be estimated by
ω̂_n^1 = (2π/n) argmax_{1≤k≤ñ} I_n(x_k)  and  ω̂_n^2 = (2π/n) argmax_{1≤k≤ñ, |x_k − ω̂_n^1| ≥ z_n/n} I_n(x_k),
respectively, where z_n is a non-decreasing sequence such that for any positive real numbers κ and κ′,
log^κ(n) ≪ n/z_n ≪ n^{κ′}.
An example of such a sequence is z_n = n e^{−√log(n)}. Then the rate of convergence of ω̂_n^1 to ω_1 will be unchanged and the rate of convergence of ω̂_n^2 to ω_2 will be n/v_{2,n}, where v_{2,n}^{−2d_2} log(n) = o(1).
Let us briefly explain why such a sequence z_n is needed to yield an n/v_{2,n} consistent estimator of ω_2. First, the proof of Theorem 1 can be easily adapted to prove that ω̂_n^1 is still n/v_{1,n} consistent with v_{1,n} such that v_{1,n}^{−d_1} log(n) = o(1). Then, if z_n is chosen as proposed above, with probability tending to one, |ω̂_n^1 − ω_1| ≤ z_n/(2n). Hence |ω̂_n^2 − ω_1| ≥ z_n/(2n) and
max_{|x_k − ω_1| ≥ z_n/(2n), |x_k − ω_2| ≥ v_{2,n}/n} f(x_k) ≤ max((n/z_n)^{d_1}, (n/v_{2,n})^{d_2}) = (n/v_{2,n})^{d_2}.
As in the proof of Theorem 1, the final argument is that E[I_n(ω_n^2)] ≍ n^{d_2}, where ω_n^2 is the closest Fourier frequency to ω_2. If d_1 = d_2 then obviously ω̂_n^1 is no longer consistent. Nevertheless, it is not difficult to see that the pair {ω̂_n^1, ω̂_n^2} is an n/v_n consistent estimator of the pair {ω_1, ω_2}, with v_n = v_{1,n} = v_{2,n}.
The estimators of d_1 and d_2 will be given by (2.4), but with ω̂_n replaced by ω̂_n^1 and ω̂_n^2 respectively, and their rates of convergence will be the optimal ones. The proof of this result would be along the same lines as the proof of Theorem 2, with added technicalities.
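A hedged sketch of the sequential procedure described above for s = 2: maximize the periodogram, exclude a neighbourhood of radius z_n/n around the first maximizer, and maximize again over the remaining Fourier frequencies. The choice z_n = n e^{−√log(n)} follows the example in the text; everything else is illustrative.

```python
import numpy as np

def two_pole_estimates(x):
    """Sequential estimation of two spectral poles from a real-valued sample."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    n_tilde = (n - 1) // 2
    I = (np.abs(np.fft.fft(x)) ** 2) / (2.0 * np.pi * n)
    freqs = 2.0 * np.pi * np.arange(1, n_tilde + 1) / n
    Ipos = I[1:n_tilde + 1]

    k1 = np.argmax(Ipos)                         # first pole: global maximizer
    omega1 = freqs[k1]

    z_n = n * np.exp(-np.sqrt(np.log(n)))        # shrinking exclusion radius z_n/n
    mask = np.abs(freqs - omega1) >= z_n / n
    k2 = np.argmax(np.where(mask, Ipos, -np.inf))
    omega2 = freqs[k2]                           # second pole: maximizer outside the excluded zone
    return omega1, omega2
```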
The previous comments are valid only if the number of poles s is known. Clearly an important issue is to know how many poles do we have. For instance one may even doubt the actual presence of a pole. If indeed there is no pole then the value of ωn is spurious, but nevertheless we claim that in that case αn converges to zero, and more precisely, αn = O P (m -γ ) for any γ ∈ (0, 1/2), under the assumptions of Theorem 2 with α = 0 and assumption (A1) extended to the whole interval [-π, π], that is, assuming
f = f * is a smooth function over [-π, π].
We briefly outline the arguments. Define S_n(x_ℓ) = Σ_{1≤|k|≤m} γ_k log{I_n(x_ℓ + x_k)/f(x_ℓ + x_k)} and let A be an arbitrary positive real number. Then:
P(m^γ |α̂_n| ≥ A) ≤ P(m^γ |S_n(ω̂_n)| ≥ A/2) + P(m^γ |Σ_{1≤|k|≤m} γ_k log{f(ω̂_n + x_k)}| ≥ A/2).
Now, under the assumption on the sequence m, the last term is o(1) since Σ_{1≤|k|≤m} γ_k = 0 and it is assumed here that f satisfies (A1) since α = 0. By applying Markov's inequality, we obtain, for any integer q:
P(m^γ |S_n(ω̂_n)| ≥ A/2) ≤ Σ_{ℓ=1}^{ñ} P(m^γ |S_n(x_ℓ)| ≥ A/2) ≤ Σ_{ℓ=1}^{ñ} (A/2)^{−q} m^{γq} E[|S_n(x_ℓ)|^q].
It can then be shown that, under the Gaussian assumption, E[|S_n(x_ℓ)|^q] ≤ C m^{−q/2}, where the constant C depends only on f and q (using for instance Theorem 2.1 and following the lines of the proof of Theorem 3.1 in [START_REF] Soulier | Moment bounds and a central limit theorem for functions of Gaussian vectors[END_REF]). Hence, we can conclude that lim_{A→∞} lim sup_{n→∞} P(m^γ |S_n(ω̂_n)| ≥ A/2) = 0, which proves our claim.
It is plausible that the rate of convergence can be improved to O P (m -1/2 ), (possibly up to some logarithmic factor), but this is beyond the scope of the present paper.
An asymptotic distribution for αn could be obtained in some particular cases (such as Gaussian white noise), but there are probably many different situations, thus making it impossible to have a general result.
Minimax rates of convergence
In semiparametric estimation, rates of convergence are an important issue. Two problems are considered here. The first one is the estimation of the location of the pole ω 0 , and the second one is estimation of the exponent of the singularity α when ω 0 is unknown.
When ω_0 is equal to zero and known, [START_REF] Giraitis | Rate optimal semiparametric estimation of the memory parameter of the Gaussian time series with long range dependence[END_REF] have proved that the best possible rate of convergence of an estimator of α under assumption (A1) is n^{β/(2β+1)}. Theorem 2 states that even if ω_0 is unknown, then m^{1/2}(α̂_n − α) is asymptotically normal at any rate m such that m/n^{2β/(2β+1)} → 0. Since an extra unknown parameter cannot possibly result in an improved rate of convergence, Theorem 2 shows that the rate of convergence of α̂_n is optimal as far as obtaining a central limit theorem is concerned.
The problem of estimating the pole is not yet satisfactorily answered. It has been conjectured that the best possible rate of convergence, even in a parametric context, is n, meaning that there exists a constant c > 0 such that for any estimator ω̃_n based on observations X_1, …, X_n, it holds that lim_{n→∞} P(n|ω̃_n − ω_0| > c) > 0. In a parametric context, [START_REF] Giraitis | Gaussian estimation of parametric spectral density with unknown pole[END_REF] have defined an estimator ω̂_n such that ω̂_n − ω_0 = O_P(1/n), but they have not proved that this rate of convergence is optimal. In the present semiparametric context, a lower bound for the rate of convergence is not known and, to the best of our knowledge, there exists no estimator better (and simpler) than the one presented here.
There is an obvious problem of identifiability of ω_0 if the singularity exponent α or the fractional differencing coefficient d is not bounded away from 0. If d is bounded away from zero, say d > d_0, Theorem 1 implies that the estimator presented here attains the rate of convergence n log^{−γ}(n) for any γ > 1/(2d_0). We have not been able to obtain a lower bound for the rate of convergence of an estimator of ω_0 when d is bounded away from zero by a fixed constant. If d can get closer and closer to zero at a given rate, we have obtained the following result.
Theorem 3. Let s be a positive integer and let d_n be a sequence of real numbers such that lim_{n→∞}(d_n + n d_n^s) = 0. Denote d(ω_0) = 1 if ω_0 ∈ (0, π) and d(ω_0) = 1/2 if ω_0 ∈ {0, π}. There exists a positive constant c such that
inf_{ω̂_n} inf_{ω_0∈[0,π]} inf_{d_n≤d<d(ω_0)} sup_{f*} P_{ω_0,d,f*}(n d_n |ω̂_n − ω_0| ≥ c) > 0,
where P_{ω_0,d,f*} denotes the distribution of any second order stationary process with spectral density f(x) = |1 − e^{i(x−ω_0)}|^{−d} |1 − e^{i(x+ω_0)}|^{−d} f*(x), and sup_{f*} means that the supremum is evaluated over all functions f* such that (2.5) holds.
Choosing d n = 1/n proves that ω 0 cannot be consistently estimated (in the minimax sense) if d is not bounded away from zero.
Monte-Carlo simulations
In order to investigate how well w n behaves and the relative performance of αn compared to the infeasible estimator αn (with αn defined in (2.6)) in small samples, a limited Monte-Carlo study was carried out. We report the result for the parameter g defined as α/2 and the corresponding estimators ĝn and gn instead of α, to be consistent with the literature. When ω 0 ∈ {0, π}, the expected variance of ĝn is the usual π 2 /24m, whereas when ω 0 ∈ (0, π), the expected variance of ĝn is π 2 /48m which matches that obtained by [START_REF] Arteche | Semiparametric inference in seasonal and cyclical long memory processes[END_REF].
We have simulated 5000 replications of series of length n = 256, 512 and 1024 of Gaussian processes with spectral densities |1 − e^{ix}|^{−2g} and |1 − e^{i(x−π/2)}|^{−2g} |1 − e^{i(x+π/2)}|^{−2g}, with g = 0.1, 0.2, 0.3 and 0.4.
They were generated by the method of Davies and Harte (1987) using formulae for autocovariance given in [START_REF] Arteche | Semiparametric inference in seasonal and cyclical long memory processes[END_REF]. The results of the Monte-Carlo experiment are given in Tables 1 to 4.
Table 1 shows the performance of ω̂_n. As expected, the finite sample performance of ω̂_n becomes better as n and/or g increases. Specifically, for g = 0.1 the standard deviation of ω̂_n is far larger than when g = 0.4. Moreover, it appears that the precision of ω̂_n is not affected by the true value of ω_0. Note that the larger positive bias for ω_0 = 0 compared to the case ω_0 = π/2 is due to the fact that the estimator cannot take negative values, so that ω̂_n − ω_0 ≥ 0 for ω_0 = 0, which implies that the error is a nonnegative random variable.
A quick look at Tables 2 to 4, especially the table for the Mean Square Error (M.S.E.), indicates that, as predicted by the theory, ĝ_n and the estimator g̃_n given in (2.6) are very close, although for small g, for instance g = 0.1, ĝ_n tends to have a larger bias than g̃_n. On the other hand, for larger values of g, not only are the M.S.E. of ĝ_n and g̃_n similar, but their bias and standard deviation are also very close. So, the Monte-Carlo experiment tends to confirm the theoretical result obtained in Theorem 2, that is, that m^{1/2}(ĝ_n − g̃_n) = o_p(1). Note that the method employed to prove Theorem 2 is precisely to show that the latter is the case. The empirical standard deviations of ĝ_n and g̃_n are close to the theoretical ones, especially for larger values of g.
Remark. A more erratic behaviour of ω̂_n and ĝ_n is observed when g is small, for instance g = 0.1. One possible explanation is that the spectral density is flatter for g = 0.1 than for g = 0.4, so that locating and obtaining the maximum becomes harder; this translates into a worse estimator of g. Another possible explanation is due to the randomness of I_n(x), which implies that the error I_n(x) − f(x) becomes larger relative to I_n(x), or f(x), as g becomes smaller. That is, the ratio of the noise of I_n(x), namely I_n(x) − f(x), to its signal, given by I_n(x) or f(x), becomes larger, so more observations are expected to be needed to obtain an accurate estimate. One way to alleviate this problem in small samples could be to look at the maximum not of I_n(x_k) but of the average (1/3){I_n(x_{k−1}) + I_n(x_k) + I_n(x_{k+1})}. This would have the effect of reducing the variability of the noise of I_n(x_k), e.g. the variance of I_n(x_k) − f(x_k).
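The smoothing suggested in the remark is straightforward to implement; the sketch below (illustrative only) maximizes the 3-point moving average of the periodogram ordinates instead of the raw periodogram.

```python
import numpy as np

def smoothed_pole_estimate(x):
    """Pole estimate based on the 3-point averaged periodogram ordinates."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    n_tilde = (n - 1) // 2
    I = (np.abs(np.fft.fft(x)) ** 2) / (2.0 * np.pi * n)
    Ipos = I[1:n_tilde + 1]                      # I_n(x_k), k = 1, ..., n_tilde
    # (I_n(x_{k-1}) + I_n(x_k) + I_n(x_{k+1})) / 3 for interior indices k
    avg = (Ipos[:-2] + Ipos[1:-1] + Ipos[2:]) / 3.0
    k_hat = 2 + np.argmax(avg)                   # argmax over k = 2, ..., n_tilde - 1
    return 2.0 * np.pi * k_hat / n
```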
Proofs
In all the subsequent proofs, c, C, c(...) will denote numerical constants whose values depend only on their arguments and may change upon each appearance.
As always, the key ingredient of the proof is a bound on the covariance of renormalised discrete Fourier transform ordinates. Such a bound has originally been obtained by [START_REF] Robinson | Log-periodogram regression of time series with long range dependence[END_REF] in the case ω_0 = 0, and then generalised to the case ω_0 ∈ (0, π) by Arteche and Robinson (2000) and Giraitis et al. (2001, Lemma 4.6). Define d_k(ω) = (2πn f(ω + x_k))^{−1/2} Σ_{t=1}^{n} X_t e^{it(ω+x_k)}.
Lemma 1. Let X be a second order stationary process with spectral density f that satisfies (2.2) and (2.3). Then, for 1 ≤ k ≤ j ≤ ñ,
|E[d_k(ω_n) d_j(ω_n)]| + |E[d_k(ω_n) d̄_j(ω_n)] − δ_{k,j}| ≤ C(d, l*) log(1+j) k^{−1}.  (4.1)
Proof of Theorem 1. We must prove that for any ǫ > 0, lim_{n→∞} P(n v_n^{−1} |ω̂_n − ω_0| > ǫ) = 0. Recall that ω_n is the closest Fourier frequency to ω_0. Since |ω_n − ω_0| ≤ π/n, it is enough to prove that lim_{n→∞} P(n v_n^{−1} |ω̂_n − ω_n| > ǫ) = 0. Define M_n = max_{1≤k≤ñ, x_k≠ω_n} I_n(x_k)/f(x_k) and Λ_{ǫ,n} = {k : 1 ≤ k ≤ ñ, |x_k − ω_n| > ǫ v_n/n}. Then, using (2.2) and the fact that f(x) = |a(x)|² E[Z_0²]/(2π), we have
P(n v_n^{−1} |ω̂_n − ω_n| > ǫ) = P(|ω̂_n − ω_n| > ǫ v_n/n) ≤ P(I_n(ω_n) < M_n max_{k∈Λ_{n,ǫ}} f(x_k)) ≤ P(I_n(ω_n) < M_n c(ω_0, L)(ǫ v_n/n)^{−2ν}) ≤ P(I_n(ω_n) < 2 c(ω_0, L) log(n)(ǫ v_n/n)^{−2ν}) + P(M_n > 2 log(n)).
Let us first prove that lim n→∞ P(M n > 2 log(n)) = 0. To that purpose, it suffices to show that M n / log(n) converges in probability to 1.
Denote
I Z n (x) = 1 2πn | n k=1 Z k e ikx | 2 and M Z n = 2π max 1≤k≤ñ I Z n (x k ). We can assume without loss of generality that E[Z 2 0 ] = 1. By definition of M n and M Z n , M n log(n) - M Z n log(n) ≤ max 1≤k≤ñ;x k =ωn |I n (x k )/f (x k ) -2πI Z n (x k )| log(n) .
It has been proved in An, Chen and Hannan (1983) that lim sup n M Z n / log(n) ≤ 1. Davis and Mikosch (1999) showed that the lim sup is actually a limit and that it is equal to 1. Define
R n = max 1≤k≤ñ;x k =ωn |I n (x k )/f (x k ) -2πI Z n (x k )|.
To prove that R n / log(n) tends to zero in probability, we need the following bound, which generalises Lemma 1 to higher moments and can be proved in a similar way as Lemma 11 in Hurvich et al. (2002): under the assumptions of Theorem 1, there exists a constant C such that for all n and k = 0 such that 0
< x k + ω n < π, E[{I n (ω n + x k )/f (ω n + x k ) -2πI Z n (ω n + x k )} 4 ] ≤ C log 2 (k)k -2 .
Bounding the maximum by the sum and applying the Markov inequality we obtain that
P(R n / log(n) > ǫ) ≤ 1≤k≤ñ,x k =ωn E[{I n (x k )/f (x k ) -2πI Z n (x k )} 4 ] ǫ 4 log 4 (n) = O(log -4 (n)),
since the series log 2 (k)k -2 is summable. Thus R n / log(n) converges to zero in probability, which implies that M n / log(n) converges to one in probability.
We must now prove that lim n→∞ P(
I n (ω n ) < 2c(ω 0 , L) log(n)(ǫv n /n) -2ν ) = 0.
If ω 0 ∈ {0, π}, then ω n = ω 0 for all n. If the process X is not centered, i.e. µ = 0, then it is easily seen that
I n (ω 0 ) = nµ + o P (n). Hence lim n→∞ P(I n (ω 0 ) < 2c(ω 0 , L) log(n)(ǫv n /n) -2ν ) = 0, because ν < 1/2.
If now ω 0 ∈ (0, π) or ω 0 ∈ {0, π} and the process is centered, define
χ n = {E[I n (ω n )]} -1 I n (ω n ). As proved in Lemma 2 below, there exists a constant K such that E[I n (ω n )] ≥ Kn 2ν . Hence P(I n (ω n ) < 2C(ω 0 , L) log(n)(ǫv n /n) -2ν ) = P(χ n ≤ Cn 2ν {E[I n (ω n )]|} -1 (ǫv n ) -2ν log(n)) ≤ P(χ n ≤ C(ǫv n ) -2ν log(n)).
By assumption lim n→∞ log(n)v -ν n = 0. Since it is proved in Lemma 2 that χ n converges to a distribution with no mass at zero, the proof of Theorem 1 is concluded. Lemma 2. Under the assumption of Theorem 1 and if moreover the process X is centered in the case ω 0 ∈ {0, π}, then {E[I n (ω n )]} -1/2 I n (ω n ) converges weakly to a distribution without mass at 0.
Proof. Define σ 2 n = E[I n (ω n )] and ζ n = (2πn) -1/2 n k=1 X k e ikωn .
If ω 0 ∈ (0, π), then, for large enough n, ζ n is centered even if the process X is not. Thus we can assume, without loss of generality, that X is centered. Since moreover X is a linear process, the asymptotic normality of σ -1 n ζ n holds as soon as lim n→∞ σ n = ∞ (cf. Theorem 18.6.5 in [START_REF] Ibragimov | Independent and stationary sequences of random variables[END_REF]. Under assumption (2.2), we have:
σ_n² = E[I_n(ω_n)] = (2πn)^{−1} ∫_{−π}^{π} [sin²(nx/2)/sin²(x/2)] f(x + ω_n) dx ≥ K′ n^{−1} ∫_{2π/n}^{π} [sin²(nx/2)/sin²(x/2)] (x + ω_n − ω_0)^{−2ν} dx ≥ K″ n^{−1} ∫_{2π/n}^{π} [sin²(nx/2)/sin²(x/2)] x^{−2ν} dx ≥ K‴ n^{2ν},
which tends to infinity.
Proof of Proposition 1.
For ω ∈ [0, π], denote αn (ω) = 1≤|k|≤m γ k log{I n (ω + x k )}.
Note that with this notation, αn = αn (ω n ) and αn = αn (ω n ). Define also
l * := log(f * ), ǫ k (ω) = log{I n (ω +x k )/f (ω +x k )}+E (where E = -.577216 . . . is Euler's constant), β k = s -1 m (g(x k )-ḡm ) = s m γ k and ξ m (ω) = 1≤|k|≤m β k ǫ k (ω)
. With these notations, we obtain that for any ω,
αn (ω) = d 1≤|k|≤m γ k {g(x k + ω -ω 0 ) + g(x k + ω + ω 0 )} + 1≤|k|≤m γ k l * (ω + x k ) + s -1 m ξ m (ω). (4.2)
Replacing ω with ω n in (4.2) above, we get
s m (α n (ω n ) -α) = d 1 (0,π) (ω n ) 1≤|k|≤m β k {g(x k + ω n -ω 0 ) -g(x k )} + d 1 (0,π) (ω n ) 1≤|k|≤m β k g(x k + ω n + ω 0 ) + 1≤|k|≤m β k l * (ω n + x k ) + ξ m (ω n ). (4.3)
Since by definition, 1≤|k|≤m β k = 0, (4.3) can be written as
s m (α n (ω n ) -α) = d 1 (0,π) (ω n ) 1≤|k|≤m β k {g(x k + ω n -ω 0 ) -g(x k )} + d 1 (0,π) (ω n ) 1≤|k|≤m β k {g(x k + ω n + ω 0 ) -g(ω n + ω 0 )} + 1≤|k|≤m β k {l * (ω n + x k ) -l * (ω n )} + ξ m (ω n ) =: b 1,m (ω n ) + b 2,m (ω n ) + B m (l * ) + ξ m (ω n ).
The terms b 1,m and b 2,m vanish if ω 0 ∈ {0, π} and are well defined if ω 0 ∈ (0, π), at least for large enough n, since m/n → 0, which implies that x k + ω n ∈ (0, π) and x k + ω n + ω 0 = 0 (modulo 2π). The third bias term B m (l * ) depends upon the smoothness of l * at ω 0 . The next Lemma gives bounds for the deterministic terms and some relevant quantities. Its proof relies on elementary computations and is omitted.
Lemma 3. Assume that lim m→∞ (1/m + m/n) = 0, then max 1≤|k|≤m |β k | = O(log(m)m -1/2 ), lim m→∞ s 2 m /m = 2, b 2 1,m (ω n ) ≤ C log 2 (m)m -1/2 and b 2 2,m (ω 0 ) ≤ Cm 5 n 4 (ω 0 ∧ (π -ω 0 )) 2 1 (0,π) (ω 0 ). If (A1) holds, then B 2 m (l * ) ≤ Cm 2β+1 n -2β .
For a Gaussian process, it is now a well established fact that Lemma 1 and the covariance inequality for functions of Gaussian vectors of Arcones (1994, Lemma 1) can be used to derive bounds for the bias and covariance of the renormalised log-periodogram ordinates. This has been shown in various places; see for instance [START_REF] Robinson | Log-periodogram regression of time series with long range dependence[END_REF], [START_REF] Giraitis | Rate optimal semiparametric estimation of the memory parameter of the Gaussian time series with long range dependence[END_REF], [START_REF] Hurvich | The mean squared error of Geweke and Porter-Hudak's estimator of a long-memory time-series[END_REF], [START_REF] Moulines | Log-periodogram regression of time series with long range dependence[END_REF] or [START_REF] Iouditsky | Adaptive estimation of the fractional differencing coefficient[END_REF]. A central limit theorem for weighted sums of log-periodogram ordinates can also be derived, as in [START_REF] Robinson | Log-periodogram regression of time series with long range dependence[END_REF], [START_REF] Moulines | Log-periodogram regression of time series with long range dependence[END_REF] and [START_REF] Soulier | Moment bounds and a central limit theorem for functions of Gaussian vectors[END_REF] in a more general framework. Thus we state without proof the following Lemma. Lemma 4. Let X be a Gaussian process whose spectral density satisfies (2.5). Then, for 1 ≤ |k| ≤ j,
|E[ǫ_k(ω_n)]| ≤ C(d, l*) log(k+1) k^{−1},  (4.4)
|E[ǫ_k(ω_n) ǫ_j(ω_n)] − (π²/6) δ_{k,j}| ≤ C(d, l*) log²(j+1) k^{−2}.  (4.5)
If lim m→∞ (log 2 (n)/m + m/n) = 0 then ξ m (ω n ) converges weakly to N (0, π 2 /3) if ω 0 ∈ {0, π} and N (0, π 2 /6) if ω 0 ∈ (0, π).
Lemmas 3 and 4 yield Proposition 1.
Proof of Theorem 2. Write now
s_m(α̂_n − α) = s_m(α̃_n − α) + s_m(α̂_n − α̃_n).
By Proposition 1, it is sufficient to prove that s_m(α̂_n − α̃_n) tends to zero in probability, i.e. for all ǫ > 0,
lim_{n→∞} P(s_m |α̂_n − α̃_n| > ǫ) = 0.
For any M > 0 and 0 < γ < 1/2, we can write,
P(s m |α n -αn | > ǫ) ≤ P(s m |α n -αn | > ǫ, nm -γ |ω n -ω n | ≤ 2πM ) + P(nm -γ |ω n -ω n | > 2πM ) ≤ P( max θ∈[-M,M] |S n (θ)| > ǫ) + P(nm -γ |ω n -ω n | > 2πM ),
where we have defined for any θ > 0,
S n (θ) = s m (α n (ω n + 2π[m γ θ]/n) -αn ).
Theorem 1 implies that for any 0 < γ < 1/2, if m = m(n) = n δ for some 0 < δ < 1, then lim n→∞ P(nm -γ |ω nω n | > M ) = 0. Hence, the proof of Theorem 2 is completed by the following Lemma.
Lemma 5. Let X be a Gaussian process whose spectral density satisfies (2.5) and (A1). If m = [n δ ] for some 0 < δ < 2β/(2β + 1), then for any 0 < γ < 1/2, the sequence S n (θ) converges to zero in probability uniformly with respect to θ ∈ [-M, M ], that is, for all ǫ > 0,
lim n→∞ P( max θ∈[-M,M] |S n (θ)| > ǫ) = 0. Proof. Denote j(θ) = [m γ θ], ǫ k = ǫ k (ω n
) and recall that by symmetry,
β k = β -k for all k = 1, • • • , m.
Write now S n (θ) as follows:
S n (θ) = 1≤|k|≤m β k {log(I n (ω n + x k+j(θ) ) -log(I n (ω n + x k )} = β j(θ) {log[I n (ω n )] -log[I n (ω n -x j(θ) )]} (4.6) + d 1≤|k|≤m,k =-j(θ) β k {g(x k+j(θ) + ω n -ω 0 ) -g(x k + ω n -ω 0 )} (4.7) + d 1≤|k|≤m,k =-j(θ)
β k {g(x k+j(θ) + ω n + ω 0 ) -g(x k + ω n + ω 0 )} (4.8)
+ 1≤|k|≤m,k =-j(θ) β k {l * (x k+j(θ) + ω n ) -l * (x k + ω n )} + 1≤|k|≤m,k =-j(θ) β k {ǫ k+j(θ) -ǫ k } =: β j(θ) {log[I n (ω n )] -log[I n (ω n -x j(θ) )]} + dA n (θ) + dA ′ n (θ) + B n (θ) + T n (θ).
Under the assumption of Gaussianity, it is easily seen that for any λ ∈ [-π, π], E[log 2 (I n (λ))] = O(log 2 (n)). Applying Lemma 3, we obtain the following bound for the term (4.6):
β 2 j(θ) {E[log 2 (I n (ω n ))] + E[log 2 (I n (ω n -x j(θ) ))]} = O(m -1 log 4 (n)),
uniformly with respect to θ.
Consider now the term in (4.7), i.e. A n . For x = 0, g is differentiable and g ′ (x) = -1 2 cot( x 2 ). Since moreover g is even, this implies that for all x, y in [-π, π] \ {0}, |g(x)g(y)| ≤ |x -y|/(|x| ∧ |y|). Thus we get the bounds
|g(x k + x j + ω n -ω 0 ) -g(x k + ω n -ω 0 )| ≤ j/(k -1/2), if 1 ≤ k ≤ m, j/((-k -j) -1/2), if -m ≤ k ≤ -1 -j, j/((-k -1/2) ∧ (j + k -1/2)), if -j + 1 ≤ k ≤ -1. (4.9)
Using these bounds, the property m k=1 β 2 k = 1, and applying the Hölder inequality, we obtain:
|A n (θ)| ≤ j(θ) log 2 (m)m -1/2 ≤ log 2 (m)m γ-1/2 .
If ω 0 ∈ (0, π), we must also consider A ′ n , whereas if ω 0 ∈ {0, π}, A ′ n = A n . For n large enough, ω n + x k ∈ (0, π) and ω n + x k+j(θ) ∈ (0, π) since by assumption m/n → 0. Thus, there exists a constant C which depends only on the function g such that
|g(ω 0 + ω n + x k+j(θ) ) -g(ω 0 + ω n + x k )| ≤ Cx j(θ) /(ω 0 ∧ (π -ω 0 )). The property m k=1 β 2 k = 1 implies that 1≤|k|≤m |β k | ≤ √ m, hence, if ω 0 ∈ (0, π)
, we obtain:
|A ′ n (θ)| ≤ Cx j(θ) ω 0 ∧ (π -ω 0 ) 1≤|k|≤m |β k | ≤ 2πCM m 1/2+γ n(ω 0 ∧ (π -ω 0 )) .
The term B n presents no difficulty since the function l * is smooth everywhere. Since γ < 1/2, it is easily obtained that
|B n (θ)| ≤ Cm β+1/2 n -β .
Thus the sequences A n , A ′ and B n converge to zero uniformly on compact sets. We now examine the properties of the process T n . To prove that T n converges to zero in probability uniformly with respect to θ ∈ [-M, M ], we apply Theorem 15.6 in [START_REF] Billingsley | Convergence of probability measures[END_REF]. It suffices to check that for all θ ∈ [-M, M ],
T n (θ) tends to zero in probability and that for all θ, θ 1 ,
θ 2 ∈ [-M, M ] such that θ 1 < θ < θ 2 , E[|T n (θ 1 ) -T n (θ)||T n (θ 2 ) -T n (θ)|] ≤ C(θ 1 -θ 2 ) 2 . (4.10)
We can restrict our attention to those (θ 1 , θ 2 ) such that |θ 1θ 2 | ≥ m -γ , since otherwise, the left hand side of (4.10) is zero. Moreover, applying the Cauchy-Schwarz inequality, it is sufficient to check that for all
θ 1 , θ 2 ∈ [-M, M ] such that |θ 1 -θ 2 | ≥ m -γ , it holds that E[|T n (θ 1 ) -T n (θ 2 )| 2 ] ≤ C(θ 1 -θ 2 ) 2 . (4.11)
Let θ 1 , θ 2 ∈ [-M, M ] and consider the increments of T n (θ 1 , θ 2 ) := T n (θ 1 ) -T n (θ 2 ). Assume that θ 1 < θ 2 and denote j i = j(θ i ), i = 1, 2. Without loss of generality, it can be assumed that j 1 < j 2 , since otherwise, T n (θ 1 , θ 2 ) = 0. We can then split T n (θ 1 , θ 2 ) in the following terms.
T n (θ 1 , θ 2 ) = m+j1 k=-m+j 1 k =0,k =j1 β k-j1 ǫ k - m+j2 k=-m+j 2 k =0,k =j2 β k-j2 ǫ k -β j1 ǫ j1 + β j2 ǫ j2 = -m+j2-1 k=-m+j1 β k-j1 ǫ k - m+j2 k=m+j1+1 β k-j2 ǫ k (4.12) + -1 k=-m+j2 (β k-j1 -β k-j2 )ǫ k + j1-1 k=1 (β k-j1 -β k-j2 )ǫ k (4.13) + j2-1 k=j1+1 (β k-j1 -β k-j2 )ǫ k + m+j1 k=j2+1 (β k-j1 -β k-j2 )ǫ k (4.14) -β j1 ǫ j1 + β j2 ǫ j2 -β j1-j2 ǫ j1 -β j2-j1 ǫ j2 (4.15)
We have three kind of terms to bound. The sums in (4.12) have only j 2j 1 terms. Applying Lemmas 3 and 4, we obtain:
E[( -m+j2-1 k=-m+j1 β k-j1 ǫ k )] 2 + E[( m+j2 k=m+j1+1 β k-j2 ǫ k )] 2 ≤ Cm -1 log 2 (m)(j 2 -j 1 ) 2 ≤ Cm 2γ-1 log 2 (m)(θ 1 -θ 2 ) 2 .
To bound the sums in (4.13) and (4.14), we need a bound for β k-j1β k-j2 . Recall that by definition, β k = s m (g(x k )ḡm ) and that |g(x)g(y)| ≤ |x -y|/(|x| ∧ |y|). Thus, if for any integer k = j 1 , j 2 , it holds that
|β k-j1 -β k-j2 | ≤ Cm -1/2 |j 1 -j 2 |/(|k -j 1 | ∧ |k -j 2 |). (4.16)
Let us evaluate for instance the second moment of the last term in (4.14), which can be expressed as
E[( m+j1 k=j2+1 (β k-j1 -β k-j2 )ǫ k ) 2 ] = m+j1 k=j2+1 (β k-j1 -β k-j2 ) 2 var(ǫ k ) + 2 j2+1≤k<l≤m+j1 (β k-j1 -β k-j2 )(β l-j1 -β l-j2 )cov(ǫ k , ǫ l ). (4.17) For all k ∈ {j 2 + 1, • • • , m + j 1 }, it holds that |k -j 1 | ∧ |k -j 2 | = k -j 2 .
Using Lemmas 3, 4 and the bound in (4. [START_REF] Giraitis | A central limit theorem for quadratic forms in strongly dependent linear variables and its application to asymptotic normality of Whittle's estimate[END_REF], we obtain that the first term on the right hand side of (4.17) is bounded by
Cm -1 (j 2 -j 1 ) 2 m+j1 k=j2+1 (k -j 2 ) -2 ≤ C(θ 1 -θ 2 ) 2 m 2γ-1 ,
whereas the second term on the right of (4.17) is bounded in absolute value by
Cm -1 (j 2 -j 1 ) 2 j2+1≤k<l≤m+j1 k -2 (k -j 2 ) -1 (l -j 2 ) -1 log 2 (l) ≤ C log 3 (m)(θ 1 -θ 2 ) 2 m 2γ-1 .
The other sums in (4.13) and (4.14) are dealt with similarly. To complete the investigation of T n , we apply Lemmas 3 and 4 to obtain that
(β 2 j1 + β 2 j2 + β 2 j1-j2 )E[ǫ 2 j1 + ǫ 2 j2 ] ≤ C log 2 (m)m -1 .
Altogether, since by assumption |θ 2θ 1 | ≥ m -γ , we have proved that, for sufficiently small η > 0, and for a constant depending only on ω 0 ,
E[T 2 n (θ 1 , θ 2 )] ≤ C(ω 0 )m -η (θ 1 -θ 2 ) 2 .
We have proved (4.11) and using the same techniques, we can prove that E[T n (θ)] = O(m -η ) for sufficiently small η. Hence, applying Billingsley (1968, Theorem 15.6) we conclude that T n converges uniformy to 0 in probability. This concludes the proof of Lemma 5.
Proof of Theorem 3.
Let Θ n = [0, π]×[d n , 2d n ] and denote θ = (λ, d), f * θ (x) = exp d pn-1 j=1 α j (λ) cos(jx) and f θ (x) = |1 -e i(x-λ) | -d |1 -e i(x+λ) | -d f * θ (x)
, where α j (λ) = 2 cos(jλ)j -1 and p n is a non decreasing sequence of integers such that p n = o(n), d n p n = o [START_REF] Adenstedt | On large-sample estimation for the mean of a stationary random sequence[END_REF]. With these notations, we can write
log(|1 -e i(x-λ) ||1 -e i(x+λ) |) = - ∞ j=1 α j (λ) cos(jx) and f θ (x) = exp{d ∞ j=pn α j (λ) cos(jx)}.
The assumption d n p n = o(1) ensures that f θ satisfies (2.5) for large enough n. Denote E θ the expectation with respect to the distribution of a stationary Gaussian process with spectral density f θ . Let q be a continuously differentiable probability density function on [0, π] with finite information, i.e. π 0 q ′ 2 (s)/q(s)ds < ∞. Let q n be a density defined on Θ n by q n (θ) = πd -1 n q(πd -1 n (dd n ))q(λ). It then obviously holds that
inf λn sup d,λ,f * E d,λ,f * [( λn -λ) 2 ] ≥ inf λn sup θ∈Θn E θ [( λn -λ) 2 ] ≥ inf λn Θn E θ [( λn -λ) 2 ]q n (θ)dθ. For θ = (λ, d), denote I (1)
n (θ) the Fisher information for the parameter λ when d is known. We can apply the so-called Bayesian information bound (cf. Theorem 1 in Gill and Levit 1995).
inf λn Θn E θ [( λn -λ) 2 ]q n (θ)dθ ≥ Θn I (1) n (θ)q n (θ)dθ + Ĩ(q) -1 , (4.18)
where Ĩ(q) = π 0
∂ ∂λ q(θ) 2 1
q(θ) dθ. We now need an asymptotic expansion of Θn I
n (θ)q n (θ)dθ. Let Σ θ denote the covariance matrix of the Gaussian vector (X 1 , • • • , X n ). The Fisher information for λ is given by
I (1) n (θ) = 1 2 tr Σ -1 θ ∂ ∂λ Σ θ 2
Let J n denotes the n × n identity matrix. For a given function h, let T n (h) denote the Toeplitz matrix of order n defined by T n (h) j,k = ĥ(jk) = π -π h(x)e i(j-k)x dx. Define h θ = f θ -1. With these notations, we get Σ θ = 2πJ n + T n (h θ ). Let ρ n denote the spectral radius of the matrix T n (h θ ). As appears from the proof of Lemma 2.1 of [START_REF] Iouditsky | Adaptive estimation of the fractional differencing coefficient[END_REF], in the case λ = 0, under the assumptions of Theorem 3, and with the choice of p n made here, ρ n = o(1). The proof of this result is still valid in the present context, since it only uses the bound |α j (λ)| ≤ 2/j. Thus ρ n = o(1) uniformly with respect to λ, which implies that
I (1) n (θ) = 1 8π 2 tr ∂ ∂λ Σ θ 2 1 + o(1) . Define now g θ = log(f θ ) = d ∞ j=pn α j (λ) cos(jx) and k θ = f θ -1 -g θ .
With these notations, it holds that Σ θ = 2πJ n + T n (g θ ) + T n (k θ ) and
∂ ∂λ Σ θ = ∂ ∂λ T n (g θ ) + ∂ ∂λ T n (k θ ) =: A θ + B θ .
It is easily seen that
A θ (j, k) = -2dπ sin(|j -k|λ)1 {|j-k|≥pn} , hence tr(A 2 θ ) = 4d 2 π 2 n-1 j=pn (n -j) sin 2 (jλ).
Integrating with respect to q n yields,
Θn tr(A 2 θ )q n (θ)dθ = c(q)n 2 d 2 n (1 + o(1)),
for some positive constant c(q) depending only on q. Again, the proof of Lemma 2.1 of Iouditsky et al. (2001) can be adapted to prove that tr(B_θ²) = o(tr(A_θ²)). This finally yields
∫_{Θ_n} I_n^{(1)}(θ) q_n(θ) dθ = c(q) n² d_n² (1 + o(1)).
Putting this bound into (4.18), we conclude that
lim inf_{n→∞} sup_{d,λ,f*} n² d_n² E_{d,λ,f*}[(λ̂_n − λ)²] ≥ lim inf_{n→∞} (c(q) + o(1))^{−1} > 0.
Table 1: Bias and Standard Deviation (in parentheses) of n(ω̂_n − ω_0)/(2π).

                       g = 0.1         g = 0.2        g = 0.3       g = 0.4
ω_0 = 0    n = 256     29.55 (35.47)   8.73 (17.17)   3.15 (4.25)   1.81 (2.74)
           n = 512     45.25 (60.78)   8.49 (17.22)   3.37 (4.95)   1.98 (2.38)
           n = 1024    44.67 (79.78)   7.07 (10.57)   3.33 (3.86)   2.16 (2.18)
ω_0 = π/2  n = 256      0.671 (23.87)  0.032 (9.08)   0.015 (3.01)  0.005 (0.81)
           n = 512     -0.189 (38.22) -0.081 (7.65)   0.040 (2.16) -0.004 (0.68)
           n = 1024    -0.416 (43.13)  0.020 (5.61)   0.004 (1.68)  0.007 (0.44)
Table 2: Bias of the long-memory parameter estimators

                           g = 0.1   g = 0.2   g = 0.3   g = 0.4
ω_0 = 0,   n = 256
  ĝ_n     m = 64            0.091     0.102     0.093     0.084
          m = 32            0.121     0.141     0.135     0.119
          m = 16            0.170     0.190     0.186     0.169
  g̃_n     m = 64            0.087     0.087     0.087     0.080
          m = 32            0.132     0.131     0.126     0.109
          m = 16            0.202     0.195     0.179     0.154
ω_0 = 0,   n = 512
  ĝ_n     m = 128           0.068     0.068     0.060     0.058
          m = 64            0.086     0.096     0.089     0.082
          m = 32            0.114     0.133     0.130     0.118
  g̃_n     m = 128           0.058     0.058     0.058     0.057
          m = 64            0.086     0.086     0.086     0.078
          m = 32            0.130     0.130     0.124     0.108
ω_0 = 0,   n = 1024
  ĝ_n     m = 256           0.051     0.042     0.040     0.040
          m = 128           0.066     0.062     0.058     0.056
          m = 64            0.085     0.090     0.085     0.081
  g̃_n     m = 256           0.040     0.040     0.040     0.040
          m = 128           0.058     0.058     0.058     0.056
          m = 64            0.086     0.086     0.086     0.077
ω_0 = π/2, n = 256
  ĝ_n     m = 32            0.107     0.110     0.102     0.089
          m = 16            0.152     0.162     0.151     0.124
          m = 8             0.233     0.236     0.223     0.188
  g̃_n     m = 32            0.095     0.095     0.095     0.086
          m = 16            0.148     0.146     0.138     0.118
          m = 8             0.231     0.224     0.206     0.177
ω_0 = π/2, n = 512
  ĝ_n     m = 64            0.078     0.070     0.064     0.060
          m = 32            0.106     0.105     0.097     0.083
          m = 16            0.153     0.159     0.147     0.118
  g̃_n     m = 64            0.062     0.062     0.062     0.060
          m = 32            0.093     0.093     0.092     0.082
          m = 16            0.144     0.143     0.135     0.114
ω_0 = π/2, n = 1024
  ĝ_n     m = 128           0.054     0.044     0.042     0.041
          m = 64            0.074     0.066     0.062     0.059
          m = 32            0.102     0.100     0.093     0.081
  g̃_n     m = 128           0.041     0.041     0.041     0.041
          m = 64            0.060     0.060     0.061     0.058
          m = 32            0.090     0.090     0.090     0.081

Table 3: Standard Deviation of the long-memory parameter estimators

Table 4: Mean Square Error of the long-memory parameter estimators
"840049",
"832166"
] | [
"35688",
"101"
] |
01203579 | en | ["info"] | 2024/03/04 23:41:46 | 2015 | https://inria.hal.science/hal-01203579/file/MICCAI-STACOM-Myocardial-Infarction-NAF.pdf | Héloïse Bleton
Jàn Margeta
Hervé Lombaert
Hervé Delingette
Nicholas Ayache
Myocardial Infarct Localization using Neighbourhood Approximation Forests
Keywords: Machine Learning, Neighbourhood Approximation Forests, myocardial infarction, wall thickness
This paper presents a machine-learning algorithm for the automatic localization of myocardial infarct in the left ventricle. Our method constructs neighbourhood approximation forests, which are trained with previously diagnosed 4D cardiac sequences. We introduce a new set of features that simultaneously exploit information from the shape and motion of the myocardial wall along the cardiac cycle. More precisely, characteristics are extracted from a hyper surface that represents the profile of the myocardial thickness. The method has been tested on a database of 65 cardiac MRI images in order to retrieve the diagnosed infarct area. The results demonstrate the effectiveness of the NAF in predicting the left ventricular infarct location in 7 distinct regions. We evaluated our method by verifying the database ground truth. Following a new examination of the 4D cardiac images, our algorithm may detect misclassified infarct locations in the database.
Introduction
Cardiac imaging is now routinely used for evaluating specific anatomical and functional characteristics of hearts. For instance, the localization of cardiac infarcts requires contrast agent injection and a thorough examination of the myocardial wall thickness and its motion [START_REF] Medrano-Gracia | An atlas for cardiac MRI regional wall motion and infarct scoring[END_REF] [START_REF] Wei | Three-dimensional segmentation of the left ventricle in late gadolinium enhanced MR images of chronic infarction combining long-and short-axis information[END_REF]. We propose to assist and automate this process with a system that automatically categorizes the localization of infarcts in the left ventricle. We exploit information from existing databases of 4D cardiac image sequences, that already contain the infarct localization from previously diagnosed patients. In such context, 4D images should be compared in an image reference space.
One way to represent the population is with statistical anatomical atlases [START_REF] Perperidis | Construction of a 4D statistical atlas of the cardiac anatomy and its use in classification[END_REF] that are constructed by combining all available subjects in a single average reference. In this paper, we favor a representation that considers all available subjects in a database. Here, we consider data that is classified along their recorded infarct localization. For this purpose, multi-atlas methods [START_REF] Rohlfing | Quo vadis, atlas-based segmentation?[END_REF] could be used. However, they require costly image registrations [START_REF] Heckemann | Improving intersubject image registration using tissue-class information benefits robustness and accuracy of multi-atlas based anatomical segmentation[END_REF]. Retrieval systems, instead, find images of subjects in a database that are close to a query image [START_REF] Müller | A review of content-based image retrieval systems in medical applicationsclinical benefits and future directions[END_REF]. The information on the infarct location of the retrieved subjects may be relevant for establishing diagnoses in previously unseen subjects.
Content based retrieval systems require the notion of distances between images [START_REF] Swets | Using discriminant eigenfeatures for image retrieval[END_REF]. They have been used in other areas such as neuro-images [START_REF] Konukoglu | Neighbourhood approximation using randomized forests[END_REF] or endomicroscopy [START_REF] André | A smart atlas for endomicroscopy using automated video retrieval[END_REF]. However, to the best of our knowledge, they were not applied for categorizing infarct locations in 4D cardiac images. This raises the question on how distances between 4D images should be defined. We suggest to learn this metric between subjects that belong to different categories of infarct locations, using the Neighborhood Approximation Forests algorithm (NAF) [START_REF] Konukoglu | Neighbourhood approximation using randomized forests[END_REF]. This machine-learning approach approximates distances between new query images and images in a database, via an affinity matrix between subjects. Decision forests have already been applied for processing medical images such as a fully automatic segmentation of the left ventricle [START_REF] Margeta | Layered spatio-temporal forests for left ventricle segmentation from 4D cardiac MRI data[END_REF]. Our method builds upon simple shape and motion features derived from binary segmentation that are fast to compute and based on a hyper surface representing the myocardial thickness along the cardiac cycle.
The contribution of this paper is the use of a distance learning approach for automatically categorizing the location of cardiac infarcts from 4D cardiac image sequences. We tested several features that are extracted from a novel hyper surface representation of the thickness profile. The next section describes our localization method, and is followed by our results that evaluates the performance of the proposed features. We discuss on the differences found in our results and elaborate on future improvements of our infarct localization method.
Method
Our localization method consists of categorizing automatically the location of cardiac infarcts via a retrieval approach based on the Neighborhood Approximation Forests (NAF). We now suggest feature representations that are specific for the localization of infarcts in 4D cardiac image sequences. The underlying assumption is that infarction affects the myocardial shape and motion since complex phenomena are often involved, such as wall thickening or chamber dilation [START_REF] Medrano-Gracia | An atlas for cardiac MRI regional wall motion and infarct scoring[END_REF].
Neighbourhood Approximation Forests
The NAF consists of an ensemble of binary decision trees designed for the purpose of clustering similar cardiac sequences together. Its automatic learning of image neighborhoods provides the capability of querying a training dataset of images, I, by retrieving the most similar images given a previously unseen image, J. Further details of the algorithm are described in [START_REF] Konukoglu | Neighbourhood approximation using randomized forests[END_REF]. Three phases are required: feature extraction, training and testing stages. We now describe how to apply them for the specific problem of locating infarcts in 4D images.
The learning process aims at finding the optimal shape and motion features for predicting the category of infarct location. Our training dataset contains 4D cardiac image sequences, each labeled with a category of infarct location, e.g., infarct is in septal or lateral area. Each 4D image should have an associated 4D segmentation mask of the left ventricular muscle. In our case, each binary mask has been cropped with a bounding box centered on the left ventricle and oriented such that both ventricles are aligned horizontally along a left-right axis.
Feature extraction. A surface representing the thickness profile over the cardiac cycle is first extracted from the 4D myocardial masks. The barycenter of the left ventricle mask is computed for each slice and each frame of the 4D mask. Rays are subsequently cast from the barycenter to the exterior of the mask, as illustrated in Fig. 4. The intersection of each ray with the binary mask is used to evaluate the myocardial thickness at each angle. As a result, the myocardial thickness h(s, t, θ) is represented by a hyper surface, whose coordinates are the corresponding slice s, the frame time t, and the angle θ.
The thickness profile is smoothed by a Gaussian kernel filter (with a width of 0.4) to reduce possible segmentation errors. The thickness profile is also normalized in order to adjust its thickness values to a standardized common scale, such that the average thickness value over the 4D hyper surface is 0 and the standard deviation is 1. As the spatial and temporal resolutions are specific to each image, the point sampling should also be normalized. The slice position s is normalized between 0 at the apex and 1 at the left ventricular base. The frame time t is normalized between 0 at diastole and 1 at the end of the cardiac cycle. The angle is kept between 0 and 2π, starting from a reference in the lateral wall.
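To illustrate the ray-casting step, the sketch below measures, for one 2D slice, how long each ray from the mask barycenter stays inside the binary myocardium mask; the number of rays, step size and maximal radius are illustrative choices, not values from the paper.

```python
import numpy as np

def thickness_profile_slice(mask, n_angles=72, max_radius=200, step=0.5):
    """Myocardial thickness along rays cast from the barycenter of a binary slice mask."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                    # barycenter of the mask
    thickness = np.zeros(n_angles)
    for a in range(n_angles):
        theta = 2.0 * np.pi * a / n_angles
        dy, dx = np.sin(theta), np.cos(theta)
        inside = 0.0
        r = 0.0
        while r < max_radius:
            y, x = int(round(cy + r * dy)), int(round(cx + r * dx))
            if y < 0 or x < 0 or y >= mask.shape[0] or x >= mask.shape[1]:
                break
            if mask[y, x]:
                inside += step                       # accumulate the length spent inside the wall
            r += step
        thickness[a] = inside
    return thickness
```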
Below, we describe groups of features f (I) extracted from the thickness profiles. In the following cases, h(s, t, θ) denotes the thickness, sampled on the slice s, the frame time t and the angle θ. Feature 1: Raw thickness. The profile constitutes the input features for each tree:
f_1(I) = {h(s_i, t_j, θ_k)}, with s_i, t_j ∈ [0; 1] and θ_k ∈ [0°; 360°].
In other words, given a 4D image I, this feature representation consists of the list of surface heights. This should characterize infarcts as a function of myocardial thickness over space and time.
Feature 2: Raw thickness and thickness differences. This feature representation provides the raw thickness profile and the absolute difference of thicknesses sampled between the frame t 0 and each frame t:
f_2(I) = {h(s_i, t_j, θ_k), |h(s_i, t_0, θ_k) − h(s_i, t_j, θ_k)|}, with s_i, t_j ∈ [0; 1] and θ_k ∈ [0°; 360°].
This feature is similar to the first feature representation, however, the thickness difference is added. This should characterize infarcts as discrepancies in myocardial thickness over space and time.
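The two feature representations can then be assembled directly from the sampled hyper surface h[s, t, θ]; the sketch below assumes the array is already smoothed and normalized as described above, and is an illustration rather than the authors' code.

```python
import numpy as np

def features_f1_f2(h):
    """Build f1 (raw thickness) and f2 (raw thickness + absolute differences to
    frame t0) from a thickness array h of shape (slices, frames, angles)."""
    f1 = h.ravel()                                   # raw thickness samples
    diff = np.abs(h[:, :1, :] - h)                   # |h(s, t0, th) - h(s, t, th)|
    f2 = np.concatenate([h.ravel(), diff.ravel()])   # raw thickness and differences
    return f1, f2
```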
Training phase. During this phase, the forest is trained: the parameters of each tree are fixed using the training set I and the distance measurement ρ(I_n, I_m) between each pair of images (I_n, I_m). The distance metric ρ(I_n, I_m) for a regression problem is defined as
ρ(I_n, I_m) = |θ_a(I_n) − θ_a(I_m)|,
where θ_a(I_n) denotes the angle that corresponds to the infarct location, as illustrated in Fig. 3a. A set of visual features f(I_n) is computed from each training image I_n. During forest construction, each tree tests a randomized subset of f(I_n). A tree is grown by finding, at each node p, the optimal split of the dataset into two branches (I_p^Left, I_p^Right) that best separates the incoming images I_p into compact clusters. In the best case, cardiac images with similar infarct locations should end up in one leaf. In other words, the best threshold τ_p is found for each selected feature f_{m_p}. The pair (feature parameter m_p, threshold τ_p) is stored at each node, awaiting the testing phase.
Obtaining the most compact partioning of I p is also equivalent to maximizing the information gain G (Eq. 1) at node p:
(m p , τ p ) = arg max m,τ G(I p , m, τ ), ( 1
)
where m is the set of features, and τ the set of potential thresholds, and
G(I p , m p , τ p ) = C(I p ) - |I p Right | |I p | C(I p Right ) - |I p Left | |I p | C(I p Left ), (2)
where the set of images I p Left of the left child node is defined by the test function Γ (m p , τ p ) applied on the images of the parent node, and similarly for the definition of the right node. Moreover, the compactness is defined by
C(A) = (1 / |A|²) Σ_{I_i ∈ A} Σ_{I_j ∈ A} ρ(I_i, I_j),
and |A| is the number of images within a subset A. More details on the training phase of the NAFs can be found in [START_REF] Konukoglu | Neighbourhood approximation using randomized forests[END_REF].
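As an illustration of Eqs. (1)-(2), a node split maximizing the information gain could be sketched as follows; the candidate-threshold strategy and the number of sampled features are arbitrary assumptions, not the authors' settings:

```python
import numpy as np

def compactness(angles):
    """C(A): mean pairwise angular distance rho within a set of training images."""
    if len(angles) < 2:
        return 0.0
    d = np.abs(angles[:, None] - angles[None, :])
    return d.mean()

def best_split(features, angles, n_candidates=50):
    """Pick (m_p, tau_p) maximizing the information gain of Eq. (2).
    features: (n_images, n_features) matrix, angles: infarct angles theta_a."""
    n, n_feat = features.shape
    parent = compactness(angles)
    best = (None, None, -np.inf)
    for m in np.random.choice(n_feat, size=min(n_candidates, n_feat), replace=False):
        values = features[:, m]
        for tau in np.quantile(values, [0.25, 0.5, 0.75]):   # candidate thresholds
            left = values <= tau
            right = ~left
            if left.sum() == 0 or right.sum() == 0:
                continue
            gain = (parent
                    - left.sum() / n * compactness(angles[left])
                    - right.sum() / n * compactness(angles[right]))
            if gain > best[2]:
                best = (m, tau, gain)
    return best
```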
Testing phase
During this phase, a testing cardiac image travels across the tree nodes using the trained decisions, starting from the root node and ending in one leaf. Each leaf contains the training images for which similar decisions were taken. Consequently, when a testing image reaches a final leaf, it is considered a neighbor of the training images already present in that leaf. An affinity matrix is built by repeating this neighborhood approximation for each tree and storing the affinities between all testing images and the training images, as illustrated in Fig. 2. Indeed, the NAF algorithm keeps a record of the training sequences most similar to a testing image J_j in a similarity matrix W, where rows correspond to training images and columns to testing images. For each tree, W(i, j) is incremented when J_j reaches the leaf node that also includes the training image I_i [START_REF] Konukoglu | Neighbourhood approximation using randomized forests[END_REF]. In this paper, the resulting affinity matrix determines the angle at which the myocardial infarct is approximately located (refer to Fig. 3). The predicted angle for a testing image J_j is obtained from the resulting similarity matrix as:
θ̂_a(J_j) = Σ_i W(i, j) θ_a(I_i) / Σ_i W(i, j).
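A sketch of the affinity accumulation and of the weighted-angle prediction, assuming scikit-learn-style trees that expose an apply() method returning leaf indices (an assumption made for the example, and a plain weighted average of angles is used for simplicity):

```python
import numpy as np

def predict_angles(forest, train_features, train_angles, test_features):
    """Accumulate the affinity matrix W over all trees, then return the
    affinity-weighted average of the training infarct angles."""
    n_train, n_test = len(train_features), len(test_features)
    W = np.zeros((n_train, n_test))
    for tree in forest:
        train_leaves = tree.apply(train_features)   # leaf index per training image
        test_leaves = tree.apply(test_features)     # leaf index per testing image
        for j, leaf in enumerate(test_leaves):
            W[train_leaves == leaf, j] += 1.0       # J_j is a neighbor of these I_i
    return (W * train_angles[:, None]).sum(axis=0) / (W.sum(axis=0) + 1e-8)
```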
Results
Dataset and settings
Cardiac images of patients with coronary artery disease and a left ventricle infarction were randomly selected from the Defibrillators to Reduce Risk by Magnetic Resonance Imaging Evaluation database (DETERMINE) included in the Cardiac Atlas Project (CAP) [START_REF] Fonseca | The Cardiac Atlas Project. An imaging database for computational modeling and statistical atlases of the heart[END_REF]. 65 4D left ventricular masks were obtained with the software CAP Client, made available by the Left Ventricular Segmentation Challenge conducted for the Statistical Atlases and Computational Models of the Heart Workshop (STACOM) in 2011. Each mask is annotated by additional clinical information including the infarct location (anterior-septal, anterior, anterior-lateral, lateral, inferior-lateral, inferior, inferior-septal).
Evaluation of infarct localization
We validated our approach by retrieving the neighbours and the predicted angle, after splitting the expert-annotated database into a training set and a testing set. Some of the cardiac images are duplicated to obtain a balanced class distribution in the training set. After duplication, the database consists of 115 images covering the 7 types of infarct location.
The 10-fold cross-validation technique is used to estimate the performance of our method. The set of 115 images is partitioned into 10 subsets: 1 subset is randomly chosen as the testing set while the 9 remaining subsets form the training set. This procedure is repeated 10 times by varying the testing subset. Each infarct location in the dataset is labeled by an angle according to Fig. 3a. Left-ventricular regions cover large areas, spanning up to 60°. Following the testing phase of the NAF method, the predicted angle of each testing image is compared to the expected angle of infarction.
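A possible evaluation loop is sketched below; the helper train_and_predict is a hypothetical wrapper around the NAF training and testing phases, and wrapping the angular error to [0°, 180°] is one reasonable reading of the comparison described above:

```python
import numpy as np
from sklearn.model_selection import KFold

def angular_error(pred, true):
    d = np.abs(pred - true) % 360.0
    return np.minimum(d, 360.0 - d)

def cross_validate(features, angles, train_and_predict, n_splits=10, seed=0):
    """train_and_predict(Xtr, ytr, Xte) -> predicted angles for Xte."""
    errors = []
    for tr, te in KFold(n_splits, shuffle=True, random_state=seed).split(features):
        pred = train_and_predict(features[tr], angles[tr], features[te])
        errors.append(angular_error(pred, angles[te]))
    return np.concatenate(errors).mean()
```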
We proposed two types of features to locate the infarct of unseen cardiac images. Our forest is composed of 100 trees with a maximal depth of 20. Results associated with each type of feature are shown in Fig. 3b, where the average angle of each category is reported. With the first type of features, which characterizes the thickness of the myocardium, the localization of seven areas leads to average angular errors between 5° and 48°, which are below the maximal span of each area of 60°. However, the inferior-posterior area leads to an average error of 175°. This led us to examine each 4D image labeled with an inferior-posterior infarct, revealing potentially misclassified infarct locations, as seen in Fig. 4. The main drawback of this first type of features is that only the myocardial wall shape is taken into account, notably the wall thinning in the infarct area or the wall thickening in the wall opposite to the infarct. Indeed, considering only the minimal thickness is not enough to localize an infarct, as the thickness of the myocardial wall changes over time and may become thinner at end-systole than in the infarct area.
Motivated by the previous results, motion information is combined with shape information in feature 2 by considering the difference of thicknesses over time. Following a myocardial infarction, the cardiac wall in the infarcted area may not change much over the cardiac cycle, whereas the wall thickness of a healthy heart changes over time. Consequently, our second feature type, which captures the thickness differences over time, should indicate infarcts as areas where the thickness does not change over the cycle.
With the second type of features, the infarct location is predicted with an average angular error of up to 52° from the expected angle in all categories. This remains below the maximal span of each area of 60°. Our algorithm is able to locate the infarct within the right area even though there are potential sources of error in the dataset. For instance, the database ground truth may be corrupted by misaligned binary masks if the septum is not perfectly located at 180°, as illustrated in Fig. 3a.
Fig. 4. a) Infarct locations in the database and the predicted locations with our method. b) Misclassified infarct locations in the database. The white arrows represent the database ground truth, whereas the red arrows show the infarct location that was predicted with our method. In Fig. 4b, our algorithm underlined a possible misclassification, as the infarct seems located in another myocardial area.
Conclusion
We used our machine-learning, neighbourhood-based algorithm to detect the infarct in the left ventricular wall. We proposed two types of features for improving the infarct localization, in which shape and motion information are taken into consideration. These features are extracted from a hyper surface that represents the thickness profile along the cardiac cycle. We learnt to approximately locate the infarct by retrieving the corresponding angle for undiagnosed images; the most relevant infarct location is derived from an affinity matrix. Our approach may be relevant in assisting the clinical diagnosis of left ventricular infarct and may sometimes detect misclassified infarcts in a database. Future work will focus on evaluating local wall deformation fields to better localize the infarct over the 3D cardiac volume. We could also consider collecting the myocardial thickness from 4D cardiac images instead of binary masks.
Fig. 1. a) Thickness extraction along the myocardial mask in red; the red circle shows the mask barycenter, h denotes the thickness and θ the angle of a cast ray. b) 4D thickness profile at end-diastolic and end-systolic frames, parameterized by h(s, t, θ), with the slice s, the frame time t, and the angle θ.
Fig. 2. The NAF testing phase. The trained NAF determines the most similar images (bottom, in red) of the testing cardiac sample (top of each tree, in green) by performing trained tests at each node.
Fig. 3. a) Sections of the myocardial wall related to an angle, ranging from 0° to 360° [12]. b) Results and comparison with the expected angle for each category: anterior (A), anterior-septal (AS), inferior-septal (IS), inferior-posterior (IP), inferior-lateral (IL), lateral (L), anterior-lateral (AL).
Acknowledgements
The authors wish to thank Alistair Young for providing the DETERMINE database. This research is partially funded by the ERC Advanced Grant MedYMA.
| 19,395 | [ "915003", "597", "2424", "833436" ] | [ "30281", "30281", "30281", "30281", "30281" ] |
01476336 | en | [ "sdv" ] | 2024/03/04 23:41:46 | 2015 | https://pasteur.hal.science/pasteur-01476336/file/Revision%202_%20Manuscript_INTHEALTH-D-14-00118_BD_HB_MTC_ARA%20_AfroREB_revis....pdf | Betty Dodet (email: [email protected]), Mathurin C Tejiokem, Abdou-Rahman Aguemon, Hervé Bourhy, for AfroREB
Human rabies deaths in Africa: breaking the cycle of indifference
Keywords: Ebola, rabies, Africa
The current outbreak of Ebola virus disease has mobilized the international community against this deadly disease. However, rabies, another deadly disease, is greatly affecting the African continent, with an estimated 25 000 human deaths every year. And yet, the disease can be prevented by a vaccine, if necessary with immunoglobulin, even when administered after exposure to the rabies virus. Rabies victims die because of neglect and ignorance, because they are not aware of these life-saving biologicals, or because they cannot access them or do not have the money to pay for them. Breaking the cycle of indifference of rabies deaths in humans in Africa should be a priority of governments, international organizations and all stakeholders.
Since the discovery in 1976 of Ebola virus disease (EVD, formerly known as Ebola hemorrhagic fever), outbreaks of the virus have episodically deserved front-page coverage. This was the case in 1995, when an outbreak in the Democratic Republic of Congo killed 250 people, and in 2000-2001, when an outbreak took the lives of 224 people in Uganda. In total, 1548 EVD deaths out of 2361 cases occurred in Sub-Saharan Africa between 1976 and 2013. The current outbreak in Guinea, Sierra Leone and Liberia, the most devastating yet, may end up with more than 3000 victims. With a case fatality rate of up to 90%, EVD is one of the world's most deadly diseases [START_REF]Ebola virus disease[END_REF][START_REF]Ebola Hemorrhagic Fever. Chronology of Ebola Hemorrhagic Fever Outbreaks[END_REF].
The international engagement against EVD, coordinated by the World Health Organization (WHO), is a good example of how the different stakeholders can mobilize their resources to help the African continent. Many experts have been deployed to help with infection control measures in clinics and hospitals, and to trace known contacts of infected patients in the different countries involved. Indeed, in the absence of a vaccine and specific treatment, the only way to control the infection is through protective measures in clinics and hospitals, at community gatherings or at home. Rabies, another one of the world's most deadly diseases, is also a huge health problem for Africa. Once the first symptoms occur, there is no effective treatment, and the death rate is approaching 100%. Each year, in Africa, rabies kills an estimated 25,000 people, with about one death every 20 minutes. Children are the most affected by the disease, with 4 out of every 10 deaths occurring in children under the age of 15 [START_REF]Rabies[END_REF].
In contrast to EVD, rabies is a vaccine-preventable disease [START_REF]Rabies[END_REF][START_REF]WHO Expert consultation on rabies[END_REF]. It can be prevented through timely immunization even after exposure to the deadly virus. There are also effective vaccines for dogs, the main vector and transmitter of rabies to humans. Mass vaccination of dogs is recognized as the most cost-effective and sustainable way to eliminate rabies in humans [START_REF] Lembo | The blueprint for rabies prevention and control: a novel operational toolkit for rabies elimination[END_REF][START_REF] Fooks | Current status of rabies and prospects for elimination[END_REF].
Why is it that there are still so many deaths from rabies in Africa? This issue was addressed by rabies experts of the public health and veterinary sectors from francophone Africa, during the Fourth AfroREB (Africa Rabies Expert Bureau) meeting that took place in Dakar in October 2013. Established in 2008, AfroREB brings together rabies experts from 15 francophone countries in Africa [START_REF] Dodet | Africa Rabies Expert Bureau (AfroREB). Fighting rabies in Africa: the Africa Rabies Expert Bureau[END_REF][START_REF] Dodet | The fight against rabies in Africa: From recognition to action[END_REF].
Like EVD, rabies is linked to poverty, poor health systems and lack of education. The population is often not aware of the rabies risk, and of what to do in case of a dog bite. Sub-Saharan Africa lacks rabies prevention centers, where bite victims can find the life-saving biologicals (vaccine and immunoglobulin). Health care centers equipped for rabies prevention are scarce and limited to capital cities. They are often not accessible to the rural population, and the biologicals for rabies prophylaxis may not be available or affordable for bite victims. Furthermore, dog rabies remains enzootic in much of the world and attempts to control dog rabies in Africa are either inexistent or not successful, this being largely due to a lack of intersectoral collaboration between ministries, and the considerable challenge posed by the integration of budgets across ministries. Therefore, and in contrast to EVD, rabies is trapped in the vicious cycle of indifference. It does not attract media or political attention, as it is one of the oldest recognized diseases and has been controlled in the developed world. As a result of poor surveillance and reporting of rabies cases, there is little reliable data on the rabies burden in African countries [START_REF] Nel | Discrepancies in data reporting for rabies, Africa[END_REF]. Rabies is frequently misdiagnosed, as it often develops with a variety of symptoms that mimic other encephalitic diseases including cerebral malaria [START_REF] Fooks | Current status of rabies and prospects for elimination[END_REF]. Cases that are clinically diagnosed are often not reported, thus they are not accounted for at the central level, even in countries where rabies is a notifiable disease. Collecting and shipping skin biopsy samples, or the three serial daily saliva samples, to the laboratory for ante mortem diagnostic confirmation is also an issue, and postmortem diagnosis of brain samples is usually not accepted by the family of the patient or even proposed by physicians [START_REF] Dacheux | More accurate insight into the incidence of human rabies in developing countries through validated laboratory techniques[END_REF].
But as long as rabies cases are not reported, rabies deaths will be ignored and the disease will not receive the attention it deserves. Rabies is in competition with many other problems in African countries. With limited resources, and in the absence of awareness of the problem in the population as well as in governments, the priority goes elsewhere. Without resources for sensitization, rabies surveillance or healthcare for bite victims, the number of rabies deaths is increasing. An effective surveillance and notification of animal and human rabies cases is crucial so that the authorities of each country can be aware of the real rabies burden, and give it the place it deserves in their public health programs [START_REF] Dodet | The fight against rabies in Africa: From recognition to action[END_REF].
The best way to prevent rabies in humans is an integrated 'One Health' approach, as promoted by WHO, the World Organisation for Animal Health (OIE) and the Food and Agriculture Organization (FAO). It includes the implementation of programs combining: rabies education and awareness; increased access to post-exposure prophylaxis (PEP) in accordance with WHO recommendations; pre-exposure prophylaxis (PrEP) for those at high risk; large-scale dog vaccination programs; and responsible dog ownership. There is also a need to strengthen the health system and collaboration with the other sectors concerned.
In 1983, recognizing the importance of human rabies transmitted by dogs, the governments of Latin America made the political decision to eliminate the disease and placed rabies on their public policy agendas. With support from the Pan American Health Organization (PAHO), they provided appropriate treatment for people potentially at risk of acquiring the disease (pre-and post-exposure prophylaxis), mass vaccination of dogs, and epidemiological surveillance. Since 1983, Latin America has successfully reduced by more than 90% canine and human rabies [START_REF]WHO Expert consultation on rabies[END_REF][START_REF] Schneider | Current status of human rabies transmitted by dogs in Latin America[END_REF].
In 2007, the Philippines approved the Anti-Rabies Act into law, which required effective reporting of human and animal rabies and established a National Rabies Prevention and Control Program based on multisectoral cooperation between the different ministries (Agriculture, Health, Education) and the Local Government Units, with the assistance of international organizations (WHO, OIE) and NGOs (Global Alliance for Rabies Control-GARC). The program includes: education campaigns, increased access to PEP through the establishment of animal bite prevention centres all over the country, pre-exposure prophylaxis for children in areas where rabies incidence exceeds 2.5 human rabies cases per million population, and mass vaccination of dogs [START_REF] Quiambao | Rabies pre-exposure prophylaxis[END_REF]. The number of human rabies cases declined by 22% between 2005 and 2012.
Currently, pilot projects that aim to eliminate dog rabies are being conducted in Africa and include a project coordinated by WHO and funded by the Bill & Melinda Gates Foundation in KwaZulu Natal (South Africa) and in Tanzania, and a program carried out in N'Djamena (Chad) by the Chad government, the Swiss Tropical and Public Health Institute and GARC. Sub-regional networks of African rabies experts, SEARG (Southern and Eastern African Rabies Group) and AfroREB are advocating for rabies control, in line with the Global Alliance for Rabies Control. Several North-South initiatives and networks for rabies control in Africa are being established, such as ICONZ (Integrated Control of Neglected Zoonoses, www.iconzafrica.net), and ADVANZ (Advocacy for Neglected Zoonotic Diseases, www.advanz.org). Both of these programs aim to control neglected zoonotic diseases, including rabies. RESOLAB initially established by FAO to strengthen the African Veterinary laboratories diagnostic capacities for avian flu, now includes some support for rabies diagnosis. The International Pasteur Institute Network plays an important role in rabies diagnostic, prevention and control as well as in capacity building in the field of rabies over the world, including in Africa [START_REF] Bourhy | Rabies, still neglected after 125 years of vaccination[END_REF]14]. These efforts should be pursued and coordinated in order to reduce the rabies burden in Africa.
It is not the objective of this Commentary to attempt to define which disease, EVD or rabies, is most deadly and deserves the most attention, as both are urgent public health concerns. As tools do exist that could prevent the estimated 25,000 human rabies deaths every year in Africa, it is a shame not to use them or to more actively promote their use.
AfroREB members recognized that breaking the cycle of indifference is the first priority, and they call on their governments, international organizations and all stakeholders to give rabies the attention it deserves, and to unite their efforts to promote the elimination of dogtransmitted rabies in Africa.
Workshop on Surveillance and Control of Rabies -Pasteur institute Dakar -December 3-14, 2013. http://predemics.biomedtrain.eu/cms/Default.aspx?Page=19812%20&menu=494
Acknowledgements
The authors would like to thank Dr Deborah Briggs, past Executive Director, Global Alliance for Rabies Control, for her advice and support. AfroREB benefits from a grant from Sanofi Pasteur and Merial.
| 11,729 | [ "11293" ] | [ "55917", "217352", "169670", "462779" ] |
00098062 | en | [ "math" ] | 2024/03/04 23:41:46 | 2008 | https://hal.science/hal-00098062v4/file/lagu4.pdf | Tran Ngoc Lien (email: [email protected]), Dang Duc Trong (email: [email protected]), Alain Pham Ngoc Dinh (email: [email protected])
Laguerre polynomials and the inverse Laplace transform using discrete data
Keywords: inverse Laplace transform, Laguerre polynomials, Lagrange polynomials, ill-posed problem, regularization. Mathematics Subject Classification 2000: 44A10, 30E05, 33C45.
We consider the problem of finding a function defined on (0, ∞) from a countable set of values of its Laplace transform. The problem is severely ill-posed. We shall use the expansion of the function in a series of Laguerre polynomials to convert the problem in an analytic interpolation problem. Then, using the coefficients of Lagrange polynomials we shall construct a stable approximation solution. Error estimate is given. Numerical results are produced.
Introduction
Let L 2 ρ (0, ∞) be the space of real Lebesgue measurable functions defined on (0, ∞) such that
‖f‖²_{L²_ρ} ≡ ∫₀^∞ |f(x)|² e^{−x} dx < ∞.
This is a Hilbert space corresponding to the inner product ⟨f, g⟩ = ∫₀^∞ f(x) g(x) e^{−x} dx.
We consider the problem of recovering a function f ∈ L 2 ρ (0, ∞) satisfying the equations
Lf(p_j) ≡ ∫₀^∞ e^{−p_j x} f(x) dx = μ_j,   (DIL)
where p j ∈ (0, ∞), j = 1, 2, 3, ... Generally, we have the classical problem of finding a function f (x) from its given image g(p) satisfying
Lf(p) ≡ ∫₀^∞ e^{−px} f(x) dx = g(p),   (1)
where p is in a subset ω of the complex plane. We note that Lf (p) is usually an analytic function on a half plane {Re p > α} for an appropriate real number α. Frequently, the image of a Laplace transform is known only on a subset ω of the right half plane {Re p > α}. Depending on the set ω, we shall have appropriate methods to construct the function f from the values in the set {Lf (p) : p ∈ ω}.
Hence, there are no universal methods of inversion of the Laplace transform.
If the data g(p) is given as a function on a line (-i∞ + a, +i∞ + a) (i.e., ω = {p : p = a + iy, y ∈ R}) on the complex plane then we can use the Bromwich inversion formula ( [START_REF] Widder | The Laplace transform[END_REF], p. 67) to find the function f (x).
If ω ⊂ {p ∈ R : p > 0} then we have the problem of real inverse Laplace transform. The right hand side is known only on (0, ∞) or a subset of (0, ∞). In this case, the use of the Bromwich formula is therefore not feasible. The literature on the subject is impressed in both theoretical and computational aspects (see, e.g. [START_REF] Ahn | A flexible inverse Laplace transform algorithm and its applications[END_REF][START_REF] Al-Shuaibi | A regularization method for approximating the inverse Laplace transform[END_REF][START_REF] Byun | A real inversion formula for the Laplace transform[END_REF][START_REF] De Mottoni | Stabilization and error bounds for the inverse Laplace transform[END_REF][START_REF] Rizzardi | A modification of Talbot's method for the simultaneous approximation of several values of the inverse transform[END_REF][START_REF] Soni | A unified inverse Laplace transform formula involving the product of a general class of polynomials and the Fox H-function[END_REF]). In fact, if the data g(p) is given exactly then, by the analyticity of g, we have many inversion formulas (see,e.g., [START_REF] Al-Shuaibi | A regularization method for approximating the inverse Laplace transform[END_REF][START_REF] Ang | Complex variables and regularization method of inversion of the Laplace transform[END_REF][START_REF] Boumenir | The inverse Laplace transform and analytic pseudodifferential operators[END_REF][START_REF] Saitoh | Integral transforms, Reproducing kernels and their Applications[END_REF][START_REF] Saitoh | Conditional stability of a real inverse formula for the Laplace transform[END_REF][START_REF] Talenti | Recovering a function from a finite number of moments[END_REF]). In [START_REF] Al-Shuaibi | A regularization method for approximating the inverse Laplace transform[END_REF], the author approximate the function f by
f(t) ≅ Σ_{k=0}^N b_k(a) d^k(e^x g(e^x))/dx^k,
where b k (a) are calculated and tabulated regularization coefficients and g is the given Laplace transform of f . Another method is developped by Saitoh and his group ( [START_REF] Amano | Error estimates of the real inversion formulas of the Laplace transform[END_REF][START_REF] Ang | A multidimensional Hausdorff moment problem: regularization by finite moments[END_REF][START_REF] Saitoh | Integral transforms, Reproducing kernels and their Applications[END_REF][START_REF] Saitoh | Conditional stability of a real inverse formula for the Laplace transform[END_REF]), where the function f is approximated by integrals having the form
u_N(t) = ∫₀^∞ g(s) e^{−st} P_N(st) ds,  N = 1, 2, ...
where P N is known (see [START_REF] Ang | A multidimensional Hausdorff moment problem: regularization by finite moments[END_REF]). Using the Saitoh formula, we can get directly error estimates. However, in the case of unexact data, we have a severely trouble by the ill-posedness of the problem. In fact, a solution corresponding to the unexact data do not exist if the data is nonsmooth, and in the case of existence, these do not depend continuously on the given data (that are represented by the right hand side of the equalities). Hence, a regularization method is in order. In [START_REF] Ang | Complex variables and regularization method of inversion of the Laplace transform[END_REF], the authors used the Tikhonov method to regularize the problem. In fact, in this method, we can approximate u 0 by functions u β satisfying
βu β + L * Lu β = L * g, β > 0.
Since L is self-adjoint (cf. [START_REF] Ang | Complex variables and regularization method of inversion of the Laplace transform[END_REF]), the latter equation can be written as
β u_β(t) + ∫₀^∞ u_β(s)/(s + t) ds = ∫₀^∞ e^{−st} g(s) ds.
The latter problem is well-posed.
Although the inverse Laplace transform has a rich literature, the papers devoted to the problem with discrete data are scarce. In fact, from the analyticity of Lf (p),if Lf (p) is known on a countable subset of ω ⊂ {Re p > α} accumulating at a point then Lf (p) is known on the whole {Re p > α}. Hence, generally, a set of discrete data is enough for constructing an approximation function of f . It is a moment problem. In [START_REF] Lebedev | Special Functions and Their Applications[END_REF], the authors presented some theorems on the stabilization of the inverse Laplace transform. The Laplace image is measured at N points to within some error ǫ. This is achieved by proving parallel stabilization results for a related Hausdorff moment problem. For a construction of an approximate solution of (DIL), we note that the sequence of functions (e -pj x ) is (algebraically) linear independent and moreover the vector space generated by the latter sequence is dense in L 2 (0, ∞). The method of truncated expansion as presented in ([6], Section 2.1) is applicable and we refer the reader to this reference for full details. In [START_REF] Dung | A Hausdorff-like Moment Problem and the Inversion of the Laplace transform[END_REF][START_REF] Vu | A Hausdorff Moment Problems with Non-Integral Powers: Approximation by Finite Moments[END_REF], the authors convert (DIL) into a moment problem of finding a function in L 2 (0, 1) and, then, they use Muntz polynomials to construct an approximation for f . Now, in the present paper, we shall convert (DIL) to an analytic interpolation problem on the Hardy space of the unit disc. After that, we shall use Laguerre polynomials and coefficients of Lagrange polynomials to construct the function f . An approximation corresponding to the non exact data and error estimate will be given.
The remainder of the paper divided into two sections. In Section 2, we convert our problem into an interpolation one and give a uniqueness result. In Section 3, we shall give two regularization results in the cases of exact data and non exact data. Numerical comparisons with exact solution are given in the last section.
A uniqueness result
In this paper we shall use Laguerre polynomials
L_n(x) = (e^x / n!) dⁿ/dxⁿ (e^{−x} xⁿ).
We note that {L n } is a sequence of orthonormal polynomials on L 2 ρ (0, ∞). We note that (see [START_REF] Abramowitz | Handbook of Mathematical Functions[END_REF], [START_REF] Borwein | Polynomials and polynomial Inequalities[END_REF], page 67)
exp(xz/(z − 1)) (1 − z)^{−1} = Σ_{n=0}^∞ L_n(x) zⁿ.
Hence, if we have the expansion f(x) = Σ_{n=0}^∞ a_n L_n(x), then
∫₀^∞ f(x) exp(xz/(z − 1)) (1 − z)^{−1} e^{−x} dx = Σ_{n=0}^∞ a_n zⁿ.
It follows that
Σ_{n=0}^∞ a_n zⁿ = ∫₀^∞ f(x) exp(x/(z − 1)) (1 − z)^{−1} dx.
Put Φf(z) = Σ_{n=0}^∞ a_n zⁿ and α_j = 1 − 1/p_j; one has Φf(α_j) = p_j μ_j,
i.e., we have an interpolation problem of finding an analytic function Φf in the Hardy space H 2 (U ).
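The identity Φf(α_j) = p_j μ_j is easy to check numerically. The sketch below (an illustration added here, not part of the original argument) computes the Laguerre coefficients a_n = ⟨f, L_n⟩ by Gauss-Laguerre quadrature for the test function f(x) = e^{−x} and compares Σ a_n αⁿ with p·Lf(p):

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss, lagval

f = lambda x: np.exp(-x)                 # test function in L^2_rho(0, infinity)
x, w = laggauss(80)                      # Gauss-Laguerre rule for int_0^inf e^{-x} g(x) dx

N = 40                                   # number of Laguerre coefficients kept
a = np.array([np.sum(w * f(x) * lagval(x, np.eye(N)[n])) for n in range(N)])

p = 1.7                                  # any p > 1/2 (arbitrary choice for the check)
alpha = 1.0 - 1.0 / p
phi = np.polynomial.polynomial.polyval(alpha, a)       # Phi f(alpha) = sum a_n alpha^n
laplace = np.sum(w * f(x) * np.exp(-(p - 1.0) * x))    # Lf(p) = int e^{-x} [e^{-(p-1)x} f(x)] dx
print(phi, p * laplace)                  # both should be close to 1/(2 - alpha)
```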
Here, we denote by U the unit disc of the complex plane and by H 2 (U ) the Hardy space. In fact, we recall that H 2 (U ) is the space of all functions φ analytic in U and if, φ ∈ H 2 (U ) has the expansion
φ(z) = Σ_{k=0}^∞ a_k z^k, then ‖φ‖²_{H²(U)} = Σ_{k=0}^∞ |a_k|² = (1/2π) ∫₀^{2π} |φ(e^{iθ})|² dθ.
We can verify directly that the linear operator Φ is an isometry from L 2 ρ onto H 2 (U ). In fact, we have
Lemma 1. Let f ∈ L²_ρ(0, ∞). Then Lf(z) is analytic on {z ∈ ℂ : Re z > 1/2}. If we have an expansion f = Σ_{n=0}^∞ a_n L_n, then Φf ∈ H²(U) and
‖Φf‖²_{H²(U)} = Σ_{n=0}^∞ |a_n|² = ‖f‖²_{L²_ρ(0,∞)}.
Moreover, if in addition √x f′ ∈ L²_ρ, then Σ_{n=0}^∞ n|a_n|² ≤ ‖√x f′‖²_{L²_ρ}.
Proof
Putting F z (t) = e -zt f (t), we have F z ∈ L 2 (0, ∞) for every Rez > 1/2. Hence Lf (z) = ∞ 0 F z (t)dt is analytic for Rez > 1/2.
From the definitions of L 2 ρ (0, ∞) and H 2 (U ), we have the isometry equality. Now we prove the second inequalities. We first consider the case
f ′ , f ′′ in the space B = {g Lebesgue measurable on (0, ∞)| √ xg ∈ L 2 ρ (0, ∞)}. We have the expansion f = ∞ n=0 a n L n where a n =< f, L n >.
The function y = L n satisfies the following equation (see [START_REF] Rabenstein | Introduction to Ordinary Differential Equations[END_REF])
xy ′′ + (1 -x)y ′ + ny = 0 which gives (xe -x y ′ ) ′ + nye -x = 0.
It follows that
na n = ∞ 0 f (x)nL n (x)e -x dx = - ∞ 0 f (x)(xe -x L ′ n (x)) ′ dx = ∞ 0 f ′ (x)xe -x L ′ n (x)dx = - ∞ 0 (f ′ (x)xe -x ) ′ L n (x)dx = - ∞ 0 (xf "(x) + f ′ (x) -xf ′ (x))L n (x)e -x dx = -< xf " + f ′ -xf ′ , L n > .
Since L n is an orthonormal basis, we have the Fourier expansion
xf " + f ′ -xf ′ = ∞ n=0 (-na n )L n .
Using the Parseval equality we have
< xf " + f ′ -xf ′ , f >= ∞ n=0 (-na n )a n .
It can be rewritten as
∞ 0 (xe -x f ′ (x)) ′ f (x)dx = - ∞ n=0 na 2 n .
Integrating by parts, we get
∞ 0 xe -x |f ′ (x)| 2 dx = ∞ n=0 na 2 n . Now, for f ′ ∈ B we choose (f k ) such that f ′ k , f " k ∈ B for every k = 1, 2, ... and √ xf ′ k (resp.f k ) → √ xf ′ (resp.f ) in L 2 ρ as k → ∞. Assume that f k = ∞ n=0 a kn L n .
Then we have
∞ 0 xe -x |f ′ k (x)| 2 dx = ∞ n=0 na 2 kn .
The latter equality involves for every N
N n=0 na 2 kn ≤ √ xf ′ k 2 L 2 ρ (0,∞) (2)
Since f k → f in L 2 ρ as k → ∞ we have that a kn → a n as k → ∞, for each n. On the other hand, we have
√ xf ′ k → √ xf ′ in L 2 ρ as k → ∞. . Therefore, letting k → ∞ in (2) we get N n=0 na 2 n ≤ √ xf ′ 2 L 2 ρ (0,∞) .
Letting N → ∞ in the latter inequality, we get the desired inequality.
Using Lemma 1, one has a uniqueness result
Theorem 1. Let p_j > 1/2 for every j = 1, 2, .... If
Σ_{p_j > 1} 1/p_j + Σ_{1/2 < p_j < 1} (2p_j − 1)/p_j = ∞,
then Problem (DIL) has at most one solution in L²_ρ(0, ∞).
Proof
Let f 1 , f 2 ∈ L 2 ρ (0, ∞) be two solutions of (DIL). Putting g = f 1 -f 2 then g ∈ L 2 ρ (0, ∞) and Lg(p j ) = 0. It follows that Φg(1 -1/p j ) = 0, j = 1, 2, ... It follows that α j = 1 -1/p j are zeros of Φg. We have Φg ∈ H 2 (U ) and
Σ_{j=1}^∞ (1 − |α_j|) = Σ_{p_j > 1} 1/p_j + Σ_{1/2 < p_j < 1} (2p_j − 1)/p_j = ∞.
Hence we get Φg ≡ 0 (see, e.g., [START_REF] Rudin | Real and Complex analysis[END_REF], page 308). It follows that g ≡ 0. This completes the proof of Theorem 1.
Regularization and error estimates
In this section, we assume that (p_j) is a bounded sequence with p_j ≠ p_k for every j ≠ k. Without loss of generality, we shall assume that ρ = 1 is an accumulation point of (p_j). In fact, if (p_j) has an accumulation point ρ_0 > 1 then, by putting f̃(x) = e^{−(ρ_0−1)x} f(x) and p′_j = p_j − ρ_0 + 1, we can transform the problem to the one of finding f̃ ∈ L²_ρ(0, ∞) such that ∫₀^∞ e^{−p′_j x} f̃(x) dx = μ_j, j = 1, 2, ..., in which (p′_j) has the accumulation point ρ = 1. In fact, in Theorem 2 below, we shall assume that |1 − 1/p_j| ≤ σ for every j = 1, 2, ..., where σ is a given number. We denote by ℓ_k^{(m)}(ν) the coefficient of z^k in the expansion of the Lagrange polynomial L_m(ν) (ν = (ν_1, ..., ν_m)) of degree (at most) m − 1 satisfying
L m (ν)(z k ) = ν k , 1 ≤ k ≤ m,
where z k = α k . If φ is an analytic function on U , we also denote
L m (φ) = L m (φ(z 1 ), ..., φ(z m )).
We define
L^θ_m(ν)(z) = Σ_{0≤k≤θ(m−1)} ℓ_k^{(m)}(ν) z^k.
The polynomial L θ m (ν) is called a truncated Lagrange polynomial (see also [START_REF] Trong | Reconstructing an analytic function using truncated Lagrange polynomials[END_REF]). For every g ∈ L 2
ρ (0, ∞), we put
T n g = (p 1 Lg(p 1 ), ..., p n Lg(p n )), T g = (p 1 Lg(p 1 ), ..., p n Lg(p n ), ...) ∈ ℓ ∞ .
Here, we recall that α n = 1 -1/p n . We shall approximate the function f by
F_m = Φ^{−1} L^θ_m(T_m f) = Σ_{0≤k≤θ(m−1)} ℓ_k^{(m)}(T_m f) L_k.
We shall prove that F m is an approximation of f . Before stating and proving the main results, some remarks are in order. We first recall the concept of regularization. Let f be an exact solution of (DIL), we recall that a sequence of linear operator A n : ℓ ∞ → L 2 ρ (0, ∞) is a regularization sequence (or a regularizer) of Problem (DIL) if (A n ) satisfies two following conditions (see, e.g., [START_REF] Isakov | Inverse problems for Partial differential equations[END_REF], page 25) (R1) For each n, A n is bounded, (R2) lim n→∞ A n (T f ) -f = 0. The number "n" is called the regularization parameter. As a consequence of (R1), (R2), we can get (R3) For ǫ > 0, there exists the functions n(ǫ) and δ(ǫ) such that lim ǫ→0 n(ǫ) = ∞, lim ǫ→0 δ(ǫ) = 0 and that
A n(ǫ) (µ) -f ≤ δ(ǫ) for every µ ∈ ℓ ∞ such that µ -T f ∞ < ǫ.
In the present paper, the operator
A n is Φ -1 L θ m .
The number ǫ is the error between the exact data T f and the measured data µ. For a given error ǫ, there are infinitely many ways of choosing the regularization parameter n(ǫ). In the present paper, we give an explicit form of n(ǫ).
Next, in our paper, we have the interpolation problem of reconstruction the analytic function φ = Φf ∈ H 2 (U ) from a sequence of its values (φ(α n )). As known, the convergence of L m (φ) to φ depends heavily on the properties of the points (α n ). The Kalmár-Walsh theorem (see, e.g., [START_REF] Gaier | Lectures on Complex Approximation[END_REF], page 65) shows that L m (φ) → φ for every φ in C(U ) for all φ analytic in a neighborhood of U if and only if (α n ) is uniformly distributed in U, i.e., lim
m→∞ m max |z|≤1 |(z -α 1 )...(z -α m )| = 1.
The Fejer points and the Fekete points are the sequences of points satisfying the latter condition (see [START_REF] Gaier | Lectures on Complex Approximation[END_REF], page 67). The Kalmár-Walsh fails if C(U ) is replaced by H 2 (U ) (see [START_REF] Trong | Reconstructing an analytic function using truncated Lagrange polynomials[END_REF] for a counterexample). Hence, the Lagrange polynomial cannot use to reconstruct φ. In [START_REF] Gaier | Lectures on Complex Approximation[END_REF], we proved a theorem similar to the Kalmár-Walsh theorem for the case of H 2 (U ). In fact, the Lagrange polynomials will convergence if we "cut off" some terms of the Lagrange polynomial. Especially, in [START_REF] Gaier | Lectures on Complex Approximation[END_REF] and the present paper, the points (α n ) are, in general, not uniformly distributed.
In Theorem 2, we shall verify the condition (R2). More precisely, we have
Theorem 2. Let σ ∈ (0, 1/3), let f ∈ L²_ρ(0, ∞) and let p_j > 1/2 for j = 1, 2, ... satisfy |1 − 1/p_j| ≤ σ. Let θ_0 be the unique solution of the equation (unknown x)
2σ^{1−x}/(1 − σ) = 1.
Then for θ ∈ (0, θ_0) one has ‖f − F_m‖²_{L²_ρ} → 0 as m → ∞. If we assume in addition that √x f′ ∈ L²_ρ, then
‖f − F_m‖²_{L²_ρ} ≤ (1 + mθ)² ‖f‖²_{L²_ρ} (2σ^{1−θ}/(1 − σ))^{2m} + (1/(mθ)) ‖√x f′‖²_{L²_ρ(0,∞)}.
Proof
We have in view of Lemma 1
f -F m 2 L 2 ρ = 0≤k≤θ(m-1) |δ (m) k | 2 + k>θ(m-1) |a k | 2 (3)
where δ
(m) k = a k -ℓ (m)
k (T m f ). We shall give an estimate for δ
k . In fact, we have
Φf -L m (T m f ) 2 H 2 (U) = m-1 k=0 |δ (m) k | 2 + ∞ k=m |a k | 2 .
On the other hand, the Hermite representation (see, e.g. [START_REF] Gaier | Lectures on Complex Approximation[END_REF], page 59, [START_REF] Taylor | Advanced Calculus[END_REF]) gives
Φf (z) -L m (T m f )(z) = 1 2πi ∂U ω m (z)(Φf )(ζ)dζ ω m (ζ)(ζ -z)
where ω m (z) = (z -α 1 )...(z -α m ). Now, if we denote by σ
(m) -1 = σ (m) -2 = ... = 0 and σ (m) 0 = 1 σ (m) r = 1≤j1<...<jr≤m α j1 ...α jr (1 ≤ r ≤ m), β (m) s
= 1 2πi ∂U Φf (ζ)dζ ζ s+1 ω m (ζ)
then we can write in view of the Hermite representation
Φf (z) -L m (T m f )(z) = ∞ k=0 k r=0 (-1) r σ (m) m-r β (m) k-r z k .
From the latter representation, one gets
δ (m) k = k r=0 (-1) r σ (m) m-r β (m) k-r , 0 ≤ k ≤ m -1.
Now, by direct computation, one has
|β (m) s | ≤ 1 2π 2π 0 |Φf (e iθ )| |ω m (e iθ )| dθ. But one has |ω m (e iθ )| ≥ (|e iθ | -|α 1 |)...(|e iθ | -|α m |) ≥ (1 -σ) m .
Hence
|β (m) s | ≤ 1 2π(1 -σ) m 2π 0 |Φf (e iθ )|dθ ≤ Φf H 2 (U) (1 -σ) -m .
We also have
|σ (m) m-r | ≤ σ m-r C r m ≤ σ m-k 2 m , where C k m = m! k!(m-k)! . Hence, we have |δ (m) k | ≤ (1 + mθ) f L 2 ρ 2σ 1-θ 1 -σ m .
From the latter inequality, one has in view of ( 3)
f -F m 2 L 2 ρ ≤ (1 + mθ) 2 f 2 L 2 ρ 2σ 1-θ 1 -σ 2m + ∞ k≥mθ |a k | 2 .
For θ ∈ (0, θ 0 ), one has
0 < 2σ 1-θ 1 -σ < 2σ 1-θ0 1 -σ = 1.
Hence, we have lim
m→∞ f -F m 2 L 2
ρ = 0 as desired, since on the one hand we have the comparison between an exponential with base b < 1 and a power function and in the other hand the remain of a convergent series
∞ k=0 |a k | 2 . Now if √ xf ′ ∈ L 2 ρ (0, ∞) then one has since k mθ > 1 and from Lemma 1 ∞ k>mθ |a k | 2 ≤ 1 mθ ∞ k=0 k|a k | 2 ≤ 1 mθ √ xf ′ 2 L 2 ρ .
This completes the proof of Theorem 2. Now, we consider the case of non-exact data. In Theorem 3, we shall consider the condition (R3) of the definition of the regularization. Put
D m = max 1≤n≤m max |z|≤R ω m (z) (z -α n )ω ′ m (α n )
.
Let ψ : [0, ∞) → R be an increasing function satisfying
ψ(m) ≥ mD m , m = 1, 2, ... and m(ǫ) = [ψ -1 (ǫ -3/4 )] -1
where [x] is the greatest integer ≤ x.
Theorem 3. Let σ ∈ (0, 1/3), let f, √x f′ ∈ L²_ρ(0, ∞) and let p_j > 1/2 for j = 1, 2, ... satisfy |1 − 1/p_j| ≤ σ. Let θ_0 be the unique solution of the equation (unknown x)
2σ^{1−x}/(1 − σ) = 1.
Let ε > 0 and let (μ^ε_j) be measured data for (Lf(p_j)) satisfying
sup_j |p_j (Lf(p_j) − μ^ε_j)| < ε.
Then for θ ∈ (0, θ_0), one has
‖f − Φ^{−1} L^θ_{m(ε)}(ν^ε)‖²_{L²_ρ} ≤ 2(1 + m(ε)θ)² ‖f‖²_{L²_ρ} (2σ^{1−θ}/(1 − σ))^{2m(ε)} + (2/(m(ε)θ)) ‖√x f′‖²_{L²_ρ} + 2ε^{1/2},
where ν^ε_j = p_j μ^ε_j for j = 1, 2, ...
Proof
We note that
L m (T m f )(z) -L m (ν ǫ )(z) = m j=1 (p j µ j -ν ǫ j ) ω m (z) (z -α j )ω ′ m (α j ) . It follows that L m (T m f ) -L m (ν ǫ ) ∞ ≤ ǫmD m . Hence L θ m (T m f ) -L θ m (ν ǫ ) H 2 (U) ≤ L m (T m f ) -L m (ν ǫ ) ∞ ≤ ǫmD m . It follows by the isometry property of Φ f -Φ -1 L θ m (ν ǫ ) 2 L 2 ρ ≤ 2 f -F m 2 L 2 ρ + 2 Φ -1 L θ m (T m f ) -Φ -1 L θ m (ν ǫ ) 2 L 2 ρ ≤ 2(1 + mθ) 2 f 2 L 2 ρ 2σ 1-θ 1 -σ 2m + 2 mθ √ xf ′ 2 L 2 ρ +2ǫ 2 m 2 D 2 m .
By choosing m = m(ǫ) we get the desired result.
Numerical results
We present some results of numerical comparison between a function f(x) given in L²_ρ(0, ∞) and its approximated form F_m as stated in Theorem 2.
First consider the function f(x) = e^{−x} and its expansion in Laguerre series
e^{−x} = Σ_{n≥0} (1/2^{n+1}) L_n(x).   (4)
So in the Hardy space H 2 (U ), we have to interpolate the analytic function
Φf(x) = Σ_{n≥0} (1/2^{n+1}) xⁿ = 1/(2 − x)   (5)
by the Lagrange polynomial L m (T m f ), interpolation defined by
L_m(T_m f)(1 − 1/p_i) = p_i ∫₀^∞ e^{−p_i x} e^{−x} dx = p_i/(p_i + 1),   (6)
where p_i → 1 as i → ∞.
On the interval (−1.8, +1.8) we have drawn in Fig. 1 the curves e^{−x} and its approximation L_m(T_m f)(x) for m = 10. For m = 12 there is divergence of our interpolation (Fig. 2) outside the interval (−1, +1). In both cases we have chosen θ_0 = 0.29 with σ = 0.25 (θ_0 given by 2σ^{1−θ_0}/(1 − σ) = 1, 0 < σ < 1/3). So in the second case the truncated Lagrange polynomial keeps only the terms of degree at most θ(m − 1), since 11 × 0.29 ≈ 3.2.
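The experiment is easy to reproduce. The sketch below (an independent illustration; the interpolation points p_i → 1 with |1 − 1/p_i| ≤ 0.25 are an arbitrary admissible choice, not the ones used for the figures) interpolates the data p_i Lf(p_i) at the points α_i = 1 − 1/p_i, truncates the Lagrange polynomial to degree θ(m − 1), and maps the retained coefficients back through the Laguerre basis:

```python
import numpy as np
from numpy.polynomial.polynomial import polyfit
from numpy.polynomial.laguerre import lagval

m, theta = 10, 0.29
i = np.arange(1, m + 1)
p = 1.0 + 0.25 / i                      # assumed points p_i -> 1 with |1 - 1/p_i| <= 0.25
alpha = 1.0 - 1.0 / p                   # interpolation nodes alpha_i
data = p / (p + 1.0)                    # p_i * Lf(p_i) for f(x) = exp(-x), cf. (6)

coeffs = polyfit(alpha, data, m - 1)    # monomial coefficients of the Lagrange interpolant L_m
K = int(np.floor(theta * (m - 1)))      # truncation degree of L_m^theta
F_m_coeffs = coeffs[:K + 1]             # retained coefficients l_k^{(m)}

x = np.linspace(0.0, 5.0, 6)
F_m = lagval(x, F_m_coeffs)             # F_m(x) = sum_{k <= theta(m-1)} l_k^{(m)} L_k(x)
print(np.c_[x, np.exp(-x), F_m])        # compare with the exact solution e^{-x}
```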
Acknowledgements.
The authors wish to thank the referees for their pertinent remarks, leading to improvements in the original manuscript.
| 21,008 | [ "835257", "830495", "830483" ] | [ "20595", "7178", "98" ] |
01476650 | en | [ "phys" ] | 2024/03/04 23:41:46 | 2017 | https://hal.science/hal-01476650/file/01_WaveMechanicsLimit.pdf | Luca Roatta (email: [email protected])
Discretization of space and time in wave mechanics: the validity limit
Introduction
Let's assume, as work hypothesis, the existence of both discrete space and discrete time, namely spatial and temporal intervals not further divisible; this assumption leads to some interesting consequences. Here, as a first result, we find the limit of applicability of wave mechanics (and consequently also of quantum mechanics).
So, if we suppose that neither space nor time are continuous, but that instead both are discrete, it will not be possible to establish arbitrary values for a spatial coordinate or the time: any length must be an integer multiple of the fundamental length and any time interval must be an integer multiple of the fundamental time. Let's name l 0 the fundamental length and t 0 the fundamental time.
The applicability limit of wave mechanics
The values of l 0 and t 0 are not essential, at least for the moment: what matters is to establish the principle that l 0 > 0 and t 0 > 0.
Of course, if l 0 is the minimum length, neither a wavelength can be less than it; so l 0 is also the minimum wavelength. For an electromagnetic wave, as shown by the relation λν = c, at the minimum wavelength λ = l 0 corresponds the maximum frequency ν max , if we want to keep c constant. Then, if ν is the frequency of an electromagnetic wave, it must be ν ≤ ν max ; if we consider λ = l 0 (and consequently ν = ν max ) we have l 0 ν max = c, from which we obtain:
ν_max = c / l_0   (1)
It is evident that in a continuous context (l 0 → 0) there is no limit for the frequency.
The frequency can be expressed as ν = 1/T where T is the period; the maximum value for the frequency is reached when T assumes the minimum value t 0 .
So from Eq. ( 1) we obtain:
l_0 / t_0 = c   (2)
that indicates the relationship between l 0 and t 0 .
The energy associated to a wave is E = hν, so for Eq. ( 1) there is also an upper limit for the energy a wave can have. In particular:
E_max = h ν_max = hc / l_0   (3)
Also for energy, in a continuous context (l 0 → 0) there is no upper limit.
The relation energy-mass E = mc 2 allows us to write
E_max = h ν_max = m_max c² = hc / l_0   (4)
obtaining
m_max = h / (c l_0)   (5)
that represents the maximum value for the mass that in a discrete context can be treated from the point of view of wave mechanics. Again, also for mass, in a continuous context (l 0 → 0) there is no upper limit.
We can obtain the same result starting from the expression [1][2] of the Compton wavelength λ c = h/mc and imposing λ c = λ min = l 0 and m = m max .
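As a quick numerical illustration, taking for l_0 the Planck length (purely an assumption made for this example; the text does not fix a value for l_0), Eqs. (1)-(5) give:

```python
h = 6.62607015e-34      # Planck constant (J s)
c = 2.99792458e8        # speed of light (m/s)
l0 = 1.616255e-35       # assumed fundamental length: the Planck length (m)

t0 = l0 / c             # Eq. (2): fundamental time
nu_max = c / l0         # Eq. (1): maximum frequency
E_max = h * nu_max      # Eq. (3): maximum energy of a single wave
m_max = h / (c * l0)    # Eq. (5): maximum mass treatable by wave mechanics

print(f"t0 = {t0:.3e} s, nu_max = {nu_max:.3e} Hz")
print(f"E_max = {E_max:.3e} J, m_max = {m_max:.3e} kg")
# With this assumed l0, m_max is about 1.4e-7 kg, i.e. the Planck-mass scale
# (up to the factor 2*pi coming from using h instead of hbar).
```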
Conclusion
The assumption that both space and time are discrete has led to the applicability limit of wave and quantum mechanics: in a discrete context (contrary to what happens in a continuous context) no object having mass greater than m_max, as expressed by Eq. (5), can be treated from the point of view of wave or quantum mechanics.
| 2,950 | [ "1002559" ] | [ "302889" ] |
00147705 | en | [ "math" ] | 2024/03/04 23:41:46 | 2007 | https://hal.science/hal-00147705/file/biomotorLD51907.pdf | Benoît Perthame, Panagiotis E. Souganidis
Asymmetric potentials and motor effect: a large deviation approach
Keywords: Hamilton-Jacobi equations, molecular motors, Fokker-Planck equations. AMS Class. Numbers: 35B25, 49L25, 92C05.
We provide a mathematical analysis of appearance of the concentrations (as Dirac masses) of the solution to a Fokker-Planck system with asymmetric potentials. This problem has been proposed as a model to describe motor proteins moving along molecular filaments. The components of the system describe the densities of the different conformations of the proteins.
Our results are based on the study of a Hamilton-Jacobi equation which arises, in the zero-diffusion limit, after an exponential change of variables for the phase function. We consider different classes of conformation-transition coefficients (bounded, unbounded and locally vanishing).
Introduction
A striking feature of living cells is their ability to generate motion, as, for instance in muscle contraction already investigated theoretically in the 50's ( [START_REF] Huxley | Muscle structure and theories of contraction[END_REF]). But even more elementary processes allow for intra-cellular material transport along various filaments that are part of the cytoskeleton. These are known as "motor proteins". For example, myosins move along actin filaments and kinesins and dyneins move along micro-tubules. In the early 90's, it became possible to device a new generation of experiments in vitro where both the filaments and the motor proteins are sufficiently purified. This lead to an improved biophysical understanding of the biomotor process (see, for instance, [1,[START_REF] Jülicher | Modeling molecular motors[END_REF][START_REF] Peskin | the correlation ratchet: a novel mechanism for generating directed motion by ATP hydrolysis[END_REF][START_REF] Doering | Rotary DNA motors[END_REF], and the tutorial book [START_REF] Howard | Mechanics of motor proteins and the cytoskeleton[END_REF]) and gave rise to a large cellular biology literature. The experimental observations made possible to explain how chemical energy can be transformed into mechanical energy and to come up with mathematical models for molecular motors. The underlying principles are elementary and represent in fact the common basis for all biomotors. On the one hand, the filament provides for an asymmetric potential (this notion was introduced in the earliest theoretical descriptions by Huxley, [START_REF] Huxley | Muscle structure and theories of contraction[END_REF]), sometimes referred to as the energy landscape. On the other hand, the protein can reach several different conformations. This can be ATP/ADP hydrolysis but five to six different states of the protein could be involved during muscular contraction.
In this paper we consider the following model: Molecules can reach I configurations with density, for each i = 1, 2..., I, n i . A bath of such molecules is moving in an asymmetric potential seen differently by the I configurations denoted, for i = 1, ..., I, by ψ i . Fuel consumption triggers a configuration change among the different states with rates ν ij > 0, for i, j = 1, 2..., I. Diffusion, denoted below by σ, is taken into account.
These simple considerations lead to the following system of elliptic equations for the densities (n_i)_{1≤i≤I}:
−σ ∂²n_i/∂x² − ∂/∂x(∇ψ_i n_i) + ν_ii n_i = Σ_{j≠i} ν_ij n_j in (0, 1),
σ ∂n_i/∂x(x) + ∇ψ_i(x) n_i(x) = 0 for x = 0 or 1.   (1)
The zero flux boundary conditions means that the total number of molecules, in each molecular state, is preserved by transport (but not by configuration exchange).
Throughout the paper we assume that, for i = 1, ..., I
n i > 0 in [0, 1]. (2)
The zero flux boundary condition, motivated by the additional modeling assumption that total density is conserved, leads to the condition that, for all i = 1, ..., I,
ν_ii = Σ_{j, j≠i} ν_ji.   (3)
Several biomotor models, including the one described above, were analyzed in [START_REF] Chipot | Transport in a molecular motor system[END_REF][START_REF] Chipot | A variational principle for molecular motors. Dedicated to Piero Villaggio on the occasion of his 70th birthday[END_REF][START_REF] Kinderlehrer | Diffusion-mediated transport and the flashing ratchet[END_REF][START_REF] Hastings | Diffusion mediated transport with a look at motor proteins[END_REF] through optimal transportation methods. In [START_REF] Chipot | Transport in a molecular motor system[END_REF] it is proved that there is a positive steady state solution that can, for instance, be normalized by
∫₀¹ Σ_{1≤i≤I} n_i(x) dx = 1.   (4)
The simplest way to explain this fact is to observe that the adjoint system,
−σ ∂²φ_i/∂x² + ∇ψ_i ∂φ_i/∂x + ν_ii φ_i = Σ_{j≠i} ν_ji φ_j in (0, 1),   ∂φ_i/∂x = 0 in {0, 1},   (5)
admits the trivial solution φ 1 = φ 2 = ... = φ I = 1. This yields that 0 is the first eigenvalue of the system and thus of its adjoint (1). The Krein-Rutman theorem gives the n i 's, but the solution is not explicitly known except for I = 1, a situation where the motor effect cannot be achieved. The stability of this problem is also related to the notion of relative entropy [START_REF] Dolbeault | Remarks about the flashing rachet[END_REF][START_REF] Perthame | The general relative entropy principle -applications in Perron-Froebenius and Floquet theories and a parabolic system for biomotors[END_REF][START_REF] Perthame | Transport equations in biology[END_REF][START_REF] Michel | General relative entropy inequality: an illustration on growth models[END_REF].
The typical results obtained about biomotors in [START_REF] Chipot | Transport in a molecular motor system[END_REF][START_REF] Hastings | Diffusion mediated transport with a look at motor proteins[END_REF] are that, for small diffusion σ, under some precise asymmetry assumptions on the potentials, the solutions tend to concentrate, as σ → 0, as Dirac masses at either x = 0 or x = 1. In the sequel such a behavior will be called motor effect.
Our results (i) provide an alternative proof of this motor effect, and (ii) allow for more general assumptions like, for instance, various scalings on the coefficients ν ij . While [START_REF] Chipot | Transport in a molecular motor system[END_REF][START_REF] Hastings | Diffusion mediated transport with a look at motor proteins[END_REF] transform the system (1) into an ordinary differential equation and analyze directly its solution, here we use a direct PDE argument based on the phase functions R i = -σ ln n i that satisfy (in the viscosity sense, [START_REF] Bardi | Optimal control and viscosity solutions of Hamilton-Jacobi-Bellman equations[END_REF]3,[START_REF] Crandall | User's guide to viscosity solutions of second order partial differential equations[END_REF][START_REF] Fleming | Controlled Markov processes and viscosity solutions[END_REF]) a Hamilton-Jacobi solution. This is reminiscent to the method used for front propagation ( [START_REF] Evans | A PDE approach to geometric optics for certain reaction-diffusion equations[END_REF][START_REF] Barles | Wavefront propagation for reaction diffusion systems of PDE[END_REF]). We recall that the appearance of Dirac concentrations in a different area of biology (trait selection in evolution theory) relies also on the phase function and the viscosity solutions to Hamilton-Jacobi equations, [START_REF] Diekmann | The dynamics of adaptation : an illuminating example and a Hamilton-Jacobi approach[END_REF][START_REF] Barles | Concentrations and constrained Hamilton-Jacobi equations arising in adaptive dynamics[END_REF].
In Section 2 we obtain new and more precise versions of the results of [START_REF] Chipot | Transport in a molecular motor system[END_REF] by analyzing the asymptotics/rates as σ → 0. In Section 3 we present new results for large transition coefficients, while in Section 4 we consider coefficients that may vanish.
Bounded non-vanishing transition coefficients
We begin with the assumptions on the transition rates and potentials. As far as the former are concerned we assume that there exists k > 0 such that
ν_ij ≥ k > 0 for all i ≠ j.   (6)
As far as the potentials are concerned we assume that, for all i = 1, 2, ..., I,
ψ i ∈ C 2,1 (0, 1), (7)
there exists a finite collection of intervals (J_k)_{1≤k≤M} such that
min_{1≤i≤I} ψ_i′ > 0 in J_k,   (8)
and
max_{1≤i≤I} ψ_i′ > 0 in [0, 1].   (9)
Notice that these assumptions are satisfied by periodic potentials with period 1/M .
Figure 1: Motor effect exhibited by the parabolic system (1) with two asymmetric potentials. Left: the potentials ψ1, ψ2. Right: the phase functions
R σ 1 = -σ ln(n σ 1 ), R σ 2 = -σ ln(n σ 2 ).
As announced in Theorem 2.1, we have R_1^σ ≈ R_2^σ and both are nondecreasing. This means that the densities concentrate as Dirac masses at x = 0. Here we have used σ = 10^{−4}. See Figure 2 for another behavior.
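A behavior of this type can be reproduced with a rough finite-volume sketch of the time-dependent version of (1) for I = 2; the potentials, rates and discretization below are illustrative assumptions, not the ones used for Figure 1, and a fully converged run may require more iterations:

```python
import numpy as np

sigma = 5e-3                             # small diffusion
M = 400                                  # number of cells on (0, 1)
dx = 1.0 / M
xc = (np.arange(M) + 0.5) * dx           # cell centers
xf = np.arange(1, M) * dx                # interior faces

# derivatives of two assumed asymmetric potentials, with min_i psi_i' > 0 on [0, 1]
dpsi = [2.0 + 1.5 * np.sin(2 * np.pi * xf),
        2.0 - 1.5 * np.sin(2 * np.pi * xf)]
nu12, nu21 = 1.0, 1.0                    # transition rates; (3) gives nu_11 = nu_21, nu_22 = nu_12
n = [np.ones(M), np.ones(M)]             # uniform initial densities

dt = 2e-4
for _ in range(60000):
    # zero-flux boundary faces; centered flux G_i = sigma dn_i/dx + psi_i' n_i inside
    flux = [sigma * (ni[1:] - ni[:-1]) / dx + d * 0.5 * (ni[1:] + ni[:-1])
            for ni, d in zip(n, dpsi)]
    div = [np.diff(np.concatenate(([0.0], g, [0.0]))) / dx for g in flux]
    react = [-nu21 * n[0] + nu12 * n[1], -nu12 * n[1] + nu21 * n[0]]
    n = [ni + dt * (dv + rc) for ni, dv, rc in zip(n, div, react)]

mass = dx * (n[0] + n[1]).sum()
n = [ni / mass for ni in n]              # normalization (4)
R = [-sigma * np.log(np.maximum(ni, 1e-300)) for ni in n]   # phase functions R_i^sigma
print("fraction of mass in [0, 0.05]:", dx * (n[0][:20] + n[1][:20]).sum())
```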
Our first result is a new and more precise version of the result in [START_REF] Chipot | Transport in a molecular motor system[END_REF]. It yields that the system (1) exhibits a motor effect for σ small enough and molecules are necessarily located at x = 0. This effect is explained by a precise asymptotic result in the limit σ → 0.
To emphasize the dependence on the diffusion σ, in what follows we denote, for all i = 1, ..., I, by n σ i the solution of (1). Moreover, instead of (4), we use the normalization
Σ_{1≤i≤I} n_i^σ(0) = 1.   (10)
We have:
Theorem 2.1 Assume that (3), (6), (7), (8), (9) and (10) hold. Then, for all i = 1, ..., I,
R_i^σ = −σ ln n_i^σ → R in C(0, 1) as σ → 0, with R(0) = 0 and R′ = min_{1≤i≤I}(ψ_i′)_+.
In physical terms, R can be seen as an effective potential for the system. To state the next result, we recall that throughout the paper we denote by δ 0 the usual δ-function at the origin.
We have:
Corollary 2.2 Assume, in addition to (3), (6), (7), (8) and (9), that min_{1≤i≤I} ψ_i′(0) > 0, and normalize n_i^σ by (4) instead of (10). Then there exist (ρ_i)_{1≤i≤I} such that
n_i^σ → ρ_i δ_0 as σ → 0, with ρ_i > 0 and Σ_{1≤i≤I} ρ_i = 1.
There are several possible extensions of Theorem 2.1. Here we state one which, to the best of our knowledge, is not covered by any of the existing results.
To formulate it, we need to introduce the following assumption on the potentials (ψ i ) 1≤i≤I which replaces [START_REF] Crandall | User's guide to viscosity solutions of second order partial differential equations[END_REF] and allows to consider more general settings. It is:
the set {x ∈ [0, 1] : max 1≤i≤I ψ ′ i (x) < 0} is a union of finitely many intervals (K l ) 1≤l≤M ′ ,
and (∪J k ) c ∩ (∪K l ) c is either a finite union of intervals or isolated points.
(11) We have: 6), ( 7), ( 8), [START_REF] Doering | Rotary DNA motors[END_REF] and [START_REF] Diekmann | The dynamics of adaptation : an illuminating example and a Hamilton-Jacobi approach[END_REF]. Then
Theorem 2.3 Assume (3), (
R σ i = -σ ln n σ i ---→ σ→0 R in C(0, 1), R(0) = 0 and R ′ = min 1≤i≤I (ψ ′ i ) + in ∪ J k , max 1≤i≤I ψ ′ i in ∪ K l , 0 in Int (∪J k ) c ∩ (∪K l ) c .
As a consequence we have:
Corollary 2.4 In addition to (3), ( 6), ( 7), ( 8), ( 11) and ( 4), assume that we have the same number of intervals J k and K l in ( 8) and ( 11) respectively, that 0 is the left endpoint of J 1 and, finally, that, for all k = 1, ..., M ,
K k max 1≤i≤I ψ ′ i (y)dy < J k min 1≤i≤I ψ ′ i (y)dy.
Then, for all i = 1, ..., I, there exist (ρ i ) 1≤i≤I such that
n σ i ---→ σ→0 ρ i δ 0 , ρ i > 0, and
1≤i≤I ρ i = 1.
Other possible extensions concern coefficients that may vanish somewhere and/or be unbounded. The former case is studied in Section 4. As far as the ν ij being unbounded, it will be clear from the proof of Theorem 2.1, that the coefficients can depend on σ as long as, for σ → 0 and all i, j = 1, ..., I, there exists α > 0 such that
σν σ ij → 0 and σ -α ν ij → ∞.
Going further in this direction leads to a different limits for -ln n i σ that we study in the next Section.
We continue next with the proof of Theorem 2.1. The modifications needed to prove Theorem 2.3 are indicated at the end of this section where we also discuss the proofs of the Corollaries.
Proof of Theorem 2.1 A direct computation shows that the R σ i 's satisfy, for
ν ii = ν ii -ψ ′′ i , the system -σ ∂ 2 R σ i ∂x 2 + ∂R σ i ∂x 2 -ψ ′ i (x) ∂R σ i ∂x + σ I j=1 ν ij e (R σ i -R σ j )/σ = σ ν ii in (0, 1), ∂R σ i ∂x = ψ ′ i in {0, 1}. (12)
Adding the equations of (1) and using (3) yield the conservation law
$$-\sigma\,\frac{\partial^2}{\partial x^2}\Big[\sum_{1\le i\le I} n_i^\sigma\Big] - \frac{\partial}{\partial x}\Big[\sum_{1\le i\le I} \psi_i'\, n_i^\sigma\Big] = 0,$$
which together with the boundary condition gives
$$-\sigma\,\frac{\partial}{\partial x}\sum_{1\le i\le I} n_i^\sigma - \sum_{1\le i\le I}\psi_i'\, n_i^\sigma = 0. \qquad (13)$$
Setting
$$\sum_{1\le i\le I} n_i^\sigma = e^{-S^\sigma/\sigma}, \quad\text{we have}\quad \frac{\partial S^\sigma}{\partial x} = \frac{\sum_i \psi_i'\, n_i^\sigma}{\sum_i n_i^\sigma},$$
and, as a consequence, the total flux estimate
$$\min_{1\le i\le I}\psi_i' \;\le\; \frac{\partial S^\sigma}{\partial x} \;\le\; \max_{1\le i\le I}\psi_i'. \qquad (14)$$
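This estimate is simply the statement that ∂_x S^σ is a weighted average of the ψ_i'; for completeness, here is the one-line verification (using only the definition of S^σ, the identity (13) and the positivity of the n_i^σ):

```latex
% -\sigma\,\partial_x\big(\textstyle\sum_i n_i^\sigma\big)
%   = \partial_x S^\sigma \, e^{-S^\sigma/\sigma}
%   = \partial_x S^\sigma \,\textstyle\sum_i n_i^\sigma,
% while (13) gives  -\sigma\,\partial_x\big(\sum_i n_i^\sigma\big) = \sum_i \psi_i'\, n_i^\sigma.
% Hence
\frac{\partial S^\sigma}{\partial x}
  \;=\; \frac{\sum_{1\le i\le I} \psi_i'\, n_i^\sigma}{\sum_{1\le i\le I} n_i^\sigma}
  \;\in\; \Big[\min_{1\le i\le I}\psi_i',\; \max_{1\le i\le I}\psi_i'\Big].
```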
The normalization (10) of the n_i^σ's implies that S^σ(0) = 0. As a result, there exists S ∈ C^{0,1}(0,1) such that, after extracting a subsequence, S^σ → S as σ → 0, S(0) = 0, and
$$\min_{1\le i\le I}\psi_i' \;\le\; \frac{\partial S}{\partial x} \;\le\; \max_{1\le i\le I}\psi_i' \quad\text{in } [0,1]. \qquad (15)$$
Next we obtain bounds on the R_i^σ's which are independent of σ and imply their convergence as σ → 0. This is the topic of the next lemma, which we prove after the end of the ongoing proof.

Lemma 2.5 For each i = 1, ..., I there exists a positive constant C_i = C_i(ψ_i', σν_ii, σψ_i'') such that
$$|R_i^\sigma| + \Big|\frac{\partial R_i^\sigma}{\partial x}\Big| \le C_i \quad\text{in } [0,1].$$
Moreover, for all i = 1, ..., I,
$$R_i^\sigma \xrightarrow[\sigma\to 0]{} R = S \quad\text{in } C([0,1]).$$
We obtain next the Hamilton-Jacobi equation satisfied by the limit R = S. The claim is that the limit is a viscosity solution (see, for instance, [3, [START_REF] Crandall | User's guide to viscosity solutions of second order partial differential equations[END_REF]]) of
$$\Big|\frac{\partial R}{\partial x}\Big|^2 + \max_{1\le i\le I}\Big[-\psi_i'\,\frac{\partial R}{\partial x}\Big] = 0 \quad\text{in } (0,1). \qquad (16)$$
We do not state the boundary conditions because we do not use them. It can, however, be proved that R satisfies
$$\frac{\partial R}{\partial x} \le \max_{1\le i\le I}\psi_i' \ \text{ at } x = 0 \qquad\text{and}\qquad \frac{\partial R}{\partial x} \ge \min_{1\le i\le I}\psi_i' \ \text{ at } x = 1.$$
We begin with the subsolution property. Letting σ → 0 in the inequality
$$-\sigma\,\frac{\partial^2 R_i^\sigma}{\partial x^2} + \Big|\frac{\partial R_i^\sigma}{\partial x}\Big|^2 - \psi_i'\,\frac{\partial R_i^\sigma}{\partial x} \le \sigma\,\tilde\nu_{ii},$$
gives, for all i = 1, ..., I,
$$\Big|\frac{\partial R}{\partial x}\Big|^2 - \psi_i'\,\frac{\partial R}{\partial x} \le 0.$$
To prove that R is a supersolution of (16) we observe that the function R^σ = min_{1≤i≤I} R_i^σ satisfies the inequality
$$-\sigma\,\frac{\partial^2 R^\sigma}{\partial x^2} + \Big|\frac{\partial R^\sigma}{\partial x}\Big|^2 + \max_{1\le i\le I}\Big[-\psi_i'(x)\,\frac{\partial R^\sigma}{\partial x}\Big] + \sigma \sum_{i,j=1}^{I}\nu_{ij} \;\ge\; \sigma \min_i(\tilde\nu_{ii}).$$
Letting again σ → 0, we find that R = S = lim_{σ→0} R^σ satisfies
$$\Big|\frac{\partial R}{\partial x}\Big|^2 + \max_{1\le i\le I}\Big[-\psi_i'(x)\,\frac{\partial R}{\partial x}\Big] \ge 0.$$
We obtain now the formula for R. To this end, observe first that, since lim_{σ→0} R^σ = R = S, letting σ → 0 in (14) yields
$$\min_{1\le i\le I}\psi_i' \;\le\; \frac{\partial R}{\partial x} \;\le\; \max_{1\le i\le I}\psi_i'. \qquad (17)$$
Next we show that, in the viscosity sense,
$$\frac{\partial R}{\partial x} \ge 0. \qquad (18)$$
Indeed, for a test function Φ, let x_0 ∈ (0,1) be a maximum point of R − Φ, i.e., (R − Φ)(x_0) = max_{0≤x≤1}(R − Φ)(x), and assume that Φ'(x_0) < 0. Applying the viscosity subsolution criterion to (16) then implies that
$$\Phi'(x_0) - \max_{1\le i\le I}\psi_i'(x_0) \ge 0.$$
This, however, contradicts the inequality
$$\max_{1\le i\le I}\psi_i'(x_0) > 0,$$
which follows from assumption (9). Combining (17) and (18) we get
$$\min_{1\le i\le I}(\psi_i')_+ \;\le\; \frac{\partial R}{\partial x} \;\le\; \max_{1\le i\le I}\psi_i'. \qquad (19)$$
Finally, given a test function Φ, let x_0 ∈ (0,1) be such that (R − Φ)(x_0) = max_{0≤x≤1}(R − Φ), and assume that Φ'(x_0) > 0. Again by the viscosity criterion we must have
$$\Phi'(x_0) - \min_{1\le i\le I}\psi_i'(x_0) \le 0,$$
and, hence, in the viscosity sense,
$$\frac{\partial R}{\partial x} \le \Big(\min_{1\le i\le I}\psi_i'\Big)_+ \quad\text{if } \frac{\partial R}{\partial x} > 0. \qquad (20)$$
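For the reader's convenience, the way (17)-(20) combine can be spelled out as follows (a formal pointwise argument; the viscosity-sense statement is what is actually used):

```latex
% At a point where \min_i \psi_i' \le 0:  (\min_i\psi_i')_+ = 0, so (20) excludes R' > 0
%   and (18) then forces R' = 0 = \min_i(\psi_i')_+ .
% At a point where \min_i \psi_i' > 0:  (19) gives R' \ge \min_i(\psi_i')_+ > 0,
%   hence R' > 0 and (20) applies, so R' \le (\min_i\psi_i')_+ .
% In both cases
R' \;=\; \min_{1\le i\le I}(\psi_i')_+ \qquad \text{in } (0,1),
% which is the formula claimed in Theorem 2.1.
```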
This concludes the proof of the formula in the claim.
We return now to the Proof of Lemma 2.5. For the Lipschitz estimate, observe that, at any interior extremum point x_0 of ∂R_i^σ/∂x, we have ∂²R_i^σ/∂x² = 0. Evaluating the equation at x_0, we get
$$\Big(\frac{\partial R_i^\sigma}{\partial x}\Big)^2 \le \psi_i'\,\frac{\partial R_i^\sigma}{\partial x} + \sigma\,\tilde\nu_{ii}.$$
As a consequence, at x_0 we have
$$\frac{\partial R_i^\sigma}{\partial x} \le \max_{0\le x\le 1}\psi_i' + \sigma\,\tilde\nu_{ii}.$$
To identify the limit of min_{1≤j≤I} R_j^σ, notice that the inequality
$$n_i^\sigma \le \sum_{1\le j\le I} n_j^\sigma \le I\,\max_j n_j^\sigma \quad\text{gives}\quad -\sigma\ln I + \min_{1\le j\le I} R_j^\sigma \le S^\sigma \le R_i^\sigma,$$
and thus
$$S^\sigma \le \min_{1\le i\le I} R_i^\sigma.$$
Consequently, we have the uniform convergence
$$\min_{1\le i\le I} R_i^\sigma \xrightarrow[\sigma\to 0]{} S.$$
To prove the claim about the limit of the R_i^σ, we observe that summing over i the equations of (12) yields
$$\sigma \sum_{i,j=1}^{I}\nu_{ij}\Big(\frac{(R_j^\sigma - R_i^\sigma)_+}{\sigma}\Big)^2 \;\le\; 2\sigma \sum_{i,j=1}^{I}\nu_{ij}\,e^{(R_i^\sigma - R_j^\sigma)/\sigma} \;\le\; 2\Big(\sigma\sum_{1\le i\le I}\tilde\nu_{ii} + 2\sigma\sum_i\frac{\partial^2 R_i^\sigma}{\partial x^2} + \sum_{1\le i\le I}\psi_i'\,\frac{\partial R_i^\sigma}{\partial x}\Big).$$
Integrating in x and using the gradient estimates, we find that
$$\frac12\sum_{i,j=1}^{I}\int_0^1 (R_j^\sigma - R_i^\sigma)^2 \;=\; \sum_{i,j=1}^{I}\int_0^1 \big((R_j^\sigma - R_i^\sigma)_+\big)^2 \;\le\; C\sigma.$$
Together with the uniform gradient estimate on the R_i^σ and the uniform bound on min_{1≤j≤I} R_j^σ, we deduce that
$$R_i^\sigma \xrightarrow[\sigma\to 0]{} R = S \in C^{0,1}(0,1).$$
We continue with the Proof of Corollary 2.2. The normalization (4) amounts to adding a constant to the R_i. The exponential behavior of n_i^σ, with an increasing R_i^σ (from Theorem 2.1), yields that the n_i^σ's converge, as σ → 0, to 0 uniformly on intervals [ε, 1] with ε > 0. Moreover, R(0) = 0. The result follows with ρ_i ≥ 0. If ρ_i = 0 for some i = 1, ..., I, then letting σ → 0 in (1) gives, in the sense of distributions,
$$0 = \sum_{j\neq i}\nu_{ij}\, n_j.$$
But then all the ρ_j must vanish, which is impossible with the normalization of unit mass.
We present now a brief sketch of the proof of Theorem 2.3. Since it follows along the lines of the proof of Theorem 2.1, here we only point out the differences.
We have:

Proof of Theorem 2.3 The Lipschitz estimates, the passage to the limit and the identification of the limiting Hamilton-Jacobi equation in Theorem 2.1 did not depend on the assumption (9); hence, they hold true also in the case at hand. The final arguments of the proof of Theorem 2.1 also identify the limit on the set (∪K_l)^c. On the set ∪K_l we already know from (17) that R' is less than the claimed value, and thus it is negative. We conclude the equality by using the Hamilton-Jacobi equation. Indeed, in this situation we know that
$$\max_{1\le i\le I}\Big[-\psi_i'\,\frac{\partial R}{\partial x}\Big] = -\frac{\partial R}{\partial x}\,\max_{1\le i\le I}\psi_i'.$$
We conclude the section with the proof of Corollary 2.4, which is simply a variant of the one for Corollary 2.2. We have:

Proof of Corollary 2.4 The assumption on ∪J asserts that R is increasing on ∪J. Then it may decrease but, for x > 0, R(x) > R(0). With the unit mass normalization, this means that R(0) = 0 as before, and the convergence result holds as before.

Large transition coefficients

In this section we consider transition coefficients normalized by 1/σ. For the sake of simplicity we take I = 2; this allows for explicit formulae. The equations for larger systems, i.e., I ≥ 3, are more abstract. The system (1) is replaced by
$$\begin{cases}
-\sigma\,\dfrac{\partial^2}{\partial x^2} n_1^\sigma - \dfrac{\partial}{\partial x}\big(\nabla\psi_1\, n_1^\sigma\big) + \dfrac{1}{\sigma}\,\nu_1\, n_1^\sigma = \dfrac{1}{\sigma}\,\nu_2\, n_2^\sigma & \text{in } (0,1),\\[6pt]
-\sigma\,\dfrac{\partial^2}{\partial x^2} n_2^\sigma - \dfrac{\partial}{\partial x}\big(\nabla\psi_2\, n_2^\sigma\big) + \dfrac{1}{\sigma}\,\nu_2\, n_2^\sigma = \dfrac{1}{\sigma}\,\nu_1\, n_1^\sigma & \text{in } (0,1),\\[6pt]
\sigma\,\dfrac{\partial}{\partial x} n_i^\sigma + \nabla\psi_i\, n_i^\sigma = 0 & \text{on } \{0,1\}, \ i = 1,2.
\end{cases} \qquad (21)$$
As before we assume that
$$n_i^\sigma > 0 \ \text{ in } [0,1] \ \text{ for } i = 1, 2. \qquad (22)$$
The result is:
Theorem 3.1 Assume (3), (6), (7), (8) and (9), and consider the solution (n_1^σ, n_2^σ) of (21) normalized by n_1^σ(0) + n_2^σ(0) = 1. Then, as σ → 0 and for i = 1, 2,
$$R_i^\sigma = -\sigma \ln n_i^\sigma \xrightarrow[\sigma\to 0]{} R \ \text{ in } C(0,1), \qquad R(0) = 0, \quad\text{and}\quad
R' \ge \begin{cases} \min_{1\le i\le I}\psi_i' & \text{on } \cup J_l,\\[2pt] -\sqrt{k} & \text{on } (\cup J_l)^c. \end{cases}$$
The corollary below follows from Theorem 3.1 in a way similar to the analogous corollaries in the previous section; hence, we leave the details to the reader.

Corollary 3.2 In addition to the assumptions of Theorem 3.1, suppose that 0 ∈ J_1, that the potentials are small enough so that
$$\sqrt{k}\,|K| \;<\; \sup_{J}\int_{J}\min_{1\le i\le 2}\psi_i'(y)\,dy,$$
and that (n_1^σ, n_2^σ) is normalized by (4). Then there exist ρ_1, ρ_2 > 0 such that ρ_1 + ρ_2 = 1 and, as σ → 0 and for i = 1, 2,
$$n_i^\sigma \xrightarrow[\sigma\to 0]{} \rho_i\,\delta_0.$$

Figure 2: Motor effect exhibited by the parabolic system (21) with large transition coefficients. The figure depicts the phase functions R_1^σ, R_2^σ. As announced in Theorem 3.1, we have R_1^σ ≈ R_2^σ, and they can decrease slightly. Here we have used σ = 5·10^{-3}.

We present next a sketch of the proof of Theorem 3.1, as most of the details follow as in the previous theorems.

Proof of Theorem 3.1 The total flux and Lipschitz estimates follow as before. The main new point is the limiting Hamilton-Jacobi equation, which is more complex. We formulate this as a separate lemma below. Its proof is based on the use of perturbed test functions; we refer to [START_REF] Barles | Wavefront propagation for reaction diffusion systems of PDE[END_REF] for the rigorous argument in a more general setting.
with
where, for i = 1, 2,
The formula for R' follows from the above lemma by analyzing the solutions to the Hamilton-Jacobi equation as before. On the set ∪J_l the answer follows from the bounds [START_REF] Fleming | Controlled Markov processes and viscosity solutions[END_REF]. On the set (∪J)^c the argument is more elaborate. Using that R' is a subsolution, we get
Therefore both β_1 and β_2 are nonpositive and thus
On the other hand we know that on (∪J_l)^c one of the potentials, for definiteness say ψ_1, satisfies
The inequalities for R' are now proved.

Vanishing transition coefficients

We focus here on the case where the transition coefficients (ν_ij)_{1≤i,j≤I} may vanish either at some points or, in fact, on large sets. In this situation, we assume that

for each j = 1, ..., I, ψ_j' < 0 on a finite collection of intervals (K_j^α)_{1≤α≤A_j}, and for all j = 1, ..., I and α = 1, ..., A_j there exists i ∈ {1, ..., I} such that ψ_i' ≥ 0 on K_j^α and, in a left neighborhood of the right endpoint of K_j^α, ν_ij > 0.   (25)

To go for weaker assumptions would force us to face the completely decoupled case (when ν vanishes), in which the motor effect does not occur.

We have:

Theorem 4.1 Assume (7), (8), (9) and (25), and normalize the solution (n_i^σ)_{1≤i≤I} of (21) by [START_REF] Diekmann | The dynamics of adaptation : an illuminating example and a Hamilton-Jacobi approach[END_REF]. For i = 1, ..., I, let R_i^σ = −ln n_i^σ. Then, as σ → 0, either R_i^σ → R_i in C(0,1), or R_i^σ → ∞ uniformly in [0,1]. Moreover, the function R = min_{1≤i≤I} R_i satisfies
$$R(0) = 0, \qquad R' \ge 0 \qquad\text{and}\qquad R' = \min_{1\le i\le I}\psi_i' \ \text{ on } \cup J_l.$$

We also have:

Corollary 4.2 In addition to the assumptions of Theorem 4.1, suppose that 0 ∈ J_1. For i = 1, ..., I, there exist ρ_i ≥ 0 such that Σ_{1≤i≤I} ρ_i = 1 and, as σ → 0,
$$n_i^\sigma \xrightarrow[\sigma\to 0]{} \rho_i\,\delta_0.$$

The corollary is a direct conclusion of Theorem 4.1; it follows from the fact that, for all i = 1, ..., I, n_i^σ ≥ 0. We do not know whether in this context each ρ_i is positive. To get this, we need to assume something more, like, for example, ν_ij(0) > 0 for all i, j = 1, ..., I.

We conclude with a brief sketch of the Proof of Theorem 4.1. The total flux and Lipschitz estimates follow as before. A careful look at the proof of the convergence part of Theorem 2.1 shows that either the R_i^σ's blow up or they are uniformly bounded and, hence, converge uniformly in (0,1) to a subsolution of
$$|R_i'|^2 - \psi_i'\,R_i' \le 0.$$
It then follows that
$$R_i' \ge 0 \ \text{ on } \big(\cup_\alpha K_i^\alpha\big)^c \qquad\text{and}\qquad R_i' \le \psi_i'(x) \ \text{ on } \cup_\alpha K_i^\alpha.$$
The final step is to prove that
$$R(x) = \min_{i\in L(x)} R_i(x), \qquad L(x) = \{\,i : \psi_i'(x) \ge 0\,\}.$$
This follows as before. We leave the details to the reader.
| 23,812 | [
"739446"
] | [
"66",
"134437"
] |
01477111 | en | [
"info"
] | 2024/03/04 23:41:46 | 2016 | https://hal.science/hal-01477111/file/regragui2016.pdf | Younes Regragui
email: [email protected]
Najem Moussa
email: [email protected]
Agent-based system simulation of wireless battlefield networks
Keywords: Self-Organized Behavior, Mobility Models, Ad hoc Networks, Group mobility, Dismounted soldiers, Battlefield
Introduction
Recently, the army has been interested in developing new skills and competencies such as making soldiers more connected [START_REF] Cotton | Millimeter-wave stealth radio for special operations forces[END_REF] in the battlefield based on modern soldiers' electronic equipment and computer technologies by using mobile wireless ad hoc networks (MANETs) [START_REF] Council | Energy-Efficient Technologies for the Dismounted Soldier[END_REF][START_REF] Thomas | The Defence and National Security Capability Reporter[END_REF]. The utilization of dismounted soldiers is one of the major strategies being adopted in the Army which makes tactical operations on the battlefield much easier to control. In a military environment, the dynamics of such missions may change rapidly, so the dismounted soldier would need to incorporate several new technologies to exchange information consisting of surveillance and tactical operations in order to prevent intrusion and detect enemies [START_REF] Council | Energy-Efficient Technologies for the Dismounted Soldier[END_REF].
In the context of military operations, autonomous dismounted soldiers may interact simultaneously with the environment (battlefield) and with each other so as to complete an assigned mission, such as a sweep operation of houses or buildings in a wide area containing features such as mountains, forests, or rivers. The group of soldiers may be divided into a number of battalions, each one having its own mission, especially in critical situations (e.g., searching for and attacking the enemies during a sweep operation, or escaping from an unexpected enemy attack). This heavy load of interactions, in conjunction with the geographically variable nature of the battlefield area and the unpredictable behavior of the wireless network topology, increases the susceptibility of the network topology to decomposition into multiple components.
The network connectivity is worsened by topology changes driven by wireless links and the unpredictable mobility of soldiers. The highly dynamic nature of dismounted soldiers' mobility on the battlefield and node failures are considered among the main reasons for this challenging problem. Therefore, evaluating the network performance during the execution of a tactical military scenario in a real-world setting is in many cases not feasible, since the cost can be too high and it is impossible to test real-world deployments of soldiers on a battlefield in the presence of enemies. Therefore, a simulation environment is very attractive for evaluating and studying the impact of soldiers' behaviors on the performance of the network and the topology condition during military operations. Typically, the utilization of a simulation environment is intended to reflect and assess real-world scenarios accurately, such as modeling the mobility and wireless communication of soldiers on the battlefield as realistically as possible. After the deployment of soldiers on the battlefield, the destination direction of each soldier is determined via a mobility model in the simulation, including the velocity of each one and their interaction with one another.
In the literature, various group mobility models have been proposed for studying and simulation of MANET in a real-world scenario. Among them the reference point group mobility (RPGM) [START_REF] Hong | A group mobility model for ad hoc wireless networks[END_REF] and Group force mobility model (GFMM), which are the most commonly used group mobility models to simulate the battlefield scenarios. In [START_REF] Williams | Group force mobility model and its obstacle avoidance capability[END_REF] the authors focus on group interaction and movement, but also provides the capability to incorporate obstacles into a simulation area and give the groups the ability to maneuver around them. However, in most of the existing group mobility models, the authors did not focus explicitly on more complex situations such as investigating the effect of enemies' attacks on the battlefield. They focused in the most cases on communication in battlefields with the presence of geometric obstacles in the form of buildings or inter-vehicular communication. To the best of our knowledge, this study is the first one that proposes a modeling approach for a military scenario based on the presence of enemies on the battlefield in mobile ad hoc networks (MANETs).
In summary, the aim of this paper is to introduce a group mobility model to simulate wireless communication on the battlefield of a dismounted soldier group with and without the presence of enemy attacks as realistically as possible. For our purposes, we were interested in studying our model as follows:
• Evaluation of network performance of dismounted soldiers with the presence of perturbations factors (noise) used as a flexible modeling of obstacles.
• Network performance analysis in terms of tactical dynamics of dismounted soldiers on the battlefield in the presence of enemies. Moreover, we provide statistical evaluations based on the connectivity of soldier groups on the battlefield, especially when soldiers are trying to escape from enemy attacks in which unexpected network topology change may occur.
This paper introduces a group mobility model, in order to realize a performance study of a collective motion of dismounted soldiers in a MANET network with the commander (leader) defined as a sink node. This study is evaluated through various mobility scenarios and based on several metrics focusing on data throughput, path lifetime, packet loss, packets delivery ratio, the velocity of soldiers, etc. We perform NS-2 simulations to observe and analyze these network metrics. Here, the proposed model is implemented with tcl scripting language and works in Linux environment.
The rest of the paper is organized as follows. The related work is presented in Section 2. Section 3 describes the collective motion approach we have used in our group mobility model. Experimental results are presented in Section 4. Discussion and comparison of this work and other methods are carried out in Section 5. Section 6 concludes the paper and discusses the future research.
Related work
In this section, we will review Group mobility models which make efforts to simulate collective motion behaviors in real world situations. The second part of this section shows how nodes mobility can have the worst effect on the nature of the network topology and increasing data loss in the network; it also shows numerous solutions which have been proposed to this challenging issue.
Group mobility models
There have been many attempts to adopt group mobility models in cooperative tactical mobility and military ad hoc networks as the most suitable models to realistically model the mobility of nodes in MANETs. Group mobility models define how the mobile nodes distributed within a geographic area move as a group. Previously, several group mobility models based on a lead point have been proposed to simulate mobility in real world situations. The Reference Point Group Mobility Model (RPGM) [START_REF] Hong | A group mobility model for ad hoc wireless networks[END_REF] is the most widely known of the group mobility models. Each group has a logical "center" which defines the entire group's motion behavior, including location, speed, direction and acceleration. The trajectory of the logical center can be predefined or obtained based on a particular entity mobility model. In [START_REF] Wang | Group mobility and partition prediction in wireless ad-hoc networks[END_REF], the Reference Velocity Group Mobility Model (RVGM) extends the RPGM model by proposing two velocity vectors: group velocity of logical center or the lead point, and the local deviation velocity of each group member. In [START_REF] Williams | Group force mobility model and its obstacle avoidance capability[END_REF], the authors introduced the Group Force Mobility Model (GFMM) which is similar to RVGM in that the velocity of a group member follows the velocity of its lead point with a small random deviation. GFMM not only focuses on group interaction and movement but also provides the capability to incorporate obstacles into a simulation area and gives the groups the ability to maneuver around them. Authors in [START_REF] Blakely | A structured group mobility model for the simulation of mobile ad hoc networks[END_REF] proposed a Structured Group Mobility Model (SGMM), which parameterizes group structure and generates movement sequences for use in simulations. The SGMM uses reference point as in RPGM, which may be the geographical center of the group, the location of the leader, or the group's center of mass. In [START_REF] Ning | Diamond group mobility model for ad hoc network in military[END_REF] the authors introduced a new group mobility model called the Diamond Group Mobility (DGM), where a group of soldiers is restricted in a diamond region with one dismounted soldier at the center considered as (commander). In Diamond Group Mobility, a dismounted soldier group has a logical center as in RPGM. The center's motion defines the entire group's motion behavior.
While these models assume all nodes move based on a lead point considered as a logical center that exists within each group, in other cases group mobility models are based on the location of the next region or area to which the mobile nodes group is moving. For example, in the Reference Region-Based Group Mobility (RRGM) [START_REF] Ng | A mobility model with group partitioning for wireless ad hoc networks[END_REF], every group is associated with a reference region which is an area that nodes will move towards to and once they arrive, the nodes will move around within the region waiting for the arrival of other nodes. In [START_REF] Zhou | Group and swarm mobility models for ad hoc network scenarios using virtual tracks[END_REF], the authors proposed a virtual track based group mobility model (VT model) which closely approximates the mobility patterns in military MANET scenarios. It is reported that the proposed VT mobility model is suitable for both military and urban environment. In [START_REF] Fongen | A military mobility model for manet research, Parallel and Distributed Computing and Networks[END_REF], the authors proposed a mobility model (Hierarchical Group Mobility model) designed for modeling a military operation in order to study the connectivity of a MANET established through wireless communication between the moving objects (ships, vehicles or foot soldiers, and aircraft). In a hierarchical mobility model, every moving object will be a child of another moving object, and the coordinate system of its movements will have the parent object's position as its origin.
In recent years, several group mobility models approaches have been proposed to improve the performance of collective movement in MANETs. In [START_REF] Vastardis | An enhanced community-based mobility model for distributed mobile social networks[END_REF], an enhanced version of the Community Mobility Model was proposed which incorporates a feature that encourages group mobility. This model follows the preceding community-based approaches that map communities to a topological space.
In [START_REF] Misra | Bio-inspired group mobility model for mobile ad hoc networks based on bird-flocking behavior[END_REF], in order to avoid intra-group and inter-group collision and also to avoid collision with environmental obstacles, the authors proposed a novel group mobility model for mobile ad hoc networks (MANETs), named Bird-Flocking Behavior Inspired Group Mobility Model (BFBIGM), which takes inspiration from the mobility of a flock of birds flying in a formation. In [START_REF] Nguyen | Modelling mobile opportunistic networks-From mobility to structural and behavioural analysis[END_REF], the authors introduced STEPS, a generic and simple modeling framework for mobile opportunistic networks based on the principles of preferential attachment and location attractor. Furthermore, the STEPS Model is inspired by observable characteristics of the human mobility behavior, specifically the spatial-temporal correlation of human movements. In order to obtain meaningful performance results of real human movement simulation in MANETs, the authors in [START_REF] Zhao | N-body: A social mobility model with support for larger populations[END_REF] proposed the N-body mobility model for wireless network research that is capable of synthesizing the group-forming tendency observed in real human movements. Combined with a clustered network generation algorithm, the N-body model does not require detailed knowledge of the target scenario, but rather synthesizes mobility traces for large populations based on metrics extracted from a small sample trace.
Topology maintenance in (MANETs)
Several methods have been proposed for link failure protection in ad hoc networks due to unpredictable nodes mobility in highly dynamic topology. Examples of these are, partition prediction and service replication on the server nodes [START_REF] Wang | Efficient and guaranteed service coverage in partitionable mobile ad-hoc networks[END_REF] or data replication at multiple nodes and dynamically deploying these nodes to disconnected partitions of the network [START_REF] Karumanchi | Information dissemination in partitionable mobile ad hoc networks[END_REF]. However, both data replication and topology information updates undoubtedly increase memory and communication bandwidth overhead. In [START_REF] Wattenhofer | Distributed topology control for power efficient operation in multihop wireless ad hoc networks[END_REF], the authors proposed a distributed strategy based on the directional information, in which a node grows it transmission power until it finds a neighbor node in every direction. However, this strategy leads to a worse degradation of energy due to multiple attempts to change the transmission power level. In [START_REF] Mi | HERO: A hybrid connectivity restoration framework for mobile multi-agent networks[END_REF], a hybrid connectivity restoration framework (HERO) was presented by integrating the proposed connectivity restoration algorithm with a potential function based dynamic motion controller. HERO is able to restore the connectivity of mobile networks subjected to a single or multiple simultaneous agent failures.
Other works based on decentralized approaches are proposed to improve route recovery in MANET. The authors in [START_REF] Konak | A flocking-based approach to maintain connectivity in mobile wireless ad hoc networks[END_REF] present a decentralized approach for maintaining the connectivity of a MANET using autonomous and intelligent agents. However, this approach requires an adaptive strategy to guide agent behaviors, for which the effectiveness depends highly on the density of the network. In [START_REF] Manickavelu | Particle swarm optimization (PSO)-based node and link lifetime prediction algorithm for route recovery in MANET[END_REF], a particle swarm optimization (PSO)-based lifetime prediction algorithm for route recovery in MANET was proposed. This technique predicts the lifetime of link and node in the available bandwidth based on parameters such as the relative mobility of nodes and energy drain rate, etc. Each particle contains a local memory space to store the best position experienced by the particle until then. Using this information, the velocity of the particle can be estimated. Using predictions, the parameters are fuzzified and fuzzy rules have been formed to decide on the node status.
Based on the drawbacks of previous works, in [START_REF] Vodnala | A Backbone based multicast routing protocol for route recovery in MANETs[END_REF], a backbone based multicast routing protocol was proposed to recover the link failures in MANETs. This protocol is a hybrid protocol which combines the features of both trees-based and mesh based routing techniques.
Another class of approaches is based on using backup paths to reduce node failure and link failure in MANETs. For example, in [START_REF] Zadin | Maintaining path stability with node failure in mobile ad hoc networks[END_REF], to discover efficient stable communication channels with longer lifetimes and an increased number of packets delivered, a multi-path routing protocol was proposed to protect intermediate nodes of the path instead of just the links between two neighboring nodes. In [START_REF] Abujassar | Mitigation fault of node mobility for the MANET networks by constructing a backup path with loop free: enhance the recovery mechanism for pro-active MANET protocol[END_REF], several recovery mechanisms were proposed to reduce the impact of failure. In this work, the author compute the backup route through a routing table to alleviate the impact of this on the network by finding an alternative path ready to use to pass the traffic when the primary one fails.
In the current research, we analyze the effect of the collective motion of soldiers on the performance of a military wireless network in a simulation approach. Then, we propose different performance metrics in order to give more attention to network topology change during the mobility of soldiers in the battlefield. In addition, we analyze the effect of collective motion on the performance of the network with the presence of enemies in the battlefield. As far as we know, the present paper is the first work dealing with simulation of the collective motion of a soldier group with the presence of enemies in the wireless communication model on the battlefield.
Dismounted soldiers collective motion approach
This section presents the basic concept of our proposed group mobility model, describing how the movement of dismounted soldiers and enemies is modeled on the battlefield based on a set of simple rules. It illustrates how the superposition of these simple rules is used to govern the dynamics of autonomous agents belonging to two distinct groups (soldiers and enemies). The most important consideration of the group mobility model in trying to simulate the self-organizing behaviors of dismounted soldiers and enemies is due to the fact that this model is the most appropriate one for modeling and simulation of mobility on the battlefield area [START_REF] Hong | A group mobility model for ad hoc wireless networks[END_REF][START_REF] Zhou | Group and swarm mobility models for ad hoc network scenarios using virtual tracks[END_REF][START_REF] Fongen | A military mobility model for manet research, Parallel and Distributed Computing and Networks[END_REF]. In this paper, we assume that dismounted soldiers are equipped with wireless communication devices based on IEEE 802.11 and are able to communicate directly with the soldier leader or indirectly via multi-hop routing.
The collective motion behavior
Collective motion is a self-organized behavior of independent agents. Many forms of social cohesion and aggregation behavior found in nature (e.g., ant colonies, flocks of birds, fish schools, and soldiers on the battlefield) involve individuals carrying out their tasks collectively in order to contribute to a common goal. Even though individuals cooperate to accomplish a given global complex mission (e.g., foraging, migration, nest building, or defense against enemies on the battlefield), an individual has only a local perception of the surrounding environment and displays specific behavioral tendencies which are governed by a few simple rules. In addition, [START_REF] Reynolds | Flocks, herds and schools: A distributed behavioral model[END_REF][START_REF] Couzin | Collective memory and spatial sorting in animal groups[END_REF] introduced three basic rules.
• cohesion: attempt to stay close to each other.
• separation: behavior that avoids collisions by causing a soldier to steer away from all of its neighbors.
• alignment: behavior that causes a particular soldier to line up with soldiers close by.
In [START_REF] Reynolds | Flocks, herds and schools: A distributed behavioral model[END_REF][START_REF] Couzin | Collective memory and spatial sorting in animal groups[END_REF], to maintain a flock cohesion, the agent is able to make independent decisions and interacts with a fixed number of neighbors, rather than with all neighbors in the flock. In [START_REF] Couzin | Collective memory and spatial sorting in animal groups[END_REF], the authors demonstrated that individuals in a bird flock can change their position relative to others based only on local information.
In this article, we proposed a group mobility model based on a collective motion approach for military wireless communications on the battlefield, where a group of dismounted soldiers moves in a limited battlefield area. A potential field algorithm is used to generate movement for each soldier. The perceptual field of each dismounted soldier is divided into the zone of repulsion (ZoR), the zone of orientation (ZoO) and the zone of attraction (ZoA), as shown in Fig. 2. Each dismounted soldier attempts to maintain a minimum distance from others within the ZoR. Within the ZoO, a dismounted soldier aligns itself with its neighbors and within the ZoA, a dismounted soldier moves towards the group so as not to be on the periphery or be left behind. Soldiers cannot see too far, thus, there is no interaction with others located outside the ZoA.
The proposed strategy of collective motion in this article is similar to the rule-based process in Couzin's model [START_REF] Couzin | Collective memory and spatial sorting in animal groups[END_REF] and Reynolds's model [START_REF] Reynolds | Flocks, herds and schools: A distributed behavioral model[END_REF]. However, in order to model the interaction of soldiers and enemies, we need to define some new behavioral rules assigned to soldiers when detecting enemy attacks and vice-versa (see subsection 3.1). Fig. 2. Representation of a member in the model centered at the origin: zor=zone of repulsion, zoo=zone of orientation, zoa=zone of attraction, zore=zone of repulsion from enemies, α=field of perception ahead of the member.
Behavioral rules assigned to dismounted soldiers and enemies

Behavioral rules assigned to dismounted soldiers

1) A dismounted soldier attempts to maintain a maximum distance between himself and the enemies in his neighborhood at all times, regardless of his location zone. This rule has the highest priority.
2) The leader dismounted soldier attempts to move in any direction if there are no enemies in his neighborhood.
3) If the dismounted soldier is not performing any previous rule, he tends to maintain a minimum distance between himself and the other soldiers within the zone of repulsion. This rule also has a high priority (less than rule 1 or rule 2).
4) If the dismounted soldier is not performing rule 3, he tends to align himself with the leader dismounted soldier within the zone of orientation (ZoO), and to move towards the position of the leader dismounted soldier within the zone of attraction (ZoA).
5) If the leader dismounted soldier is neither in ZoO nor in ZoA, the dismounted soldier tends to align himself with his neighbors within the zone of orientation (ZoO), and to move towards the group within the zone of attraction (ZoA).
Behavioral rules assigned to enemies
1) The enemy leader attempts to move towards the leader soldier, whenever he is within his neighborhood, in order to attack the soldiers group.
2) The enemy attempts to attack soldiers if there are one or more soldiers in his neighborhood, regardless of his location zone. For that, the enemy moves towards the group of soldiers within his neighborhood. This rule has the highest priority for enemies.
3) If an enemy is not performing any previous rule, he tends to maintain a minimum distance between himself and the other enemies within the zone of repulsion. This rule also has a high priority (less than rule 1 or rule 2).
4) If the enemy is not performing rule 3, he tends to align himself with the leader enemy within the zone of orientation (ZoO), and to move towards the position of the leader enemy within the zone of attraction (ZoA).
5) If the leader enemy is neither in ZoO nor in ZoA, the enemy tends to align himself with his neighbors within the zone of orientation (ZoO), and to move towards the group within the zone of attraction (ZoA).
Fig. 3. Representation of the area around a dismounted soldier placed in the center: ZoR is the zone of repulsion, ZoO is the zone of orientation and ZoA is the zone of attraction. α degrees is the field of view, R is the communication range, d is the direction of movement for commander member (leader soldier).
Behavioral rules description
A simulation model of N soldiers and M enemies was created for which we assume that individuals move at a constant speed of v 0 units per second. Each soldier is characterized by his location p i (t) and velocity v i (t) = v 0 × d i (t) of direction d i (t) at time t. In each time step t, a member i assesses the position and/or orientation of neighbors in its local neighborhood within three non-overlapping behavioral zones (Fig. 2) to determine its desired direction of motion d i (t + dt) at time t + dt. After that, the member i turns towards the direction vector d i (t + dt) by the turning angle α i , where
α i = ϕ + ξ i (1)
where ϕ = σ × dt is the turning angle and σ is the turning rate. ξ i = θ × dt × rand(0, 1), is a random uncertainty variable and θ is a noise parameter. The location of the member i at time t + dt is given by:
p i (t + dt) = p i (t) + v i (t + dt)dt (2)
In addition, introducing the uncertainty in the movement of the military group may be useful as a means of analysis of the effects of various factors on the battlefield such as crossing in difficult terrains (mountains, forests, and rivers), surveying a military region or buildings from enemies.
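To make the update rule concrete, the following Python sketch shows one possible implementation of the per-step heading and position update of Eqs. (1)-(2). It is only an illustration under our own naming assumptions (turn_towards, sigma_turn for the turning rate σ, theta for the noise parameter θ); it is not the authors' NS-2/Tcl implementation, and capping the turn at the angle actually needed to face the desired direction is our interpretation.

```python
import numpy as np

def turn_towards(d_current, d_desired, sigma_turn, theta, dt, rng):
    """Rotate the current heading towards the desired heading by at most
    alpha_i = sigma_turn*dt + theta*dt*rand(0,1) radians (Eq. (1))."""
    a_cur = np.arctan2(d_current[1], d_current[0])
    a_des = np.arctan2(d_desired[1], d_desired[0])
    diff = (a_des - a_cur + np.pi) % (2.0 * np.pi) - np.pi   # signed angle to the target
    alpha = sigma_turn * dt + theta * dt * rng.random()      # turning angle with noise
    a_new = a_cur + np.sign(diff) * min(abs(diff), alpha)    # do not overshoot the target
    return np.array([np.cos(a_new), np.sin(a_new)])

def step(position, heading, d_desired, v0, sigma_turn, theta, dt, rng):
    """Advance one member by one time step: Eq. (1) for the heading, Eq. (2) for the position."""
    heading = turn_towards(heading, d_desired, sigma_turn, theta, dt, rng)
    velocity = v0 * heading
    position = position + velocity * dt
    return position, heading

# Example usage (hypothetical values):
# rng = np.random.default_rng(0)
# p, h = step(np.zeros(2), np.array([1.0, 0.0]), np.array([0.0, 1.0]),
#             v0=1.0, sigma_turn=1.0, theta=0.2, dt=1.0, rng=rng)
```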
Behavioral rules description of a dismounted soldier
Repulsion behavior from enemies
Each dismounted soldier i (this may be the leader soldier) attempts to maintain maximum distance from n en enemies in its neighborhood regardless of their location zone. In most cases, enemies are detected once they are within the zone of attraction (ZoA) of the dismounted soldier i. The direction of repulsion from enemies is given as follows:
$$d_{re}(t + dt) = -\sum_{j=1}^{n_{en}} \frac{r_{ij}}{|r_{ij}|} \qquad (3)$$
where r ij is the unit vector from the location point of i in the direction of the enemy j. This behavioral rule has the highest priority in the model, so that if n en > 0, the desired direction
d i (t + dt) = d re (t + dt).
This repulsion behavior can be interpreted as soldiers avoiding danger space, or attacks from enemies. If no enemies are within any zone, the dismounted soldier responds to other behavioral rules.
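A minimal sketch of Eq. (3), assuming the caller has already collected the positions of the n_en enemies detected in the soldier's perception range (the function name and the 2-D NumPy representation are our own choices). The enemy attack direction of Eq. (7) is the same sum without the minus sign.

```python
import numpy as np

def repulsion_from_enemies(p_i, enemy_positions):
    """Eq. (3): d_re = -sum_j r_ij/|r_ij|, summed over the detected enemies."""
    d_re = np.zeros(2)
    for p_j in enemy_positions:
        r_ij = p_j - p_i                      # vector from soldier i towards enemy j
        norm = np.linalg.norm(r_ij)
        if norm > 1e-9:                       # ignore coincident positions
            d_re -= r_ij / norm               # move away from every detected enemy
    return d_re
```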
Repulsion behavior from neighbors
A dismounted soldier i attempts to maintain a minimum distance from the others soldiers within a zone of repulsion (ZoR), modeled as a circle, centered on the dismounted soldier i, with radius R 0 . If n r neighbors are present in this zone at time t, the direction of repulsion from neighbors is given as follows:
$$d_r(t + dt) = -\sum_{j \neq i}^{n_r} \frac{r_{ij}}{|r_{ij}|} \qquad (4)$$
This behavioral rule has priority over the other behaviors, but less than the priority of the repulsion-from-enemies behavior. This zone can be interpreted as soldiers maintaining personal space, or avoiding collisions. Moreover, this repulsion behavior corresponds to a frequently observed behavior of animals in nature (Krause and Ruxton, 2002) [START_REF] Couzin | Collective memory and spatial sorting in animal groups[END_REF]. If no neighbors are within the zone of repulsion (n_r = 0), the dismounted soldier responds to the other rules within the zone of orientation (ZoO) and the zone of attraction (ZoA).
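A possible implementation of Eq. (4) is sketched below; here r_zor stands for the repulsion radius R_0 of the text, and returning n_r lets the caller decide whether the rule fires (n_r = 0 hands control to the orientation/attraction rules).

```python
import numpy as np

def repulsion_from_neighbours(p_i, soldier_positions, r_zor):
    """Eq. (4): steer away from all soldiers found inside the zone of repulsion
    (a disc of radius r_zor centred on soldier i)."""
    d_r = np.zeros(2)
    n_r = 0
    for p_j in soldier_positions:
        r_ij = p_j - p_i
        dist = np.linalg.norm(r_ij)
        if 0.0 < dist <= r_zor:               # neighbour j lies inside the ZoR
            d_r -= r_ij / dist
            n_r += 1
    return d_r, n_r                           # n_r == 0 means the rule does not fire
```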
Cohesion behavior
Cohesion behavior is the opposite of repulsion behavior. This behavior encourages a dismounted soldier to move closer to other neighbors. The cohesion direction of a dismounted soldier i is given as follows:
$$d_i(t + dt) = \begin{cases} \tfrac{1}{2}\big(d_o(t + dt) + d_a(t + dt)\big) & n_o \ge 1,\ n_a \ge 1,\\[2pt] d_o(t + dt) & n_o \ge 1,\ n_a = 0,\\[2pt] d_a(t + dt) & n_a \ge 1,\ n_o = 0, \end{cases}$$
where n_o and n_a are the number of neighbors in the zone of orientation (ZoO) and the zone of attraction (ZoA) respectively, d_o(t + dt) is the direction of alignment with neighbors within the zone of orientation (ZoO), and d_a(t + dt) is the direction of attraction towards the positions of soldiers within the zone of attraction (ZoA). The widths of the zones (ZoO) and (ZoA) are defined as Δr_o = r_o − r_r and Δr_a = r_a − r_o, where r_o and r_a determine the respective zone boundaries. The alignment and attraction directions are given as follows:
$$d_o(t + dt) = \sum_{j=1}^{n_o} \frac{v_j}{|v_j|} \qquad (5)$$
$$d_a(t + dt) = \sum_{j \neq i}^{n_a} \frac{r_{ij}}{|r_{ij}|} \qquad (6)$$
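The alignment and attraction terms of Eqs. (5)-(6), together with the case distinction above, could be combined as in the following sketch. The zone boundaries r_zor < r_o < r_a are assumptions matching R_0, r_o and r_a in the text, and the leader-biased variants of rules 4 and 5 are omitted for brevity.

```python
import numpy as np

def cohesion_direction(p_i, neighbours, r_zor, r_o, r_a):
    """Eqs. (5)-(6): alignment with neighbours in the ZoO (r_zor < dist <= r_o) and
    attraction towards neighbours in the ZoA (r_o < dist <= r_a).
    `neighbours` is a list of (position, heading) pairs for the other soldiers."""
    d_o = np.zeros(2)   # alignment contribution, Eq. (5)
    d_a = np.zeros(2)   # attraction contribution, Eq. (6)
    n_o = n_a = 0
    for p_j, h_j in neighbours:
        r_ij = p_j - p_i
        dist = np.linalg.norm(r_ij)
        speed = np.linalg.norm(h_j)
        if r_zor < dist <= r_o and speed > 1e-9:
            d_o += h_j / speed                # align with neighbour's heading
            n_o += 1
        elif r_o < dist <= r_a and dist > 1e-9:
            d_a += r_ij / dist                # move towards neighbour's position
            n_a += 1
    if n_o >= 1 and n_a >= 1:
        return 0.5 * (d_o + d_a)
    if n_o >= 1:
        return d_o
    if n_a >= 1:
        return d_a
    return np.zeros(2)                        # no neighbour in ZoO/ZoA
```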
Behavioral rules description of an enemy

Enemy attacks

In the attack strategy of the enemies considered in this paper, the leader enemy attempts to move towards the leader soldier, whenever he is within his neighborhood, in order to attack the soldiers group. Moreover, an enemy i attempts to attack soldiers if there are one or more soldiers in his neighborhood, regardless of his location zone. For that, the enemy moves towards the group of soldiers within his neighborhood. In the simulations, we suppose that this neighborhood is limited by the zone of attraction (ZoA). Hence, if n_sol soldiers are within ZoA (centered at the position of the enemy i), the direction of attack is given as follows:
$$d_{at}(t + dt) = \sum_{j=1}^{n_{sol}} \frac{r_{ij}}{|r_{ij}|} \qquad (7)$$
where r_ij is the unit vector from the location point of i in the direction of the dismounted soldier j. This behavioral rule has the highest priority in the model, so that if n_sol > 0, then the desired direction d_i(t + dt) = d_at(t + dt).
If no soldiers are within ZoA, the enemy responds to other behavioral rules.
Repulsion and Cohesion behaviors for the enemies
The repulsion behavior from neighbors (enemies) and the cohesion behavior are identical to those of the soldiers, due to the fact that these behavioral rules govern individual-level interactions within each group.
After the above process has been performed for every member (soldier and enemy), all members move towards their desired direction d_i(t + dt) at time (t + dt) with the velocity vector v_i(t + dt). We apply this process at each time step dt, where each member independently performs a specific rule according to its interactions with the other members in its neighborhood.
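The overall per-time-step procedure can then be summarized by the following illustrative glue code, which strings together the sketches given above and applies the soldier rules in their priority order (flee enemies, then repulsion, then cohesion). The data layout (dictionaries of positions and headings) and parameter names are hypothetical; enemies would be advanced analogously, with the attack direction of Eq. (7) taking the place of the repulsion rule when soldiers are detected.

```python
import numpy as np

# Reuses the earlier sketches: repulsion_from_enemies, repulsion_from_neighbours,
# cohesion_direction and step.

def desired_direction_soldier(i, soldiers, enemies, params):
    """Priority order of the soldier rules: (1) flee enemies, (3) repulsion, (4)/(5) cohesion."""
    p_i = soldiers["pos"][i]
    visible_enemies = [p for p in enemies["pos"]
                       if np.linalg.norm(p - p_i) <= params["r_a"]]
    if visible_enemies:                                    # rule 1: highest priority
        return repulsion_from_enemies(p_i, visible_enemies)
    others = [p for j, p in enumerate(soldiers["pos"]) if j != i]
    d_r, n_r = repulsion_from_neighbours(p_i, others, params["r_zor"])
    if n_r > 0:                                            # rule 3
        return d_r
    neigh = [(soldiers["pos"][j], soldiers["head"][j])     # rules 4/5 (leader bias omitted)
             for j in range(len(soldiers["pos"])) if j != i]
    d_c = cohesion_direction(p_i, neigh, params["r_zor"], params["r_o"], params["r_a"])
    return d_c if np.linalg.norm(d_c) > 0 else soldiers["head"][i]

def simulate_step(soldiers, enemies, params, rng):
    """Synchronous update: compute every desired direction first, then move everyone."""
    desired = [desired_direction_soldier(i, soldiers, enemies, params)
               for i in range(len(soldiers["pos"]))]
    for i, d in enumerate(desired):
        soldiers["pos"][i], soldiers["head"][i] = step(
            soldiers["pos"][i], soldiers["head"][i], d,
            params["v0"], params["sigma_turn"], params["theta"], params["dt"], rng)
    # Enemies are updated in the same way, with Eq. (7) (the sign-flipped sum)
    # used as the highest-priority rule when soldiers are detected.
```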
Simulation analysis
Performance Metrics
Here we define several metrics used to measure the performance of the network. We monitor the change in topology caused by the motion of soldiers on the battlefield, with and without the presence of enemies. Our main concern is the dynamic nature of the network topology and its effect on network performance. The measured performance metrics are described in detail as follows.
• Metric 1: The throughput, defined as the average rate of successful packets delivery over a communication channel to sink node.
• Metric 2: The forwarded throughput, defined as the average rate of successfully forwarded packets through intermediate nodes over a communication channel.
• Metric 3: The packet loss, defined as the total dropped packets by source or intermediate nodes during transmission.
• Metric 4: The system speed, V a , defined as the average velocity of soldiers during the simulation, can be calculated as:
$$V_a = \frac{1}{N\,v_0}\,\Big|\sum_{i=1}^{N} v_i(t)\Big| \qquad (8)$$
where N is the number of soldiers in the battlefield, and v 0 is the initial velocity assigned to soldiers at the beginning of the simulation execution.
• Metric 5: The path lifetime indicates how long the path is still valid before receiving a path update or path error message.
• Metric 6: The path length, defined as the total number of hops traveled by the packet to reach the destination.
• Metric 7: The packet delivery ratio, defined as the number of successfully received packets at the destination to the total number of packets which are expected to be received at the destination.
• Metric 8: The group size, defined as the number of soldiers who share membership in the same group, where two groups are considered geographically isolated from each other when they are separated by more than a distance d_ig (see footnote 1). A sketch of how Metrics 4 and 8 can be computed from the soldiers' positions and velocities is given after this list.
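Metrics 1-3 and 5-7 are obtained directly from the routing and trace files of the simulator, but Metrics 4 and 8 only depend on the soldiers' positions and velocities, so they can be computed as in the following sketch (d_ig = 80 m as in footnote 1; the union-find grouping is one possible way of extracting the connected groups).

```python
import numpy as np

def system_speed(velocities, v0):
    """Metric 4, Eq. (8): V_a = |sum_i v_i| / (N * v0); V_a ~ 1 for coherent motion,
    V_a ~ 0 when headings are random."""
    v = np.asarray(velocities)                # shape (N, 2)
    return np.linalg.norm(v.sum(axis=0)) / (len(v) * v0)

def group_sizes(positions, d_ig=80.0):
    """Metric 8: sizes of the groups obtained by linking every pair of soldiers
    closer than d_ig and taking connected components (simple union-find)."""
    pos = np.asarray(positions)
    n = len(pos)
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]     # path halving
            a = parent[a]
        return a

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pos[i] - pos[j]) <= d_ig:
                parent[find(i)] = find(j)
    sizes = {}
    for i in range(n):
        root = find(i)
        sizes[root] = sizes.get(root, 0) + 1
    return sorted(sizes.values(), reverse=True)
```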
Analysis of the model without presence of enemies
Our network contained N mobile nodes/soldiers deployed close to each other in a simulation area of 5000×5000 m². The movement pattern is based on the collective motion approach. At each time step dt (see footnote 2), the soldiers/nodes are able to send data packets towards the sink node/commander via the AODV routing protocol with a data generation rate λ.
Effects of data generation rate λ
Fig. 4a-b show the effect of the data generation rate on both the average throughput and the average packet loss for varying values of noise θ. The figures show that the throughput gradually increases as λ increases, which is quite reasonable given the available network bandwidth. On the contrary, the noise has an obvious impact on the throughput, i.e., it decreases drastically as the noise grows. The reason for this is that noise exerts a perturbing effect on the mobility of soldiers; the resulting topology changes have a great impact on the routing protocol performance. We next consider packet loss. Fig. 4b shows that the average packet loss increases significantly with λ, because of buffer overflow and collisions due to network congestion. Furthermore, noise strongly affects the packet loss rate. The reason for this is that noise affects the stability of links by causing unpredictable mobility behavior, which directly increases packet loss due to link failures and indirectly increases buffer overflow because of unreachable nodes. In the next subsection, we will extend the analysis of the noise effect to different network performance metrics, to investigate how noise (θ) affects the collective motion of soldiers and the network topology, and then its impact on inter-node communication within the resulting network topology.

Footnote 1: The distance threshold between two isolated groups is d_ig = 80 m.
Footnote 2: The time step dt equals 1 second.
Effects of noise θ
It has been demonstrated that collective motion models [START_REF] Vicsek | Novel type of phase transition in a system of self-driven particles[END_REF] exhibit a phase transition which occurs when the noise is increased. Indeed, for small values of noise, the average system speed is approximatively equal to one (see Fig. 5c). This phase which is called "finite net transport phase" corresponds to a coherently moving phase where almost all nodes move with the same direction. However, for high values of noise, this mean velocity is approximately zero; reflecting the random aspect of the directions of the moving nodes. This phase is called, "no transport phase". Hence, a phase transition from "finite net transport phase" to "no transport phase" occurs at some critical value θ c ≈ 0.6 (see Fig. 5c). From Fig. 5a, we see that the average throughput decreases significantly versus noise. Hence, the throughput remains almost constant in the finite net transport phase; while it decreases rapidly towards small values in a no transport phase. The main effect on decreasing throughput results especially from increasing noise, which hardly causes topology change due mainly to perturbation of the collective motion model. Thus, an arbitrary partition of the network occurs when noise increases. Fig. 5b illustrates the effect of noise on average packet loss. It is found that the packets will begin to be removed from the network when the noise exceeds a certain level. Beyond this value, the network enters into the congested phase. The results show also that under highest noise, packet loss is high, due mainly to bad condition of network topology where the link failures rate is very important.
In wireless mobile networks, nodes move frequently, and to cover the disconnected segments of the network, nodes may act as routers to forward packets to other nodes. In order to study the influence of noise on the appearance of relay nodes in the network, we show in Fig. 5d the average forwarded throughput versus noise under different generation rate values λ. The average forwarded throughput significantly increases with increasing noise until reaching a maximum value, and then decreases at higher values of noise. If the noise value is very small, then all the nodes move close to each other and there is no need for relay nodes to transfer the data to the sink node. Moreover, increasing noise provokes a considerable dispersion of nodes in the network area and then leads to the appearance of long paths and link failures, where direct transmission may be insufficient to reach the destination. Hence, this increases the appearance of relay nodes in the system. However, if the noise value is very high, the nodes become almost disconnected and then no communication can be achieved via relay nodes. This explains the decrease of the forwarded throughput (see Fig. 5d). The average path lifetime also drops sharply under high noise (Fig. 6a). The reason for this is that both the commander and the soldiers move separately in a disordered fashion. Thus, smaller path lifetimes are due principally to the inability of nodes to keep connectivity for a long time because of the high dispersion of nodes. Furthermore, we see clearly from Fig. 6b that high values of noise lead to the establishment of long paths in order to reach the destination. However, these paths are frequently perturbed by unexpected movements of soldiers in the battlefield. As a consequence, this situation leads to a significant reduction of the communication capacity between the soldiers and the leader soldier in the battlefield (see Fig. 5a).
Fig. 6c reports the performance of the network obtained in terms of packet delivery ratio under different values of θ. Fig. 6c shows that the network achieves an overall higher rate of received packets under lower values of noise. The reason for this is that under the low value of noise, soldiers move coherently in a collective motion structure (finite net transport phase) and then the communication paths are more stable and reliable. Therefore, most existing communication paths are those emanating from soldiers that are a single hop from a commander. Furthermore, under a low noise values, all nodes are located close to each other. So, obviously, there is no need for establishing longer paths. This optimizes the capacity of the overall communication network. But, under a higher values of noise, the network is segmented into several groups sizes which are changing frequently over time (see Fig. 6d). Hence, we see that at low values of noise, only one large group exists in the network. However, increasing noise may create the dispersion of nodes and then several groups of different sizes may appear in the system. In particular, high frequency of disconnected nodes is present in the network. Thus, this segmentation into different groups has a direct effect on the degradation of network performance as seen from (Fig. 5a and Fig. 6c).
Effects of initial velocity v 0
We assume that our network will be strongly affected by the initial velocity of the agents, whereby their communication efficiency may be reduced, especially when associated with an increase of noise, in which case agents may unexpectedly lose connections with their neighborhood due to the relatively quick change in the direction of their velocity vectors. It has been shown that if v_0 goes to infinity, the agents become completely mixed between two states, similar to the mean-field behavior of a ferromagnet [START_REF] Vicsek | Novel type of phase transition in a system of self-driven particles[END_REF]. The first state corresponds to perfect alignment of the group while the second corresponds to a pure random state. Therefore, we expect that the average velocity of the group will be exactly equal to 0.5. At the other extreme, when v_0 equals zero, the agents are stationary and do not move. To evaluate the effects of mobility on the wireless network of soldiers, we analyzed, with simulations, the effect of the initial velocity v_0 on different performance metrics with varying values of θ = {0.2, 0.5}. The results of our simulation show that for θ = 0.2, increasing the initial velocity v_0 affects the mean velocity only slightly until it reaches v_0 ≈ 4 (see Fig. 7c). Beyond this value, it decreases strongly until it reaches the value 0.5 for v_0 ≥ 8. However, for θ = 0.5, we found that the mean velocity decreases until it reaches a minimum and then increases until it reaches the value 0.5. Moreover, just as predicted, we find that for higher values of v_0, the mean velocity remains constant at the value 0.5 regardless of the noise value.
Fig. 7a-b show that for the lowest initial velocities the network performs better, in terms of average throughput and packet loss, because low velocities provide favorable conditions that are sufficient for establishing very stable paths and also avoid network partition. On the contrary, it produces worse results under the highest initial velocities, because high velocities cause sudden and severe disruptions to ongoing network routing, resulting in lower throughput and high packet loss. Fig. 7d shows that, for low values of noise, the forwarded throughput decreases gradually with increasing initial velocity, while, in the case of high noise, it decreases exponentially. The combined effect of increasing both the initial velocity of soldiers and the noise provokes a drastic dispersion of the network topology, leading to a segmentation of the network topology into several small groups or even into isolated nodes.
Fig. 8 plots the performance of the network, in the case of low noise value, in terms of the distributions of paths lifetimes, path length, hop length and group size. Fig. 8a-b show how the established paths can be affected by different values of v 0 (i.e., the initial velocity). We can see that when v 0 is low, communication channels of long lifetimes and short path lengths persist in the network. In addition, the appearance of communication channels of long lifetimes is principally linked to collective motion of nodes in the network. This can be seen clearly from Fig. 8d where only one large group exists in the system. However, increasing the mobility causes the appearance of a high number of established paths and most of them have short lifetimes. Thus, nodes aren't able to keep connectivity for a long time. Indeed, higher mobility will disperse the nodes into small groups, and data can be transferred to the sink only through communication channels having long paths. This confirms well the instability of the network due to a frequent topology change, where a lot of paths should be created in order to conduct packets towards the commander. Hence, much overheads is required to send traffic from the soldiers to the commander; affecting badly the network efficiency. To achieve better network efficiency, the soldiers must move slower in order to maintain the stability of their network. Indeed, we can see from Fig. 8c that when the initial velocity is low, packet delivery rate is improved and a higher percentage of packets are received by the commander through short paths. Moreover, to investigate the effect of initial velocity associated with a high level of noise, we repeated the same network scenario and calculations with different initial velocities and for θ = 0.5. This decision may highlight some features concerning the relationship between initial velocity and noise. We see from Fig. 9 that, for v 0 = 1 and θ = 0.5, the distribution of path lifetimes consistently shows a greater number of shortest lifetime paths compared with results in the case of v 0 = 1 and θ = 0.2 (see Fig. 9a). This is obvious since, as we have depicted before, noise invokes randomness and perturbations to the structure of the communication network. More interestingly, we show that when both the initial velocity and noise are very high, the nodes are rarely able to establish paths to the destination, due mainly to the high uncertainty of nodes' mobility within the network area (see Fig. 9b). In addition, we can expect that the association of higher levels of initial velocity and noise expands greatly the problem of the network partition and then causes severe topology destruction (see Fig. 9d). This can be seen clearly from experiment results shown in Fig. 9c, where the network efficiency is almost absent.
Analysis of the model with the presence of enemies
In this part, we use the same previous network configuration and the same group of soldiers, but, we add a varied enemy numbers in the battlefield near the group of soldiers. We assume that the enemy's leadership is able to move towards the soldier's leadership based on Military-intelligence, whereas the soldier's leadership is able to move randomly in the battlefield area. Furthermore, soldiers and enemies must necessarily follow the collective motion pattern separately. But, the enemies can attack soldiers in the case when an enemy remarks one or more soldier's existence in their field of view. Accordingly, enemies can't establish communication with soldiers or forward data flow to soldiers.
We analyzed the effect of increasing the number of enemies (η) in the battlefield on different system metrics. The snapshots of the model in Fig. 12 show clearly that increasing the number of enemies disperses the collective motion of the soldiers. Fig. 10 shows that with increasing η the average throughput at the sink node decreases almost linearly, while the average velocity decreases exponentially (Fig. 10a and Fig. 10c). Obviously, the presence of enemies in the battlefield provokes the dispersion of the group of soldiers, which try to escape the enemy attacks. This is clearly visible in the distribution of group sizes (Fig. 11d), where a large percentage of isolated nodes is present.
In addition, we see that both the decrease of the forwarded throughput and the increase of the packet loss follow the variation of the average velocity (see Fig. 10b-d). Indeed, with increasing η we observe an exponential decrease of the average velocity and, correspondingly, an exponential decrease of the forwarded throughput together with an exponential increase of the packet loss. Moreover, once the average velocity stabilizes, for η >= 44, these two metrics also stabilize at the same value of η. We therefore conclude that the presence of enemies not only disperses the group of soldiers but also destroys their communication network, so that the quantity of data delivered to the commander decreases as the number of enemies increases.
Fig. 11 depicts the distributions of path lifetimes, path lengths, hop counts and group sizes for different numbers of enemies. Fig. 11a-b clearly show the appearance of a high number of long paths with short lifetimes in the soldiers' wireless communication network. The presence of enemies in the battlefield thus causes segmentation and disruption of the network topology and therefore degrades the network performance (see Fig. 11c). The effect of enemy attacks is somewhat similar to that of the noise parameter; the main difference is that enemy attacks are more realistic than the noise effect. Whereas increasing the noise level decreases the throughput at the sink node exponentially, increasing the number of enemies decreases it proportionally. The network suffers from more link failures and topology changes because the soldiers are escaping from the enemy, and as a consequence only short-lived paths can be maintained between the soldiers. Overall, the results show that the performance of the network is affected negatively by the presence of enemies in the battlefield: a high rate of dropped packets and a low packet delivery rate are observed. Indeed, in the presence of enemies, the highly dynamic and sparse distribution of soldiers in the battlefield decreases path lifetimes and increases path lengths, which sharply degrades the network performance. There is thus a need to develop new infrastructures based on mobile ad hoc networks that are capable of routing the soldiers' urgent data across the battlefield in a short time.
Comparison with other existing mobility models
Our study used several monitoring metrics to assess the performance of our group mobility model in a complex environment (the battlefield). The model is designed to simulate the movement of dismounted soldiers with a leader on the battlefield. To quantify the reliability and the features of our model, a comparison with other mobility models is needed. Since the simulation of enemy attacks is not treated in most existing group mobility models, including recent ones, only a comparison without the presence of enemies can be performed in this section.
The unpredictable mobility of soldiers in the battlefield degrades path length and path lifetime, because speeding up or slowing down changes the network topology. Comparing our model with other models in terms of path length and path lifetime therefore gives additional insight into their respective strengths and features.
A performance comparison of our proposed group mobility model with several existing mobility models is provided under perturbation factors of the network topology analogous to the noise effect defined in our model. Our mobility model was compared with RPGM [START_REF] Hong | A group mobility model for ad hoc wireless networks[END_REF], Nomadic [START_REF] Sánchez | ANEJOS: a java based simulator for ad hoc networks[END_REF], GFMM [START_REF] Williams | Group force mobility model and its obstacle avoidance capability[END_REF], STEPS [START_REF] Nguyen | Modelling mobile opportunistic networks-From mobility to structural and behavioural analysis[END_REF] and SLAW [START_REF] Lee | Slaw: A new mobility model for human walks[END_REF]. For this comparison, we implemented the above mobility models in a scenario that considers one special node as the sink (the commander), while the other members act as senders (the soldiers). In order to introduce into the existing mobility models the same kind of randomness and perturbation of the network topology that the noise parameter (θ) provides in our model, we identified through simulation, for each mobility model, the parameter that most directly perturbs node mobility (a schematic mapping is sketched after the following list):
• Reference Point Group Mobility Model (RPGM): In each group, every member moves to a randomly chosen location within a circular neighborhood of radius R around its reference point. The movement around the reference point follows the Random Waypoint model. Our analysis and simulation experiments showed that the greater the radius, the greater the fluctuation and uncertainty of the mobility.
• Nomadic Community Mobility Model: This is similar to RPGM in that every member also moves to a randomly chosen location within a circular neighborhood of radius R around its reference point; Nomadic can be considered a special case of the RPGM model.
• Group Force Mobility Model (GFMM): This model provides a parameter with the same design logic as our noise parameter, namely a speed deviation ratio, which controls how far the speed of a node may deviate from the group speed under GFMM.
• Spatio-Temporal Parametric Stepping (STEPS): A simple parametric mobility model inspired by observable characteristics of human mobility behavior, specifically the spatio-temporal correlation of human movements. It provides an attractor power parameter α. We vary α from low to high values; under the highest values of α, the nodes have a higher probability of staying close to each other, so that the preferential zone plays an attraction role rather than a repulsion one.
• Self-similar Least Action Walk (SLAW): This model expresses human walking patterns on the basis of synthetic mobility traces. Under SLAW, nodes repeatedly choose places to visit in a random order; these places are defined as a set of waypoints, and the choice of the next destination is completely random. Simulation experiments showed that the number of waypoints (β) is the determining factor for the degree of perturbation of the network topology.
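The mapping below sketches how a single perturbation level could be translated into the model-specific parameters listed above. The relations for α and β are taken from the caption of Fig. 13 (α = 10 × (1 - θ), β = 100 × θ + 10); the scaling assumed for the RPGM/Nomadic radius and the GFMM speed deviation is purely illustrative.

def perturbation_parameters(theta):
    # Map a common perturbation level theta in [0, 1] to the model-specific
    # knobs used in the comparison (illustrative values, not the exact setup).
    return {
        "OM_noise": theta,                       # our model: noise probability
        "RPGM_radius_m": 10 + 90 * theta,        # assumed scaling of the neighborhood radius R
        "Nomadic_radius_m": 10 + 90 * theta,     # Nomadic uses the same radius mechanism
        "GFMM_speed_deviation": theta,           # speed deviation ratio
        "STEPS_alpha": 10 * (1 - theta),         # attractor power (Fig. 13 caption)
        "SLAW_waypoints": int(100 * theta + 10), # number of waypoints beta (Fig. 13 caption)
    }

# Example sweep over increasing perturbation levels:
# for theta in (0.0, 0.2, 0.5, 0.8, 1.0):
#     print(perturbation_parameters(theta))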
Fig. 13 shows the average throughput and the packet delivery ratio for different perturbation factors of the network topology. As can be observed from Fig. 13, our model, RPGM and Nomadic exhibit the highest throughput as well as the highest packet delivery (≈100%) under low values of noise and radius R, whereas the other mobility models show the worst performance.
Fig. 14a and Fig. 15a clearly show the benefit of long path lifetimes and of the cohesion between soldiers for our model, RPGM and Nomadic. The existence of long path lifetimes in these three mobility models indicates a strong mobility coherence between soldiers, which may lead to a significant increase in the number of relayed packets and therefore to a better use of network resources. Fig. 15a also shows that GFMM yields results similar to those of the three previous models in terms of group size; in terms of network performance, however, it reaches only a medium level of throughput and packet delivery ratio (≈50%). This is because nodes under GFMM are able to overlap or collide with each other, which generates collisions due to the reception of a high number of packets within a limited coverage area.
As the level of the perturbation factors increases, the network performance degrades for all models except STEPS, which shows a very slight increase in performance when α >= 5. When α falls below 5, the network in the STEPS model becomes scattered and the nodes move in a highly unpredictable manner, causing a segmentation of the network.
Observing the results presented in Fig. 13, Fig. 14 and Fig. 15, the SLAW model shows the worst network performance as β increases. This is because node mobility under SLAW follows a purely random strategy in which each node tries to visit a randomly selected waypoint out of the total number of waypoints (β). In this situation the topology is quite unstable, and consequently the communication links between nodes are unstable or may even become disconnected. Our model, on the other hand, still exhibits the best performance among the compared mobility models in terms of throughput, packet delivery ratio and path lifetime under medium noise values. Fig. 15b clearly shows that the highly stable communications in our model are achieved through wide group distributions of nodes within the network area, which keeps the links between nodes stable so that the network topology is effectively static. Under sufficiently high values of the perturbation factors, the link quality becomes more unstable and the probability of transmission failure increases, thereby increasing the packet loss probability. The appearance of disconnected network segments (Fig. 15c) is therefore due mainly to the degradation of the cohesion between nodes. It can also be noticed that both RPGM and Nomadic outperform our model in terms of packet delivery and path lifetimes (see Fig. 14c). This is because the topology connectivity in our model depends on the dynamics of the neighbors, whereas in RPGM and Nomadic the lead point has a strong influence on every group member, which keeps the communication links stable and available for a long time (see Fig. 14c). However, both RPGM and Nomadic show a high frequency of isolated nodes, and the probability of obtaining a large group size is very small (see Fig. 15c). Finally, under very high values of the perturbation factors, all mobility models suffer a severe performance degradation. This is due principally to the fact that the nodes move in a highly unpredictable manner, with randomly chosen speed and direction, within the network area. Unexpected network segmentation may therefore occur as the network topology and the link capacities change dynamically over time (see Fig. 15d and Fig. 14d).
From the obtained results, we conclude that our model displays a realistic behavior compared with the other mobility models. Only the nodes close to the leader are directly affected by its trajectory, whereas the other nodes are affected by their neighbors, as in the real world. By contrast, under RPGM and Nomadic every node is affected by the lead point, which acts as the leader member. GFMM does not support repulsion between neighbors within the same group, although this is a natural human behavior. Under both SLAW and STEPS, the nodes move freely according to a random mobility model without any notion of cohesion or collective motion.
Conclusions
We have presented several computational statistics on the reliability of the battlefield collective motion model for dismounted soldier groups moving with or without enemy presence, including path lifetimes, path lengths, packet delivery ratio, throughput and packet loss. The simulations demonstrate the effectiveness of collective motion for more reliable paths and improved stability of the network topology. However, noise strongly affects the state of the network topology, causing partitioning and node dispersion. More interesting are the effects of simultaneously increasing the soldiers' noise and initial velocity: the global state of the network is then severely affected, leading to segmentation into isolated nodes and small groups, and in turn to a strong degradation of the communication channels. In the presence of enemies, the throughput of packets received by the commander decreases as the number of enemies increases, due to the limited number of relay nodes and insufficient link quality, which are mainly caused by the high dynamics of the soldiers. The advantage of this model is that it allows members to determine their mobility autonomously on the basis of behavioral rules; importantly, this allows a natural (real-world) selection of rules based only on local information. The model can also simulate two different groups of members (soldiers and enemies), where interactions between members of the same group are based on a collaborative approach. With very simple behavioral rules, our model can be extended to simulate more critical real-world situations. For example, it can incorporate soldiers' cognitive abilities based on memory and real-time interaction with instructions, so that a soldier may be able to make tactical decisions on the battlefield.
Finally, the simulations show that the effectiveness of the network communication depends on the dismounted soldiers' capability for intra-group coherence. Since this is not easy to achieve in delicate battlefield situations, an adaptive and intelligent strategy for maintaining the network topology might be beneficial. A further study, on whether the army's mobility pattern can improve energy consumption, will be reported in future work.
Fig. 1. Illustration of a dismounted soldier group in a modern military communications infrastructure.
Fig. 4. Experiment 1: Effect of increasing the generation rate (λ) with parameters v 0 = 1 (m/s); θ is the noise probability parameter.
Fig. 5. Effect of increasing noise θ with parameters v 0 = 1 (m/s): (a) Average throughput, (b) Average packet loss, (c) Average velocity, and (d) Average forwarded throughput.
Fig. 6. Effect of increasing noise θ with parameters v 0 = 1 (m/s) and λ = 0.8: (a) Average path lifetime, (b) Average path length, (c) Packet delivery ratio, and (d) Group frequency.
Fig. 6a-b illustrate the distributions of the path lifetime and the path length in the soldiers' communication network. The results show that for higher values of noise, paths with the shortest lifetimes are the most prevalent in the network (see Fig. 6a). The reason is that the commander and the soldiers move separately in a disordered fashion; the short path lifetimes are due principally to the inability of nodes to keep connectivity for a long time because of the high dispersion of nodes. Furthermore, Fig. 6b clearly shows that high values of noise lead to the establishment of long paths in order to reach the destination. These paths, however, are frequently perturbed by unexpected movements of the soldiers.
Fig. 7. Effect of increasing initial velocity v 0 with parameters λ = 0.5: (a) Average throughput, (b) Average packet loss, (c) Average velocity of soldiers, and (d) Group frequency.
Fig. 8. Effect of increasing initial velocity v 0 with parameters θ = 0.2 and λ = 0.5: (a) Average path lifetime, (b) Average path length, (c) Packet delivery ratio, and (d) Group frequency.
Fig. 9. Effect of increasing initial velocity v 0 with parameters θ = 0.5 and λ = 0.5: (a) Average path lifetime, (b) Average path length, (c) Packet delivery ratio, and (d) Group frequency.
Fig. 10. Effect of increasing the number of enemies η in the battlefield with parameters θ = 0, λ = 0.5, and v 0 = 1 (m/s): (a) Average throughput, (b) Average forwarded throughput, (c) Average velocity, and (d) Average packet loss.
Fig. 11. Effect of increasing the number of enemies (η) in the battlefield with parameters θ = 0, λ = 0.5, and v 0 = 1 (m/s): (a) Average path lifetime, (b) Average path length, (c) Packet delivery count, and (d) Average group size.
Fig. 12. Simulation with different numbers of enemies η in the battlefield with parameters θ = 0, λ = 0.8, and v 0 = 1 (m/s): (a) η = 4, (b) η = 12, and (c) η = 24.
Fig. 13. Comparative evaluation of the different mobility models in terms of network performance, with parameters v 0 = 1 (m/s), λ = 0.8, α = 10 × (1 - θ) and β = (100 × θ) + 10.
Fig. 14. Comparative evaluation of the different mobility models in terms of the path lifetime distributions. OM denotes our model.
Fig. 15. Comparative evaluation of the different mobility models in terms of the group size distributions.
Table 1. Simulation configuration.
Parameter | Symbol | Value | Parameter | Value
Number of soldiers | N | 50 | Simulation area | 5000×5000 m²
Enemy numbers | η | 4-52 | Transmission range | 80 m
Zone of repulsion | r_r | 10 m | Propagation model | TwoRayGround
Zone of orientation | Δr_o (r_o - r_r) | 40 m | Interface queue model | PriQueue
Zone of attraction | Δr_a (r_a - r_o) | 150 m | Queue size | 64
Turning rate | σ | 0.1 | Routing protocol | AODV
Noise | θ | [0,1] | Transport protocol | UDP
Data generation rate | λ | [0,1] | Packet generator | CBR
Initial velocity of nodes | v_0 | 1 (m/s) | Packet size | 1000 bytes
At the beginning of each simulation, the set of nodes is deployed within an area of 500×500 m². As the value of α decreases, the nodes will be distributed over the whole area.
SLAW was evaluated over a coverage area of 1000×1000 m².
01477353 | en | ["shs"] | 2024/03/04 23:41:46 | 2014 | https://shs.hal.science/halshs-01477353/file/comp_lex_submitted.pdf | Volker Gast
Ekkehard König
Claire Moyse-Faurie
Comparative lexicology and the typology of event descriptions: A programmatic study 1
come L'archive ouverte pluridisciplinaire
Introduction
It is a well-known fact that the vocabularies of individual languages are structured very differently. Even if it is always possible to translate a certain utterance from one language into another, it is rarely, if ever, possible to say that all or even some lexemes making up an utterance in one language correspond perfectly and completely to the lexemes rendering that utterance in another. In most cases the content cut out from the amorphous mass of notions and ideas by one lexeme A may be similar to the content identified by some translational counterpart in another, but there is hardly ever complete identity and what we find is partial overlap at best. The consequence of this basic observation for structuralists was that semantic analysis in one language amounts to describing the structural relations between the lexemes of a language in terms of oppositions (antonymy, complementarity, converseness, etc.), superand subordination, meronymy, etc. (cf. [START_REF] Lyons | Structural Semantics: An Analysis of Part of the Vocabulary of Plato[END_REF][START_REF] Cruse | Lexical Semantics[END_REF], Löbner 2002, etc.), and that comparative semantics or comparative lexicology was the comparison between these networks of structural relations.
More recent theorizing about semantics, especially the ideas associated with the theory of Generative Grammar or with the basic assumptions of Cognitive Linguistics, is less agnostic about the semantic or propositional substance underlying the vocabularies of individual languages and has led to a wide variety of comparative studies in semantics or lexicology, 2 and even to attempts at formulating lexical typologies. These studies agree with the structuralist view that each language carves up conceptual space in a different manner, butin clear analogy to morpho-syntactic typology -the cuts are assumed not to be completely random and not to differ without limits. What we find, then, are two extreme views and 1
In the publications of Sebastian Loebner, to whom we dedicate this article on the occasion of his 65 th birthday, comparative studies on lexicology and meaning have played an important role (see for instance Löbner 2002: 153ff.). 2 several shades of grey in between. On the one extreme, there is the view that there are innate lexical concepts and constraints arising from the structure of the mind or the world. The other extreme is the view that languages differ arbitrarily in their semantic organization of conceptual domains. The middle ground is held by positions which accord some role to biases in perception and cognition as well as to communicative constraints and cultural practices, still underlining the importance and necessity of arbitrary linguistic conventions (cf. Narasinhan et.al. 2012).
A closer look at the lexical typologies currently available reveals the difficulties and limits of such cross-linguistic lexical studies. They are typically based on ontological domains easily identifiable across languages (e.g. body parts, colors, temperatures, possession, kinship terminology, motion, perception, eating, placing and displacing, etc.), on comparatively small samples of languages, or on both. There is a bias towards nominal or adjectival denotations, a bias which can also be observed in fieldwork on lesser described languages (cf. Evans 2011a on the neglect of verbs in elicitation, as well as some reasons for it). Moreover, the typological distinctions are not really analogous to those developed for morpho-syntactic properties. In most cases gradual rather than clear-cut distinctions are found between comparable lexical subsystems of different languages, and only in rare cases do we find implicational generalizations or connections between different variant properties.
There are (at least) two ways of making generalizations in lexical typology. On the one hand, different ways of carving up a specific semantic space (perception, possession, temperatures, body parts, etc.) may be compared in terms of their encoding by different lexical items. One of the best-known and most frequently cited examples is the typology for verbs of motion developed by [START_REF] Talmy | Lexicalization patterns: Semantic structure in lexical forms[END_REF][START_REF] Talmy | Toward a Cognitive Semantics[END_REF]. According to this analysis six semantic components can be distinguished in the meaning of verbs of motion: the FACT OF MOTION, the FIGURE, the GROUND, the PATH (directionality), the CAUSE and the MANNER OF MOVEMENT, and languages may differ in the number of components they encode in the verb and outside of the verb.
Talmy distinguished two main types of languages, viz. (i) satellite-framed languages, which encode the FACT OF MOTION together with the MANNER OF MOVEMENT in their verbs, signaling the PATH outside of the verb, best exemplified by Germanic languages like German (fahren 'move with the help of a vehicle', gehen 'moving on foot', etc.); and (ii) 'verb-framed languages' lexicalizing the FACT OF MOTION together with the PATH in the verb. Romance languages, as well as East Asian ones like Japanese and Korean, exemplify this second type. Verbs like aller, venir, entrer, sortir, monter, descendre, etc. in French only provide information on the PATH without saying anything about the MANNER.
Subsequent work (e.g. by Dan Slobin and others) has shown that there are hardly any pure types and that we might want to distinguish at least a third type which has verbs of both kinds. More specifically, [START_REF] Slobin | The many ways to search for a frog. Linguistic typology and the expression of motion events[END_REF] added a group of 'equipollently framed languages', where both PATH and MANNER have equal status in their formal encoding. Alternatively we could regard Talmy's types as extreme points on a scale. English, which was first seen as a pure satellite-framed language, has in fact verbs of both types (roll, walk, run, jump, etc. vs. go, come, climb, enter, etc.). And even German, a representative of the satellite-framing type par excellence, has at least one verb -kommen 'come' -which is completely neutral with regard to MANNER. A second way of doing lexical typology is to focus on those semantic contrasts which are associated with specific formal properties (derivational or grammatical processes) in one type of language but have no counterparts in others. This type of typological investigation focuses on varying degrees of differentiation. For example, Romance languages, and partly also English, differ from Germanic languages in lacking certain types of separable and nonseparable prefixes that are used to express general properties of activities and their associated PATIENTS, such as the distinction between affected vs. effected objects and other differentiations of the result of an event. In [START_REF] Plank | Verbs and objects in semantic agreement: Minor differences between English and German that might suggest a major one[END_REF] various contrasts between German and English are mentioned which provide examples of this type: ein Bild malen 'to paint a picture' vs. eine Wand an/be-malen 'to paint a wall' (cf. also König & Gast 2012: Ch. 14 for more examples of contrasts in lexical differentiations). A clear example of such a contrast between French and German is the following: siffler la Marseillaise could be translated into German as die Marseillaise pfeifen, i.e. 'to produce the relevant tune with your lips' or as die Marseillaise auspfeifen, i.e. 'whistle in protest at the playing of the Marseillaise'. The distinction exemplified by these two examples is a very general one, opposing Germanic languages to Romance ones and, interestingly enough, we may find parallels between Germanic languages and Oceanic languages as far as the use of lexical prefixes is concerned (cf. [START_REF] Ozanne-Rivierre | Verbal compounds and lexical prefixes in the languages of New Caledonia[END_REF].
In this contribution we propose a framework for the cross-linguistic comparison of verbal meanings (event descriptions) and their lexicalization patterns which is more of the second type distinguished above, insofar as it focuses on differences in the degrees of semantic differentiations made in specific domains of meaning. The central questions to be addressed are the following: What aspects or components of verbal meanings are typically lexicalized across languages? What differentiations are found, and what types of generalizations we can make? The study is programmatic insofar as it points out possible avenues for future typological research, rather than presenting well-founded cross-linguistic generalizations. We start with some theoretical background assumptions that are needed for a lexical typology of verb meanings (Section 2). In Sections 3 and 4, we present some case studies, i.e., comparisons of verbal inventories for the domains of eating and drinking (Section 3), and for verbs of physical impact (Section 4), i.e., verbs of killing, beating and cutting. Section 5 contains some thoughts on possible explanations for the patterns and limits of variation that we can observe. Section 6 contains a summary and the conclusions.
2 Some theoretical background assumptions
On (Neo-)Davidsonian event semantics
In keeping with basic assumptions of Davidsonian event semantics, we regard events as entities with the same ontological status as objects. Like objects, events can thus be predicated over, i.e., they can have properties. Just as we can say 'This object is an apple', we can say 'This event is a birthday party'. And just as noun phrases have a 'referential argument', in terms of [START_REF] Löbner | Understanding Semantics[END_REF], so do verbs. The referential argument of the noun phrase the president of France is (currently) a person called François Hollande. This person, who can be represented by a variable x, is both the argument of the (nominal) predicate president of France -even though this property is presuppositional rather than assertive -and the referent of the noun phrase the president of France. It is important to note that nominal denotations (properties of nouns, intensions) and reference (a function of noun phrases, extensions) need to be kept apart. For example, we could also use the noun phrase the fellow with the glasses to refer to François Hollande. In this case, the nominal denotation would be different while the referent would be the same. In terms of [START_REF] Frege | Über Sinn und Bedeutung[END_REF], we can use different 'modes of presentation' (Arten des Gegebenseins) to refer to any given individual.
The assumptions about nouns and noun phrases and their two-fold (predicational and referential) function sketched above can also be made about verbal denotations (cf. Löbner 2002: Sect. 6.3.2). We can even find a parallel to the opposition between denotation and reference by assuming that lexical items denote classes of events or event types, while finite verbs refer to (more or less specific) events (cf. Klein 1994 on the role of finiteness as introducing a 'Topic Time', thus anchoring an utterance in time). The verb collapse in (1), for instance, at the same time introduces an event -its referent -and attributes a property to that referent, namely that of being a collapsing event. This event can be expressed with a nominalization. The sentence in (1) could therefore roughly be paraphrased as [START_REF] Majid | The semantic categories of CUTTING and BREAKING events: A crosslinguistic perspective[END_REF].
(1) The Tacoma Narrows Bridge collapsed in 1940.
(2) The collapse of the Tacoma Narrows Bridge took place in 1940.
Events, thus, can have properties, just like objects or persons. We can distinguish two major types of properties of events. The first type of property could be called 'essential' or perhaps 'intrinsic'. It makes an event what it is. The intrinsic property of the event described in (1) is that of being a 'collapse'. We could refer to that event without attributing any (intrinsic) property to it. The pronoun something can be used to existentially quantify over objects as well as events. A sentence such as (3) is therefore possible, even though it is obviously quite uninformative:
(3) In 1940, something happened.
Having introduced an event as a 'discourse referent' (in the sense of [START_REF] Karttunen | Discourse referents[END_REF]) in (3), we can now attribute properties to it. This is explicitly done in (the somewhat technical sentence) (4):
(4) [And what happened in 1940?] The event that I was referring to is the collapse of the Tacoma Narrows Bridge.
The property of being a collapse -more specifically, the collapse of the Tacoma Narrows Bridge -is the essential or intrinsic property of the event described in (1). Given the highly abstract and "fleeting" (Evans 2011b: 512) nature of events, they are hardly conceivable without such a property. We will call the intrinsic property of an event -the property which singles out the event in question from the amorphous mass of happenings in the world -the 'primary event predicate'.
In addition to the (primary) property of being an event of collapsing, the event described in (1) is attributed an extrinsic property as well, namely that of having taken place in 1940. This property is extrinsic insofar as does not make the event described what it is, and the same event may have taken place at a different time. Another type of extrinsic property of events is, obviously, the place at which it took place. While the Tacoma Narrows Bridge could only collapse at the place where it was built (in Washington), other event types are more 'mobile'.
For example, the explosion described in (5) could have happened anywhere and the locative specification is quite informative here:
(5) The car bomb exploded on 6 th Street.
As is common practice in Davidsonian (as well as Neo-Davidsonian) event semantics, we will represent the referential arguments of verbal predicates -the events -with a variable e. The primary event predicate of an event is simply represented as a predicate which is said to be true of the relevant event. Let us consider a simple example:
(6) It is raining now.
The sentence meaning of (6) can be regarded as a conjunction of the primary event predicate 'be a raining event' and the extrinsic property 'taking place right now', both of which are attributed to some event e. This can be represented as shown in (7). The primary event predicate is represented as RAIN, taking the (existentially bound) variable e as its argument, and the extrinsic property of 'taking place right now' is represented as a relationship of inclusion between the time of the event (t e ) and the moment of speaking (t 0 ). ( 7) ∃e [RAIN(e) ˄ t e ⊃ t 0 ] 'There is an event e such that e is a raining event (RAIN(e)) and the temporal extension of e fully includes the moment of utterance (t e ⊃ t 0 ).' Finally, there are also extrinsic properties which specify the primary event predicate further.
For example, the adverb steadily in (8) expresses a manner specification. Like temporal and locative specifications, such adverbials can also be regarded as predicates taking the relevant variable as an argument, and (8) can, in a simplified form, be represented as shown in (9). Obviously, the predicate STEADY -roughly, indicating 'temporal stability' of an eventinteracts closely with the main predicate, i.e. 'rain'. Such interactions between the various components of meaning in an event description are a central part of our framework for crosslinguistic lexical comparison.
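Examples (8) and (9) are not reproduced in this version of the text. The following lines are our reconstruction of what they plausibly look like, modelled on (6) and (7) above; the primed numbers mark them as an illustration rather than the original examples.

% Reconstruction in the style of (6) and (7); not the authors' original (8)/(9).
\noindent (8$'$)\quad It is raining steadily now.\\
(9$'$)\quad $\exists e\,[\mathrm{RAIN}(e) \wedge \mathrm{STEADY}(e) \wedge t_e \supset t_0]$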
The raining example illustrates the meaning of a sentence on the basis of an a-valent verb, i.e. a verb which does not take any nominal arguments or participants. This is different in the case of (1) and ( 5) above. In a Neo-Davidsonian framework (cf. [START_REF] Parsons | Events in the Semantics of English. A Study in Subatomic Semantics[END_REF], participants are represented as entities that stand in a thematic relation to the event argument e. For example, in (1) there is one argument/participant, i.e., the Tacoma Narrows Bridge. This bridge can be regarded as a PATIENT of the event in question (note that participant roles will be printed in small caps in the following, indicating that they are used as technical vocabulary, rather than natural language items). The sentence can thus be represented as shown in (10).
(10) ∃e [COLLAPSE(e) ˄ PATIENT(TNB,e) ˄ t e ⊂ t 1940 ] 'There is an event e such that e is a collapsing event, the Tacoma Narrows Bridge (TNB) is he PATIENT of e, and the temporal extension of e is fully included in the year 1940.' It is difficult to tell whether participants -in particular, internal arguments -are intrinsic or extrinsic properties of events. We will regard them as extrinsic.
According to the Neo-Davidsonian framework of sentence semantics sketched above (at least) four major aspects of event descriptions can thus be distinguished, all of which are represented as predicates of the event variable e:
• the primary event predicate (e.g. RAIN(e))
• participant roles (e.g. PATIENT(x,e))
• temporal and locative specifications (e.g. t e ⊂ t 1940 )
• MANNER specifications (e.g. STEADY(e))
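Taken together, these components can be written as a single schematic formula. The template below follows the format of (7) and (10) above; P, ROLE, M and t_REF are placeholders of our own rather than notation introduced in the text.

% Schematic Neo-Davidsonian template combining the four components listed above.
\[
  \exists e\,\bigl[\;
     \underbrace{P(e)}_{\mbox{\scriptsize primary event predicate}} \;\wedge\;
     \underbrace{\mathrm{ROLE}(x,e)}_{\mbox{\scriptsize participant role}} \;\wedge\;
     \underbrace{t_e \subset t_{\mathrm{REF}}}_{\mbox{\scriptsize temporal/locative specification}} \;\wedge\;
     \underbrace{M(e)}_{\mbox{\scriptsize manner specification}}
  \;\bigr]
\]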
These four aspects of sentence meaning will provide the cornerstones of our typology. In addition, we will consider some more fine-grained distinctions relating to matters of aktionsart or actionality. In particular, event descriptions often make differentiations according to the RESULT of the event in question. In most cases, the lexical specifications relating to the RESULT of an event concern properties of the THEME or PATIENT (more generally speaking, of the UNDERGOER of an event; cf. [START_REF] Van Valin | Syntax: Structure, Meaning and Function[END_REF]. Consider the example in ( 11):
(11) The thief was shot dead.
(11) says that there was a shooting event which resulted in the thief's death. This can be represented as in ( 12):
(12) ∃e[SHOOT(e) ˄ PATIENT(T,e) ˄ t e < t 0 ˄ ∀t[ t > t e → DEAD(T) AT t]] ' … for any point in time after t e , the thief was dead.' As pointed out above, the different types of information about a given event interact in various ways. The MANNER in which a given event takes place obviously has a strong impact on the event description itself. For instance, modifying the verb collapse in (1) with either quickly or slowly would specify the event in question with respect to its internal structure.
Similarly, the types of participants involved may have considerable influence on the type of event described. For example, it makes a difference whether a bridge collapses or a house.
Even more 'peripheral' participants like INSTRUMENTS interact closely with the primary event predicate. Killing someone with a rope is quite different from killing that person with a gun. By contrast, the TIME and PLACE at which an event takes place normally have a rather minor impact on the event itself. As we will show, the type of interaction between extrinsic properties of event descriptions and the primary event predicate depends on the domain of meaning under comparison, and determining such domain-specific interactions is one central concern of this study.
Parameters of a lexical typology of verbal meanings
If we abstract from specific notional domains and their encoding in lexical subsystems, generalizations of a higher order can be made. The major generalizations made in Evans (2011b), for example, are formulated not so much in terms of lexical subsystems but in terms of four general properties of nominal denotations or event descriptions: We find differences in the GRANULARITY of lexical distinctions, in the BOUNDARIES between lexical categories, in the GROUPING and the DISSECTION of semantic components. The first two parameters concern meronymical relations while grouping and dissection refer to levels of generalization and the expression of sub-aspects of a given (internally complex) denotation. We will therefore consider granularity and boundaries on the one hand, and grouping and dissection on the other, in one section each.
Granularity and the setting of boundaries
The parameter of 'granularity' concerns the degree of 'ramification' in a meronymical tree.
With respect to the literal or concrete meaning of the word tree, we can for instance notice that English makes a distinction between branch and twig, which is not made in other languages (e.g. Georgian, which only has t'ot'i for both 'branch, twig'). This situation can be represented as shown in (13). The nodes 'D' and 'E' correspond to terms found in one language but not in another. The second major aspect distinguished by Evans (2011b) in the organization of meronymical systems concerns the location of boundaries between sub-components of an object. Evans (2011b: 512) points out that "the Savosavo 'leg' category begins at the hip joint (and encompasses the foot), whereas Tidore yohu -roughly, 'leg' -cuts off three-quarters of the way up to the thigh". Different ways of partitioning an object or event A into sub-components B and C are shown in ( 14). ( 14) A
B C
A common problem for learners of European languages can be illustrated with the different ways of partitioning a time span. European languages differ considerably in the boundaries that they draw between the parts of the day (cf. Coseriu 1981). This problem has practical implications because the time of the day is commonly referred to in greetings. Italian buona sera could thus be rendered in German as either guten Tag ('good day'), which covers the time from noon to the early afternoon in Germany, or else guten Abend, which is used in Germany from around 6pm onwards. Such different partitionings lead to serious translation problems, and it is sometimes impossible to determine the exact translational equivalent of an expression. For example, a translator translating a novel would have to (roughly) know the time at which buona sera is uttered in order to translate it into German.
In the domain of events, such problems in the organization of meronymical systems concern complex events, i.e. events which consist of several sub-events of different types. We will illustrate this with two examples from sports. There is a discipline called 'triple jump' in English or, alternatively, 'hop, step and jump'. In German this discipline is called Dreisprung, in French triple saut. In this particular case we find special lexemes in English for a complex event (hop, step, jump), which is expressed by a single lexeme in German or French (Sprung, saut), and which can be decomposed into three sub-events, as is indicated by the numeral in German and French, in English expressed by different lexemes. Of course, the three subevents can be decomposed further into three stages, i.e., (i) moving off the ground, (ii) going through the air and (iii) landing. It is in this final stage that the three sub-events differ:
Hopping means that one starts from and lands on the same foot (cf. OED, s.v. hop: 'to spring or leap on one foot'). Stepping, by contrast, means that one lands with the other foot and jumping means that one lands with both feet, the rest being identical in all three cases.
The relationship of meronymy holding between (the event of) 'triple jump' and the sub-events 'hop', 'step' and 'jump' is illustrated in ( 15). In the case of hop-step-jump, one language (English) makes distinctions between types of sub-events which are not made in other languages (French, German). In German, all subevents are lexicalized as springen, and the entire event is conceptualized as 'springenspringen-springen', i.e., Dreisprung. A slightly different situation is found in another sports discipline, i.e. weightlifting. There is a technique which is called clean and jerk in English.
'Cleaning' is the process of lifting the bar over one's shoulders, 'jerking' the process of lifting it overhead. In German, the differentiation is not commonly made, and the entire process is normally subsumed under one term, i.e., stoßen (though in technical vocabulary a distinction is also made between umsetzen and ausstoßen). This situation is described in (16).
(16) English: clean - jerk; German: stoßen (one term covering both sub-events)
Unlike in the case of the triple jump, there are no (common) terms for the component events of the clean-and-jerk process in German. While a triple jump implies three instances of (the general German verb) springen, the clean-and-jerk process does not imply two instances of stoßen.
We can thus have event predicates for sub-events in one language which are missing in another, or at least not commonly used. Similarly, we may have predicates in one language which 'bundle' several sub-events and which are not found in another language. Evans (2011b) mentions the example of the (semantically complex) predicate 'gather (wood)', which is expressed as 'go hit get X come put' in the Papuan language Kalam. In such cases, a complex event is made up of sequential sub-events. Similar, though less fine-grained distinctions can be observed in European languages. English and German have verbs for the sequence 'go -take X -come back (with X)', i.e., fetch and holen, respectively. Spanish only has a verb for 'take X -come (with X)', i.e., traer ('bring'; cf. also Engl. go get). Accordingly, fetch the book can only be rendered as ve a traer el libro 'go to bring the book' in Spanish.
These different ways of 'bundling' sub-events can be represented as shown in ( 17).
(17) Sub-event sequence: go | take X | come back (with X)
     Spanish: ir | traer (traer covering 'take X' and 'come back with X')
     English/German: fetch / holen (one verb covering the whole sequence)
Note that further distinctions can be made within bundling verbs of the type of fetch. In French, as in English, only one verb is necessary for the entire sequence 'go -take X -come back (with X)', but a distinction has to be made depending on whether it is a thing or a person which is brought: Fr. rapporter (e.g. tu rapporteras le journal 'you will go and fetch the newspaper') vs. ramener (tu ramèneras Pierre 'you will go and fetch Peter'). German holen can be used for both things and persons, but there is a specialized verb for persons as well, i.e.
abholen ('call/come for sb.').
Grouping and dissecting
So far, we have been concerned with different organizations of meronymical systems, i.e.
with part-whole relations. Cross-linguistic differences can, of course, also be observed in the level of generality at which a given category is located ('grouping', in terms of Evans 2011b).
In the domain of concrete objects, taxonomies play a very important role. Lower-level categories can be characterized or defined as conjunctions of the (next) higher-level category (genus proximum) and additional properties distinguishing the lower-level categories from each other. The same method can be used for event descriptions. In this case it is the extrinsic properties that can be used to differentiate some hyponymic event description further (the differentia specifica). For example, the predicate terms eat and drink differ with respect to the MANNER of consumption and the types of objects that are consumed (food vs. liquid). There is thus a superordinate predicate -say, ingest -and two more specific types of predicates, eat and drink.
As an example of 'grouping' in the domain of body-parts, Evans (2011b) considers terms for 'finger' and 'toe'. English does not have a cover term for these body parts. Other languages, by contrast, do not distinguish lexically between them. For instance, Serbo-Croatian uses the same term for fingers and toes (prst), as does Spanish (dedo). While being located at different parts of the body as far as meronymical organization is concerned, these languages 'group' them together because of their similarities with respect to their position, form, function, etc.
An analogous example from the domain of event descriptions can be given by considering verbs of washing. Washing can be regarded as comprising two basic phases, i.e., applying some kind of cleaning agent (in different ways) and then removing it. In the case of body care, we could use the verb soap for the first phase and rinse for the second. The verb rinse could also be used for the second phase of doing one's laundry. In German, by contrast, different verbs would have to be used. The process of getting the soap off one's body is lexicalized as (Seife) abspülen, while the process of removing the detergent from laundry is (Wäsche)
spülen. Obviously, the two processes of washing soap off the body and from the laundry are quite different, but English (as well as Spanish) considers them similar enough to be encoded by the same lexical item. French makes a slight difference, and uses the plain verb rincer for laundry and the reflexive or middle form se rincer for body care.
Evans' parameter of 'dissection', finally, concerns the ways in which "complex phenomena are decomposed into parts" (Evans 2011b: 514). Specific domains of meaning are inherently multi-dimensional. Evans (2011b) mentions the example of motion verbs, referring to [START_REF] Talmy | Lexicalization patterns: Semantic structure in lexical forms[END_REF][START_REF] Talmy | Toward a Cognitive Semantics[END_REF] classical typology, which was already mentioned in Section 1. Unlike the parameter of 'granularity', which concerns the availability of verbs for specific types of sub-events, 'dissection' refers to the ways in which several properties of the same event are distributed over elements of the sentence. An event of 'enter(ing) (the house) walking' or, alternatively 'go(ing) into (the house)', attributes two properties to the event in question, that of being a walking event and that of being directed into the house. In such cases it is hard to tell which of the event predications is primary. A motion event without a MANNER of motion is probably more easily conceivable than a motion event without a direction, so the latter property might be more essential than the former. The event structure of an event of 'walking in' can accordingly be represented as shown in ( 18 The type of cross-linguistic variation discussed in this section may lead to a one-to-many relation between lexical items of one language as opposed to those of another. This situation can be represented schematically as shown in ( 19).
(19) Language A: lexeme A1 (context C1) - lexeme A2 (context C2) - lexeme A3 (context C3)
     Language B: lexeme B (contexts C1, C2, C3)
Towards a typology of verb meanings
As has been shown, varying degrees of differentiation may primarily result from two sources, those having to do with different degrees of specificity or generality, and those resulting from different degrees of granularity expressible by a (mostly sequential) arrangement of subevents. The former type of variation is normally associated with differences in the extrinsic properties of events. The latter type of differentiation could be taken further notionally without necessarily corresponding to different lexemes in a language. This is then no longer of interest to linguists.
In our comparison, we will mainly be concerned with cross-linguistic variation that concerns the levels of generality at which event descriptions are lexicalized, i.e., grouping and dissection. The most important parameters of variation therefore concern the extrinsic properties of events mentioned above, i.e., MANNER predications, the participants involved, and the TIME and PLACE at which an event takes place. As has been pointed out, differences between extrinsic properties often imply differences between intrinsic properties. Such interactions between the components of event descriptions are a central aspect of our typology.
What is of interest to us is not only additional observations about differentiations in the verbal vocabulary of different languages. It is also our goal to systematize the conditions for such differentiations. We will address questions like the following: In what ways is the type of differentiation observable in cross-linguistic comparisons related to the meaning of a verb?
Which differentiations go together with which types of meanings? How can such observations be used to set up a typology of lexical differentiation, and what types of generalizations can be made?
It is our goal to go beyond a list of interesting data illustrating extensive lexical differentiation and to raise explanatory questions of a more general kind. In discussing our cases we will look for pervasive, more general factors determining differentiations as opposed to very specific ones, as well as at possible connections between the basic meaning of a verb and the semantic components that determine the differentiations. These connections are assumed to differ from one domain of meaning to the next, and it is our goal to formulate hypotheses about the patterns of lexicalization found in specific domains of meaning. To illustrate this with an example, we will inquire why it is that the counterparts of the English verb eat, if there is differentiation, mostly depend on the substance eaten (cf. Section 3), whereas differentiations found for the verb kill or the notion of killing seem to depend more on the INSTRUMENT or MANNER used in the action (cf. Section 4.1).
As far as the empirical basis of our study is concerned, we have partly selected domains known to manifest differential degrees of generality at least in two languages on the basis of previous work. As far as languages are concerned, we have primarily selected our native tongues as well as languages one of us has studied in detail. The starting point is invariably provided by observations on clear distinctions in the lexical organization of certain conceptual domains. Attempts to find the counterpart of certain verbs like eat, cut, kill, beat, for instance, reveal that some languages have a wide variety of possible translations depending on event parameters (like properties of AGENTS and/or PATIENTS) which play no role in English and these languages may even lack a general term such as we find in English.
In our investigation of the role of contextual conditions for lexical differentiations we will start from the argument frames associated with certain verbs in order to see what kinds of differentiation are conceivable and actually found in the languages under comparison. The verbs and semantic domains to be discussed below represent partly well-known cases of and partly new observations on remarkable lexical differentiations found across languages. We will discuss the following notions and verb meanings: eat and drink (Section 3), and verbs of killing and exerting physical force (e.g. kill, hit/beat, cut, cf. Section 4). The depth and breadth of these discussions will not always be the same. In some cases we have data from a variety of languages manifesting high degrees of lexical differentiation, in others our observations are more of a contrastive kind, being based on two or several European languages.
3 Verbs of eating and drinking
The basic parameters of variation
Let us take as a first example the English verbs eat and drink, since it has been pointed out that these verbs and their counterparts in other languages manifest remarkable properties and do not behave like ordinary transitive verbs (cf. [START_REF] Naess | The grammar of eating and drinking verbs[END_REF]. A schematic representation of the frame associated with these verbs will roughly take the form of ( 20). It turns out that all of the arguments and circumstantial relations shown in (20) may be lexicalized in verbs of eating and drinking in specific languages and that languages may differ with respect to these lexical components. A first type of variation concerns selectional restrictions on the AGENT and the PATIENT. For the AGENT, some languages have different verbs for humans and animals. German is of this type, as it distinguishes between essen (human) and fressen (animals) for eating, and between trinken (humans) and saufen (animals) for drinking. English does not make any such distinction and uses eat and drink for animals alike. In an extended sense, Germ. fressen and saufen can also be used with human subjects if the MANNER of food consumption (quantity, noise produced, etc.) is more like that associated with animals (Karl frisst wie ein Schwein 'Karl eats like a pig').
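The schematic frame (20) is not reproduced in this version of the text. As a stand-in, the sketch below indicates, in the Neo-Davidsonian format of Section 2, the kind of frame the discussion presupposes; the exact shape of the original (20) may differ, and the predicate labels are ours.

% Illustrative frame for verbs of eating and drinking (stand-in for (20)).
\[
  \exists e\,[\,\mathrm{INGEST}(e) \wedge \mathrm{AGENT}(x,e) \wedge \mathrm{PATIENT}(y,e)
     \wedge \mathrm{INSTRUMENT}(z,e) \wedge \mathrm{MANNER}(e) \wedge \mathrm{RESULT}(e)\,]
\]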
Much more variation can be found when we consider selectional restrictions on the PATIENT.
Note first that the basic verbs of English -eat and drink -already exhibit selectional restrictions insofar as they can only be used with (more or less solid) food and liquids, respectively. Some languages (e.g. Kalam, Walpiri) have only one verb for both activities (cf. [START_REF] Wierzbicka | All people eat and drink. Does this mean that 'eating' and 'drinking' are universal human concepts[END_REF], Naess 2011: 415), roughly corresponding to the English expression 'take in/consume food/liquid'. Such verbs can also be found in European languages in more formal or specialized, e.g. medical, registers (cf. Fr. ingurgiter, Germ. etwas zu sich nehmen). In East Uvean, there is a honorific term (when one speaks to/of the king) for both types of activity, i.e.
taumafa, but there are two different terms in the ordinary language (inu 'drink' and kai 'eat').
While the difference between eating and drinking can be regarded alternatively as concerning the MANNER of consumption or the substance of the PATIENT, some languages differentiate more clearly according to the substance of what is consumed and, accordingly, group soups with beverages rather than meals. This is what we find in Japanese (cf. Finally, there are also verbs of eating that are used when both starch food and fish or meat is consumed. Xârâgurè haakéi/xaakéi means (roughly) 'eat as accompaniment to protein food', and the meanings 'food eaten with another food as relish' or 'meat or fish provided to eat with vegetable food, relish' are expressed by the verbs kīnaki (Māori), kīkī (East Uvean), kiki (Tuvaluan), and (kai)kina (East Uvean, West Uvean), all deriving from PPn *kina. Even more specifically, the verb kītaki (East Futunan, East Uvean) denotes an event of eating both starch food and coconut flesh or ripe bananas.
Obviously, food can also be combined with beverages, and given the highly specific verb meanings mentioned above it is perhaps not surprising to see that there are also verbs for food-beverage combinations. The East Uvean verb omaki (< PPn *omaki) and the Tuvaluan verb peke mean 'dunk food into water before eating it'. East Uvean fono (< PPn *fono) is used when food is eaten with kava.
We will conclude this overview of the rich inventories of verbs of eating found in Melanesian and Polynesian languages with examples of verbs that do not denote eating actions, but the desire to eat specific things, i.e. terms meaning 'feel like eating specific kinds of food'. East
Futunan gā and Haméa treu mean 'crave for proteins (i.e. fish or meat)', and East Uvean as well as Tongan 'umisi (< Proto-Fijian *kusima) means 'crave for fish/seafood'.
Towards cross-linguistic generalizations
Obviously, it is very difficult to make generalizations in lexical typology in general, and even more so in the (highly) abstract domain of verbal meanings. We will propose hierarchies which rank (extrinsic) properties of event descriptions in terms of the (hypothesized) likelihood that these properties will be lexicalized in specific verbs. The hierarchies will rank pairs of parameters that make similar contributions to the predication in question. Before formulating such hierarchies, we will consider the various parameters individually, however.
In the languages that we have looked at, the most important extrinsic property that is lexicalized in eating verbs seems to be the type of food or beverage consumed (the PATIENT).
In Europe (as well as probably in most other parts of the world), there are consistent differentiations between eating and drinking, and languages that do not make a distinction here at all seem to be rare. As the Melanesian and Polynesian languages discussed in Section 3.2 have shown, there are hardly any limits on the level of specificity found in differentiations according to the type of food consumed.
The AGENT has been found to be relevant in German. We have not investigated whether there are distinctions according to age, but it seems likely to us that cross-linguistic studies will reveal that at least some languages use specific eating verbs for children. Still, distinctions according to properties of the AGENT are clearly less prominent than distinctions according to properties of the PATIENT, in terms of both the number of languages which make such distinctions, and the number of distinctions made in the languages that do (basically, human vs. non-human).
A property of eating verbs that has been found to be relatively prominent concerns the MANNER of consumption. Note that this parameter is obviously not totally independent of the type of food consumed or selectional restrictions on the AGENT. It makes a difference who eats what. In many cases it is probably difficult to tell apart whether it is primarily the MANNER of eating or the type of food that is lexicalized in a given case. Soups are liquid but they are 'eaten' in English, perhaps because they are consumed with a spoon and with specific portion sizes. As was pointed out in Section 3.1, Japanese treats soups in the same way as beverages and thus seems to distinguish more clearly on the basis of substance rather than the MANNER of eating (cf. also Fr. manger la soupe vs. boire le potage/bouillon). The INSTRUMENT of eating, by contrast, seems to be less commonly encoded, and we have noticed that the relevant verbs are often interpreted metaphorically in German. Lexical distinctions have also been found with respect to the RESULT of eating or drinking events (e.g. overeat).
Verbs of eating which lexicalize the TIME of eating are widespread in Europe, perhaps because different types of meals are consumed at specific times of the day (cf. Section 5 on explanations). A verb like Germ. frühstücken 'have breakfast' is thus quite informative, as it conveys information not only about the TIME of eating but also about the food that is typically consumed. The PLACE of eating, by contrast, is hardly ever lexicalized, and given that there is not much variation possible it is not surprising to find that this parameter is of minor importance in the present context.
On the basis of the considerations made above, we propose the following hierarchies of properties associated with eating and drinking events:
(24) a. PATIENT > AGENT b. MANNER > INSTRUMENT c. TIME > PLACE
The hierarchies in (24) are intended as hypotheses about the tendencies for specific properties of events to be lexicalized in the world's languages. Obviously, such hierarchies can only be probabilistic, as they are certainly, at least partially, culture-specific, and they are not intended to represent implicational relations, but rather tendencies. Those properties located to the left are more likely to be lexicalized in verbs of eating or drinking than those on the right.
Verbs of physical impact
We will now turn to an entirely different group of verbs, which are associated with different frames and call for different generalizations and explanations, i.e. verbs of physical impact.
We have chosen the three groups 'verbs of killing', 'verbs of beating' and 'verbs of cutting' because the relevant verbs seemed to exhibit interesting differentiations in the languages investigated by us. Needless to say, there are certainly many more interesting verbs belonging to this group, and the discussion in this section is far from exhaustive.
Verbs of killing
The concept of 'killing' is expressed by prototypical transitive verbs like Engl. kill, Germ.
töten, Fr. tuer, etc. We can use a valency frame similar to the one used for verbs of eating.
However, an additional aspect of meaning is relevant to the group of verbs under discussion in this section. While we do not expect to find any noticeable differences with respect to the MOTIVATIONS of an eating or drinking event -people mostly eat in order to appease their hunger, though they may also just gourmandize -this is an important aspect in any action of killing. We will see that the MOTIVATION of a killing event is in fact often encoded in verbs of killing. The MOTIVATION can be regarded as preceding an action, as opposed to the RESULT, which follows the acion. The complete valency frame to be considered in this section can thus be represented as is shown in ( 25). The TIME and PLACE of a killing event are hardly, if ever, encoded lexically in verbs. We will therefore disregard these parameters in the following discussion.
Taking again the selection of AGENTS as a point of departure, we can see that in many European languages there is a neutral verb, such as the three verbs mentioned above, that can be used irrespective of the exact nature of the AGENT, i.e., for human and non-human AGENTS alike. In more specialized registers, however, terms may be available for specific animals as far as subjects are concerned, e.g. German reißen (of lions, tigers, wolfs, etc.) and schlagen (of predator birds). Moreover, certain verbs like shoot require a human AGENT for nonlinguistic reasons, as shooting implies an intentional AGENT with certain fine motor skills (and it is questionable if we would use the verb erschießen if an animal -say, a cat -accidentally shot a person by playing with a gun). Disregarding such more or less specific distinctions, the AGENT of killing does not seem to be a prominent factor in the lexicalization patterns of verbs of killing in European languages.
If we consider the selectional restrictions concerning the PATIENT, we find, again, some interesting cases of differentiation, like Engl. slaughter or Germ. schlachten, Fr. abattre, etc., which are used for killing animals (for food production), and this seems to be the only restriction found in that domain, unsurprisingly so, since only animals and human beings can be killed. 5 An interesting and subtle difference in the lexical inventories of English and German, however, is described by [START_REF] Plank | Verbs and objects in semantic agreement: Minor differences between English and German that might suggest a major one[END_REF]. There are as many as five possible To mention only the most important of the relevant restrictions, schießen and abschießen are only used with game animals, the main difference between these two verbs consisting in the MOTIVATIONS of killing (cf. below). Erschießen and niederschießen are only used with 5 Of course there are metaphorical extensions, such as to kill time, Fr. tuer le temps, Germ. die Zeit totschlagen.
human objects and perhaps higher animals. Finally, the difference between erschießen and totschießen, on the one hand, and niederschießen, on the other, is the RESULT, the death of the victim being implied only in the former two cases (i.e., niederschießen is not a verb of killing;
the survival of the object would even be assumed by implicature). The English verb shoot is completely neutral with regard to all these selectional restrictions and resultative implications.
While exhibiting differences with respect to properties of the PATIENT -human vs. nonhuman as well as further distinctions in the class of non-human referents -the verbs in ( 26)
can also be used to illustrate a second important parameter of variation, i.e. the MOTIVATION of killing. When animals are killed, there are two major MOTIVATIONS, i.e. food production and elimination for other reasons. When game is shot for food production, the verb schießen is normally used; when it is shot to reduce the population, the term abschießen is more common. Totschießen as in ( 26)d. could be used if danger is to be avoided, or if an animal is killed ad hoc, i.e. if the killing event is not motivated by any specific or systematic reason.
For the killing of persons, three major MOTIVATIONS can be distinguished: persons may be killed for criminal reasons (e.g. murder), for political or ideological reasons (e.g. assassinate), and they may be killed 'legally' (e.g. execute). Note that the two cognate verbs assassiner in French and assassinate in English have different implications with respect to both the PATIENT and the MOTIVATION of a killing event. While the former permits any kind of human object, the latter is restricted to public figures, roughly expressing 'to kill for ideological reasons'.
Given that killing is an ethically highly sensitive action, it is not surprising to find that languages indicate why someone is killed. As pointed out above, this distinguishes verbs of killing from verbs of eating. As we will see below, the MOTIVATION is also rarely encoded in verbs of beating or cutting (cf. also Section 5 on explanations The most remarkable fact is perhaps that there is no cover term for all these verbs, i.e. no hyperonym that is unmarked for the MANNER of killing (though a euphemism may be used, i.e. sa 'hit'; see also Section 4.2).
If we compare the specific (related) pairs of parameters that may be encoded lexically, as we did in the discussion of verbs of eating and drinking in Section 3, we can postulate the following hierarchies for verbs of killing:
(30) a. PATIENT > AGENT b. INSTRUMENT > MANNER c. MOTIVATION > RESULT
Again, the PATIENT is more prominently encoded than the AGENT. However, unlike in the case of eating events, INSTRUMENTS seem to be more prominently lexicalized than MANNERS.
Obviously, the INSTRUMENT of a killing action predetermines the MANNER to a considerable extent, e.g. insofar as one cannot shoot a person slowly or excessively. Finally, the 29 MOTIVATION is more prominent than the RESULT, which is basically the same in all cases (the PATIENT is dead), though specific distinctions can be made with respect to the 'shape' of the dead person or animal (cf. zerstückeln 'hack to pieces').
Verbs of beating
Our next semantic domain and the relevant subsets of basic vocabulary also have to do with more or less unfriendly interactions between man and his fellow human beings or with his environment. The cover term 'verbs of beating' subsumes verbs which denote actions in which force is exerted manually, with fast movements on another object, typically with a body part or blunt INSTRUMENT. It is probably not surprising that the aspects of meaning that we find encoded in the relevant verbs are similar -though not identical -to those that we found in the domain of killing. Again we will use English, German and French as starting points and turn to Oceanic languages for examples of more extensive differentiations. The domain of 'verbs of beating' includes at least the following expressions in English: hit, beat as the most general expressions; crash, smash, trash, smite, slay, knock, which incorporate an element of great force (an aspect of MANNER) and characterize the RESULT as devastating; In German we also have de-nominal verbs expressing the INSTRUMENT directly (prügeln 'beat with a club',6 auspeitschen 'whip'), but such lexical differentiation as we find is mainly based on formal modifications of the basic general verbs schlagen and hauen through separable and inseparable prefixes, the most common strategy of lexical differentiation in typical Germanic languages. Many of these formations (an-schlagen, ab-schlagen, vor-schlagen, auf-schlagen, unter-schlagen, über-schlagen, um-schlagen, etc.) are nowadays mainly restricted to metaphorical or idiomatic usage. The set of semantic aspects additionally expressed by the other verbs includes only two: the RESULT (zer-schlagen, er-schlagen, be-schlagen, zusammen-schlagen, ab-schlagen), and the DIRECTION (ein-schlagen, aus-schlagen, zu-e. fîèmè 'kill by hitting with a stick' f. fîwi 'hit on s.th. so that it falls'
Finally, a number of verbs can be derived from the roots sö-'hit with a circular movement of the hand or arm'. A major difference to the verbs of killing seems to be that the MOTIVATIONS for an action of beating do not seem to be encoded in verbs. Specific verbs are typically used for educational measures, e.g. Germ. einen Klaps geben 'smack', eine Ohrfeige geben 'slap', but they are also used in other contexts. Some highly specific verbs like auspeitschen 'whip', which make reference to the INSTRUMENT used, explicitly denote some type of punishment. In comparison to verbs of killing, the MOTIVATION of a beating action is nevertheless probably a minor factor in the semantics of beating verbs.
Using the same pairs of parameters that we compared for verbs of eating and drinking and verbs of killing, we can postulate the following hierarchies: The hierarchies are, obviously, similar to those proposed for verbs of killing, but there is an important difference. Languages seem to put more emphasis on the RESULT than on the MOTIVATION of beating.
Verbs of cutting
The action of cutting, i.e., of using a of sharp INSTRUMENT to change the physical integrity of an object, is just as dramatic an act of interference into the existence and shape of living organisms or objects as the actions discussed before, but in contrast to the last two domains this action is typically associated with creative activities such as preparing food, constructing, repairing sth., etc. (for a comparative study, cf. the special issue of Cognitive Linguistics edited by [START_REF] Majid | The semantic categories of CUTTING and BREAKING events: A crosslinguistic perspective[END_REF], in particular Majid et al. 2007).If we look at our three European languages again which provide the starting point for our investigation, we note that there is not much differentiation in the basic vocabulary of English. In addition to the most general and most versatile verb cut, and its combinations with particles (across, off, out, up, through, lenghthwise) (beschneiden, zerschneiden, abschneiden, anschneiden, aufschneiden, ausschneiden). The verb most closely corresponding to découper in French is zuschneiden.
In Oceanic languages we find a wide variety of verbs of cutting whose choice depends f. cha-pèrè 'cut efficiently' (-pèrè/-bèrè 'efficiently') g. cha-pöru 'cut the bark from every part of the stem' (pöru/-böru 'peel') h. cha-puru 'cut in two' (-puru/-buru 'cut in two vertically')
(ii) Choice depends primarily on the PATIENT (material to be cut)
In the following examples from East Futunan the choice of the verb depends primarily on the PATIENT, i.e. on the material to be cut (e.g. hair, grass, wood, etc.), even though the INSTRUMENT may also be implied.
( First, it is obvious that the PATIENT plays a more prominent role than the AGENT. With respect to the relation between INSTRUMENT and MANNER, we can note that there seems to be little difference between the two parameters in the languages investigated by us. European languages care little about either of them, and the Oceanic languages that we have considered make distinctions according to both parameters. In lack of further comparative evidence, we will therefore assume that both parameters are ranked equally. The RESULT, finally, is clearly a very prominent aspect of meaning and is certainly more prominent than the MOTIVATION of an action, since manipulation of and interference with the integrity of an object is usually goal-directed.
The hierarchies characterizing the domain of cutting verbs can thus be represented as follows: As has been mentioned, these hierarchies are basically identical to those characterizing verbs of beating, with the exception that there does not seem to be any noticeable difference between INSTRUMENT and MANNER in the class of cutting verbs.
Some generalizations
We have been rather cautious in formulating our generalizations and have only opposed pairs of parameters to each other which make a similar contribution to the predication -AGENT vs.
PATIENT, INSTRUMENT vs. MANNER, MOTIVATION vs. RESULT. One generalization that emerged from all verb classes -quite unsurprisingly -is that the PATIENT is encoded more prominently than the AGENT. The following hierarchy can thus be assumed to be more or less general (cf. also Kratzer 1996 and many others on the different statuses of AGENTS and PATIENTS in predications):
(40) PATIENT > AGENT Distinctions according to the PATIENT have been found in all classes of verbs under consideration, and given that the nature of the PATIENT has a considerable impact on the intrinsic properties of an event, this is not surprising. We can make the following generalization:
(41) Generalization I:
Restrictions on, or implications about, the nature of the PATIENT are more commonly lexicalized than restrictions on, or implications about, the AGENT. While all of the activities have in common that they imply the use of some INSTRUMENT, they differ in their internal event structures. Eating and drinking are complex events, with specific sub-events, e.g. biting, chewing and swallowing in the case of eating. Beating events, by contrast, are basically punctual and 'monolithic', i.e., they do not comprise sub-events but are typically carried out with a single movement (with the arm). Killing events are also basically punctual, or are at least conceived as such -as a matter of fact, intrinsically so, because by their very nature they focus on the endpoint of the action. Cutting events are located in between eating verbs and beating events with respect to the internal complexity of their event structure. For example, cutting often implies repeated movements in opposite directions and can thus also been broken down into sub-events.
The generalization that emerges from the considerations made above is the following:
(43) Generalization II:
The MANNER of an event is lexicalized more commonly in verbs denoting internally complex events, i.e., events comprising clearly distinguishable subevents.
Let us now turn to the parameters MOTIVATION and RESULT. These parameters are considered together because they correspond to the initial and the final stage of an event, respectively.
We have found the following hierarchies: The difference seems to be that killing is an action which, by its very nature, can be assumed to carry ethical implications. One cannot kill just like that, and any killing event needs to be motivated in some way. This is obviously different for eating and cutting, though beating, too, may require some ethical justification at times.
Towards explanations
We have discussed some dimensions of variation along which specific verb classes differ, and we have made some generalizations on the basis of examples from a rather selective sample of languages. We will now consider possible explanations for the patterns and limits of variation that can be observed in the domain of event descriptions under discussion. The generalizations made so far lend themselves to three types of explanations, two of them 'system-internal' and one 'system-external'. First, we can assume that there is a general tendency for verbs to encode 'more intrinsic' properties to a greater extent than 'more extrinsic' ones. In other words, the stronger the impact of a parameter on the internal make-up of a given event, the more likely the relevant parameter will be encoded lexically. This principle accounts for the fact that PATIENTS are more prone to be encoded lexically than AGENTS, and that INSTRUMENTS and MANNER specifications are more likely to be encoded than TIME and PLACE. The explanatory principle of this tendency is perhaps one of 'encoding economy': Intrinsic properties of events lead to more homogeneous ('natural') classes of events, and homogeneous or natural classes of events will occur more often in conversation than highly specific ones. The degree of homogeneity of an event description can thus be assumed to be reflected in lexicalization patterns, and we propose the following explanation:
(45) Explanation I:
The more closely a parameter of event description interacts with the intrinsic properties of the event in question, the more likely it will be encoded lexically, because lexical items tend to correspond to natural classes recurring in natural discourse, and events form natural classes on the basis of intrinsic, rather than extrinsic, properties.
The second principle concerns the compatibility of events or event descriptions with specific types of modification. MANNER predicates specify the internal organization of a given event.
In order to be susceptible to such modification, there must be a certain 'leeway' for ways in which an event can take place. For example, a punctual event like an explosion does not lend itself to 'internal' modification; only the 'force' of the explosion provides some room for variability. An eating event, by contrast, implies a specific way of putting food into one's mouth, with or without biting, a specific type of chewing as well as relations between such sub-events (e.g. simultaneity vs. sequentiality). This type of 'internal complexity' leaves room for modification; one can eat noisily or quietly (in the chewing phase), one can chew with an open or closed mouth, one can eat fast or slowly (predicated of the chewing sub-events and the succession of swallowing sub-events), etc. This observation provides the basis of the explanation in ( 46):
(46) Explanation II: Descriptions of complex events, i.e., descriptions of events comprising several (more or less clearly distinguishable) sub-events, lend themselves more to MANNER modification because a higher number of sub-events (and relations between sub-events) implies a higher number of aspects of an event description to which MANNER predicates can apply.
Finally, we have seen that there is at least one explanatory factor that is 'system-external', in the sense that it does not concern the relationship between form and meaning, but the relation between the speech community and the linguistic system. As has been pointed out, languages tend to encode the MOTIVATION of a killing event to a greater extent than they encode the MOTIVATION of any other event type that we have considered. This is intuitively plausible, as the MOTIVATION of a killing event is an important piece of information, certainly much more important than the MOTIVATION for cutting an onion or a piece of meat. More generally speaking, we can explain this tendency by assuming that languages tend to lexicalize those aspects of event descriptions that 'matter most' to a given speech community. This is perhaps a trivial finding; at the same time, however, it leads over to matters of linguistic relativity, a highly controversial and certainly non-trivial topic. The following formulation is an attempt to find a balance between a more or less trivial observation and a strong -linguistically relative -claim. It makes reference to [START_REF] Grice | Logic and conversation[END_REF] Cooperative Principle:
(47) Explanation III:
Languages tend to lexicalize those aspects of event descriptions which affect the social life of the relevant speech communities, because important information is frequently provided, satisfying the Cooperative Principle, and thus tends to be conventionalized and lexicalized to a greater extent than unimportant information.
While the three explanations given above emerged more or less directly from the generalizations made in Section 4.4, we would finally like to discuss an additional factor which has not been mentioned so far. It seems to us that the amount of information conveyed by a given parameter plays an important role in the probability of that parameter being lexicalized in a given language. A parameter can be assumed to be informative to the extent that it allows the hearer to make inferences about other parameters. Languages can be expected to lexicalize those parameters that allow speakers to make as many inferences as possible.
Let us illustrate this point with eating verbs. Given that eating is a rather heterogeneous
2
Cf. the special issue of Linguistics, 50.3, 2012, edited by M. Koptjevskaya-Tamm and M. Vanhove for a recent survey, especially the introduction (Koptjevskaya-Tamm 2012).
( 8 )
8 It is raining steadily. (9) ∃e [RAIN(e) ˄ STEADY(e) ˄ t e ⊃ t 0 ]
), and languages differ in the ways in which they distribute the two predications WALK(e) and DIRECTED.INTO(e,house) over the sentence. (18) John walked into the house ∃e[WALK(e) ∧ DIRECTED.INTO(e,house) ∧ Theme(hohn,e) ∧ t e < t 0 ]
in their lexical entries for the concept 'take in food and liquids' in the selectional constraints imposed by their core arguments (AGENT, PATIENT) as well as adjuncts and their semantic counterparts 'circumstantial relations' (MANNER, TIME, PLACE, INSTRUMENT), which may also be components of verbs of eating and drinking. Moreover, the RESULT of an eating or drinking event may be encoded lexically.
translations for the English verb shoot in German, depending on the PATIENT and on the RESULT of the activity. Consider the examples in (26): (26) a. schießen Karl hat in der letzten Jagdsaison 10 Wildscheine geschossen. 'Charles shot 5 wild boars during the last hunting season.' b. abschießen Jäger sollen noch mehr Wild abschießen. 'Hunters are urged to shoot more game.' c. erschießen Die Terroristen haben vier Zivilisten erschossen. 'The terrorists shot 4 ordinary civilians.' d. totschießen Wir mussten den entlaufenen Löwen totschießen. 'We had to shoot the escaped lion.' e. niederschießen Der Polizist wurde auf offener Straße niedergeschossen. 'The police man was shot in the street.'
kick (foot), punch (hand), slap (hand), smack (hand), cane (stick), whip, flog (whip, rod), lash, flail, which incorporate a reference to the INSTRUMENT of the action. The last five of these are de-nominal verbs indicating the INSTRUMENT explicitly and are typically found in contexts of punishing. What we find essentially in these English verbs of exercising physical force is thus differentiation according to the parameters RESULT and INSTRUMENT.
primarily on the INSTRUMENT (including body parts) used, on the RESULT and the MANNER of the action, as well as on the PATIENT of the activity. The following list is a first attempt to systematize the relevant factors relevant for the choice of a verb. (i) Choice depends primarily on the INSTRUMENT In Xârâcùù (New Caledonia), the first part of the verbal compound indicates the INSTRUMENT or the body part involved in the cutting event. The following expressions are examples of such first parts: ki-< kiri 'saw', kwi-'cut with a tool in the hand, from top to bottom', pwâcut or split with a warclub', cha 'cut with an axe or a saber held in the fist'. The second part of a compound typically refers to the MANNER or the RESULT of the cutting. (35) Xârâcùù cha-'cut with an axe or a saber held in the fist' a. cha-cöö 'cut the bark vertically' (cöö 'break into fibers') b. cha-chëe 'miss a cut, cut across' (-chëe 'miss') c. cha-gwéré 'succeed in cutting with an axe' (-gwéré 'succeed') d. cha-körö 'cut into pieces' (-körö/-görö 'break into pieces') e. cha-nyûû 'pierce' (-nyûû 'pierce')
As has been mentioned, verbs of killing carry implications about the RESULT, i.e., the PATIENT is dead after the event has taken place. Still, differentiations could be made with respect to the 'physical appearance' of the PATIENT (e.g. zerstückeln 'hack to pieces'). The MOTIVATION of a killing event, by contrast, is an important factor. This is different in the other verb classes considered in the present study. Verbs of eating, beating and cutting focus more on the RESULT of the action than on the MOTIVATION, which is hardly encoded at all.
Table 1
1
). 3
Table 1 :
1 Verbs of eating and drinking in JapaneseIn many languages differentiation of verbs according to the substance of what is consumed is taken much further, and there are even languages that have no 'generic' eating verb of the type commonly found in European languages. Navajo has different verb stems for eating hard, compact things, leafy things, meat, marrow and mushy things, among others (cf.[START_REF] Rice | Athabascan eating and drinking verbs and constructions. The linguistics of eating and drinking[END_REF].A particularly rich inventory of lexical differentiations depending on the type of food taken in is found in East Futunan (cf.[START_REF] Moyse-Faurie | Dictionnaire futunien-français[END_REF]. Some examples of highly specific verb meanings are given in (21). A remarkable phenomenon in this language is also the differentiations drawn between eating certain food alone or in combination with other dishes, as in (21)b. We will return to such differentiations in Section 3.2, where some particularly interesting differentiations found in Melanesian and Polynesian languages are discussed. 'ota 'eat raw things, Tahitian salad' e. otai 'eat certain fruit (grated guava mixed with grated coconut)' f. mafana 'drink the juice of the dish su before eating it' So far we have focused on the core participants (AGENT and PATIENT) for the description of cross-linguistic differentiation of lexical inventories. Let us now turn to the other parameters of variation.The TIME of eating is expressed in such lexemes as déjeuner, goûter, dîner, souper in French as zaftrakat', obedat', uzhinat', etc. in Russian and dine and sup in English or their complex counterparts have breakfast, dinner, tea, supper. The PLACE of eating is rarely expressed, except for cases like piqueniquer 'eating outside' in French. Some languages make lexical differentiations concerning the RESULT of eating, i.e. the effect either on the PATIENT (Germ. aufessen 'eat up', austrinken 'drink up') or the AGENT (sich vollessen, sich sattessen 'eat one's fill', sich überessen 'overeat').4 Having pointed out some general parameters in the lexicalization patterns of eating verbs, we will now turn to a group of languages that exhibits particularly rich inventories of verbs of eating, i.e. selected Melanesian and Polynesian languages.
(21) East Futunan
a. fono'i 'to practice cannibalism'
b. kina 'eat two things together (starchy food and side dishes)'
c. kītaki 'eat starchy food or ripe bananas with coco'
d.
The MANNER of eating is clearly expressed in verbs like wolf down, devour, slurp in English and chipoter, picorer, dévorer, engloutir in French or schlingen, herunterwürgen in German.
More often than not these expressions seem to be based on MANNERS of eating observable in the behavior of animals. As mentioned above, in German the verbs used with animal subjects may also be used with human subjects to describe immoderate eating and drinking.
INSTRUMENTS are rare lexical components of verbs of eating. Examples that come to mind are auslöffeln 'face the music', aufgabeln 'pick/dig up' in German, verbs that are primarily used in metaphorical extensions.
More fine-grained distinctions in Melanesian and Polynesian languages
Some of the parameters discussed in the preceding section can be illustrated with examples from East Futunan (cf.[START_REF] Moyse-Faurie | Dictionnaire futunien-français[END_REF]. In this language a generic verb (kai) corresponding to eat is available and is used both transitively and intransitively. This verb, which is mainly used to describe remarkable manners or habits of eating, can be combined complete complementarity between the generic verb kai and the specialized verbs listed in (21) above. We find kai with one or two objects referring to food and there are also cases of specialized verbs referring to the MANNER of eating, e.g. those in the following Drehu öni, Xârâcùù xwè, Ajië oi). New Caledonian and Polynesian languages have verbs of eating that are restricted to the consumption of sugarcane, orange and all other fruits that are sucked (Xârâcùù xwii, Ajië wa, East Uvean/East Futunan/Tuvaluan gau). Polynesian languages have verbs for raw food (fish, meat, shells), i.e.While such degrees of specificity are surprising from the perspective of European languages, it is probably even more uncommon to find specific verbs which relate not to the type of food, but to the number of types of food consumed. In Polynesian languages there are verbs that are used when only one thing is eaten, i.e., either starch food or bread without any meat or fish, or
examples:
(23) a. ma'ama'aga 'eat excessively' 'ota (East Futunan, East Uvean) and ota (Tuvaluan), deriving from PPn *'ota.
b. pakalamu 'chew well; eat noisily (of people)'
If we broaden out our perspective from the case of East Futunan to Melanesian languages of
New Caledonia and Polynesian languages in general, we get a more or less uniform general
picture, in spite of some differences between New Caledonian Mainland languages (several
with objects denoting various types of food (e.g. kai samukō 'eat only fish and meat/proteins'), but it is just as often used with modifiers indicating manners, quantities and results of eating. Consider the following examples: (22) a. kai fakavale 'to overeat' b. kaikoko 'eat all kinds of things' c. kai mākona 'eat one's fill' d. kai okooko 'eat moderately' e. kai tauvalo 'eat constantly good things' f. kai vasuvasu 'eat in accordance with what is customary' 4
Cf.
[START_REF] Putnam | The syntax and semantics of excess: OVER-predicates in Germanic[END_REF]
for a semantic analysis of 'excess predicates' like overeat.
20
There is no specific terms), the languages of the Loyalty islands (general eating term versus meat/fish distinction) and Polynesian languages (raw versus cooked, only one sort of food or different sorts). Before looking at the more fine-grained and, from the perspective of European languages, remarkable examples, let us briefly consider the higher-level eating terms that are available. As pointed out in Section 3.1, East Uvean has a (honorific) verb which is used for both eating and drinking (taumafa). A more or less general term for 'eat' (kai), which is used both intransitively ('have a meal') and transitively, is found in East Uvean and Tongan, in addition to East Futunan. On the Loyalty Islands there are terms used intransitively and for eating starch food, fruits, vegetables (but not for meat): kaka/kakan in Nengone, and xen in Drehu. The New Caledonian Mainland languages have a term for 'eat' which is used intransitively and for most fruits and salad (but not for bread, coconut, banana or meat), i.e.
Xârâcùù da and Ajië ara.
We can use examples from East Uvean to illustrate some eating verbs relating to the MANNER of food consumption. There is a verb for 'stuffing oneself', i.e. fa'apuku/ha'apuku. If food is swallowed without chewing (ripe bananas), or if an eater has no teeth, momi is used. Noisy eating habits, compared to those of animals, are implied by the verb pakalamu. Finally, there is a verb for enjoying food, i.e. 'unani.
More specialized verbs of eating are typically differentiated into those requiring starch food
(yam, taro, sweet potatoe, rice, banana, manioc, bread)
and those requiring meat, fish or related types of food (e.g. animal products). The first class is found in the New Caledonian Mainland languages Xârâcùù (kê) and Ajië (kâi). All New Caledonian languages have verbs that are used with meat, fish, coconut (perhaps as a metaphorical extension of flesh), as well as egg and milk products (Nengone ia/ian, vice versa. These verbs are also used for leftovers (non-protein food): hamu/hamuko (East Uvean), (kai) samukō (East Futunan), and samusamu (Tuvaluan), all deriving from PPn *hamu.
).
bo 'hit with a stick or a bludgeon'
b. chaamè 'kill s.o. with an axe'
cha 'cut with an axe or a saber'
c. chuuamè 'kill with a fist'
chuu 'hit, pound (with a downward motion, with fist)'
d. fîèmè 'kill with a stick'
fî-< fîda 'hit with an instrument'
e. kwiamè 'kill with a downward movement'
kwi-'kill with an instrument and a downward movement'
f. pwââmè 'kill, beat unconscious with a stick'
pwâ-'action of throwing a war club'
The examples in (26) above also illustrate a further parameter of variation, i.e. the
INSTRUMENT of killing. The English verb shoot and the stem appearing in all its German
counterparts, viz. schießen, denote actions in which a rifle, gun or pistol is used. Consider
now the following additional examples from German and French, where some other
INSTRUMENT is employed:
g. söamè (~ söömè) 'kill, beat unconscious with your hand' sö-'hit, make a circular movement with your hands' h. taamè 'kill with gun, arrow' ta-'shoot, throw a long object' g. tèèmè 'kill with hands, or with a long object' tè-'action with hands'
In addition to the encoding of an INSTRUMENT, RESULT or MANNER of an action, we find occasional restrictions to specific types of PATIENTS. In particular, languages tend to have verbs for beating persons, such as Germ. verprügeln and zusammenschlagen 'beat up'. Verbs restricted to specific types of AGENTs seem to be rare, however. Like verbs of killing, those of beating do not seem to lexically encode the TIME or PLACE of an action at all.
There is, thus, a MANNER component encoded in these verbs:
(33) a. sö-'hit with a circular movement of hand or arm'
b. söchëe 'try to hit with hands'
c. söchèpwîrî 'turn over by hitting'
d. söchö 'bend s. th. by hitting with hand'
e. sögwéré 'throw s.th. on s.o.'
f. sökai 'wipe out with hand (a mosquito)'
g. söpaari 'remove weeds'
h. söpisii 'wipe away'
there are verbs like chop,clip, prune, hew, carve, trim, slit, slice, nearly all of them incorporating some characterization of the RESULT of the action, as well as a few very specialized 'synonyms' such as mow (grass), amputate (leg or arm) exhibiting specific collocational distinctions. Examples of more specific verb meanings are provided by the verb hew, which typically implies an axe as INSTRUMENT and stone or wood as PATIENTS, and the verb slice, which exclusively expresses the RESULT of an action typically corresponding to the use of a knife.In French the major distinction in the corresponding basic vocabulary are the ones between couper, hacher, fendre, émonder, tailler and découper. The first verb is the most general and versatile one and implies neither the use of specific INSTRUMENTS, nor any specific RESULTS. Découper un article means to rearrange the sections of the article, couper un article means to cut or drop the article. In the remaining verbs the RESULT is lexicalized: fender 'separate, create two parts', tailler 'cut with a specific shape in mind', hacher 'cut into small pieces', émonder 'prune a tree'. In German, differentiation between certain subtypes of the general action is again achieved through the use of separable or inseparable prefixes. The resultant distinctions mostly relate to the RESULT of an action
Découper, by contrast, is associated with a specific purpose or goal (i.e., MOTIVATION) and
expresses the process of cutting according to a specific plan (découper l'étoffe, carton) in
order to create something.
Choice depends primarily on the RESULT or MANNER of cutting
36) East Futunan a. autalu 'to cut the weeds with a knife', 'to weed' b. fakainati 'to cut meat into portions'(inati 'parts, portions of meat')As the examples given above show, languages may vary considerably in the extent to which they lexicalize parameters of variation. The European languages that we have considered have rather poor vocabularies in the domain of cutting verbs and basically distinguish between different RESULTS achieved by a cutting action. Other distinctions, in particular distinctions relating to the nature of the AGENT, the PATIENT or the INSTRUMENT, are rare. The MANNER of cutting is of course closely related to the RESULT, but otherwise not prominently encoded in verbal meanings.A completely different picture emerges when we look at Oceanic languages. As has been demonstrated with examples from Xârâcùù, these languages make numerous and highly specific distinctions according to the parameters INSTRUMENT, PATIENT and RESULT, and the MANNER of cutting is also often implied. Even though this diversity renders any generalization in the domain of cutting verbs difficult, we will, again, rank the pairs of dimensions that we also used for the other types of verbs.
c. fakasāfuni 'cut and adorm the hair of the bride'
d. kati'i 'cut (sugar cane, coconut) with teeth'
e. koto 'cut off leaves (of the taro) from their stem by hand'
f. lovao 'cut plants alongside roads'
g. moli'i 'cut off a small piece of something'
h. mutusi 'amputate, cut off the tail of a pig'
i. paki 'cut off leaves or bananas'
j. tā 'cut wood for construction'
k. tā'i 'cut off, harvest (bananas')
(iii) The RESULT of cutting is primarily lexicalized in examples like the following from Xârâcùù
(the second component often incorporates an element of MANNER):
(37) Xârâcùù
pieces')
(iv) chapuru 'cut in two' (-puru/-buru 'cut in two vertically')
(v) chapwîrî 'cut aimlessly' (-pwîrî 'without a method')
(vi) chatia 'split, chop' (tia/-dia, 'split')
(38) ji-'shorten, cut to a specific shape'
a. jikai 'cut up', jikakai 'cut up in pieces'(-kai 'reduce to crumbs')
b. jimîîdö 'sharpen' (mîîdö 'pointed')
a. sërù 'cut into small pieces', sësërù 'cut into very small pieces' b. cha 'cutting with the help of a machete, leading to the following results:
(i) chachëe 'cut crosswise' (-chëe 'miss') (ii) chagwéré 'cut successfully with an axe' (-gwéré avec succès) (iii) chakörö 'cut up into small pieces' (-körö/-görö 'break/cut up into small c. jipöru 'cut off bark, skin, to peel d. jipuru 'slice', 'cut in two' e. jitia 'cut lengthwise'
If we move on to the more 'peripheral' parameters of variation, we note that INSTRUMENT and MANNER are more prominently encoded than TIME and PLACE. This is, again, unexpected, as the TIME and PLACE at which an event takes place are (genuinely) extrinsic, while the MANNER and INSTRUMENT have a stronger impact on the primary event predicate. It is likely that TIME and PLACE will only be encoded in verbs denoting activities that are habitually carried out by a considerable number of a speech community. Eating is such an activity, and we have pointed out that there are in fact lexical distinctions according to the PLACE and TIME of an eating event in European languages.Making an internal differentiation between the INSTRUMENT and the MANNER of an event is tricky, as the two aspects of interpretation often overlap -the use of different INSTRUMENTS implies differences in the MANNER in which an action is carried out. The difference is that an
INSTRUMENT is a 'genuine' participant of an event, while a MANNER is a property of (some
aspect of) the event in question. We consider as INSTRUMENTS only concrete objects,
including body parts. The MANNER of an event thus basically subsumes all those extrinsic
properties which are not related to the use of a specific INSTRUMENT, e.g. the type of
movement made (e.g. straight vs. circular, upward vs. downward, cf. the Xârâcùù examples in
(29)), the 'speed' of movement, etc. We have proposed the following hierarchies for the
classes of verbs investigated by us:
(42) a. verbs of eating/drinking
MANNER > INSTRUMENT
b. verbs of killing and beating
INSTRUMENT > MANNER
c. verb of cutting
INSTRUMENT ≈ MANNER
If liquid food or medication is given to babies or elderly people one can also use boire 'drink' in French (boire le médicament à la cuillère). In Turkish, the same verb (içmek) can be used for drinking and smoking (bir sigara içmek 'to smoke a cigarette').
The verb prügeln, while being a derivate of the noun Prügel historically speaking, is also used generically today, i.e., as a common verb of beating. It implies a high degree of force, however.
schlagen, an-schlagen). The two parameters are hard to keep apart, however, as the DIRECTION of a hitting action -for instance, ein-'in(to)', aus-'out' -has primarily implications on the RESULT, e.g. insofar as hitting 'into' a window implies that the window breaks (ein Fenster einschlagen 'break a window'), and einen Zahn ausschlagen means that a tooth was lost. The originally directional prefixes have thus assumed basically aspectual functions and German verbs of beating thus seem to focus on the RESULT.
In French, frapper, taper, battre are the more general terms, but there are also several specific terms, such as gifler 'slap' (with hand, in the face) or claquer 'beat lightly (with hand)', cogner 'punch', 'bang', 'knock' (hit with fist or instrument in fist), fouetter 'whip', rosser 'thrash (beat in a violent manner)'.
Turning to Melanesian languages, we find that in Xârâcùù, the relevant subset of the vocabulary manifests a higher degree of differentiation than in the two European languages just discussed. As far as the formal expression is concerned, we find an interesting similarity with processes of derivation in Germanic. The verbs to be discussed are compounds where the first element is a prefix derived from a verb of exercising force by reducing all but the first syllable. In addition to the basic general verb sa 'hit, beat', there is a wide variety of verbs exhibiting this basic structure, all expressing variations in the semantic domain of hitting and beating. Interestingly enough, all of these express the semantic dimension INSTRUMENT in addition to the fact of hitting or beating and the RESULT of this activity. The examples in (31) are based on the verb dù-'hit with the fist, punch':
(31) a. dù-'hit with the fist, punch' b. dùchëe 'fail to hit with a punch' c. dùkari 'punch gently' d. dùkè 'box,punch' In (32), some examples are provided of verbs based on the root fî-'hit with an instrument':
(32) a. fîda 'hit with an instrument > fî-reduced form in compounds b. fîakè 'hammer in' c. fîatapö 'hitting on s.th. to explode it' d. fîburu 'break s.th. by hitting' activity, the (more) intrinsic properties of eating events are, to a considerable extent, a function of the (more) extrinsic properties. The type of food consumed (the PATIENT) is the most informative parameter, because it conveys information about the MANNER of eating and the AGENT as well, e.g. insofar as meat is consumed in a different way than soup, and insofar as humans tend to eat different things than animals (e.g. schnitzel with salad vs. raw meat).
Depending on cultural differences, we can also expect specific types of food to be consumed at specific times of the day. It is thus not surprising to find that there is such enormous variation in the domain of eating verbs depending on the properties of the PATIENT.
While the fact that PATIENTS are encoded prominently in eating events is not specific to that class of verbs, we have noticed that eating verbs, unlike all of the other classes considered in this study, sometimes also encode the TIME of eating. This observation might be related to the fact that the TIME of eating is also a relatively good predictor of other parameters, at least in European speech communities. Depending on the country or region, one can more or less safely predict what is eaten (the PATIENT) at specific times of the day. Note that the relevant verbs are also restricted to human AGENTs. The amount of information contained in a sentence like Bill is having breakfast is thus considerable -it tells us that Bill is a man (rather than a dog), that he is probably having coffee or tea with his meal, and -assuming that he lives in France -he is likely to have baked goods -baguette or croissants -on his plate.
Summary and Conclusion
Building on earlier contrastive and cross-linguistic work (e.g. [START_REF] Leisi | Der Wortinhalt. Seine Struktur im Deutschen und Englischen[END_REF][START_REF] Plank | Verbs and objects in semantic agreement: Minor differences between English and German that might suggest a major one[END_REF], we hope to have made some new observations on differences in the lexical inventories of different languages for identical or at least similar notional domains, i.e., descriptions of events of eating and drinking, and of physical impact (killing, beating, cutting). What are the general conclusions we can draw from the preceding comparative observations?
The first, somewhat trivial, conclusion is that the semantic parameters differentiating between similar lexical items and similar lexical inventories differ in many more and much more subtle ways than we find in comparing grammatical items. It is for this reason that lexical typology is so much more difficult than morpho-syntactic morphology. Still, we have noted that specific dimensions of variation -those relating to restrictions on, or the encoding of, participant relations, temporal and locative specifications as well as the MANNER and RESULT of an action -allow for certain generalizations. In particular, we have proposed hierarchies ranking pairs of event parameters which make similar contributions to the meaning of a sentence. Thus we found that all types of verbs considered in our study tend to encode the PATIENT to a greater extent than the AGENT, that the lexicalization of the MANNER and INSTRUMENT seems to be more common than that of TIME and PLACE (in the event types investigated by us), and that there are differences, in particular, between the relative rankings of MANNER and INSTRUMENT, depending on the specific verb class investigated.
A second, probably not totally unexpected finding is that languages may differ strikingly in the differentiations they manifest. There are only few verbs of eating and drinking in most European languages, but there seem to be many such verbs in Polynesian languages. A similar contrast is found with respect to verbs of cutting; there are few such verbs in the European languages considered, but a wide variety of them is found in Oceanic languages. We have not discussed any explanations for these differences, and we have refrained from making a point for linguistic relativity in this context. While it is tempting to assume that speech communities with a broader range of dishes will make more relevant distinctions in the verbal lexicon, we are fully aware that such claims are easily falsified, e.g. when speech communities with similar eating and dressing habits differ considerably in their lexical inventories. As has been shown by [START_REF] Plank | Verbs and objects in semantic agreement: Minor differences between English and German that might suggest a major one[END_REF], English has general terms for putting on or taking off clothes, while German lacks such terms. Does that mean that Germans pay more attention to their clothes than Englishmen do? It certainly doesn't.
Even so, we have proposed one explanation that makes reference to habits of a speech community, i.e., the special status of verbs of killing. Killing is such a fundamental action for any speech community, and it is likely to be evaluated in such different ways depending on the MOTIVATIONS of that action -killing can make one a hero (in war), or it can cost one one's live (in the case of murder) -that we can expect the MOTIVATION of a killing event to figure prominently in descriptions of the relevant actions.
In addition to that 'system-external', perhaps partly relativistic, explanation, we have proposed three 'system-internal' explanations, all of which could be regarded as boiling down to matters of economy in the relationship between form and function. First, we have argued that the degree of 'intrinsicness' of an event parameter correlates positively with the probability of that property being encoded lexically, as intrinsic aspects of event descriptions can be assumed to lead to natural classes more easily than extrinsic ones (for instance, it is more likely to find a specialized lexical item for 'raining heavily' than for 'raining in Spain').
Second, we have pointed out that the internal organization of an event -its degree of complexity -has implications for the likelihood with which that event will be modified by a MANNER specification. The more 'sub-aspects' there are of a given event, the more MANNER specifications are conceivable. Finally, we have argued that 'informativity' may play a role, and that languages may tend to encode those parameters lexically that allow hearers to make inferences about other parameters.
We are fully aware that the observations and suggestions made in this study are tentative, which is why we have added the hedge 'programmatic' to the title of this contribution. We have proposed a framework allowing for the formulation of generalizations by ranking pairs of event parameters, based on a Neo-Davidsonian event semantics, hoping that this method will prove useful when more data is considered. This is, obviously, our main task for future studies. | 93,320 | [
"754846"
] | [
"554401",
"307126",
"406905"
] |
01477423 | en | [
"info"
] | 2024/03/04 23:41:46 | 2016 | https://theses.hal.science/tel-01477423/file/these_archivage_3370238o.pdf | L 'université Pierre
M Jabier Martinez
Dr Jean-Marc Jézéquel
Dr Klaus Schmid
Dr Jacques Klein
Dr Pascal Poizat
Dr Yves Le Traon
Dr Mikal Ziane
Co-Directeur
Dr Tewfik Ziadi
THESE DE DOCTORAT DE
Keywords: Location Benchmarking framework, 91
Software Product Lines (SPLs) enable the derivation of a family of products based on variability management techniques. Inspired by the manufacturing industry, SPLs use feature configurations to satisfy different customer needs, along with reusable assets associated to the features, to allow systematic and planned reuse. SPLs are reported to have numerous benefits such as time-to-market reduction, productivity increase or product quality improvement. However, the barriers to adopt an SPL are equally numerous requiring a high up-front investment in domain analysis and implementation. In this context, to create variants, companies more commonly rely on ad-hoc reuse techniques such as copy-paste-modify.
Capitalizing on existing variants by extracting the common and varying elements is referred to as extractive approaches for SPL adoption. Extractive SPL adoption allows the migration from single-system development mentality to SPL practices. Several activities are involved to achieve this goal. Due to the complexity of artefact variants, feature identification is needed to analyse the domain variability. Also, to identify the associated implementation elements of the features, their location is needed as well. In addition, feature constraints should be identified to guarantee that customers are not able to select invalid feature combinations (e.g., one feature requires or excludes another). Then, the reusable assets associated to the feature should be constructed. And finally, to facilitate the communication among stakeholders, a comprehensive feature model need to be synthesized. While several approaches have been proposed for the above-mentioned activities, extractive SPL adoption remains challenging. A recurring barrier consists in the limitation of existing techniques to be used beyond the specific types of artefacts that they initially targeted, requiring inputs and providing outputs at different granularity levels and with different representations. Seamlessly address the activities within the same environment is a challenge by itself. This dissertation presents a unified, generic and extensible framework for mining software artefact variants in the context of extractive SPL adoption. We describe both its principles and its realization in Bottom-Up Technologies for Reuse (BUT4Reuse). Special attention is paid to model-driven development scenarios. A unified process and representation would enable practitioners and researchers to empirically analyse and compare different techniques. Therefore, we also focus on benchmarks and in the analysis of variants, in particular, in benchmarking feature location techniques and in identifying families of variants in the wild for experimenting with feature identification techniques. We also present visualisation paradigms to support domain experts on feature naming during feature identification and to support on feature constraints discovery. Finally, we investigate and discuss the mining of artefact variants for SPL analysis once the SPL is already operational. Concretely, we present an approach to find relevant variants within the SPL configuration space guided by end user assessments. hanks. Thanks to my parents Chuchi and Josefa. Gracias por el amor y apoyo incondicional, por vuestro trabajo duro que me dio la oportunidad y la libertad de elegir mi camino. No podré agradecer suficiente todo lo que habeis hecho por
Context
This dissertation focuses on Software Product Lines (SPLs). Special emphasis is given to the challenges lying in the process of their adoption. Concretely, we concentrate on the case where a company can leverage existing variants during the adoption process that migrates from a single-system development mentality to systematic and planned reuse [START_REF] Northrop | Software product line adoption roadmap[END_REF].
Software product lines and extractive adoption
SPLs are a mature paradigm for variability management in software engineering [NC + 09, PBL05, [START_REF] Apel | Feature-Oriented Software Product Lines -Concepts and Implementation[END_REF][START_REF] Van Der Linden | Software product lines in action -the best industrial practice in product line engineering[END_REF]. They enable the definition of a family of product configurations and the later systematic generation of the associated product variants. The inspiration for this engineering practice is commonly attributed to the manufacturing industry, where different predefined reusable components are usually combined to satisfy different customer needs. An SPL is formally defined as "a set of software-intensive systems that share a common, managed set of features satisfying the specific needs of a particular market segment or mission, and that are developed from a common set of core assets in a prescribed way" [NC + 09].
Achieving large-scale productivity gains and improving time-to-market and product quality are some of the claimed benefits of SPL Engineering (SPLE) [NC + 09, vdLSR07]. Some acknowledged examples can be found in the SPL hall of fame [START_REF] David M Weiss | Software product line hall of fame[END_REF], which reports commercially successful implementations of the SPL paradigm in companies from different domains ranging from avionics and automotive software, to printers, mobile phones or web-based systems.
Despite these benefits, the barriers to SPL adoption are numerous [START_REF] Northrop | Software product line adoption roadmap[END_REF][START_REF] Krueger | Easing the transition to software mass customization[END_REF]. Firstly, compared to single-system development, variability management implies a methodology that highly affects the life-cycle of the products as well as the processes and roles inside the company. Secondly, it has to be assumed that SPL adoption is a mid- to long-term strategy, given that preparing the reusable assets will be, at least in the first product deliveries, more costly than single-system development. It is often the case -around 50% according to a survey with industrial practitioners [BRN + 13] -that companies already have existing products that were implemented using opportunistic ad-hoc reuse to quickly respond to different customer needs. We illustrate this case on the left side of Figure 1.1, where the goal is to change from a next-contract vision to a strategic view of a business field through the adoption of an SPL [START_REF] Van Der Linden | Software product lines in action -the best industrial practice in product line engineering[END_REF], as shown on the right side of the figure.
Figure 1.1: Extractive SPL adoption: from a single-system development mentality with ad-hoc, opportunistic reuse (next-contract vision) to a Software Product Line with variability management for systematic reuse (satisfying the needs of a market segment).
In SPL Engineering (SPLE), we can distinguish two main processes, domain and application engineering, illustrated in the horizontal layers of Figure 1.2. Domain engineering consists in analysing the domain in order to manage a feature model with the identified features of the product family and the constraints among them [KCH + 90]. Then, a solution for this domain is implemented with the preparation of the reusable assets related to the features. In application engineering, customer requirements are analysed to create product configurations through the selection of features. Later, configurations are used to automatically derive the products with a mechanism that makes use of the reusable assets. To highlight the difference between domain and application engineering, the former is commonly called development for reuse while the latter is called development with reuse.
Figure 1.2: The SPLE processes: domain engineering (domain analysis yielding the feature model, domain implementation yielding the reusable assets) and application engineering (requirements analysis yielding configurations, product derivation yielding the products through the derivation mechanism).
For the adoption of these SPLE practices, having legacy product variants can be seen as an enabler for a quick adoption using an extractive approach [START_REF] Krueger | Easing the transition to software mass customization[END_REF]. However, in practice, extractive SPL adoption is still challenging. Mining the artefact variants to extract the feature model and the reusable assets gives rise to technical issues and decisions which need to be made by domain experts. Several activities are identified while mining artefact variants. If there is no complete knowledge of the common, alternative, and optional features within the product family, then the artefact variants should be leveraged for feature identification and naming. Also, we will need to identify the implementation elements associated to each feature. If features are known, feature location techniques can directly be used to obtain these associations. In addition, feature constraints discovery should be performed to guarantee the validity of certain feature combinations, for example, to avoid structurally invalid artefacts resulting from combining incompatible features. Finally, the conceptual structure of the feature model needs to be defined for efficient communication among stakeholders, and, certainly, reusable assets should be extracted to obtain an operative SPL.
Overview of challenges and contributions
This section presents the challenges we faced, a summary of the contributions, and the organization of the dissertation.
Challenges
We faced four challenges in the context of SPL extraction and analysis. Three of them fall under the umbrella of extractive SPL adoption and the fourth is related to SPL analysis. We illustrate them in Figure 1.3 and describe them below.
Harmonization: The lack of unification of extractive SPL adoption techniques
Practitioners lack end-to-end support for chaining the different steps of extractive SPL adoption. Addressing, within the same environment, the different activities starting from feature identification and location up to actually extracting the SPL, is a challenge by itself. Each approach requires inputs and provides outputs at different granularity levels and with different formats, complicating the integration of approaches in a unified process. In addition, the proposed approaches are often tied to specific types of software artefacts. There is a large body of algorithms and techniques for achieving the different objectives in SPL adoption (e.g., [RC13a, RC12a, YPZ09, LAG + 14, SSD13, ASH + 13c, ZHP + 14, RC13b, AV14, LMSM10, LHGB + 12, CSW08, HLE13, DDH + 13, TDAJ16, SRA + 14, HLE13, BABBN15, BBNAB14, IRW16]). Unfortunately, because the implementation of such algorithms is often specific to a given artefact type, re-adapting it for another type of artefacts is often a barrier. There is however an opportunity to reuse the principles guiding these existing techniques for other artefacts. When such algorithms underlie a common framework, they can be transparently used in different scenarios. If a set of principles exists for building a framework that integrates various algorithms and supports different artefact types, this challenge can be overcome.
Figure 1.3: The faced challenges, starting from existing artefact variants: how can these legacy products be leveraged for SPL adoption?
Experimentation: The need for case studies and benchmarks for extractive SPL adoption techniques
It is commonly accepted that high-impact research in software engineering requires assessment in realistic, non-trivial, comparable, and reproducible settings [START_REF] Sim | Using benchmarking to advance research: A challenge to software engineering[END_REF]. In the extractive SPL adoption context, which shows a high proliferation of techniques, non-confidential feature-based artefact variants are usually unavailable. When they are available, establishing a representative ground truth is a challenging and debatable subject. Artefact variants created "in the lab", such as variants directly derived from an SPL which is used as ground truth, might provide findings that are not generalisable to software development. The challenge is to continue the search for case studies "in the wild" to draw more representative conclusions about the different techniques. In this dissertation we have concentrated on enabling intensive experimentation with feature identification and location techniques.
Visualisation: The lack of support for domain experts during the extractive SPL adoption process
Visualisation reduces the complexity of comprehension tasks and helps to get insights and make decisions on a tackled problem [START_REF]Readings in Information Visualization: Using Vision to Think[END_REF]. In extractive SPL adoption activities, the approaches proposed in the SPL literature are not focused on providing and validating appropriate visualisation support for domain experts. Mining artefact variants for SPL adoption is an arduous process and it is very optimistic to envision it as a fully automatic approach that does not require human stakeholders' decision-making skills. Concretely, we tackle two visualisation challenges. The first one is assisting in naming during feature identification: feature identification techniques that analyse artefact variants do not have a visualisation paradigm for suggesting feature names. The second challenge, once the features are identified, consists in designing a visualisation for feature constraints discovery in order to refine the envisaged feature model.
Estimation: The need to understand and foresee the end users' expectations regarding the SPL products
Once the SPL is operational in a company, independently of whether the SPL was extracted or engineered from scratch, there is a need to satisfy the end users of the software products. This implies the exploration of different design alternatives. Exploring alternatives is especially relevant when considering products that have Human Computer Interaction (HCI) components. Variability management, as proposed in SPLE, enables the expression and derivation of the different alternatives. In SPLs deriving human-centered products, there are important barriers to understanding users' perception of the products. First, the users cannot assess all possible products as the number of possibilities can be prohibitively large. Also, human assessments are subjective by nature. In addition, getting end user assessment (e.g., usability tests) is resource expensive. These facts make the selection of the most adequate products for the customers challenging.
Summary of contributions
The harmonization, experimentation and visualisation challenges are addressed in this dissertation by providing a framework and a process for mining and analysing existing products.
That means that all of them are seamlessly integrated in the same conceptual and technical environment called BUT4Reuse.
Harmonization:
• Bottom-Up Technologies for Reuse (BUT4Reuse): A unified and extensible framework for extractive SPL adoption supporting several artefact types.
• Model Variants to Product Line (MoVa2PL): A complete example of using BUT4Reuse for extractive SPL adoption in the case of model variants.
Experimentation:
• Eclipse Feature Location Benchmark (EFLBench): A benchmark for intensive evaluation of feature location techniques in artefact variants.
• Android Application families identification (AppVariants): An approach for discovering families of Android applications in large repositories in order to experiment with feature identification techniques.
Visualisation:
• Variability word Clouds (VariClouds): A visualisation paradigm to support domain experts in feature naming during feature identification.
• Feature Relations Graphs (FRoGs): A visualisation paradigm to analyse feature relations and identify feature constraints.
The estimation challenge is addressed by providing an approach based on analysing the configuration space.
Estimation:
• Human-centered SPL Rank (HSPLRank): An approach to rank configurations based on the expected acceptance of the products by a profile of end users.
Organization of the dissertation
The dissertation is organized in six parts. The first part presents background information. The next four parts describe the contributions of this thesis, and the last part presents concluding remarks and outlines open research directions.
Part I: Background and state of the art
The technical background is presented in Chapter 2 and related work from the state of the art is presented in Chapter 3.
Part II: Mining artefact variants for extractive SPL adoption
Harmonization: Chapter 4 introduces the BUT4Reuse unifying framework for extractive SPL adoption. Its principles, genericity for supporting different artefact types and extensibility options are presented. This framework is the basis on which Part III and Part IV are built.
Chapter 5 focuses on reporting the complete realization, usage and validation of the framework in model-driven development scenarios with model variants (MoVa2PL).
Part III: Collecting artefact variants for study and benchmarking
Experimentation: Chapter 6 presents the rationale and interest of using variants of the Eclipse plugin-based system to benchmark feature location techniques in artefact variants (EFLBench). Chapter 7 presents the mining of Android application repositories in the search for families of mobile applications to experiment with feature identification techniques (AppVariants).
Part IV: Assistance and visualisation in artefact variants analysis
Visualisation: This part presents two contributions regarding visualisation in extractive SPL adoption. Chapter 8 presents a visualisation paradigm to support feature naming during feature identification using word clouds (VariClouds). Chapter 9 presents a novel visualisation paradigm for feature relations analysis helping with constraints discovery (FRoGs).
Part V: Configuration space analysis for estimating user assessments
Estimation: Chapter 10 presents the principles and steps of a semi-automatic approach to rank the possible configurations of a given SPL regarding the expected acceptance of the products by a profile of end users (HSPLRank). Then, Chapter 11 presents and discusses two case studies conducted in two different scenarios.
Part VI: Conclusions
Chapter 12 presents the conclusions and outlines open research directions.
Part I
Background and state of the art
2.1 Software reuse
Software reuse is the process of creating new software by reusing pieces of existing software rather than from scratch [START_REF] Krueger | Software reuse[END_REF]. It may be performed simply for convenience, as in the cases of the opportunistic copy-paste-modify approach and the project clone-and-own technique, or it may represent more thoughtful solutions to complex engineering problems, as in the case of Software Product Line Engineering (SPLE) [NC + 09, PBL05, ABKS13, vdLSR07].
2.1.1 The history of software reuse
It is natural for humans to apply or adapt a known solution to similar problems. In software engineering, the analysis and continuous improvement of software reuse paradigms were of utmost importance for productivity and quality gains in the development of software. It is considered that this research started in the late 1960s, mainly by proposing the reuse of subroutines and presenting the potential benefits of establishing a sub-industry and market for software components [START_REF] Malcolm | Mass-produced software components[END_REF]. Nowadays, reusing methods, libraries or off-the-shelf components in software development is familiar to any software engineering practitioner and has become part of our daily activities.
In the development of single systems, opportunistic copy-paste-modify is a massively used ad-hoc reuse technique. This ad-hoc technique provides very short-term productivity gains, even if potential clones introduced by this practice are known to have a cost in maintenance [START_REF] Juergens | Do code clones matter?[END_REF]. We are also witnessing an increasing practice of the clone-and-own reuse paradigm (also known as fork-and-own), mainly because of the vast proliferation of public software repositories. These repositories allow us to take entire software projects and modify them to fit our needs. Other advanced reuse techniques that we benefit from are service reuse (e.g., Software as a Service) or generators (e.g., transformations in Model-Driven Development). However, already in the mid 1970s the software engineering community started to discuss program families for the realization of systematic, planned and strategic reuse [START_REF] Lorge | On the design and development of program families[END_REF]. Inspired by the manufacturing industry, mass customization started to become a reality in software engineering. This gave rise in 1990 to the formalization of the Feature-Oriented Domain Analysis (FODA) method, which is considered an important milestone in the history of software reuse [KCH + 90].
Software product lines
Software Product Lines (SPLs) were presented in Section 1.1 and illustrated in Figure 1.2. With an SPL, a company can benefit from an automatic derivation of a family of products through configurations based on customer requirements. This is possible after the analysis of the domain and its implementation through reusable assets. We provide in this section more details about these processes.
Feature models and configurations: Features are the prime entities used to distinguish the individual products of an SPL [BLR + 15]. The FODA method established the definition of feature: a feature is "a prominent or distinctive user-visible aspect, quality, or characteristic of a software system or systems" [KCH + 90]. In this context, feature models (FM) are widely used in SPLE to describe both variability and commonalities in a family of product variants [START_REF] Benavides | Automated analysis of feature models 20 years later: A literature review[END_REF]. A given FM is a hierarchical decomposition of features including the constraints among them. Constraints are formalized in the FM to guarantee the correctness of valid products. Also, even if a product might be structurally valid, a feature combination can be semantically invalid in the targeted domain, requiring the formalization of constraints to capture this domain knowledge.
The feature diagram notation is shown in Figure 2.1a regarding an illustrative and simplified example of an electronic shop. The E-Shop FM consists of a mandatory feature Catalogue, two possible Payment methods from which one or both could be selected, an alternative of two Security levels and an optional Search feature. Apart from the implicit constraints defined through the hierarchy (e.g., BankTransfer requires Payment), cross-tree constraints can be defined (e.g., CreditCard requires a High level of security) [START_REF] Benavides | Automated analysis of feature models 20 years later: A literature review[END_REF].
Given a FM, a configuration is a selection of features that satisfy all the constraints. Figure 2.1b shows a configuration example where CreditCard and High security are selected as well as the mandatory features. On the contrary, BankTransfer, Standard security and Search are not selected. Table 2.1 shows eight different configurations of the E-Shop FM. Conf 1 in the table corresponds to the configuration in Figure 2.1b. If we consider the existence of an SPL for the domain represented in this FM, the product variant derived from Conf 1 will allow the end user to use the CreditCard and, for example, insert the credit card information. However, in the variant derived from Conf 3, the user will not have the possibility to pay by credit card.
The eight configurations presented in Table 2.1 purposely represent an exhaustive enumeration of all the possible valid configurations. This set containing all the possible configurations from a FM is referred to as the configuration space. The configuration space can be prohibitively large. For example, researchers have reported the challenges of dealing with the more than eight thousand features of the Linux kernel [STE + 10], or the more than five hundred possible configurations of an SPL for elevators [START_REF] Classen | Symbolic model checking of software product lines[END_REF]. If we consider n optional and independent features, the configuration space consists of 2 n possible configurations. For instance, with n equal to 33 we could have a different configuration for every human alive today.
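Since Figure 2.1 and Table 2.1 are not reproduced here, the following sketch (our own illustration, not taken from the original material) encodes the E-Shop constraints described above as simple boolean checks and exhaustively enumerates the configuration space; the class, variable and method names are assumptions. Running it confirms the eight valid configurations mentioned above.

import java.util.ArrayList;
import java.util.List;

public class EShopConfigurationSpace {
    public static void main(String[] args) {
        // Only the non-mandatory features are encoded; EShop, Catalogue, Payment
        // and Security are mandatory and therefore implicitly present.
        String[] features = {"BankTransfer", "CreditCard", "High", "Standard", "Search"};
        List<boolean[]> valid = new ArrayList<>();
        for (int bits = 0; bits < (1 << features.length); bits++) {
            boolean[] sel = new boolean[features.length];
            for (int i = 0; i < features.length; i++) {
                sel[i] = ((bits >> i) & 1) == 1;
            }
            boolean bankTransfer = sel[0], creditCard = sel[1];
            boolean high = sel[2], standard = sel[3];
            boolean paymentOk = bankTransfer || creditCard;   // or-group: at least one payment method
            boolean securityOk = high ^ standard;             // xor-group: exactly one security level
            boolean crossTreeOk = !creditCard || high;        // CreditCard requires High security
            if (paymentOk && securityOk && crossTreeOk) {
                valid.add(sel);
            }
        }
        System.out.println("Valid configurations: " + valid.size()); // prints 8
    }
}

Such a brute-force enumeration is only feasible for toy examples; as noted above, realistic configuration spaces grow exponentially with the number of independent optional features.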
Product derivation: If we focus on domain analysis, features are only symbols or labels [START_REF] Czarnecki | Mapping features to models: A template approach based on superimposed variants[END_REF]; however, a feature is an abstraction that should usually have a counterpart or an impact in the SPL derivation process. Therefore, the reusable assets are the implementation of these abstractions. In order to derive products from feature configurations, negative and positive variability are the two main approaches conditioning the nature of the reusable assets [START_REF] Völter | Product line implementation using aspect-oriented and model-driven software development[END_REF]. Figure 2.2a illustrates negative variability using the E-Shop example. In negative variability, a product is derived by removing the parts associated to the non-selected features from a "maximal" system containing all the features (also known as 150%, single-copy or single-base SPL representation [START_REF] Rubin | Combining related products into product lines[END_REF][START_REF] Rubin | N-way model merging[END_REF], or "overall" system [START_REF] Völter | Product line implementation using aspect-oriented and model-driven software development[END_REF]). In positive variability, illustrated in Figure 2.2b, the derivation process starts with a minimal core of the system, and then a mechanism is able to add the parts associated to the selected features. Examples of negative variability mechanisms are annotative approaches. For instance, preprocessor directives are used in source code artefacts to exclude source code fragments according to the selected features. Figure 2.3 shows an illustrative example of annotations using Antenna [START_REF] Pleumann | Antenna[END_REF] in a Java class managing the E-Shop user interface (UI). The source code is annotated between #if and #endif clauses. #if Search indicates the source code that should be included in case Search is selected in the configuration. On the right side, we present the result of a derivation where Search is not selected and, therefore, the annotated source code that implemented Search is excluded. Another example of a concrete annotative technique is Javapp, which is used in the ArgoUML SPL [START_REF] Vinicius Couto | Extracting software product lines: A case study using conditional compilation[END_REF] to annotate 31% of the total number of lines of code (LOC) of ArgoUML. Concretely, close to forty thousand LOC were annotated using preprocessor directives. Also, in the C code of the Linux kernel, KConfig annotations [START_REF] Sincero | The linux kernel configurator as a feature modeling tool[END_REF] are used to manage this huge SPL. Regarding positive variability, compositional approaches enable the addition of implementation fragments in specified places of a system. A relevant example is FeatureHouse, which is based on source code superimposition and merge [START_REF] Apel | FEATUREHOUSE: Languageindependent, automated software composition[END_REF]. Figure 2.4 shows the same Java class regarding the E-Shop UI but using FeatureHouse instead of Antenna. With the compositional approach we define two separated reusable assets that are composed during derivation when Search is selected.
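Figure 2.3 itself is not reproduced here. The fragment below is a hypothetical reconstruction of what such an annotated class could look like, following the description above; the class and method names are our own assumptions, and only the comment-based #if Search ... #endif directive style corresponds to the annotative mechanism described in the text.

// Hypothetical E-Shop UI class with Antenna-style comment directives
// (negative variability). When Search is not selected, the preprocessor
// removes everything between the #if and #endif markers before compilation,
// yielding the variant without search support.
public class EShopUI {

    public void buildMainWindow() {
        addCataloguePanel();
        addPaymentPanel();
        //#if Search
        addSearchBox();
        //#endif
    }

    private void addCataloguePanel() { /* ... */ }

    private void addPaymentPanel() { /* ... */ }

    //#if Search
    private void addSearchBox() { /* ... */ }
    //#endif
}

In the compositional (FeatureHouse-style) counterpart of Figure 2.4, the Search-related members would instead live in a separate reusable asset that is superimposed on the base class only when Search is selected.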
Many technical solutions have been proposed for the SPL derivation mechanism. We can highlight generative programming [START_REF] Czarnecki | Generative programming -methods, tools and applications[END_REF], feature-oriented programming [START_REF] Prehofer | Feature-oriented programming: A new way of object composition[END_REF][START_REF] Batory | Scaling step-wise refinement[END_REF], aspect-oriented programming [KLM + 97] or delta-oriented programming [SBB + 10]. In addition, it is also possible to combine compositional with annotative techniques; such hybrid approaches can simultaneously support negative and positive variability (e.g., implementations of the Common Variability Language (CVL) [START_REF] Haugen | CVL: common variability language[END_REF][START_REF]CVL Tool[END_REF][START_REF] Haugen | BVR -better variability results[END_REF]).
Soft constraints: Apart from the feature constraints presented before, there is another type, known as soft constraints, whose violation does not lead to invalid configurations [START_REF] Czarnecki | Sample spaces and feature models: There and back again[END_REF]. They are often used with the objective of warning the user, providing suggestions during the configuration process [START_REF] Czarnecki | Sample spaces and feature models: There and back again[END_REF] or as a way to capture domain knowledge related to trends in the selection of features. Therefore, the existence of these soft constraints is motivated by the suitability of certain feature selections in product configurations.
We can find many references regarding soft constraints in the literature. Soft constraints are introduced in feature modeling to help stakeholders in the configuration process, providing advice on the selection of features or feature combinations [START_REF] Barreiros | Soft constraints in feature models[END_REF]. In this way, hints constraints were introduced in order to suggest features during this decision-making process [START_REF] Streitferdt | Formal Details of Relations in Feature Models[END_REF][START_REF] Bühne | Modelling dependencies between variation points in use case diagrams[END_REF]. We can also find their opposite, the hinders constraints [START_REF] Bühne | Modelling dependencies between variation points in use case diagrams[END_REF]. They serve as a way to formalize the positive or negative influence of a given feature combination. The commercial feature modeling tool pure::variants has support for soft constraints through the encourages and discourages constraints [START_REF] Beuche | Modeling and building software product lines with pure::variants[END_REF]. All these works indirectly introduced the intuition that soft constraints describe a certain level of uncertainty about the quality of the result of some feature combination.
Figure 2.5 presents an illustrative Car FM [START_REF] Czarnecki | Sample spaces and feature models: There and back again[END_REF]. The hierarchy of features includes mandatory features (Gear, Car), optional features (DriveByWire, ForNorthAmerica), alternative features (Manual, Automatic), and a cross-tree constraint (DriveByWire ⇒ Automatic). Based on domain expert knowledge or on previous experiences (e.g., existing configurations from previous customers), selecting the feature ForNorthAmerica could suggest the selection of Automatic gear. In this dissertation we use a notation for soft constraints consisting in encapsulating the constraint inside a soft constructor. This way encourages and discourages can be formalized as follows:
• encourages: soft(A ⇒ B). Feature A encourages the presence of feature B.
• discourages: soft(A ⇒ ¬B). Feature A discourages the presence of feature B.
Using this notation, a domain expert could explicitly formalize the above-mentioned soft constraint (i.e., soft(ForNorthAmerica ⇒ Automatic)). Therefore, soft constraints enable the capture of this relevant SPL domain knowledge during domain analysis.
Soft constraints are relevant entities in Chapter 9 where we propose a visualisation for domain experts to discover constraints including soft constraints. Also, in Part V, they play an important role in a process to search optimal configurations where we suggest the formalization of soft constraints by domain experts to narrow the configuration space.
It is not only about source code
The potential of reuse is not only restricted to source code, as already suggested in the 1980s [START_REF] Freeman | Reusable software engineering: Concepts and research directions[END_REF]. Most of the attention is paid to source code reuse but documentation, designs, models or components are examples of software artefacts that are being reused too.
Many works targeting different artefact types have been proposed in software reuse research to identify similar fragments by comparing them. For example, clone detection has been proposed not only for source code [START_REF] Kumar | A survey on software clone detection research[END_REF], but also for other artefact types such as models [DHJ + 08], graphs [PNN + 09], dataflow languages [START_REF] Gold | Issues in clone classification for dataflow languages[END_REF] or DSLs implemented under the executable metamodeling paradigm [MGC + 16] to name a few.
Derivation mechanisms for SPLE have also been proposed supporting different artefact types.
In the modeling domain, we can find examples of annotative approaches such as in UML class models [START_REF] Ziadi | Software product line engineering with the UML: deriving products[END_REF][START_REF] Czarnecki | Mapping features to models: A template approach based on superimposed variants[END_REF], sequence models [START_REF] Ziadi | Software product line engineering with the UML: deriving products[END_REF], activity models [START_REF] Czarnecki | Mapping features to models: A template approach based on superimposed variants[END_REF] or statechart models [START_REF] Rubin | Combining related products into product lines[END_REF]. Another generic approach for dealing with models is CVL [START_REF] Haugen | CVL: common variability language[END_REF], where a base model of a given domain-specific language (DSL) can be modified by removing or adding model fragments. There are plenty of specific techniques, such as those for model transformation reuse [START_REF] Chechik | Perspectives of Model Transformation Reuse[END_REF].
Part II deals with the challenge of proposing a generic approach in mining artefact variants that could support different artefact types. Within this part, Chapter 4 discusses the principles guiding this generic approach and presents how diverse artefact types are supported.
Chapter 5 presents all the details of the use of this generic approach in the case of models.
Approaches for software product line adoption
As mentioned in the introduction, SPL adoption is defined as the process of migrating from some form of developing software-intensive systems with a single-system mentality to developing them as an SPL [START_REF] Northrop | Software product line adoption roadmap[END_REF]. This section presents the approaches to SPL adoption.
2.2.1 Adopting a software product line
The SPL adoption strategy highly depends on the company scenario [START_REF] Northrop | Software product line adoption roadmap[END_REF][START_REF] Krueger | Easing the transition to software mass customization[END_REF]. The adoption factory pattern is a generic adoption roadmap describing several practice areas [START_REF] Northrop | Software product line adoption roadmap[END_REF]. However, it is intended to be instantiated and customized for each case. Although there is no predefined path that works in every situation, Krueger distinguished three categories of SPL adoption strategies [START_REF] Krueger | Easing the transition to software mass customization[END_REF]:
• Proactive: An SPL is designed taking into account the current and future customer needs. The organization proactively identifies and analyses the complete set of envisioned features and create their corresponding reusable assets.
• Reactive: In contrast to the proactive approach where the whole SPL aims to be created before starting the production, the reactive approach aims to create a minimal operative SPL, and later incrementally extend it to adapt to new customer needs. In terms of effort, the reactive approach requires less initial effort than the proactive one.
• Extractive: Existing product variants are leveraged to identify the features and the reusable assets to create the SPL. Mining existing assets is a practice area for establishing SPL production capability and operating the SPL [START_REF] Northrop | Software product line adoption roadmap[END_REF]. Notably, existing assets are project entry points that can be reused to seed, enrich, or augment the different building blocks needed to adopt an SPL [BFK + 99].
Given that this dissertation is mainly focused on mining artefact variants, we pay special attention to the extractive approach.
Extractive software product line adoption
The extractive SPL adoption approach is compatible with both proactive and reactive approaches. For the proactive approach, the features in the existing variants can be leveraged while the envisioned ones will have to be created from scratch. For the reactive approach, the company can use an extractive approach to create an SPL that focuses on exactly the same products as the existing artefact variants, or it can consider only a subset of the features of the variants (e.g., those with higher payoff).
Figure 2.6 illustrates important activities in extractive SPL adoption. It is worth mentioning that we have concentrated on technical activities relevant for this dissertation; therefore, we are not considering other important factors in SPL adoption such as organizational change [START_REF] Bosch | Software product lines: Organizational alternatives[END_REF], economic models [START_REF] Sarmad | A comparative survey of economic models for software product lines[END_REF] or advanced scoping [START_REF] Schmid | Scoping Software Product Lines[END_REF].
Figure 2.6: Activities in extractive SPL adoption: starting from artefact variants, feature identification and naming or feature location yield the features and their associated implementation elements, and feature constraints discovery yields the feature constraints; feature model synthesis and reusable assets construction then complete the extraction.
Given the complexity of the products, a complete upfront knowledge of the existing features throughout the artefact variants is not always available. Several domains of expertise are often required to build a product and different stakeholders are responsible for different functionalities. In this context, we can assume that domain knowledge about the features of legacy variants is scattered across the organization. In some cases it may not be properly documented or, in a worse scenario, part of the knowledge could even be lost.
In a scenario of incomplete information, feature identification should be performed. As discussed by Northrop [START_REF] Northrop | Software product line adoption roadmap[END_REF], "if an organization will rely heavily on legacy assets, it should begin the inventorying part of the mining existing assets", a practice area which consists in "creating an inventory of candidate legacy components together with a list of the relevant characteristics of those components" [NC + 09]. In feature identification, we aim to obtain the list of features within the scope of the input variants as well as the implementation elements related to each feature. During the identification of features there is the feature naming sub-activity to define the names that will be used in the FM. In another scenario, if the features are known in advance, feature location can be performed directly to identify the implementation elements associated to each feature.
The feature constraints discovery activity is important to guarantee the validity of configurations in the envisioned FM. Then, the feature model synthesis activity consists in defining the structure of the FM so that it is as comprehensible for the SPL stakeholders as possible, because the same configuration space can be expressed in very different ways in a FM (e.g., through the features hierarchy). Finally, the implementation elements associated to each feature, obtained during feature identification or location, are important for the reusable assets construction, which prepares the assets targeting a predefined derivation mechanism.
Parts II, III and IV deal with extractive SPL adoption, proposing respectively a generic and extensible framework, support for the experimentation of techniques in this research domain, and assistance to domain experts through visualisation and interaction paradigms.
Software product lines and end users
In many application domains, and regardless of the approach used for SPL adoption, SPLs derive products that end users should interact with. In SPLE scenarios intensively based on Human Computer Interaction (HCI) components, end users are key stakeholders. In general, HCI plays an increasingly important role in the success of software products. Historically, several milestones have drastically changed the software engineering field regarding its relation with users: we can go back to the arrival of personal computers in the 1960s, the huge expansion of the Internet in the mid-1990s, the mass adoption of smartphones in the 2000s and, more recently, the promises of a globalization of ambient intelligence, internet of things, cloud-based workspaces, human-robot interaction or cyber-physical systems with humans in the loop. Computing has progressively evolved towards personalized customization, and initiating a software engineering project increasingly requires accounting for the human aspects of the potential customers.
Human-centered SPLs (HSPL) are the inevitable confluence of the HCI and SPL engineering fields. From a human-centered development perspective, there is the need to satisfy the end user of a software product, and this implies the exploration of different design alternatives. Variability management, as proposed in SPLE, enables the definition and derivation of the different alternatives. In HSPLs, there are three important barriers to understanding users' perception of the product family variants: 1) it is not feasible for users to assess all the possible variants in large configuration spaces, 2) human assessments are subjective by nature and 3) getting user assessment is resource expensive, so there is an interest in minimizing it.
Part V focuses on this challenge by proposing an approach to rank and identify the most appropriate configurations in an HSPL within the boundaries of its configuration space.
3 Related work
Mining artefact variants in extractive SPL adoption
We present related work about the extractive SPL adoption activities described in Section 2.2.2: Feature identification, naming, feature location, constraints discovery, feature model synthesis and reusable assets construction.
Towards feature identification
The assumption that no domain knowledge is available about the features is perhaps too pessimistic and is thus limited to a few scenarios. However, it is also too idealistic to assume that an exhaustive list of the features and their associated implementation elements can be easily elicited from domain knowledge. There are SPL adoption scenarios where the SPL is to be extracted from a single product by separating its features. However, in this dissertation we concentrate on the case of several artefact variants. To distinguish features and their associated elements, researchers have proposed to analyse and compare artefact variants for the identification of their common and variable parts [YPZ09, ASH + 13c, ZHP + 14, RC12a, FLLE14]. We refer to each of such distinguishable parts as a block. A block is a set of implementation elements of the artefact variants that are relevant for the targeted mining task. Examples of existing techniques to identify blocks are based on static analysis, dynamic analysis or information retrieval techniques [START_REF] Klewerton | Feature location for software product line migration: a mapping study[END_REF]. Independently of the technique or artefact type, a block is an intermediary abstraction representing a candidate set of elements that might implement a feature.
In this dissertation we will use the term block but, in the literature, the same concept appears under different names, for example in [ASH + 13c] or as atomic blocks [START_REF] Ra | Mining Feature Models from the Object-Oriented Source Code of a Collection of Software Product Variants[END_REF].
A calculated block cannot be directly considered the implementation of a feature. We distinguish the block identification and feature identification activities: blocks help in feature identification, but they still need to be refined with domain expertise.
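To make the notion of block more concrete, the sketch below (a simplified illustration of ours, not one of the cited techniques) groups implementation elements by the exact set of variants in which they occur, so that elements that always appear together across the variants end up in the same block; the element and variant names are invented for the example.

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class BlockIdentification {

    // variants: variant name -> identifiers of its implementation elements
    public static Map<Set<String>, Set<String>> identifyBlocks(Map<String, Set<String>> variants) {
        // key: the exact set of variants an element occurs in; value: the block of elements
        Map<Set<String>, Set<String>> blocks = new HashMap<>();
        Set<String> allElements = new HashSet<>();
        variants.values().forEach(allElements::addAll);
        for (String element : allElements) {
            Set<String> occursIn = new TreeSet<>();
            for (Map.Entry<String, Set<String>> variant : variants.entrySet()) {
                if (variant.getValue().contains(element)) {
                    occursIn.add(variant.getKey());
                }
            }
            blocks.computeIfAbsent(occursIn, k -> new TreeSet<>()).add(element);
        }
        return blocks;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> variants = new HashMap<>();
        variants.put("Variant1", new HashSet<>(Arrays.asList("catalogue", "creditCard", "search")));
        variants.put("Variant2", new HashSet<>(Arrays.asList("catalogue", "bankTransfer")));
        variants.put("Variant3", new HashSet<>(Arrays.asList("catalogue", "creditCard")));
        identifyBlocks(variants).forEach((occursIn, block) ->
                System.out.println(occursIn + " -> " + block));
    }
}

On the small example in main, this yields a block common to all variants, a block shared by two variants and two variant-specific blocks; in practice, such candidate blocks would still need the refinement with domain expertise mentioned above.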
In Chapter 4, we present how blocks are important entities of our generic and extensible framework, and how existing feature identification techniques are complementary to this framework.
3.1.2 Feature naming during feature identification
Domain knowledge, which is characterized by a set of concepts and terminology understood by practitioners in the area of expertise, is a sensitive subject [NC + 09]. During the SPL adoption process, the organization must unify a vocabulary that will enable the stakeholders to share a common vision of the SPL. The vocabulary related to the product characteristics is formalized in the FM, which is key in engineering SPLs. The feature naming step during feature identification is an overlooked topic in the literature of extractive SPL adoption. Current automated block identification processes focus neither on the naming problem nor on supporting end users via visualisation paradigms.
Davril et al. presented a feature naming approach as part of their automatic feature model extraction method [DDH + 13]. As input they used large sets of product descriptions in natural language. Itzik et al. presented a similar automatic approach but using product requirements [START_REF] Itzik | Variability analysis of requirements: Considering behavioral differences and reflecting stakeholders' perspectives[END_REF]. While these approaches are automatic, in Chapter 8 we propose a visualisation paradigm to include the domain experts early in the naming process, which could be used together with an automatic process if desired. Excerpts from the literature in extractive SPL adoption acknowledge that current approaches still implement feature naming as a manual task that might be costly. For instance, the authors of a clone-and-own extraction study "decided to add more meaningful names in order to improve understanding" by manually assigning the names of the identified features [START_REF] Manuel Ballarín | Leveraging feature location to extract the clone-and-own relationships of a family of software products[END_REF].
If naming has to be done manually, the lack of support in state-of-the-art approaches for feature identification will have to be faced. This is an important threat to their efficiency and challenges their end-to-end usage. Ziadi et al. further admitted that checking the mapping of blocks to actual features "is a manual step and thus it requires a lot of effort to be accomplished" [ZHP + 14]. In fact, migration scenarios can vary in the number of blocks, features and stakeholders, and in the degree of availability of the domain knowledge.
In Chapter 8 we present a visualisation paradigm and a support tool for assistance during feature naming in feature identification. Practitioners having some knowledge about the domain will quickly validate or modify the propositions of our approach, thus speeding up the naming process. Our visualisation paradigm is motivated by the necessity to 1) close the gap in support for the manual task of naming during feature identification by leveraging legacy variants and 2) speed up and improve the quality of feature identification.
3.1.3 Feature location
Compared to the feature identification process, the assumption in feature location is that the features are known upfront. Therefore, feature location focuses on mapping features to their concrete implementation elements in the artefact variants. Feature location techniques in software families also usually assume that feature presence or absence in the product variants is known upfront [START_REF] Fischer | Enhancing clone-and-own with systematic reuse for developing software variants[END_REF]. An exception is variability mining, where, as input, the domain expert manually needs to point the system to relevant fragments of an artefact with respect to a feature [START_REF] Kästner | Variability mining: Consistent semi-automatic detection of product-line features[END_REF]. Then, the approach automatically expands this user selection using information about element dependencies.
Depending on the type of the artefacts, feature location can focus on code fragments in the case of source code [RC12b, FLLE14, ZHP + 14, ASH + 13a], model fragments in the context of models [START_REF] Font | Automating the variability formalization of a model family by means of common variability language[END_REF] or software components in software architectures [ACC + 14, GRDL09]. In general, existing techniques are composed of two phases: an abstraction phase, where the different artefact variants are abstracted, and a location phase, where algorithms analyse or compare the different product variants to obtain the implementation elements associated to each feature. Beyond these two common phases, the existing works differ in:
• The way the product variants are abstracted and represented. Indeed, each approach uses a specific formalism to represent product variants. For example, AST nodes for source code [START_REF] Fischer | Enhancing clone-and-own with systematic reuse for developing software variants[END_REF], model elements to represent model variants [START_REF] Rubin | A survey of feature location techniques[END_REF] or plugins in software architectures [ACC + 14]. In addition, the granularity of the sought implementation elements may vary from coarse to fine [START_REF] Kästner | Granularity in software product lines[END_REF]. Some approaches use a fine granularity with AST nodes that cover all source code statements, while others purposely use a slightly coarser granularity based on object-oriented building elements [ASH + 13a], like Salman et al. who only consider classes [START_REF] Eyal | Feature location in a collection of product variants: Combining information retrieval and hierarchical clustering[END_REF].
• The proposed algorithms. Each approach proposes its own algorithm to analyse product variants and identify the groups of elements that are related to features. For instance, Fischer et al. used a static analysis algorithm [START_REF] Fischer | Enhancing clone-and-own with systematic reuse for developing software variants[END_REF]. Other approaches use techniques from the field of Information Retrieval (IR). Xue et al. [START_REF] Xue | Feature location in a collection of product variants[END_REF] and Salman et al. [START_REF] Eyal | Feature-to-code traceability in a collection of software variants: Combining formal concept analysis and information retrieval[END_REF] proposed the use of Formal Concept Analysis (FCA) to group implementation elements in blocks and then, in a second step, the IR technique Latent Semantic Indexing (LSI) [DDL + 90] to map between these blocks and the features. Salman et al. used hierarchical clustering to perform this second step [START_REF] Eyal | Feature location in a collection of product variants: Combining information retrieval and hierarchical clustering[END_REF].
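As a deliberately simplified illustration of the second phase, the sketch below maps a feature description to the best-matching block by counting shared terms; it is a naive stand-in for the cited IR techniques such as LSI, not a reimplementation of them, and all names in it are assumptions.

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class NaiveFeatureLocation {

    // split camelCase identifiers and free text into lower-case terms
    static Set<String> terms(String text) {
        String spaced = text.replaceAll("([a-z])([A-Z])", "$1 $2").toLowerCase();
        return new HashSet<>(Arrays.asList(spaced.split("[^a-z]+")));
    }

    // return the name of the block whose elements share the most terms with the feature description
    public static String locate(String featureDescription, Map<String, Set<String>> blocks) {
        Set<String> featureTerms = terms(featureDescription);
        String best = null;
        int bestScore = -1;
        for (Map.Entry<String, Set<String>> block : blocks.entrySet()) {
            Set<String> shared = new HashSet<>();
            block.getValue().forEach(element -> shared.addAll(terms(element)));
            shared.retainAll(featureTerms);
            if (shared.size() > bestScore) {
                bestScore = shared.size();
                best = block.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> blocks = new HashMap<>();
        blocks.put("Block1", new HashSet<>(Arrays.asList("PaymentPanel", "creditCardField")));
        blocks.put("Block2", new HashSet<>(Arrays.asList("SearchBox", "searchButton")));
        System.out.println(locate("Search products by keyword", blocks)); // prints Block2
    }
}

Real techniques replace this term-overlap score with more robust measures (e.g., latent semantic similarity) and handle synonyms, ties and term weighting.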
In Chapter 4, we propose a unified representation of product variants in order to use existing feature location techniques for different artefact types. In Chapter 6, we propose a benchmarking framework for intensive experimentation of feature location techniques.
Constraints discovery
After identifying features from a set of variants, we need to infer feature constraints to build a FM that accurately defines the validity boundaries of the configuration space. The literature proposes approaches for mining feature constraints from existing artefacts, although they focus on specific artefact types, such as C source code [START_REF] Nadi | Mining configuration constraints: Static analyses and empirical results[END_REF] or Java [RPK11, AmHS + 14, SSS16]. Some approaches do not rely on the internal elements of the artefact variants. For example, they rely on the existing feature configurations from the initial artefact variants to reason on the constraints [LMSM10, LHGB + 12, HLE13, CSW08, HLE13]. Other approaches use existing documentation [DDH + 13] or machine learning techniques along with an oracle that dictates when a given configuration is valid [START_REF] Temple | Using Machine Learning to Infer Constraints for Product Lines[END_REF].
In Chapter 4 we propose an approach for constraints discovery that can leverage these specific techniques. Thus, they can be generalized to other artefact types. In addition, they can be simultaneously used to aggregate their results. In Chapter 9 we present a visualisation to assist domain experts in reasoning about feature relations to discover feature constraints.
Feature model synthesis
Once the features and the constraints are identified, the objective of feature model synthesis is to create comprehensive feature diagrams for the domain experts. The same configuration space can be represented by different FM structures satisfying the constraints. The structure should be heuristically defined, and feature model synthesis has been proven to be an NP-hard problem [SRA + 14].
Some approaches rely on the constraints defined in propositional formulas [SRA + 14, HLE13]. Bécan et al., trying to better approximate the domain experts' expectations, embed ontological information of the domain [START_REF] Bécan | Breathing Ontological Knowledge Into Feature Model Synthesis: An Empirical Study[END_REF]. WebFML is a framework specialized in feature model synthesis [START_REF] Bécan | WebFML: Synthesizing Feature Models Everywhere[END_REF]. Itzik et al. combine semantic and ontological considerations via mining requirement documents [START_REF] Itzik | Variability analysis of requirements: Considering behavioral differences and reflecting stakeholders' perspectives[END_REF].
In Chapter 4 we consider this activity in extractive SPL adoption.
Reusable assets construction
Once the features are identified from the artefact variants, the final step towards SPL adoption is to construct the reusable assets that are associated to the features. This construction should enable the creation of new artefacts by composing or manipulating the reusable assets associated to the features. As mentioned before, approaches for identifying and extracting features from single software systems have been proposed [START_REF] Kästner | Variability mining: Consistent semi-automatic detection of product-line features[END_REF]. In the case of artefact variants, other approaches constructed reusable assets based on source code abstract syntax trees [FLLE14, ZHP + 14]. Méndez-Acuña et al. constructed reusable language modules [MGC + 16].
Finally, other approaches focused on defining a framework for n-way merging of models to create SPL representations [START_REF] Rubin | N-way model merging[END_REF].
The extensible framework and the notion of adapters explained in Chapter 4 offer the possibility of leveraging and integrating different approaches.
3.2 The model variants scenario
Several approaches have been proposed to study the mining of model variants for extracting Model-based SPLs (MSPL). We consider an MSPL as an SPL whose final products are models [FFBA + 14]. In this model-driven engineering (MDE) scenario, CVL is a modeling approach specialized in implementing MSPLs [START_REF] Haugen | CVL: common variability language[END_REF]. Figure 3.2 shows the CVL process for enabling a language-independent solution. CVL provides the necessary expressiveness to support systematic reuse for the derivation of models [CGR + 12]. Concretely, the Variability model of CVL consists of two layers:
• Variability definition layer: This layer defines the variability of the domain in a very similar fashion as feature modeling does [KCH + 90], i.e., this layer represents the features of a product family and the constraints among them.
• Product realization layer: This layer defines the modifications to be performed in a Base Model according to the features of the variability specification (e.g., adding or removing model fragments if a specified feature is selected).
CVL was thus designed to simplify a top-down approach to MSPL adoption, where practitioners directly define and implement the assets of the MSPL. In Chapter 5 we focus instead on providing an extractive approach for MSPL adoption.
Visualisation
In Chapter 8 we present a visualisation for feature naming during feature identification.
In Chapter 9 we present a visualisation paradigm for supporting domain experts in manual analysis of feature relations that enable free exploration for constraints discovery.
Benchmarks and case studies
In SPL engineering, several benchmarks and common test subjects have been proposed. Lopez-Herrejon et al. proposed evaluating SPL technologies on a common artefact, a Graph Product Line [START_REF] Roberto | A standard problem for evaluating product-line methodologies[END_REF], whose variability features are familiar to any computer engineer. The same authors proposed a benchmark for combinatorial interaction testing techniques for SPLs [LFC + 14]. Also, automated FM analysis has a long history in SPLE research [START_REF] Benavides | Automated analysis of feature models 20 years later: A literature review[END_REF]. FAMA is a tool for feature model analysis that allows the inclusion of new reasoners and new reasoning operators [TBC + 08]. Taking these reasoners as input, the BeTTy framework [SGB + 12], built on top of FAMA, is able to benchmark the reasoners to highlight the advantages and shortcomings of different analysis approaches.
Feature location in software families is also becoming more mature, with a relevant proliferation of techniques. Therefore, benchmarking frameworks to support the evolution of this field are needed. Different case studies have been used for evaluating feature location in software families [START_REF] Klewerton | Feature location for software product line migration: a mapping study[END_REF]. For instance, ArgoUML variants have been extensively used [START_REF] Vinicius Couto | Extracting software product lines: A case study using conditional compilation[END_REF]. However, none of the presented case studies has been proposed as a benchmark except the variants of the Linux kernel by Xing et al. [START_REF] Xing | A large scale linux-kernel based benchmark for feature location research[END_REF]. This benchmark considers twelve variants of the Linux kernel from which a ground truth is extracted with the traceability of more than two thousand features to code parts. This benchmark does not provide a framework to support experimentation by easily integrating feature location techniques.
Also, case studies for experimenting with feature identification techniques are needed. Github is trending in the mining software repositories community. However, its expansion as a public repository has degraded or complicated its practical exploitation in research [KGB + 14], mainly because the results may not be generalizable to software engineering practices in industry. In Chapter 7 we focus on Android application (app) markets to try to identify families of apps. Without detracting from the validity of research using Github, we consider that Android apps in the app markets offer stronger guarantees of not being merely personal projects.
Reuse practices in Android markets have been studied [LBKLT16, RAN + 14] and the research field on malware detection is creating advanced and scalable methods for mining app markets [LBP + 16], whose objective is to heuristically determine whether a legitimate app has a malware counterpart. The Diversify project has conducted several large-scale studies regarding software library usage in different contexts such as JavaScript websites, WordPress or Maven. Its objective is to analyse diversity as an enabler to adapt a software product during its evolution.
In Chapter 6 we present a benchmark for feature location based on variants of the Eclipse integrated development environment (EFLBench). The Linux kernel benchmark can be considered complementary for advancing feature location research because EFLBench a) targets a plugin-based project, while the Linux benchmark considers C code, and b) the characteristics of its natural language terminology differ from the Linux kernel terminology. This last point is important because approaches based on information retrieval techniques should be evaluated on different case studies. EFLBench is integrated with BUT4Reuse, which is extensible with feature location techniques, making it easier to control and reproduce the settings of the studied techniques.
Generic and extensible frameworks in SPLE
SPLE is a maturing field that has witnessed a number of contributions, in particular in the form of frameworks for dealing with different aspects of variability management. Unfortunately, general approaches for mining existing artefacts are still not mature enough, delaying real-world SPL adoption. A contribution of this thesis is a unified, generic and extensible framework for extractive SPL adoption that harmonizes the adoption process (presented in Part II). Below we highlight similar efforts for harmonizing other activities in SPLE.
Tools such as Pure::Variants [START_REF] Beuche | Modeling and building software product lines with pure::variants[END_REF] and Gears [KC] provide variability management functionalities targeting not only source code but also other artefacts including DOORS requirements, IBM Rational artefacts or Microsoft Excel spreadsheets, to name a few. Handling new artefact types while sharing most of the core functionality for variability management is possible through extensions in Pure::Variants and bridges in Gears. The FeatureIDE tool [TKB + 14] also provides extensibility for including composers dealing with different artefact types. Among the default composers, this tool includes FeatureHouse [START_REF] Apel | FEATUREHOUSE: Languageindependent, automated software composition[END_REF] which is itself a generic and extensible composition framework providing support for several programming languages. As another example, we already presented CVL in Section 3.2. CVL defines variability on models as well as the composition of model elements in a meta-model-independent way [START_REF] Haugen | CVL: common variability language[END_REF]. That means that any DSL can be enriched with variability management and variant derivation functionalities.
None of the mentioned tools tackles extractive SPL adoption by itself. The feature model and the reusable assets are designed and developed inside the tool from scratch, without support for mining existing artefact variants. In contrast, our framework focuses on helping domain experts in the extractive SPL adoption process. Of course, the mentioned tools can be used to leverage and manage the mined variability and the extracted reusable assets to create an operational SPL [START_REF] Peter Jepsen | Minimally invasive migration to software product lines[END_REF][START_REF] Rubin | Managing cloned variants: A framework and experience[END_REF].
ECCO (Extraction and Composition for Clone-and-Own) [START_REF] Fischer | Enhancing clone-and-own with systematic reuse for developing software variants[END_REF] and VariantSync [PTS + 16] are proposed as variability-aware configuration management and version control systems. As part of their functionalities, they perform feature location on artefact variants, and they were evaluated on source code case studies. All these frameworks focus on specific activities of extractive SPL adoption.
Clone detection has also been used as an approach to identify features in a set of artefact variants. Most clone detection techniques focus on the peculiarities of a targeted programming language [START_REF] Kumar | A survey on software clone detection research[END_REF] or on a specific meta-model in the case of models (e.g., Simulink models [SASC12, DHJ + 08, Pet12] or graph-based models [PNN + 09]). The clone analysis workflows of ConQAT [START_REF] Juergens | Clonedetective -a workbench for clone detection research[END_REF] or JCCD [START_REF] Biegel | JCCD: a flexible and extensible API for implementing custom code clone detectors[END_REF] support several languages and provide extensibility for adding new types of artefacts while reusing visualisations and other analysis workflow elements. Using ConQAT, we introduced the cross-product clone detection approach to deal with source code artefact variants [START_REF] Martinez | Collaboration and source code driven bottom-up product line engineering[END_REF]. However, the tool was not adapted to extractive SPL adoption as a whole, since it only supported feature identification, which is just one activity of the process. MoDisco [START_REF] Bruneliere | Modisco: A model driven reverse engineering framework[END_REF] is another example of an extensible framework to develop model-driven tools to support software modernization. However, these tools do not specifically target sets of artefact variants and focus only on single systems.
SPL configuration spaces and end users
In this section we present related work to Part V that assumes the existence of an SPL with Human-Computer Interaction (HCI) components.
Interactive analysis of configuration spaces
Instead of developing a software platform representing a solution for a wide range of user profiles, it is interesting to perform feature-based analyses to identify the most suitable solutions for specific needs [START_REF] Salam Sayyad | On the value of user preferences in search-based software engineering: a case study in software product lines[END_REF]. To achieve this objective, we should consider end user assessments as a driver for SPLE processes. Ali et al. presented the vision of social SPLs where the main goal is to leverage user feedback over time to derive products that maximise the satisfaction of a given user profile or collective [ASD + 11]. In this context, it becomes necessary to evaluate how users respond to the proposed artefact variants (e.g., usability evaluation). Therefore, there is a need to analyse the SPL and its configuration space.
Selecting optimal SPL product variants based on given criteria has been studied in SPLE. Unfortunately, end users provide subjective assessments, which require feedback aggregation mechanisms (e.g., combining many assessments to draw some form of conclusion). In addition, the assessments are performed for a product as a whole; users are not necessarily aware of the variability mechanisms nor of the specific features behind the assessed variants.
Genetic algorithms have been used for guiding the analysis of the configuration space [START_REF] Ensan | Evolutionary search-based test generation for software product line feature models[END_REF][START_REF] Salam Sayyad | On the value of user preferences in search-based software engineering: a case study in software product lines[END_REF]. A key operator for evolutionary genetic algorithms is the fitness function representing the requirements to adapt to. In other words, it forms the basis for selection and it defines what improvement means [START_REF] Eiben | Introduction to Evolutionary Computing[END_REF]. In our case, the fitness function is based on user feedback, which is a manual process as opposed to automatically calculated fitness functions (e.g., the sum of the cost of the features). Because we deal with these user assessments as part of the search in the configuration space, we leverage Interactive Genetic Algorithms (IGA) where humans are responsible for interactively setting the fitness [START_REF] Eiben | Introduction to Evolutionary Computing[END_REF][START_REF] Takagi | Interactive evolutionary computation: Fusion of the capabilities of EC optimization and human evaluation[END_REF]. This technique has already been used, especially in cases dealing with high subjectivity, as we discuss in the next subsection.
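To make the difference with classical genetic algorithms concrete, the following sketch shows a few generations of a simplified interactive genetic algorithm over boolean feature configurations, where the fitness of each variant is requested from a user instead of being computed. It is a toy illustration under simplifying assumptions (no feature model constraints are checked, fixed population size), not the algorithm used in Chapter 10.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Random;
import java.util.Scanner;

public class InteractiveGASketch {
    // A configuration paired with the fitness assigned interactively by the user.
    record Rated(boolean[] config, double fitness) {}

    static final Random RND = new Random();

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        List<boolean[]> population = new ArrayList<>();
        for (int i = 0; i < 4; i++) population.add(randomConfig(5)); // 4 variants, 5 features

        for (int generation = 0; generation < 3; generation++) {
            List<Rated> rated = new ArrayList<>();
            for (boolean[] config : population) {
                // The fitness is not computed: the user assesses the derived variant as a whole.
                System.out.println("Rate variant " + Arrays.toString(config) + " from 0 to 10:");
                rated.add(new Rated(config, in.nextDouble()));
            }
            rated.sort(Comparator.comparingDouble(Rated::fitness).reversed());
            boolean[] parentA = rated.get(0).config();
            boolean[] parentB = rated.get(1).config();
            population.clear();
            for (int i = 0; i < 4; i++) {
                population.add(mutate(crossover(parentA, parentB)));
            }
        }
    }

    static boolean[] randomConfig(int features) {
        boolean[] c = new boolean[features];
        for (int i = 0; i < features; i++) c[i] = RND.nextBoolean();
        return c;
    }

    static boolean[] crossover(boolean[] a, boolean[] b) {
        boolean[] child = new boolean[a.length];
        int point = RND.nextInt(a.length);
        for (int i = 0; i < a.length; i++) child[i] = i < point ? a[i] : b[i];
        return child;
    }

    static boolean[] mutate(boolean[] c) {
        boolean[] m = c.clone();
        int i = RND.nextInt(m.length);
        m[i] = !m[i]; // flip one feature selection
        return m;
    }
}
```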
In Chapter 10 we present an approach to analyse the configuration space using an interactive evolutionary approach followed by a data mining technique.
Dealing with high subjectivity
The highest subjectivity levels are probably those involving human perceptions. For example, deciding whether an artwork is interesting or not is usually a debatable subject. The challenges of leveraging user feedback to improve the results of product derivation have already been tackled in the computer-generated art community, where evolutionary computing is the main technique used to leverage user feedback. In this way, IGAs have been used for aesthetics evaluation [START_REF]The Art of Artificial Evolution: A Handbook on Evolutionary Art and Music[END_REF][START_REF] Dorin | Aesthetic selection and the stochastic basis of art, design and interactive evolutionary computation[END_REF].
Eiben presented two works on computer-generated art that take into account user feedback: the Mondriaan and Escher evolvers [START_REF] Eiben | Evolutionary reproduction of dutch masters: The Mondriaan and Escher evolvers[END_REF]. His objective was to mimic the artists' styles by asking people how closely a generated painting conforms to the painter's style. The configuration space was explored through an IGA. In our case, the objective is not necessarily to approach "style fidelity". The objective is to maximize the chances of collective acceptance of the derived products; therefore, the result can be far from the expectations of the artist.
The most relevant difference between our work and evolutionary art approaches is that they rely only on the evolution phase. Their assumption is that the most adapted art products found at the end of the evolution should be the best ones. However, in a standard situation, not all possible configurations are assessed by the users, given time and resource limitations. We present a second phase in order to predict the expected user feedback for the non-assessed products and to create a ranking.
Regarding previous work on SPLs with artistic content that might deal with subjectivity, we find that they did not make use of user assessments to reach their objectives. For example,
Part II
Mining artefact variants for extractive SPL adoption
Introduction
There are still research challenges that go beyond the specific techniques for the extractive SPL adoption activities presented in Section 2.2.2. Without detracting from the importance of improving and proposing techniques for feature identification, location, constraints discovery, feature model synthesis or reusable assets construction, this chapter faces the challenge of designing a common framework to unify the extractive SPL adoption process.
Causes for the lack of unification of extractive SPL adoption techniques.
• The specificity of the proposed approaches to particular kinds of software artefacts: As we presented in Section 3.1, there is a large body of techniques for achieving the objectives of SPL adoption. Unfortunately, their design and implementation are often specific to a given artefact type. There is, however, an opportunity to reuse the principles guiding these techniques for other artefacts.
• The absence of a unified process: From feature identification and location to feature model synthesis and reusable assets construction, addressing the different objectives of these activities within the same environment is challenging. Each approach requires inputs and provides outputs at different granularity levels and in different formats, complicating their integration into a unified process. If such techniques were underpinned by a common framework, they could be seamlessly used in different scenarios.
• The need for benchmarking: The existing approaches and techniques for extractive SPL adoption differ in the way they analyse and manipulate the artefact variants. Thus, they also differ in their performance and quality of the results. In this context, their assessment and comparison become challenging.
Contributions of this chapter.
• A unifying framework to support extractive SPL adoption. The process proposed in the framework is built upon an analysis of the steps found in the state of the art on mining artefact variants. We use an intermediate model that allows us to generalize and integrate existing specific techniques.
• A framework that allows domain experts to experiment with existing techniques for each of the activities of the process. This framework provides a common ground for assessing and comparing different algorithms as well as comparing different ways to chain them during the activities of extractive SPL adoption.
• A framework with extensibility for visualisations to support domain experts in manual tasks during extractive SPL adoption.
• BUT4Reuse (Bottom-Up Technologies for Reuse): A realization of the framework for extractive SPL adoption specially designed for genericity and extensibility.
This chapter is structured as follows: Section 4.2 presents the framework principles and the promoted process. Section 4.3 presents the realization of the framework. Section 4.4 presents an empirical evaluation and Section 4.5 discusses the limitations of the framework. Finally, Section 4.6 presents the conclusions, outlining future research directions.
A Framework for extractive SPL adoption
This section presents the framework principles and the notion of adapters to support different artefact types. Then we present the design of an adapter using an illustrative example. Finally, we detail the activities of the framework for extractive SPL adoption.
Principles
In order to be generic in the support of various artefact types, our framework is built upon the following three principles:
(1) A typical software artefact can be decomposed into smaller constituent parts, hereafter referred to as elements.
(2) Given a pair of elements in a specific artefact type, a similarity metric can be computed for comparison purposes.
(3) Given a set of elements recovered from existing artefacts, a new artefact, or at least a part of it (which would be a reusable asset), can be constructed.
On the one hand, principles (1) and (2) make it possible to reason on a set of software artefacts for identifying commonality and variability, which in turn will be exploited in the feature identification and location activities. Principle (3), on the other hand, promises to enable the construction of reusable assets based on the elements found in existing artefacts.
Because our approach aims at supporting different types of artefacts and at allowing extensibility, we propose to rely on adapters for the different artefact types. These adapters are implemented as the main components of the framework. An adapter is responsible for decomposing its artefact type into the constituent elements, and for defining how a set of elements should be used to construct a reusable asset. Figure 4.1 illustrates examples of adapters dealing with different artefact types. We can see how an adapter provides two operations: 1) adapting the artefact into elements and 2) taking elements as input to construct a reusable asset for this artefact type.
Source code can be adapted to Abstract Syntax Tree (AST) elements which capture the modular structure of source code.
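As an illustration of these two operations, the following minimal Java sketch shows the kind of interface an adapter could expose; the names and signatures are hypothetical and do not reproduce the actual BUT4Reuse API.

```java
import java.net.URI;
import java.util.List;

// Hypothetical sketch of the elements abstraction; not the actual BUT4Reuse API.
interface Element {
    /** Similarity with another element, in the range [0, 1]. */
    double similarity(Element other);

    /** Elements (or external entities) this element structurally depends on, per dependency type. */
    List<Object> getDependencies(String dependencyId);
}

// Hypothetical sketch of the two operations an adapter provides.
interface ArtefactAdapter {
    /** Decompose an artefact (e.g., a file or a folder) into its constituent elements. */
    List<Element> adapt(URI artefact);

    /** Construct a reusable asset from a set of elements at the given destination. */
    void construct(List<Element> elements, URI destination);
}
```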
4.2.2 Designing an adapter to benefit from the framework
Designing an adapter for a given artefact type requires four tasks. Throughout this dissertation, we refer to these tasks to explain the design decisions in the implementation of various adapters.
Task 1: Element identification. Identifying the Elements that compose an Artefact. This defines the granularity of the elements for a given artefact type. For the same artefact type we can select from coarse to fine granularity. For example, we can design an adapter for source code that only takes into account whole components (e.g., packages), an adapter that decomposes it into the AST elements presented in Figure 4.2, or we can decide to use an even finer granularity and decompose it into the statement elements inside the methods.
Task 2: Structural constraints definition. Identifying Structural Dependencies between the Elements. When the artefact type is structured, the elements will have containment relations. For example, in ASTs we have the relation to the parent. In model artefacts there is usually a root and containment references as well. However, there can be other types of structural dependencies. In the case of source code, this information is usually captured in program dependence graphs. For example, a class can extend another class, producing a structural dependency, or a class can, inside one of its methods, instantiate another class. In addition to identifying all the dependencies among elements, in this task it is also important to identify the cardinality of these element relations. For example, continuing with source code, if the programming language does not allow multiple inheritance, the inheritance dependency between two classes should have a maximum cardinality of one.
Task 3: Similarity metric definition. Defining a Similarity metric between any pair of Elements. Comparing an element with another element should yield a similarity value ranging from zero (completely different) to one (identical). The similarity function is defined as σ : E × E → [0, 1].
There is no general rule to calculate similarity. In some cases the elements provide an id, a unique signature or, in the case of a tree structure, a unique path to the root element. In other cases, it can be more complicated and heuristics must be integrated, considering different structural characteristics, semantic relationships of words retrieved from the elements, or other kinds of semantic similarity.
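The following sketch illustrates one possible heuristic of this kind: an exact match on element identifiers when they are available, with a word-based Jaccard similarity as a fallback. Both the strategy and the names are assumptions made for illustration, not the metric of any particular adapter.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SimilarityHeuristicSketch {
    /** Exact match on ids when both are available, otherwise Jaccard similarity of the words. */
    static double similarity(String id1, String id2, String text1, String text2) {
        if (id1 != null && id2 != null) {
            return id1.equals(id2) ? 1.0 : 0.0;
        }
        Set<String> w1 = words(text1);
        Set<String> w2 = words(text2);
        Set<String> union = new HashSet<>(w1);
        union.addAll(w2);
        if (union.isEmpty()) {
            return 0.0;
        }
        Set<String> intersection = new HashSet<>(w1);
        intersection.retainAll(w2);
        return (double) intersection.size() / union.size();
    }

    static Set<String> words(String text) {
        return new HashSet<>(Arrays.asList(text.toLowerCase().split("\\W+")));
    }

    public static void main(String[] args) {
        // Two elements without ids compared by the words of their textual representation.
        System.out.println(similarity(null, null,
                "create image pixel matrix", "load pixel matrix from image"));
    }
}
```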
Task 4: Reusable assets construction. Defining how to use a set of elements to construct reusable assets. In order to define the construction, it is important to decide how the reusable asset will be composed with other reusable assets. If we are using a compositional approach, we will need to construct a fragment that conforms to this compositional approach. In general, if we decompose an artefact into a set of elements and we use the same set of elements for the construction, we should be able to obtain the same artefact.
To illustrate the design of an adapter, we will consider the following scenario:
Extractive SPL adoption for images: An illustrative example
A graphic artist wants to adopt an SPL from image variants that were initially created following the copy-paste-modify reuse paradigm. On the left side of Figure 4.3 we present the set of existing image variants i and on the right side we illustrate the design decisions for the images adapter.
• Task 1: Element identification. It was decided to adapt image variants to an intermediate representation based on Pixel elements. The decomposition of an image into Pixel elements consists in loading the whole pixel matrix from the image file and obtaining the non-transparent pixels as Pixel elements (a code sketch of this adapter is given after this list).
• Task 2: Structural constraints. Each Pixel element has a structural dependency on its position (Cartesian coordinates). It was decided that pixel overlapping is not allowed, so only one pixel is permitted at each position. This allows automatically discovering structural constraints between features, e.g., two shirts cannot be used in the same image because at least one pixel position will be shared and the pixels at that position cannot overlap.
• Task 3: Similarity metric. The similarity metric consists in checking that the Pixel element positions are exactly the same, and then calculating the similarity of the color (red, green and blue values) and alpha channel (transparency).
• Task 4: Reusable assets construction. The construction for a set of Pixel elements is based on including the pixels at their corresponding positions in a transparent image.
The targeted compositional approach is the image composer of the FeatureIDE variability management tool [TKB + 14]. This composer will be able to use the constructed assets as reusable assets in a derivation mechanism based on the superimposition of images.
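The sketch below illustrates Tasks 1, 3 and 4 of this adapter using the standard Java imaging API; it is a simplified illustration and not the actual implementation of the images adapter.

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import javax.imageio.ImageIO;

public class ImagesAdapterSketch {
    /** A Pixel element: its position (the structural dependency) and its ARGB colour. */
    record Pixel(int x, int y, int argb) {
        double similarity(Pixel other) {
            if (x != other.x || y != other.y) return 0.0; // positions must be exactly the same
            return argb == other.argb ? 1.0 : 0.0;        // colour and alpha comparison, simplified
        }
    }

    /** Task 1: decompose an image into its non-transparent Pixel elements. */
    static List<Pixel> adapt(File imageFile) throws IOException {
        BufferedImage img = ImageIO.read(imageFile);
        List<Pixel> pixels = new ArrayList<>();
        for (int x = 0; x < img.getWidth(); x++) {
            for (int y = 0; y < img.getHeight(); y++) {
                int argb = img.getRGB(x, y);
                int alpha = (argb >>> 24) & 0xFF;
                if (alpha != 0) { // skip fully transparent pixels
                    pixels.add(new Pixel(x, y, argb));
                }
            }
        }
        return pixels;
    }

    /** Task 4: construct a reusable asset as a transparent image containing the given pixels. */
    static BufferedImage construct(List<Pixel> pixels, int width, int height) {
        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        for (Pixel p : pixels) {
            img.setRGB(p.x(), p.y(), p.argb());
        }
        return img;
    }
}
```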
Activities for extractive SPL adoption within the framework
In Section 2.2.2, we presented relevant activities in extractive SPL adoption. Figure 4.5 presents the concepts and complementary activities introduced by our framework. Regarding the role of the adapter, on the left side of Figure 4.5, in the decomposition layer, we illustrate how the adapter represents the strategy for decomposing the artefact variants into elements. On the right side of the figure, we illustrate how the adapter and its construction method help in preparing the reusable assets. The framework builds on an architecture composed of various extensible layers explained below. After the decomposition of all artefact variants into elements, the elements are grouped via block identification. Then, with the abstraction of elements and blocks, we can chain the activities for extractive SPL adoption. Moreover, Figure 4.5 also shows the visualisation layer for domain experts, which is a horizontal layer that covers the whole process.
Decomposition:
The first layer is the decomposition layer, which uses the adapters. This layer is extensible by providing support for additional artefact types. It enables the creation of the elements representation, which provides a common internal representation.
Block identification:
As presented in Section 3.1.1, a block is a set of elements obtained by analysing the artefact variants (e.g., comparing the variants to identify common and variable parts). Blocks increase the granularity of the analysis performed by the domain experts so that they do not have to reason at element level. This is especially relevant when trying to identify or locate features, and also during constraints discovery. In the context of feature identification, block identification represents an initial step before reasoning at feature level.
In our running example, the identified blocks shown in Figure 4.4 correspond to the sets of Pixel elements of the different "parts" of the images. The employed technique, to be explained later, is the interdependent elements technique [START_REF] Ziadi | Feature identification from the source code of product variants[END_REF] that calculates the intersections among the Pixel elements of the variants.
Feature identification: Feature identification consists in analysing the blocks to map them to features. For instance, blocks identified as feature-specific can be converted into a feature of the envisioned FM. In other cases, a block can be merged with another block, or the elements of one block can be split into two blocks, to adjust it to the identified features. To illustrate the usefulness of this, we present a recurrent issue when dealing with copy-paste-modified source code artefacts: the situation where a modification (e.g., a bug fix) was released in a subset of the artefacts but not in all of them. In this case, the block identification technique could identify three different blocks: one containing the modified elements of the bug fix, one containing the "buggy" elements, and, finally, one containing the shared elements that were not part of the modification. In this case, we would like to remove the "buggy" block and merge the bug fix block with the shared block.
Also, some features are intended to interact with other features. A feature interaction is defined as some way in which a feature or features modify or influence another feature in defining overall system behavior [START_REF] Zave | An experiment in feature engineering[END_REF]. A logging feature is a common example of a feature that must interact with others because of its crosscutting behavior [START_REF] Vinicius Couto | Extracting software product lines: A case study using conditional compilation[END_REF] (i.e., other features might need to log if the Logging feature is present). In some cases, the implementation of the interaction needs coordination elements (also known as derivatives [START_REF] Liu | Feature oriented refactoring of legacy applications[END_REF], junctions [START_REF] Ziadi | Feature identification from the source code of product variants[END_REF] or glue code, among others). It is also important to consider the negations of the features, i.e., some elements can be needed when a feature is not present. In our context of extractive SPL adoption, feature interactions and their coordination elements must be taken into account.
As part of feature identification, block names should be modified by domain experts so that they have representative names. In our running example, the graphic artist decided that the identified blocks would be directly assigned to features, and each feature was given a meaningful name.
Feature location: For feature location, we aim to locate the features by reasoning on the list of known features and the identified blocks. Concretely, feature location techniques in our framework create mappings between features and blocks with a defined confidence. The confidence is a value from zero to one. For example, if a feature location technique concludes that a given feature is located in a given block with a confidence of one, it means that the technique is almost certain that the elements of this block implement this feature.
Constraints Discovery:
The discovery of constraints by mining existing assets has been identified as an important challenge for research in the SPL domain [START_REF] Klewerton | Feature location for software product line migration: a mapping study[END_REF]. For this purpose, our framework is extensible with constraints discovery approaches. In our running example, the Pixel elements cannot overlap, as discussed before, so mutual exclusion constraints can be found, as shown in Figure 4.4.
Reusable assets construction:
The framework supports the step of creating the reusable assets from a set of elements using the adapter. Depending on the needs, the set of elements could correspond to a block, to an identified feature, or to the elements of a located feature. In the running example, the reusable assets were constructed and corresponded to the images of the identified blocks, as shown in Figure 4.4.
Visualisation:
The visualisation layer is orthogonal to the other layers. It is intended to present to the domain expert relevant information yielded by other layers. Visualisations can be used, not only to display information, but also to interact with the results.
Framework realization in BUT4Reuse
Bottom-Up Technologies for Reuse (BUT4Reuse) is our realization of the presented framework. Significant efforts have been dedicated to engineering a complete tool-supported approach ii . The assessment of the realization of the framework consists of two parts: first, we assess its genericity by presenting the available adapters; second, we assess its extensibility by presenting the different techniques integrated in each of the framework layers. Most of the techniques that we integrated were previously used in extractive SPL adoption. However, given that they were proposed for specific artefact types, our contribution consists in generalizing them to support the elements abstraction.
ii BUT4Reuse source code, tests, documentation, user manual and tutorials: http://but4reuse.github.io
Genericity in BUT4Reuse through the adapters
Currently, BUT4Reuse integrates 15 adapters dealing with different artefact types that can be directly used. Table 4.1 presents detailed characteristics of each of them. It is also worth mentioning that one can build on top of these adapters to develop tailored or improved ones. Despite the different types of artefacts, similarity metrics and construction mechanisms, they can all benefit from the unified framework to perform extractive SPL adoption.
From a technical perspective, based on our experience, the development burden for a typical adapter is small in terms of LOC. This is because 1) BUT4Reuse Core implementation provides the dedicated extension points to ease the work of the adapter developer and 2) the adapters can rely on off-the-shelf libraries for the manipulation of the targeted artefact types including its decomposition, similarity calculation or construction. For example, the Text Lines, File Structure, CSV, Graphs, and Images adapters are respectively made of only 177, 207, 210, 274 and 230 LOC. Besides, each of them has been implemented in less than one day by an experienced developer.
The C and Java Source Code adapters have been realized by integrating ExtractorPL [ZHP + 14]. The integration of this adapter took about one work-day and consists of 930 LOC. The short time to integrate ExtractorPL can be justified because it is also based on the principle of decomposing the artefacts into elements (AST elements in this case). After this integration, we reproduced the same case studies dealing with source code artefact variants presented in previous works [ZFdSZ12, ZHP + 14]. We further extended ExtractorPL in the case of Java source code by including more information about structural dependencies among elements. ExtractorPL uses ASTs such as the ones presented in Figure 4.2, which only capture containment dependencies, thus omitting other structural dependencies present in source code (for example, Java methods instantiating classes from other packages have a structural dependency with these classes). For this purpose, we use source code dependency graphs created with the Puck tool [START_REF] Girault | Puck: an architecture refatoring tool[END_REF] to add extra dependencies to the AST elements.
The development and integration of the EMF Models adapter took eight days and consists of 499 LOC. It should be noted that the complexity and development time of all the abovementioned adapters must be put in perspective with the benefits in the complex analyses that they enable once integrated in BUT4Reuse.
Regarding the similarity function used by the adapters to compare elements, we presented that the results range from zero to one. By default the threshold is one, meaning that two elements need to be identical to be considered the same element by the automatic techniques (e.g., block identification). However, we allow a user-specified threshold, which can be useful when using similarity heuristics that do not rely on ids or signatures of the elements. We also allow another optional user-specified threshold for requiring the domain expert to manually decide whether two elements are equal.
Block identification
Figure 4.6 illustrates the block identification techniques integrated in BUT4Reuse. All of them share a similarity analysis phase using the similarity function defined by the adapter to compare elements. We use gray backgrounds to show the different blocks identified using these techniques. It has been suggested that the interdependent elements technique presented before uses an algorithm similar to Formal Concept Analysis (FCA) [START_REF] Ganter | Formal Concept Analysis: Mathematical Foundations[END_REF]. Our experimental results confirm that the same blocks are obtained using both techniques. However, the interdependent elements algorithm orders the blocks by frequency in the artefact variants, while with FCA a post-processing of the retrieved blocks is needed to achieve this.
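The following sketch illustrates the principle behind the interdependent elements technique, under the simplifying assumption that elements can be compared by identity: elements are grouped into blocks according to the exact set of variants in which they occur. It is an illustration of the principle, not the BUT4Reuse implementation.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class BlockIdentificationSketch {
    /**
     * Group elements into blocks: two elements belong to the same block
     * if they appear in exactly the same set of artefact variants.
     */
    static Map<Set<Integer>, List<String>> identifyBlocks(List<Set<String>> variants) {
        // For each element, compute the set of variants in which it is present.
        Map<String, Set<Integer>> presence = new LinkedHashMap<>();
        for (int v = 0; v < variants.size(); v++) {
            for (String element : variants.get(v)) {
                presence.computeIfAbsent(element, e -> new TreeSet<>()).add(v);
            }
        }
        // Elements sharing the same presence signature form one block.
        Map<Set<Integer>, List<String>> blocks = new LinkedHashMap<>();
        presence.forEach((element, artefacts) ->
                blocks.computeIfAbsent(artefacts, a -> new ArrayList<>()).add(element));
        return blocks;
    }

    public static void main(String[] args) {
        List<Set<String>> variants = List.of(
                Set.of("hat", "face", "shirtA"),
                Set.of("face", "shirtA"),
                Set.of("face", "shirtB"));
        // Prints one block per distinct presence signature: [face], [shirtA], [hat], [shirtB].
        identifyBlocks(variants).forEach((artefacts, block) ->
                System.out.println("Block " + block + " present in variants " + artefacts));
    }
}
```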
FCA and structural splitting:
This technique extends FCA by further splitting the obtained blocks based on information about the structural dependencies of the elements. Concretely, it detects groups of elements within a block that do not depend on other elements of the block. In Figure 4.6, we show how the block in the bottom left is split into two blocks because the two groups are not connected through dependencies within the block. This technique can be useful for distinguishing potentially unrelated groups of elements that could belong to different features (especially when the artefact elements are strongly connected and the blocks contain many elements). On the contrary, if the elements are not highly connected, we risk obtaining many blocks with few elements.
Similar Elements:
This technique provides the finest granularity, where each element, after the similarity analysis phase, corresponds to one block. In Figure 4.6 we show how each block is associated with one element. Using this algorithm we create a large number of blocks that can be used for testing or for delegating the analysis to the techniques of the subsequent extractive SPL adoption activities.
Feature identification
For feature identification, the domain experts analyse the elements of each of the identified blocks to try to map them to features. This means that, currently, it is a manual process only supported by the visualisations. In some cases, feature identification could be a trivial activity if the block identification techniques end up with blocks directly associated with features. However, in other cases, the identified blocks could be related to feature interactions or to noise introduced by independent evolutions of the artefact variants (e.g., bug fixes), requiring further analysis.
Constraints discovery techniques could help to spot blocks containing coordination elements in case of feature interactions. These techniques can discover relations among the features involved in the feature interaction. For example, structural dependencies can be discovered among the block of coordination elements and the features. Also, by mining the existing configurations we can find that a block only appears when some features are present.
iii Galatea Formal Concept Analysis library: https://github.com/jrfaller/galatea
Feature location
For feature location, the current implementation tries to map blocks to features. Apart from the core of each feature, we are also interested in locating coordination elements belonging to feature interactions. In order to locate these coordination elements, before applying feature location techniques, we implemented a method for pre-processing the feature list. This method can be optionally used to automatically include "artificial" features related to pairwise interactions, 3-wise interactions or feature negations.
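The sketch below illustrates one possible form of this pre-processing, representing each (artificial) feature by the set of artefacts in which it is present; the representation and the naming scheme are assumptions made for illustration, not the actual implementation.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class FeatureListPreprocessingSketch {
    /** Add artificial features for pairwise interactions (intersections) and feature negations. */
    static Map<String, Set<Integer>> preprocess(Map<String, Set<Integer>> features,
                                                Set<Integer> allArtefacts) {
        Map<String, Set<Integer>> result = new HashMap<>(features);
        List<String> names = List.copyOf(features.keySet());
        for (int i = 0; i < names.size(); i++) {
            for (int j = i + 1; j < names.size(); j++) {
                Set<Integer> interaction = new HashSet<>(features.get(names.get(i)));
                interaction.retainAll(features.get(names.get(j)));
                if (!interaction.isEmpty()) { // the two features co-occur in some artefacts
                    result.put(names.get(i) + "_AND_" + names.get(j), interaction);
                }
            }
        }
        for (String name : names) { // negation: artefacts in which the feature is absent
            Set<Integer> negation = new HashSet<>(allArtefacts);
            negation.removeAll(features.get(name));
            result.put("NOT_" + name, negation);
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Set<Integer>> features = Map.of(
                "Coffee", Set.of(1, 2),
                "Tea", Set.of(2, 3));
        System.out.println(preprocess(features, Set.of(1, 2, 3)));
    }
}
```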
Given that we introduced the notion of the confidence of the located feature with a value from zero to one, we allow the definition of a user-specified threshold. By default, the threshold is one. We integrated the following set of feature location techniques:
Feature-Specific: This heuristic is based on the idea that, for a given feature, the relevant blocks are those that are always present when the feature is present. Equation 4.2 shows how it is calculated. For each feature and block pair, we calculate, from the artefact variants implementing the feature, the percentage of those that also contain the block. A percentage of 100% for a given block and feature means that the block always appears when the feature is present in the artefacts.
located(f_i, b_i) = |artefacts(f_i) ∩ artefacts(b_i)| / |artefacts(f_i)|   (4.2)
Figure 4.7a shows an example assuming that FCA was applied for block identification and that we are trying to locate a given feature (F1) that we know is implemented in two artefacts (A1 and A2). We can observe that we only reach 100% in the blocks intersecting A1 and A2.
Strict Feature-Specific (SFS): SFS is more restrictive than Feature-Specific by following two assumptions: a feature is located in a block when 1) the block always appears in the artefacts that implement this feature and 2) the block never appears in any artefact that does not implement this feature. The principles of this feature location technique are similar to locating distinguishing features using diff sets [START_REF] Rubin | Locating distinguishing features using diff sets[END_REF]. Equation 4.3 shows how SFS is calculated. Figure 4.7b shows an example of this technique where we can observe that it "penalizes" the blocks that are included in artefacts which do not include this feature (i.e., A3, A4 and A5). 100% is only obtained in the block that intersects A1 and A2 and no other artefact.
located(f_i, b_i) = |artefacts(f_i) ∩ artefacts(b_i)| / |artefacts(f_i) ∪ artefacts(b_i)|   (4.3)
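Both measures can be computed directly from the sets of artefacts containing a feature and a block. The sketch below follows Equations 4.2 and 4.3; the types used to represent artefacts are an assumption made for illustration.

```java
import java.util.HashSet;
import java.util.Set;

public class LocationMeasuresSketch {
    /** Equation 4.2: fraction of the artefacts implementing the feature that also contain the block. */
    static double featureSpecific(Set<Integer> artefactsWithFeature, Set<Integer> artefactsWithBlock) {
        if (artefactsWithFeature.isEmpty()) return 0.0;
        return (double) intersection(artefactsWithFeature, artefactsWithBlock).size()
                / artefactsWithFeature.size();
    }

    /** Equation 4.3: intersection over union, penalising blocks that appear without the feature. */
    static double strictFeatureSpecific(Set<Integer> artefactsWithFeature, Set<Integer> artefactsWithBlock) {
        Set<Integer> union = new HashSet<>(artefactsWithFeature);
        union.addAll(artefactsWithBlock);
        if (union.isEmpty()) return 0.0;
        return (double) intersection(artefactsWithFeature, artefactsWithBlock).size() / union.size();
    }

    static Set<Integer> intersection(Set<Integer> a, Set<Integer> b) {
        Set<Integer> result = new HashSet<>(a);
        result.retainAll(b);
        return result;
    }

    public static void main(String[] args) {
        Set<Integer> feature = Set.of(1, 2);  // F1 is known to be implemented in artefacts A1 and A2
        Set<Integer> block = Set.of(1, 2, 3); // the block is also present in A3
        System.out.println(featureSpecific(feature, block));       // 1.0
        System.out.println(strictFeatureSpecific(feature, block)); // 0.666...
    }
}
```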
The following feature location techniques use information retrieval (IR) techniques. The intuition behind these techniques is that feature names and descriptions contain useful information that can be used to associate features with elements. Therefore, words (also known as terms) are extracted from feature names and descriptions as well as from the elements. Several techniques are proposed to filter and pre-process these words to reduce their number (e.g., by grouping synonyms). Some of these filtering techniques are explained in Chapter 8.
Latent Semantic Indexing (LSI): LSI is a technique for analyzing relationships among a set of documents [DDL + 90] that has been used for feature location in the context of extractive SPL adoption [ASH + 13c, SSD13, XXJ12]. The name and description of a feature are extracted to create a term query. Then, a term-document matrix is created and the cosine similarity is calculated with the words extracted from the elements of each block. Given that the number of words can be large, LSI techniques often rely on parameters to reduce it. In our implementation, this can be configured to use a user-specified percentage of the words. Alternatively, the user can select a fixed number of words, ignoring the least frequent ones. Then, depending on the similarity among these words according to LSI, some blocks will appear more related to the feature.
SFS and Shared term:
The intuition behind this technique is to first group features and blocks with SFS and then apply a "search" of the feature words within the elements of the block to discard elements that may be completely unrelated. For each association between a feature and a block, we keep, for this feature, only the elements of the block that share at least one meaningful word with the feature. That means that we keep the elements whose term frequency (tf) between feature and element (featureElementTF) is greater than zero. For clarification, featureElementTF is defined in Equation 4.4, where f is the feature, e the element, and tf a function that counts the number of times a given term appears in a given list of terms.
featureElementTF(f, e) = Σ_{term_i ∈ e.terms} tf(term_i, f.terms)   (4.4)
Figure 4.9 illustrates, on the left side, how for a given feature, we have associated words and how, from a block obtained with SFS, we discard elements that do not share any word with the feature.
SFS and Term frequency:
After employing SFS, this technique is based on the idea that all the features assigned to a block compete for the block elements. The feature (or features, in case of a tie) with the highest featureElementTF keeps the elements, while the other features do not consider these elements as part of them. Figure 4.9 illustrates this technique in the center of the figure. Three features compete for the elements of a block obtained with SFS, and the assignment is made by calculating the tf between each element and the features.
SFS and tf-idf: Figure 4.9, on the right side, illustrates this technique. SFS is applied and then the features again compete for the elements of the block, but in this case a different weight is used for each word of the feature. This weight (or score) is calculated through the term frequency - inverse document frequency (tf-idf) value over the set of features that are competing. tf-idf is a well-known technique in IR [START_REF] Gerard Salton | A vector space model for automatic indexing[END_REF]. In our context, the idea is that words appearing more frequently across the features may not be as important as less frequent words. A more detailed explanation and the equations are presented in Section 8.2.1.
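As an illustration of this weighting (and not of the exact implementation detailed in Section 8.2.1), the sketch below computes tf-idf scores of feature words over a set of competing features, so that words shared by many features receive a lower weight.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TfIdfSketch {
    /** tf: number of times a term appears in a list of terms. */
    static long tf(String term, List<String> terms) {
        return terms.stream().filter(term::equals).count();
    }

    /** idf over the competing features: log(N / number of features containing the term). */
    static double idf(String term, List<List<String>> features) {
        long containing = features.stream().filter(f -> f.contains(term)).count();
        return containing == 0 ? 0.0 : Math.log((double) features.size() / containing);
    }

    public static void main(String[] args) {
        List<List<String>> features = List.of(
                Arrays.asList("java", "development", "tools"),
                Arrays.asList("java", "ee", "web", "development"),
                Arrays.asList("c", "c++", "development", "tools"));
        // "development" appears in every feature, so its tf-idf weight is zero.
        for (String term : List.of("java", "development", "ee")) {
            Map<Integer, Double> scorePerFeature = new HashMap<>();
            for (int i = 0; i < features.size(); i++) {
                scorePerFeature.put(i, tf(term, features.get(i)) * idf(term, features));
            }
            System.out.println(term + " -> " + scorePerFeature);
        }
    }
}
```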
Constraints discovery
In BUT4Reuse, several techniques for discovering constraints can be simultaneously applied. We provide three techniques for discovering constraints among features or blocks.
Structural Constraints Discovery: It consists in analysing the structural dependencies between pairs of features or blocks. Specifically, we identify requires (A ⇒ B) and mutual exclusion (¬ (A ∧ B)) structural constraints by analysing the structural dependencies defined in the elements. A requires constraint is defined when at least one element from one side has a structural dependency with an element of the other side. Formally, and analogously for features (replacing B by F), the definition of the requires constraint is presented in Equation 4.5. do stands for dependency object; a dependency object can be an element or an external entity (e.g., in the images example the dependency objects are Cartesian coordinates). The mutual exclusion constraint discovery is also defined at block and feature levels. The rationale is that, in some cases, a do can only tolerate a maximum number of elements depending on it. Figure 4.10 illustrates, on the right side, three blocks (B3, B4 and B5) which have elements that depend on a dependency object do. If do can only tolerate two elements depending on it, then B3 and B4 cannot co-occur, nor can B3 and B5. Therefore, B3 excludes B4 and B5. However, B4 and B5 can co-occur because they do not exceed the maximum number of dependencies allowed by do.
B_1 requires B_2 ⇐⇒ ∃ e ∈ B_1 : ∃ do ∈ e.dependencies : do ∈ B_2 ∧ do ∉ B_1   (4.5)
In our running example, the pixel position dependency object can only tolerate one Pixel element depending on it. Another example is containment references in classes of EMF models, where an upper bound can be defined according to the cardinality of the reference.
We now present how to identify these mutual exclusion constraints: Given DO the set of dependency objects where do ∈ DO, and dependencyIDs the different types of structural dependencies where id ∈ dependencyIDs, the function nRef (number of references) represents the cardinality of the subgroup of elements in a block that have a structural dependency with a given do. Equation 4.6 presents how nRef is calculated and the formula for mutual exclusion at block level when the sets of elements of the blocks are disjoint.
nRef(B_i, do, id) = |{e : e ∈ B_i ∧ do ∈ e.getDependencies(id)}|
B_1 excludes B_2 ⇐⇒ ∃ do ∈ DO, ∃ id ∈ dependencyIDs : nRef(B_1, do, id) + nRef(B_2, do, id) > do.getMaxDependencies(id)   (4.6)
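The sketch below mirrors Equations 4.5 and 4.6 over hypothetical element and block representations: a requires constraint is suggested when an element of one block depends on a dependency object that belongs to the other block and not to its own, and an excludes constraint when two blocks together exceed the number of dependencies a dependency object tolerates.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class StructuralConstraintsSketch {
    /** Hypothetical element: its own id and the dependency objects it points to, per dependency type. */
    record Element(String id, Map<String, Set<String>> dependencies) {}

    /** Equation 4.5: B1 requires B2 if an element of B1 depends on something in B2 but not in B1. */
    static boolean requires(List<Element> b1, List<Element> b2) {
        Set<String> idsInB1 = ids(b1);
        Set<String> idsInB2 = ids(b2);
        return b1.stream()
                .flatMap(e -> e.dependencies().values().stream())
                .flatMap(Set::stream)
                .anyMatch(dep -> idsInB2.contains(dep) && !idsInB1.contains(dep));
    }

    /** Equation 4.6: B1 excludes B2 if, for some dependency object, they jointly exceed its capacity. */
    static boolean excludes(List<Element> b1, List<Element> b2, String dependencyId,
                            String dependencyObject, int maxDependencies) {
        return nRef(b1, dependencyObject, dependencyId)
                + nRef(b2, dependencyObject, dependencyId) > maxDependencies;
    }

    static long nRef(List<Element> block, String dependencyObject, String dependencyId) {
        return block.stream()
                .filter(e -> e.dependencies().getOrDefault(dependencyId, Set.of()).contains(dependencyObject))
                .count();
    }

    static Set<String> ids(List<Element> block) {
        return block.stream().map(Element::id).collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        // Two image blocks both place a pixel at position "10,20", which tolerates only one pixel.
        Element pixelA = new Element("pixelA", Map.of("position", Set.of("10,20")));
        Element pixelB = new Element("pixelB", Map.of("position", Set.of("10,20")));
        System.out.println(excludes(List.of(pixelA), List.of(pixelB), "position", "10,20", 1)); // true
    }
}
```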
A-priori association rules: This technique uses association rule learning. Concretely, we integrated the A-priori algorithm as previously evaluated for this purpose in the SPLE literature [START_REF] Lora-Michiels | A method based on association rules to construct product line models[END_REF]. In comparison to the structural constraints technique, which suggests constraints after internally analysing the elements and their structural dependencies, this approach mines the relationships of the presence or absence of the blocks in the artefact variants (i.e., the configurations of the existing variants). It is worth mentioning that, using this technique, the discovered constraints will restrict the configuration space to the feature configurations of the initial artefact variants. In other words, any feature combination that was not previously present in the mined artefact variants will be considered invalid. Our implementation of this technique uses the Weka data mining library [HFH + 09].
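To give an intuition of the kind of rule mined from the existing configurations (the actual implementation delegates the mining to Weka), the sketch below computes the support and confidence of candidate "A requires B" rules over the presence of blocks in the variants.

```java
import java.util.List;
import java.util.Set;

public class AssociationRuleSketch {
    /** Confidence of the rule "a present => b present" over the existing configurations. */
    static double confidence(String a, String b, List<Set<String>> configurations) {
        long withA = configurations.stream().filter(c -> c.contains(a)).count();
        long withBoth = configurations.stream().filter(c -> c.contains(a) && c.contains(b)).count();
        return withA == 0 ? 0.0 : (double) withBoth / withA;
    }

    /** Support of the rule: fraction of configurations containing both blocks. */
    static double support(String a, String b, List<Set<String>> configurations) {
        long withBoth = configurations.stream().filter(c -> c.contains(a) && c.contains(b)).count();
        return (double) withBoth / configurations.size();
    }

    public static void main(String[] args) {
        List<Set<String>> configurations = List.of(
                Set.of("core", "java", "testing"),
                Set.of("core", "java"),
                Set.of("core", "cpp"));
        // "java requires core" holds in every variant containing java: confidence 1.0, support 2/3.
        System.out.println(confidence("java", "core", configurations));
        System.out.println(support("java", "core", configurations));
    }
}
```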
FCA-based constraints discovery:
When applying FCA, concepts are related through concept lattices which represent potential relations among the blocks. This technique has been presented and evaluated in research on extractive SPL adoption [RPK11, AmHS + 14, SSS16].
We implemented the requires dependency when there is a sub-concept that requires a superconcept according to the identified lattices.
Feature model synthesis
This activity is covered by BUT4Reuse with two simple implementations. Currently, the synthesized feature models are exported to FeatureIDE [TKB + 14].
Flat feature diagram:
This technique creates an FM without any hierarchical information among features. It is based on the creation of an abstract root feature to which all the features are added as subfeatures. All the discovered constraints are added as cross-tree constraints.
Alternatives before Hierarchy: This heuristic first computes Alternative constructions from the mutual exclusion constraints, and then creates the hierarchy using the requires constraints. The constraints that were not included in the hierarchy are added as cross-tree constraints. Figure 4.4 showed the result of this heuristic for the mutual exclusions identified in the images example.
Reusable assets construction
The construction is the responsibility of the adapter that will use the elements associated to features to create the reusable assets. Extractive SPL adoption approaches are said to be semantically correct when they can generate exactly the original products [START_REF] Rubin | Combining related products into product lines[END_REF]. Once the features are located, BUT4Reuse has a functionality to re-generate the initial products to check this property.
Visualisations
BUT4Reuse provides a set of visualisations.
Bars Visualisation: Bar visualisations are used for visualising crosscutting concerns in aspect-oriented software development [START_REF] Eclipse | The visualiser, AJDT: AspectJ Development Tools[END_REF] or for visualising source code clones [START_REF] Tairas | Visualization of clone detection results[END_REF].
In our context, bars are used for displaying several types of information. Concretely, they can display how elements are distributed over the artefacts, how blocks are distributed over the artefacts, how features span the blocks and how blocks map to features.
Feature location heat-map: Other currently available visualisations rely on heat maps, where larger values in a matrix are represented by dark squares and smaller values by lighter squares. The feature location heat-map visualises the relations among features and blocks to help in feature location. Figure 4.12 presents a matrix that relates known features of Vending Machine artefacts to blocks identified in this set of artefacts. The matrix values, which define the colours, are based on the calculation of the Feature-Specific feature location technique. For example, for the feature Coffee, block 0 and block 4 have 100% because these blocks are present in all the artefacts where we have Coffee. The heat-map is enriched with red location marks to show the results of the selected feature location technique, which can be different from Feature-Specific. In this case, the red marks correspond to SFS, meaning that, in the case of Coffee, block 0 appears in other variants that do not have Coffee, while block 3 does not appear in any variant that does not have Coffee. Therefore, the red mark appears only for block 3.
Feature Relations Graphs (FRoGs): FRoGs is detailed in Chapter 9 and it is used for constraints discovery.
VariClouds: This visualisation, based on word clouds, is used during the feature identification activity to suggest feature names to domain experts. It is described in Chapter 8.
Experiences and evaluation with the Eclipse case study
In this section we discuss the practical usage of the realization of the framework in BUT4Reuse. First, we quantitatively evaluate the effort of integrating new techniques and adapters. Then, we detail an SPL adoption scenario building on the case study of Eclipse variants. We perform a qualitative evaluation in a controlled experiment scenario with master students who designed and developed the Eclipse adapter. We also present the results of using this adapter to discuss the benefits of an extensible framework.
Design of the Eclipse adapter
Eclipse [START_REF]Eclipse. Eclipse integrated development environment[END_REF] is an integrated development environment providing tool-sets for a wide range of software development needs. Each of these tool-sets is called an Eclipse package. The official Kepler release of Eclipse is a family of twelve default packages: Standard, Java EE, Java, C/C++, Scout, Java and DSL, Modeling Tools, RCP and RAP, Testing, Java and Reporting, Parallel Applications and Automotive Software iv . We targeted extractive SPL adoption taking these packages as the existing variants.
An Eclipse package is based on a folder that contains the executable and a set of folders and configuration files. Two relevant folders are the plugins folder (containing the installed plugins), and the features folder (containing information about the features present in the variant). We purposely ignored the information that the features folder could provide and we used it only for discussing the results of the extractive SPL adoption activities.
Chapter 6 provides more details on Eclipse and its releases. The detailed information in that chapter is not necessary to understand this experiment, which focuses on the Kepler release.
Eclipse adapter design and implementation
A BUT4Reuse adapter for Eclipse was designed and implemented in three weeks of development by a group of eight master students who received six hours of training on BUT4Reuse principles. They followed the tasks presented in Section 4.2.2.
• Task 1: Elements identification. The elements that compose an Eclipse package are the Plugin elements (the plugins) and the File elements (all the resources of an Eclipse package that are not the plugins). In order to decompose an artefact, the adapter performs a tree traversal of the Eclipse root folder obtaining the Plugin and File elements.
• Task 2: Structural dependencies identification. A Plugin element may depend on other plugins. Each plugin has meta-data (the bundle manifest declaration) which declares its required plugins. Concretely, we considered the static non-optional dependencies defined in the Require-Bundle set. Therefore the Plugin element structurally depends on the Plugin elements of these plugins. The id assigned to this dependency type is requiredBundle and it has no upper bound because there is no limit on the number of plugins that can require a plugin. The File elements depend on their corresponding parent File element and the defined dependency type's id is container. There is no upper bound, as a folder can contain any number of files or folders.
• Task 3: Similarity metric definition. The similarity between Plugin elements is implemented comparing the plugin identifiers (ids). For File elements, the similarity is implemented comparing their file path relative to the Eclipse root folder.
iv http://eclipse.org/downloads/packages/release/Kepler/SR2
• Task 4: Reusable assets construction. The construction of a set of elements is implemented by replicating the plugins and files associated with these elements. Also, there is a configuration file named bundles.info that is adjusted, if present, during the construction. This final adjustment leads to completely functional Eclipse installations created through systematic reuse (a simplified sketch of the similarity and dependency extraction of this adapter follows).
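The following fragment sketches the similarity metrics of Task 3 and the Require-Bundle extraction of Task 2, using only standard Java APIs; the plugin representation, manifest parsing and error handling are simplifications of the students' implementation (for instance, plugins packaged as folders and version ranges containing commas are not handled).

```java
import java.io.IOException;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.jar.JarFile;
import java.util.jar.Manifest;

public class EclipseAdapterSketch {
    /** Task 3: two Plugin elements are the same if their plugin ids are equal (versions ignored). */
    static double pluginSimilarity(String pluginId1, String pluginId2) {
        return pluginId1.equals(pluginId2) ? 1.0 : 0.0;
    }

    /** Task 3: two File elements are the same if their paths relative to the Eclipse root are equal. */
    static double fileSimilarity(Path eclipseRoot1, Path file1, Path eclipseRoot2, Path file2) {
        return eclipseRoot1.relativize(file1).equals(eclipseRoot2.relativize(file2)) ? 1.0 : 0.0;
    }

    /** Task 2: structural dependencies of a plugin from its Require-Bundle manifest header. */
    static List<String> requiredBundles(Path pluginJar) throws IOException {
        try (JarFile jar = new JarFile(pluginJar.toFile())) {
            Manifest manifest = jar.getManifest();
            if (manifest == null) return List.of();
            String header = manifest.getMainAttributes().getValue("Require-Bundle");
            if (header == null) return List.of();
            List<String> required = new ArrayList<>();
            // Naive split: does not handle commas inside version ranges such as bundle-version="[1.0,2.0)".
            for (String entry : header.split(",")) {
                required.add(entry.split(";")[0].trim()); // drop directives such as resolution:=optional
            }
            return required;
        }
    }
}
```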
The Eclipse adapter, integrated in BUT4Reuse, consists of 471 LOC. Because of the presented design decisions, there are some known limitations: 1) we only identify Plugin elements that are in the Eclipse plugins folder. Other technical mechanisms exist, such as using the dropins folder or the bundle pooling mechanism. However, they are not used in the official Eclipse packages, so this was not considered a relevant limitation. 2) The similarity metric among Plugin elements does not consider plugin versions, meaning that two plugins with the same ids but different versions are considered the same Plugin element. This situation happens with 23 plugins out of the more than two thousand plugins present in the twelve variants, and only two of them were major version changes. Finally, 3) other technical methods to define structural dependencies, such as Import-Package or x-friends, are not considered.
Lessons learned:
We consider that the learning curve for the framework principles and the basic usage of BUT4Reuse (six hours) is acceptable. The students quickly started to discuss in terms of elements and blocks. We have also observed that the effort is greater in the design of the adapter than in the implementation itself. Before starting the implementation, the students needed to obtain in-depth knowledge of many notions and terminology of Eclipse. This corroborated our experience with other adapters, such as the JSON and Scratch adapters shown in Table 4.1, which were implemented by another student. Defining the granularity of the elements and the similarity metric, or identifying how to obtain the dependency information, represented a difficult decision-making process with many trade-offs.
Results and discussions
We report the results of the different layers defined in Section 4.2.3 and we discuss the implications of the layers' extensibility. We used the twelve Eclipse Kepler SR2 Windows 64-bit packages as artefact variants. The average number of elements per artefact is 1483. The Eclipse adapter developed by the students takes eleven seconds to decompose the twelve Eclipse packages into Plugin and File elements. The reported performance in execution time is the average of ten executions calculated using a Dell Latitude E6330 laptop with an Intel(R) Core(TM) i7-3540M CPU @ 3.00GHz, 8GB RAM, running Windows 7 64-bit.
Visualisation: Figure 4.13 shows an example of a visualisation that helped to understand the Eclipse structure in terms of the defined elements. Concretely, it presents the result of the Graph visualisation of the elements of the Modeling Tools package. The white nodes are Plugin elements and the grey nodes are File elements. The edges correspond to the defined structural dependencies. The size of a node is related to the number of elements that depend on this node (in-degree). The biggest grey File element node corresponds to the mentioned plugins folder containing all the plugins. The biggest white Plugin element node corresponds to the org.eclipse.core.runtime plugin, which is the foundation of the Eclipse platform. Almost all plugins require this plugin to be functional. If we focus on the white nodes, on the left side of the biggest grey File element (plugins folder) we can see a set of highly interconnected plugins, and on the right side a set of plugins with few or no dependencies on other plugins. If we focus on the grey nodes, we can observe the tree-like structures of the File elements.
Block identification: We used the Interdependent Elements technique for block identification as presented in Section 4.3.2. 61 blocks were identified. The average number of elements per block was 68, and the technique took 62 milliseconds.
Feature identification: For this step, we requested the expertise of three domain experts with more than ten years of experience in Eclipse development, who analysed, independently of each other, the textual representations of the elements of the 61 blocks. They were able to manually identify features by guessing the functionality that the blocks could provide. We present this manual process through an explanation of the feature naming subactivity.
Feature naming during feature identification: The domain experts were able to select a name for each identified functionality. The feature identification process was possible, with an average of 87% of the blocks assigned to a named feature. This manual task took an average of 51 minutes. We manually analysed the reported names of all the blocks and the experts' comments. Regarding name agreement, 56% of the blocks were named identically by the three domain experts. For 33% of the blocks, two of them coincided. That leaves 11% of the blocks with no coincidence. Also, not all the blocks were easy to name. For 18% of them, at least one domain expert was not able to set a name. According to their comments, the reasons were 1) not knowing the plugins at all, or 2) the plugins inside a block having no evident relations among them. The first one is a limitation stemming from the experts' knowledge, while the second one is a limitation of the block identification technique. Also, 8% of the blocks corresponded to plugins that are libraries. The three domain experts mentioned that these blocks cannot be considered features, but rather support for actual features. Another 5% of the blocks were considered irrelevant from a functional perspective given that they consist entirely of source code plugins (non-compiled) found in some distributions but not in others.
We repeated this feature naming case study with the assistance of a visualisation paradigm for domain experts. The results are presented in Chapter 8.
Feature location:
In this case study, feature location consists in locating the plugins associated with each Eclipse feature. For assessment purposes, we programmatically mined the Eclipse features of all the Eclipse packages by getting the information from the features folder. This provides us with the list of features to locate, as well as a ground truth for evaluating the employed feature location techniques.
Chapter 6 explains the feature location activity in Eclipse packages and the results of four feature location techniques.
Constraints discovery: We used the Structural constraints discovery technique on the 61 blocks. 74790 structural constraints were discovered, which demonstrates how highly interconnected the Eclipse plugins are. The analysis took 88 seconds. We also used the A-Priori association rules (with a limit of 30000 rules to prevent memory issues in the algorithm). This analysis took 0.5 seconds. This technique also discovered excludes constraints that are not expected to hold in the context of Eclipse feature analysis. The technique is conservative in the sense that it prevents block combinations that are not part of the existing variants. Again, there are trade-offs in using one technique or the other. The A-Priori technique, which does not reason on the elements' structural dependencies, is more conservative with respect to possible semantic constraints among the features. A user of BUT4Reuse may decide that selecting this technique is not appropriate for the Eclipse variants scenario because we are not expecting mutual exclusion constraints among the features.
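As an illustration of the kind of reasoning involved (and not the actual A-Priori implementation used here), the following sketch mines simple pairwise rules from block presence across variants. It also shows why excludes rules can emerge purely from the limited set of observed variants: if two blocks never co-occur in the available packages, a conservative rule miner will report an excludes constraint. All types and names are assumptions for the sketch.

```java
import java.util.*;

// Illustrative sketch: pairwise confidence-1 rules over block presence.
// "A requires B" if every variant containing A also contains B;
// "A excludes B" if no variant contains both A and B.
public class PresenceRuleMiningSketch {

    static List<String> mineRules(Map<String, Set<String>> blocksPerVariant, Set<String> allBlocks) {
        List<String> rules = new ArrayList<>();
        for (String a : allBlocks) {
            for (String b : allBlocks) {
                if (a.equals(b)) continue;
                boolean requires = true;
                boolean excludes = true;
                for (Set<String> variant : blocksPerVariant.values()) {
                    if (variant.contains(a) && !variant.contains(b)) requires = false;
                    if (variant.contains(a) && variant.contains(b)) excludes = false;
                }
                if (requires) rules.add(a + " requires " + b);
                if (excludes && a.compareTo(b) < 0) rules.add(a + " excludes " + b);
            }
        }
        return rules;
    }
}
```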
Reusable assets construction: Each of the 61 blocks was constructed separately. The reusable asset consists of a set of files that, if integrated into an Eclipse installation, can provide some functionality. We evaluated the validity of the reusable assets by re-constructing the 12 Eclipse variants. We compared the file structure of the original and the re-constructed variants. They were the same except for a few cases caused by the mentioned limitation of not considering different plugin versions. After manually solving these versioning problems, we manually checked that the Eclipse packages were executable and functional (i.e., the plugins could be started without dependency issues). We further generated non-existent variants from structurally valid configurations according to our discovered constraints. For example, we generated an Eclipse with only the Core block, i.e., the elements that are common to the twelve packages. Also, we generated one package with all the blocks providing, in a single package, the functionality of the twelve variants. We created another Eclipse with the core and CVS versioning system support, and another one with the union of the blocks corresponding to the Java and Testing Eclipse variants.
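A minimal sketch of how such a variant could be re-constructed when the reusable assets are plain files: the files of the selected blocks are copied into a target folder. The maps, paths and helper names are assumptions for illustration, not the BUT4Reuse implementation.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;

// Illustrative sketch: re-construct a variant by copying the files of the
// selected blocks (e.g., the Core block plus CVS support) into a target folder.
public class VariantConstructionSketch {

    static void constructVariant(Map<String, List<Path>> filesPerBlock,
                                 List<String> selectedBlocks,
                                 Path sourceRoot, Path targetRoot) throws IOException {
        for (String block : selectedBlocks) {
            for (Path file : filesPerBlock.getOrDefault(block, Collections.emptyList())) {
                Path target = targetRoot.resolve(sourceRoot.relativize(file));
                if (target.getParent() != null) {
                    Files.createDirectories(target.getParent());
                }
                // Last writer wins for files shared by several blocks.
                Files.copy(file, target, StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }
}
```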
Feature model synthesis: As discussed before, the blocks were renamed during feature identification. After that, using the two available feature model synthesis approaches, the Flat feature diagram and the Alternatives before Hierarchy heuristic, we created two different feature diagrams. In the second one, the hierarchy was very limited because of the highly interconnected blocks. The presence of cross-tree constraints was more prominent given that classical feature diagrams only support one parent per feature.
Limitations
In this section we discuss two general limitations of the proposed framework for extractive SPL adoption.
The problem of feature interactions
Apart from planned feature interactions (e.g., the Logging feature mentioned in Section 4.2.3), feature interactions can also be the cause of undesired system behavior. In extractive SPL adoption this is an issue when, once the SPL is adopted, we want to derive new products which were not part of the initial configurations. As we mentioned in Section 4.3.7, the extractive SPL adoption is semantically correct when we can generate exactly the original products [START_REF] Rubin | Combining related products into product lines[END_REF]. After performing the feature constraints discovery activity, the generation of new products beyond the original ones is an issue mainly related to SPL testing [HPP + 14].
The boundaries of semantic similarity
The third task for designing an adapter, described in Section 4.2.2, is related to the definition of a similarity metric among elements. The similarity function can make use of all the information that the adapter can retrieve, such as the properties of the element, information about the ancestor elements in the case of structured elements, or information about element dependencies to other elements. However, in many cases, calculating the similarity requires taking into consideration the semantics of the elements.
This issue has been tackled in different research communities dealing with different artefact types. The modeling community has categorized different similarity calculation approaches [START_REF] Kolovos | Different models for model matching: An analysis of approaches to support model differencing[END_REF]: static identity-based matching assumes that the elements have a unique id; signature-based matching calculates an id based on several properties; similarity-based matching assigns weights to properties and aggregates related elements (e.g., dependencies among elements); finally, custom language-specific matching takes into account the semantics of the elements. To achieve these semantic comparisons, the similarity function is aware of the domain-specific semantics (e.g., UML class models [START_REF] Xing | Differencing logical UML models[END_REF]).
In the source code clone detection community, textual similarity is used to detect clones of Type I, II or III [START_REF] Kumar | A survey on software clone detection research[END_REF] which are, respectively: identical source code fragments, fragments with changes in the names of literals, variables, etc., or fragments where some modifications were made by adding or removing parts. In Type IV, the fragments do not share textual similarity but they have functional similarity; these are called semantic clones.
While designing an adapter for BUT4Reuse that requires semantic comparisons, we depend on the state-of-the-art of the available semantic comparison approaches for the targeted artefact type. The limitations of the similarity function will also be the limitations of the BUT4Reuse adapter. One example is the requirements and natural language text adapter that we implemented, shown in Table 4.1. We use a similarity function to compare the meaning of two sentences based on a semantic analysis [WP94] using WordNet [fJ15], which is a database of cognitive synonyms. For analysing the variability in requirements, we used syntactic and semantic similarity, while others also rely on parts of behaviors as manifested in the requirements [START_REF] Itzik | Variability analysis of requirements: Considering behavioral differences and reflecting stakeholders' perspectives[END_REF]. Semantic analysis requires advanced techniques which are out of the scope of this dissertation. However, we are aware that failing to correctly calculate the similarity can negatively impact the extractive SPL adoption activities supported by BUT4Reuse.
Conclusions
We introduced a generic and extensible framework for an extractive approach to SPLE adoption. We presented its principles with the objective to reduce the current high up-front investment required for an end-to-end adoption of systematic reuse. The framework can be easily adapted to different artefact types and can integrate state-of-the-art techniques and visualisation paradigms to help in this process. We have presented Bottom-Up Technologies for Reuse (BUT4Reuse) which is our realization of the framework. We demonstrated the generic and extensible characteristics of this realization by presenting a variety of fifteen adapters to deal with diverse artefact types. We also demonstrated the extensibility with techniques and visualisation paradigms that have already been integrated to provide a complete solution. We empirically evaluated the framework integration and development complexity, and its usage, in the scenario of adopting an SPL approach from existing Eclipse variants.
As further work, apart from improving or proposing concrete techniques, there are still many challenges regarding genericity and extensibility in extractive SPL adoption. Software does not rely on only one type of artefact; for example, a software project usually contains requirements, design models, source code and test suites. Therefore, we should be able to take into account different artefact types simultaneously. We conducted experiments in this direction, for example, considering files and plugins in Eclipse variants, or source code, files and meta-data in the case of Android applications, as we will see in Chapter 7. Also, the extensibility of the layers of the framework creates the need to define guidelines for different scenarios on selecting the most appropriate techniques and extensions.
Extraction of Model-based software product lines from model variants
This chapter is based on the work that has been published in the following papers:
Introduction
Using the principles of the BUT4Reuse framework described in Chapter 4, we present MoVa2PL (Model Variants to Product Line) where we address the requirements for extracting Model-based Software Product Lines (MSPLs) from model variants. Adopting an MSPL, or any other kind of SPL, will allow practitioners to easily and efficiently propagate changes in one feature to all existing variants by simply re-deriving them automatically. Moreover, the extracted MSPL can be relied upon to efficiently derive new products by combining features.
Challenges of extractive SPL adoption in the modeling scenario.
• Dealing with model variants. Analysing and comparing the existing model variants (i.e., models created and maintained using ad-hoc reuse techniques) to identify commonality and variability in terms of features is an important activity to extract an MSPL. Also, once features are identified and located, and the constraints among them are detected, we need to use this information to construct the operative MSPL.
Contributions of this chapter.
• BUT4Reuse adapter for models enabling MoVa2PL: We provide a meta-model independent approach for commonality and variability analysis through the design of a BUT4Reuse adapter. We assume that the variants were not developed independently but stem from the same family of systems. We include information about the element dependencies for the automatic detection of constraints between the identified features. Finally, we propose a method to automatically construct the MSPL. In the realization of our approach, we used the Common Variability Language (CVL) [START_REF] Haugen | CVL: common variability language[END_REF] to implement the MSPL.
The remainder of this chapter is structured as follows: Section 5.2 presents an example to illustrate the challenges in the modeling domain. Section 5.3 details the design decisions of the model adapter. Section 5.4 presents experiments based on case studies with large systems and we discuss the approach and limitations. Finally, Section 5.5 concludes this work and outlines future directions.
Extraction of Model-based Product Lines
In the realm of software engineering, models, which are high-level specifications of systems, have progressively gained importance for researchers and practitioners as the primary artefacts of development projects. Traditionally, modeling has been used in a descriptive way to represent systems by abstracting away some aspects of the systems and emphasizing others [START_REF] Völter | DSL Engineering -Designing, Implementing and Using Domain-Specific Languages[END_REF]. Nonetheless, prescriptive modeling is now trending and is relied upon to automate the generation of products as well as their validations [START_REF] Douglas | Model-driven engineering[END_REF]. In this context, models are often extended, customized or simply reconfigured for use in particular system settings. Thus, an important challenge in Model-Driven Engineering (MDE) is to develop and maintain multiple variants, i.e., similar models, by exploiting the features that the models share (commonalities) and managing the features that vary among them (variabilities) [START_REF] Apel | An overview of feature-oriented software development[END_REF].
We present in Figure 5.1 our running example illustrating a scenario of UML model variants for different banking systems. This banking system domain and artefacts were used in previous works [ZFdSZ12, ZJ06, ABB + 02]. In this scenario we have created three models through ad-hoc reuse with variations on the limit of bank withdrawal, the consortium entity and currency conversion. The building of variants for such a simple running example aims to illustrate how time-consuming and error-prone the manual creation of variants can be in real-world complex scenarios.
The first created variant, the Bank 1 UML model, is implemented with information related to currency conversion and consortium. The Bank 2 UML model includes a newly requested feature, the support for a limit in the withdrawal. This new banking system, however, does not need consortium. Thus, to create it, we took a copy of Bank 1 where we added one UML property and two UML operations, modified the name attribute of a UML operation, and removed the UML class Consortium and all related UML elements (one UML class and two UML operations). The needs of Bank 3, on the other hand, include a limit in the withdrawal and consortium, but no currency conversion. To create this variant, we built from a copy of Bank 2 where we removed all UML elements related to currency conversion (one UML property, four UML operations and one UML class). However, we also selected and copied UML elements from Bank 1 to complete the implementation of Bank 3.
The presented manual process quickly ceases to be sustainable if we consider the possibility that continuously creating new variants requires even more effort in finding, selecting and reusing elements from other variants. Furthermore, because of a lack of an explicit formalization of feature constraints, potential inconsistencies in the requested feature configuration for a new variant will be found during or after the variant derivation.
To extract an MSPL from the variants of our running example, feature identification and location would consist in analysing the three UML variants shown in Figure 5.1 to identify the core elements of a banking system, as well as the different features related to currency conversion, consortium and limit support. In addition, we would need to identify the constraints among them. Then, we will exploit the identified features and the existing variants to build the necessary assets for the MSPL. In this case, these assets are the CVL layers presented in Section 3.2 which are the variability definition and product realization layers. We are concerned with the following research questions:
RQ1: Based on existing model variants, can we automatically construct a variability definition layer that ensures the validity of the configuration space?
RQ2: Can we automatically infer a product realization layer for variants of complex systems?
This chapter presents the design of the BUT4Reuse model adapter which aims to provide a solution for the previous questions enabling the extraction of MSPLs.
Designing the Model Adapter
We report the design decisions made during the creation of the model adapter. The following four subsections correspond to the design tasks described in Section 4.2.1, including technical details about modeling frameworks and CVL.
Elements identification: A meta-model independent approach
Models are artefacts that can be expressed as a sequence of elementary construction operations [START_REF] Blanc | Detecting model inconsistency through operation-based model construction[END_REF]. By using the Meta Object Facility (MOF) concepts [START_REF] Omg | Meta Object Facility (MOF) Core Specification[END_REF] we are able to decompose any model compliant with the Essential MOF which is a subset containing MOF's core. The Eclipse Modeling Framework (EMF) [START_REF]Eclipse. Eclipse integrated development environment[END_REF] is a meta-modeling framework which is considered an implementation of Essential MOF. EMF is widely used to define domain-specific modeling languages (DSLs). The model adapter decomposes the model in elements, called Atomic Model Elements (AMEs) hereafter. The AMEs in our approach are:
• Class: Each DSL defines which types of meta-classes are available. Therefore, it is important not to strictly relate the term class with UML classes. In Figure 5.1 we can observe how, even for UML models, we have other classes such as Packages, Properties or Operations.
• Attribute: A class in a DSL can contain attributes relevant to this class. Each attribute will have a value. A typical attribute can be the name attribute of a class which expects a string value.
• Reference: Apart from attributes, references are important properties of the classes. Instead of the typed values of the attributes, references are "links" to other classes.
From a technical perspective, we implemented the decomposition of a model variant in AMEs using a pre-order tree traversal of the model by following the containment references. We add each Class and its Attributes and, after that, we add the References once all the Classes are retrieved.
The reflexivity capability of EMF models makes it possible to obtain information from the meta-model. Concretely, we use this capability to provide a generic approach for dealing with models in MoVa2PL. During the decomposition, we also check, for both attributes and references, that they are not derived, volatile or transient; if they are, we ignore them in the decomposition. For example, if an attribute has the derived flag in a given meta-model, it means that its value is automatically calculated from other attributes or by some function, thus we do not add it as an Attribute element during the decomposition of the model in AMEs.
We also require that their values have been set (i.e., non-null values or non-empty lists of referenced elements).
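The following simplified sketch illustrates this decomposition using the standard EMF reflective API (eAllContents, eIsSet, eGet); the String-based AME representation is only for illustration and does not correspond to the adapter's internal types.

```java
import java.util.*;
import org.eclipse.emf.ecore.*;

// Simplified sketch of the decomposition of an EMF model into atomic model
// elements (AMEs): a pre-order traversal over the containment tree collects
// Class AMEs and their Attribute AMEs, and a second pass collects Reference AMEs.
public class ModelDecompositionSketch {

    public static List<String> decompose(EObject root) {
        List<String> ames = new ArrayList<>();
        List<EObject> classes = new ArrayList<>();

        // First pass: classes and attributes (eAllContents is a pre-order traversal).
        classes.add(root);
        root.eAllContents().forEachRemaining(classes::add);
        for (EObject eObject : classes) {
            ames.add("Class: " + eObject.eClass().getName());
            for (EAttribute attr : eObject.eClass().getEAllAttributes()) {
                if (attr.isDerived() || attr.isVolatile() || attr.isTransient()) continue;
                if (!eObject.eIsSet(attr)) continue; // ignore unset values
                ames.add("Attribute: " + attr.getName() + " = " + eObject.eGet(attr));
            }
        }
        // Second pass: non-containment references, once all classes are retrieved.
        for (EObject eObject : classes) {
            for (EReference ref : eObject.eClass().getEAllReferences()) {
                if (ref.isContainment() || ref.isDerived() || ref.isVolatile() || ref.isTransient()) continue;
                if (!eObject.eIsSet(ref)) continue;
                ames.add("Reference: " + ref.getName() + " -> " + eObject.eGet(ref));
            }
        }
        return ames;
    }
}
```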
By applying the presented method, Figure 5.
Structural dependencies identification
In Section 4.3.5, we explained why the information about the structural dependencies among elements is important to identify structural constraints. A dependency involves a pair of AMEs and it has an id which corresponds to the dependency type. Each dependency type has an upper bound representing the maximum number of times an AME can be referenced with its dependent counterpart.
The AME dependencies are obtained as follows (a simplified sketch in EMF terms is given after the list):
• Class: A class AME depends on its parent class AME. By parent we mean the container relation (not to be confused with inheritance in the UML sense). The dependency id is the containment reference id as it was given in the meta-model. The upper bound is the containment reference upper bound as indicated in the cardinality defined in the meta-model. For instance, from the running example, the class AME Operation deposit depends on the class AME Class Account (parent), its dependency id is ownedOperation and, in this case, there is no upper bound because the meta-model defines that a UML Class can contain unlimited owned operations.
• Attribute: An attribute AME depends on the class AME hosting the attribute. The dependency id is the attribute id which is defined in the meta-model. A class cannot have the same attribute twice, therefore, the upper bound is one. As an example of such dependency, any Operation has an attribute AME name. In Operation deposit, the attribute AME name depends on Operation deposit and there can only be one attribute name.
• Reference: A reference AME depends on the class AME hosting the reference. The dependency id is the reference id of the host class. The upper bound in a dependency with the host is one as it was the case for the attribute AME. However, a reference AME also depends on each of the class AMEs referenced. In this case, the hard-coded value referenced is used as id for the dependencies to the referenced class AMEs. Given that a class AME can be referenced as many times as desired, the upper bound in a dependency with the referenced classes is unlimited. From the running example, the reference AME Type of Parameter amount depends on Parameter amount (the host) and also depends on the class AME Primitive Type double as it is referenced.
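A simplified sketch, in EMF terms, of how the containment dependency of a Class AME could be derived; the Dependency type is illustrative and not the BUT4Reuse API. The Attribute and Reference dependencies follow analogously from the meta-model information of the hosting class.

```java
import java.util.Optional;
import org.eclipse.emf.ecore.*;

// Illustrative sketch: a Class AME depends on its container; the dependency id
// is the containment reference name and the upper bound comes from the
// cardinality defined in the meta-model.
public class AmeDependenciesSketch {

    static class Dependency {
        final EObject from, to;
        final String id;
        final int upperBound; // -1 (ETypedElement.UNBOUNDED_MULTIPLICITY) means no upper bound

        Dependency(EObject from, EObject to, String id, int upperBound) {
            this.from = from; this.to = to; this.id = id; this.upperBound = upperBound;
        }
    }

    static Optional<Dependency> containmentDependency(EObject eObject) {
        EObject parent = eObject.eContainer();
        if (parent == null) return Optional.empty(); // the root class has no parent
        EReference containment = eObject.eContainmentFeature();
        return Optional.of(new Dependency(eObject, parent,
                containment.getName(), containment.getUpperBound()));
    }
}
```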
Similarity metric definition: Relying on extensible techniques
In Section 4.5 we discussed the different ways to calculate similarity between elements. For the comparison between AMEs, we rely on existing, highly extensible model comparison techniques. We used EMF DiffMerge [START_REF]EMF Diff/Merge: a diff/merge component for models[END_REF], which enables the comparison of two model scopes using different match and diff policies. These techniques allow dealing with meta-model peculiarities or implementing specific comparison purposes. By using DiffMerge, we provide the means to integrate domain-specific similarity calculations if needed. MoVa2PL provides a default similarity metric for AMEs. More precisely, using the classification by Kolovos et al. [START_REF] Kolovos | Different models for model matching: An analysis of approaches to support model differencing[END_REF], MoVa2PL's default model comparison behaviour is static identity-based matching. By using the mentioned extension mechanisms, it is possible to contribute signature-based matching or other approaches if required.
We explain how we designed the default similarity metric for the model adapter, which is a boolean method (i.e., 0 for different and 1 for equal); a simplified sketch is given after the list below.
• Class: Two class AMEs are equal if we isolate each of the classes in a scope that contains only these elements and the default EMF DiffMerge policy returns no difference in the comparison. The Match policy used by default consists in trying to compare an attribute tagged as id for the meta-class of the class. In EMF meta-models, this information is sometimes included (e.g., an id attribute or a unique name attribute). If no id attribute is defined, it tries to infer an id by directly looking in the serialization mechanism of the artefact (e.g., ids in a XMI file). If no id is found, it calculates a URI as id for the comparison. Notice that this default policy ignores all the attributes and references of the class (except the id attribute if defined). This way it will not necessarily state that a class is different if they have different attributes or references.
• Attribute: In EMF, each defined attribute in the meta-model has an identifier (for example Operation_Name is the id for the attribute Name of the Operation meta-class).
Two attribute AMEs will be the same if they deal with the same attribute id and if the owner classes of the attribute are the same. Finally, the diff policy is in charge of deciding whether the values of the attributes should be considered equal for this attribute id. The default implementation of the diff policy just performs an equals operation on the values.
• Reference: As with attributes, two reference AMEs are equal if they share the same reference id and if the owner classes are the same. Then we check that the referenced classes are the same. If it is an ordered reference, the referenced classes must appear in the same positions; if it is not ordered, it is only required that all elements are present in the other reference AME.
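The following simplified sketch approximates the default boolean similarity for Attribute AMEs using plain EMF utilities. It is an approximation of the behaviour obtained with the default EMF DiffMerge policies, not the DiffMerge API itself, and all method names are illustrative.

```java
import java.util.Objects;
import org.eclipse.emf.ecore.*;
import org.eclipse.emf.ecore.util.EcoreUtil;

// Illustrative sketch: two attribute AMEs match when they refer to the same
// meta-model attribute, their owner classes are matched (approximated here by
// comparing EMF ids, falling back to URIs) and their values are equal.
public class DefaultSimilaritySketch {

    static boolean sameOwner(EObject a, EObject b) {
        String idA = EcoreUtil.getID(a);
        String idB = EcoreUtil.getID(b);
        if (idA != null || idB != null) return Objects.equals(idA, idB); // id attribute if defined
        return Objects.equals(EcoreUtil.getURI(a), EcoreUtil.getURI(b));  // fall back to URIs
    }

    static boolean equalAttributeAMEs(EObject ownerA, EAttribute attrA, Object valueA,
                                      EObject ownerB, EAttribute attrB, Object valueB) {
        return attrA == attrB                     // same attribute of the meta-model
                && sameOwner(ownerA, ownerB)      // owner classes match
                && Objects.equals(valueA, valueB); // default diff: plain equals on values
    }
}
```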
Reusable assets construction: Generating a CVL model
As presented in Section 2.1.2, different strategies can be selected for implementing an SPL, ranging from positive to negative variability (or hybrid approaches) [START_REF] Völter | Product line implementation using aspect-oriented and model-driven software development[END_REF]. These strategies have already been analysed in the context of MSPLs, where positive variability is referred to as the additive approach and negative variability as the subtractive one [START_REF] Zhang | Developing Model-Driven Software Product Lines[END_REF][START_REF] Perrouin | Reconciling automation and flexibility in product derivation[END_REF]. The additive approach relies on a minimum base model composed only of the core of the family of models, i.e., the model elements that are common to all model variants. Then, library models are required containing the model fragments to be added to the base model. On the contrary, the subtractive strategy consists in constructing the maximum base model and then removing the model elements related to each of the non-selected features. Hybrid approaches use a mix of subtractive and additive strategies by using library models while still leaving the possibility to subtract from the base model.
In MoVa2PL, we rely on a subtractive strategy. It is possible to construct a maximum base model even if the resulting base model violates cardinalities defined in the relations among meta-classes (i.e., cardinality upper bounds). Even if the base model is structurally invalid, the resolution model is responsible for operating on the base model to bring it to a valid state. We explain how the CVL layers, described in Section 5.2, are created. Once these layers are constructed, the CVL tool that we use [START_REF]CVL Tool[END_REF] provides an engine to automatically transform the base model into a resolved model with the selected features.
Variability model construction:
The variability model is created using information from the identified features and the discovered constraints. Figure 5.4a shows the CVL variability model created from the three model variants of the running example. The four steps for its creation are as follows:
1. Identified features are added as well as their negations. Feature negations are needed as a technical mechanism to differentiate the actions of the resolution layer. The negations will trigger the removal of model elements from the base model. We use the flat feature model synthesis presented in Section 4.3.6.
2. Discovered structural constraints are added as propositional logic formulas.
3. Mutual exclusion constraints are added to avoid selecting a feature and its own negation.
4. Configurations, in terms of features in the existing model variants, are added. In CVL terminology, these configurations are called resolution elements.
In the running example, as shown at the bottom of Figure 5.4a, three configurations, corresponding to the existing variants, are created in the fourth step. However, with the features and constraints identified and formalized as a result of the first three steps, there are eight possible valid configurations. Therefore, five additional models can be derived using different combinations of feature selections.

Base model construction: The base model creation, which in our case yields a maximal model, can be considered a realization of an n-way model merge [START_REF] Rubin | N-way model merging[END_REF]. We start from the class AME marked as the initial resource (i.e., the root) and automatically construct the base model from scratch with the information contained in each AME using a depth-first traversal of the containment dependencies, creating the classes and setting their attributes, followed by a second phase where the references are set to the corresponding classes in the base model. In this process, there is no need to consider whether upper bounds are being violated, nor whether attributes in the base model are already set. In these cases, as discussed before, the resolution model is responsible for providing the means to adjust the base model at derivation time. Figure 5.4b shows the base model obtained from the variants of our running example.
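A simplified sketch of the base model construction step using the EMF reflective API. The AME bookkeeping (which EClass, parent, attributes and references belong to each AME) is assumed to be available, and the method names are illustrative.

```java
import java.util.List;
import org.eclipse.emf.ecore.*;
import org.eclipse.emf.ecore.util.EcoreUtil;

// Illustrative sketch: recreate a class in the base model under its parent,
// and set attribute values. Upper-bound violations or already-set attributes
// are tolerated here; the resolution model adjusts the base model at derivation time.
public class BaseModelConstructionSketch {

    static EObject createClass(EClass eClass, EObject parent, EReference containment) {
        EObject created = EcoreUtil.create(eClass);
        if (parent != null) {
            if (containment.isMany()) {
                @SuppressWarnings("unchecked")
                List<EObject> children = (List<EObject>) parent.eGet(containment);
                children.add(created);
            } else {
                parent.eSet(containment, created);
            }
        }
        return created;
    }

    static void setAttribute(EObject owner, EAttribute attribute, Object value) {
        owner.eSet(attribute, value); // overwrites any previously set value
    }
}
```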
Resolution model construction: Given that we are following a subtractive strategy, each feature negation will be resolved by removing the classes associated to this feature negation.
To do so, we create three CVL entities for each feature negation: A placement fragment, a replacement fragment and a fragment substitution element. The placement fragment defines the model elements from the base model that will be replaced with the replacement fragment.
To implement deletion in our subtractive strategy, this replacement fragment consists of an empty fragment. Finally, the fragment substitution element is only a link to relate the placement with the replacement fragment.
In the CVL implementation that we use [START_REF]CVL Tool[END_REF], the resolution layer is defined inside the variability model itself, so the resolution information is contained in each of the features presented before in the variability layer. Regarding attribute and reference AMEs, if the class AME hosting the attribute or reference AME is not in the same feature, we need to add placement, replacement and substitution elements to the resolution information of the feature. For example, in the WithdrawWithoutLimit feature, we add a Placement value in the name attribute of the corresponding Operation class and we add a Replacement value, as shown at the bottom of Figure 5.5, with the string of the attribute AME. At resolution time, if WithdrawWithoutLimit is selected, this attribute value will be assigned. For reference AMEs, the same approach is followed but Object placement, replacement and substitution are used.
Experimental Assessment
In this section we discuss the assessment of MoVa2PL in two case studies. First, we describe the BUT4Reuse settings for conducting the extractive SPL adoption. Then, we present the characteristics of the case studies and the extracted MSPLs. Finally, we summarize the evaluation of MoVa2PL, where we checked its efficiency in extracting an MSPL from large models, and its effectiveness in deriving the initial models and new valid model variants.
BUT4Reuse settings for the case studies
Block and feature identification: Block identification is performed by computing the interdependence relations among AMEs as explained in the interdependent elements algorithm in Section 4.3.2. Feature identification is a process where domain experts will manually review the elements from the identified blocks to map them with the features of the system.
Feature constraints discovery:
A structurally valid model is a model that does not violate any constraint defined in the meta-model, such as the cardinality of the model references or the non-existence of dangling elements (i.e., elements without a parent). The semantic validity of model variants, which is checked by domain experts, is out of the scope of the structural analysis. As described in Section 5.3.2, we augment the information of the AMEs with the dependencies among AMEs in all variants. Thus, once the features are identified, we can perform the structural constraints discovery activity, which allows a more reliable definition of the variability model for the MSPL. The binary structural constraints discovery technique is explained in Section 4.3.5. For the discovery of structural constraints among features we reason on the dependencies of the AMEs. We provide details, separately, on the requires and excludes constraints discovery in this context.
Requires constraints discovery:
To avoid dangling elements in models derived after the extraction of the MSPL, we identify the requires constraints that ensure that all the classes (except the root) have a parent. We also identify a requires constraint between any pair of features when one feature needs the other (i.e., the feature is referenced).
In Figure 5.6 we show the blocks that were identified by computing the interdependence between AMEs. The BankCore feature, which corresponds to the first block (Block 0), comprises most of the AMEs; specifically, it comprises those common to all model variants. On the contrary, WithdrawWithoutLimit is based on only one AME (in the bottom left corner of the figure), which is the attribute AME of the name of an Operation. Each AME also shows its dependencies with other AMEs through the edges (clockwise-based directed graph). We observe that most of these dependencies exist between AMEs corresponding to the same feature (intra-feature structural dependencies). However, we also observe dependencies between AMEs of different features (inter-feature structural dependencies) which, in this case, all point towards Block 0 of the BankCore feature. As defined in the binary structural constraints discovery, we identify a requires constraint between two features when at least one AME from one feature has a structural dependency on an AME of the other feature. Figure 5.7 thus shows the discovered requires constraints in the running example.
Figure 5.7: Structural constraints discovered among the features of the banking system. All the features require the bank core and there is a mutual exclusion between withdraw with limit and withdraw without limit.
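A minimal sketch of this requires rule; the Ame type and its dependency accessor are assumptions for illustration, not the actual implementation.

```java
import java.util.*;

// Illustrative sketch: feature A requires feature B when at least one AME of A
// has a structural dependency on an AME of B.
public class RequiresDiscoverySketch {

    interface Ame {
        Collection<Ame> structuralDependencies();
    }

    static Set<String> discoverRequires(Map<String, Set<Ame>> amesPerFeature) {
        // Index which feature owns each AME.
        Map<Ame, String> owningFeature = new HashMap<>();
        amesPerFeature.forEach((f, ames) -> ames.forEach(a -> owningFeature.putIfAbsent(a, f)));

        Set<String> requires = new LinkedHashSet<>();
        amesPerFeature.forEach((feature, ames) -> {
            for (Ame ame : ames) {
                for (Ame dep : ame.structuralDependencies()) {
                    String target = owningFeature.get(dep);
                    if (target != null && !target.equals(feature)) {
                        requires.add(feature + " requires " + target);
                    }
                }
            }
        });
        return requires;
    }
}
```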
Mutual exclusion constraints discovery:
Cardinality in model references plays an important role in defining a DSL. These cardinalities define constraints in the domain that must not be violated to obtain valid models. To avoid violation of upper bound cardinalities we identify in which cases two features cannot coexist in the same model.
To illustrate upper bound cardinalities in real modeling scenarios, and following the context of the running example, we discuss the cardinalities of the widely used UML meta-model. In the UML meta-model, as implemented in the Eclipse UML2 Ecore, there are 242 classes with a total of 3113 non-volatile, non-transient, non-derived references. 67.7% of these references have no upper bound, and 32.2% of them have an upper bound of one referenced model class. The remaining 0.1% corresponds to the DurationObservation meta-class, which allows modeling execution durations in UML; this class has the event reference with an upper bound of two UML NamedElements.
Taking the example of a containment reference with an upper bound of one, it is structurally invalid to reference two different elements at the same time. This also applies to reference AMEs and attribute AMEs. In our running example, we have two attribute AMEs that depend on the same Operation class AME with the name dependency id. In one of them the value is withdrawWithLimit and in the other it is withdrawWithoutLimit. These attribute AMEs correspond to two different features and only one can be used in a derived model. This discovered constraint is shown in Figure 5.7 with an "excludes" link between the two features.
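The following sketch illustrates the excludes rule, simplified to the common case of an upper bound of one (i.e., two features exclude each other when their AMEs compete for the same single-valued slot of the same target AME); all types are assumptions for illustration.

```java
import java.util.*;

// Illustrative sketch: two features are mutually exclusive when their AMEs
// depend on the same target AME with the same dependency id and that slot has
// an upper bound of one (e.g., two "name" attribute AMEs on the same Operation).
public class ExcludesDiscoverySketch {

    static class Dep {
        final Object target;   // the AME that is depended on
        final String id;       // dependency id, e.g. "name" or "ownedOperation"
        final int upperBound;  // -1 means unbounded
        Dep(Object target, String id, int upperBound) {
            this.target = target; this.id = id; this.upperBound = upperBound;
        }
    }

    static Set<String> discoverExcludes(Map<String, List<Dep>> depsPerFeature) {
        Set<String> excludes = new LinkedHashSet<>();
        List<String> features = new ArrayList<>(depsPerFeature.keySet());
        for (int i = 0; i < features.size(); i++) {
            for (int j = i + 1; j < features.size(); j++) {
                String a = features.get(i), b = features.get(j);
                for (Dep da : depsPerFeature.get(a)) {
                    for (Dep db : depsPerFeature.get(b)) {
                        boolean sameSlot = da.target.equals(db.target) && da.id.equals(db.id);
                        if (sameSlot && da.upperBound == 1) { // both cannot fill a slot of size one
                            excludes.add(a + " excludes " + b);
                        }
                    }
                }
            }
        }
        return excludes;
    }
}
```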
ArgoUML case study
ArgoUML is an open source tool for UML modeling. Variants of this tool were created from its Java codebase [START_REF] Vinicius Couto | Extracting software product lines: A case study using conditional compilation[END_REF]. The features are mainly related to the tool support for editing different kinds of UML diagrams (i.e., Activity, Collaboration, Deployment, Sequence, State and UseCase diagrams). We reverse-engineered the source code of seven variants related to diagram editing support into UML models in order to apply MoVa2PL.
Table 5.1 presents these variants and the AMEs obtained after using the model adapter.
The last column corresponds to the number of dependencies between AMEs. For example, the first variant, ActivityDisabled, which contains all the features related to UML diagrams except the Activity diagram, contains more than a hundred thousand AMEs. Reverse-engineering source code variants into models was also appropriate to evaluate the scalability of MoVa2PL, as these models contain more than fifty thousand classes. The decomposition into AMEs of the seven model variants, including the dependencies, took an average of 15 seconds (i.e., around two seconds per variant) using a Dell Latitude E6330 laptop with an Intel(R) Core(TM) i7-3540M CPU @ 3.00GHz, 8GB RAM, running Windows 7 64-bit. The interdependent elements algorithm identified 41 blocks and took an average of seven minutes over ten runs. Table 5.2 shows the identified blocks and their size in terms of AMEs. Apart from the "big" blocks, we manually checked the small blocks and realized that most of them contain reference AMEs that, in the UML meta-model, are defined as ordered (i.e., the order of the referenced elements is important). The applied matching method takes the ordering of the referenced elements into consideration and therefore considered them as different. This issue, probably introduced by the Java-to-UML reverse engineering tool, makes the work of the domain expert, who needs to manipulate and analyse these blocks, more difficult. However, in terms of the size of model elements, most of the features were successfully identified by MoVa2PL (i.e., blocks zero to six).
Figure 5.9 shows the graph visualisation of the discovered structural constraints. Concretely, for the ArgoUML case study, 45 requires constraints and 13 mutual exclusion constraints were discovered. Figure 5.9 shows that the six identified features related to the diagrams require the Core feature. The other nodes of the graph correspond to blocks that the domain expert needs to analyse. These automatically calculated relations among the blocks can help during this manual process. For example, blocks that exclude each other are likely to be related, and the scope of analysis is narrowed for blocks requiring another block that is not the Core feature. The problem identified with the ordering of the references highlights the importance of the matching method. MoVa2PL is flexible enough to apply different matching methods, as presented in Section 5.3.3. By providing a matching method that ignores the ordering of the references, 18 blocks were identified in the ArgoUML case study, of which the first seven also corresponded to the Core and diagram features.
In-Flight Entertainment Systems case study
We consider the case study of an In-Flight Entertainment (IFE) System. This system is responsible for providing entertainment services for the passengers, including movies, music, internet connection or games during a flight. For our experiments we consider an IFE system from the Thales Group modelled in Capella [START_REF] Polarsys | [END_REF]. Capella is a system engineering modeling tool which implements the Arcadia method for system, software and hardware architectural design. A system modelled with Capella consists of five layers. The operational analysis layer captures the stakeholders, their needs, as well as general information of the system's domain.
The system analysis layer formalizes the system requirements. The logical architecture layer defines how the system fulfils its requirements. The physical architecture layer defines how the system will be technically developed and built. Finally, the end-product breakdown structure layer formalizes the component requirements definition to facilitate component integration, validation, verification and qualification. The domain of the Arcadia method is realized and tool-supported in Capella by defining a system engineering DSL consisting of 17 related meta-models with a total of 411 meta-classes.
We consider in our study three model variants of the IFE system. The first is the Original IFE system and the other two (LowCost1 and LowCost2) were manually created taking this variant as input. LowCost1 is a variant that does not include the feature for Wi-Fi access for passengers. Figure 5.11 shows an operational analysis diagram with model elements related to Wi-Fi access. We can see how the aircraft provides connectivity and the personal device allows the passenger to browse the Internet during the flight. The capability of Wi-Fi access for passengers is propagated to the rest of the model layers such as the system analysis or the logical and physical architecture. LowCost2 is an IFE system without support for ExteriorVideo, which allows the passengers to watch, on their personal screens, the exterior of the plane at any time during the flight. The interdependent AMEs algorithm led to the identification of three blocks. The identified blocks were manually analysed and mapped to the features. Table 5.4 summarizes the number of AMEs for the corresponding features.
Discussions about MoVa2PL
For evaluating MoVa2PL, based on the case studies outlined above, we concentrate on checking that, in each case, the extracted MSPL is able not only to 1) re-generate the previously existing variants using our systematic reuse approach, but also to 2) generate previously non-existing variants which are structurally valid. By carrying out the experiments on different modeling meta-models, namely UML and the Capella DSL, we have shown the flexibility and genericity of MoVa2PL. We obtained an exact match for the re-generated models and we generated possible non-existing variants (RQ1 and RQ2). We also manually checked the structural validity and found that invalid models were successfully prevented (RQ1).
Currently, MoVa2PL presents a number of limitations and includes hypotheses which are threats to validity. These limitations also correspond to the limitations of BUT4Reuse itself, as presented in Section 4.5. Feature identification is a challenging task and can be complex for domain experts. Automatically identifying features using heuristics may lead to an output where a given identified feature is actually a set of different ones. This situation is likely when a set of features always appears together in all the variants; in this case, the interdependent elements technique for block identification cannot distinguish among them. The coordination elements of possible feature interactions can also complicate the process.
Regarding feature constraints, MoVa2PL presents some limitations in ensuring the structural validity of models. Indeed, we are not considering constraints that could be defined using the Object Constraints Language (OCL) [OMG14, [START_REF] Warmer | The Object Constraint Language: Getting Your Models Ready for MDA[END_REF][START_REF] Czarnecki | Verifying feature-based model templates against well-formedness OCL constraints[END_REF]]. There is also the issue of the semantic validity of derived model variants. A semantically valid model is a structurally valid model which also makes sense in the domain (i.e., a combination of features that does not violate any semantic rule of the domain). The extracted MSPL will allow the creation of new models based on combinations of identified features. However, these new models might be semantically invalid. The challenging issue of assuring some notion of semantic validity has been addressed in other works such as Czarnecki et al. [START_REF] Czarnecki | Verifying feature-based model templates against well-formedness OCL constraints[END_REF]. In Chapter 9, we propose a visualisation paradigm to help domain experts formalize these constraints using their domain knowledge.
Conclusion
The definition of the model adapter for BUT4Reuse enables MoVa2PL to chain extractive SPL adoption activities. We have presented MoVa2PL as a solution to MSPL adoption from existing model variants. Firstly, the feature identification process considers structural constraints discovery in order to extract a variability model that ensures the validity of the MSPL configuration space. Secondly, a product realization layer is extracted with the information of the AMEs related to each of the features. This realization layer operates on a base model obtained by merging the variants. We assessed MoVa2PL in two case studies considering medium-to-large model variants with different meta-models.
Although MoVa2PL is extensible regarding the similarity calculation, it would be interesting to continue research on facilitating the definition of signature-based matching approaches [START_REF] Kolovos | Different models for model matching: An analysis of approaches to support model differencing[END_REF] by domain experts, or on the evaluation of semantic techniques in other case studies. Also, as presented in Section 5.2, apart from descriptive models, models are used in a prescriptive way and therefore the artefacts associated with them (e.g., model transformations [START_REF] Chechik | Perspectives of Model Transformation Reuse[END_REF] or behavior semantics [MGC + 16]) should be taken into account during extractive MSPL adoptions.
Part III
Collecting artefact variants for study and benchmarking
Introduction
As presented in Section 2.2.2, feature location is an essential activity of extractive processes towards systematic reuse. In Section 3.1.3, we discussed the diversity of proposed techniques and the increasing interest of the research community in this subject [START_REF] Rubin | A survey of feature location techniques[END_REF][START_REF] Klewerton | Feature location for software product line migration: a mapping study[END_REF]. Because of this, feature location benchmarks are required to push the state-of-the-art by enabling intensive experimentation with the techniques. Concretely, there is a need to empirically evaluate and compare the strengths and weaknesses of the techniques in different scenarios.
Comparing and experimenting with feature location techniques is challenging:
• Most of the tools are strongly dependent on specific artefact types that they were designed for (e.g., a given type of model or programming language).
• Performance comparison requires common settings and environments. It is difficult to reproduce the experimental settings needed to compare performance.
• Most of the research prototypes are either unavailable or hard to configure. There exists a lack of accessibility to the tools implementing each technique with its variants abstraction and feature location phases.
Given that common case study subjects and frameworks are needed to foster research activity [START_REF] Sim | Using benchmarking to advance research: A challenge to software engineering[END_REF], we identified two requirements for such frameworks in feature location:
• A standard case study subject: Subjects that are non-trivial and easy to use are needed. This includes: 1) A list of existing features, 2) for each feature, a group of elements implementing it and 3) a set of product variants accompanied by the information of the included features.
• A benchmarking framework: In addition to the standard subjects, a full implementation allowing a common, quick and intensive evaluation is needed. This includes: 1) An available implementation with a common abstraction for the product variants to be considered by the case studies, 2) easy and extensible mechanisms to integrate feature location techniques to support the experimentation, and 3) predefined evaluation metrics to draw comparable results.
Contributions of this chapter.
• We present the Eclipse Feature Location Benchmark (EFLBench) and examples of its usage. We propose a standard case study for feature location and a benchmark framework using Eclipse packages, their features and their associated plugins. We implemented EFLBench within BUT4Reuse which allows a quick integration of feature location techniques.
• We present the automatic generation of Eclipse variants as part of EFLBench capabilities to construct tailored benchmarks. This enables the evaluation of techniques in different scenarios to show their strengths and weaknesses.
The rest of the chapter is structured as follows: In Section 6.2 we present Eclipse as a case study subject and in Section 6.3 we present the EFLBench framework. Section 6.4 presents different feature location techniques and the results of EFLBench usage in the official Eclipse releases. Section 6.5 presents the strategies for automatic generation of Eclipse variants. Finally, Section 6.6 concludes and presents future work.
The Eclipse family of integrated development environments
This section extends the information presented in Section 4.4 about the domain of the Eclipse Integrated Development Environment (IDE) [START_REF]Eclipse. Eclipse integrated development environment[END_REF]. Then we justify the creation of a benchmarking framework using this case study.
Tailored Eclipses for different development needs
The packages present variation depending on the included and not-included features. For example, the Eclipse package for Testers is the only one including the Jubula Functional Testing features. On the contrary, other features like the Java Development tools are shared by most of the packages. There are also features common to all the packages, like the Equinox features that implement the core functionality of the Eclipse architecture. The online documentation of each release provides high-level information on the features that each package provides (a high-level comparison of the Eclipse packages of the latest release is available at https://eclipse.org/downloads/compare.php).
It is important to mention that in this work we are not interested in the variation among the releases (e.g., version 4.4 and 4.5, or version 4.4 SR1 and 4.4 SR2), known as variation in time.
We focus on the variation of the different packages of a given release, known as variation in space, which is expressed in terms of included and not-included features. Each package is different in order to support the needs of the targeted developer profile by including only the appropriate features.
Eclipse is feature-oriented and based on plugins. Each feature consists of a set of plugins that are the actual implementation of the feature. Table 6.1 shows an example of a feature with four plugins as implementation elements that, if included in an Eclipse package, add support for the Concurrent Versioning System (CVS). At the technical level, the actual features of a package can be found within a folder called features containing meta-information regarding the included features and the list of plugins associated with each. A feature has an id, a name and a description as defined by the feature providers of the Eclipse community. A plugin has an id and a name defined by the plugin providers, but it does not have a description.

Table 6.1 (excerpt): the Eclipse CVS Client feature and its plugins, among them org.eclipse.team.cvs.core (CVS Team Provider Core), org.eclipse.team.cvs.ssh2 (CVS SSH2) and org.eclipse.team.cvs.ui (CVS Team Provider UI).

Table 6.2 presents data regarding the evolution of the Eclipse releases over the years. In particular, it presents the total number of packages, features and plugins per release. To illustrate the distribution of packages and features, Figure 6.1 depicts a matrix of the different Eclipse Kepler SR2 packages where a black box denotes the presence of a feature (horizontal axis) in a package (vertical axis). We observe that some features are present in all the packages while others are specific to only a few packages. The 437 features are alphabetically ordered by their id. For instance, the feature Eclipse CVS Client, tagged in the figure, is present in all packages except the Automotive Software package. Features have dependencies among them: Includes is the Eclipse terminology to define subfeatures, and Requires means that there is a functional dependency between the features. Figure 6.2 shows the dependencies between all the features of all packages in Eclipse Kepler SR2. We tagged some features and subfeatures of the Eclipse Modeling Framework to show cases of features that are strongly related. Functional dependencies are mainly motivated by the existence of dependencies between plugins of different features. In the Eclipse IDE family there is no excludes constraint between the features. Regarding plugin dependencies, they are explicitly declared in each plugin's meta-data. Figure 6.3 shows a small excerpt of the dependency connections of the 2043 plugins of Eclipse Kepler SR2. Concretely, the excerpt shows the dependencies of the four CVS plugins presented in Table 6.1.
Reasons to consider Eclipse for benchmarking
We present characteristics of Eclipse packages that make the case study interesting for a feature location benchmark:
Ground truth available: The Eclipse case study fulfils the requirement, mentioned in Section 6.1, of providing the data needed to be used as ground truth. This ground truth can be extracted from the features' meta-information. Even though the granularity of the implementation elements (plugins) is coarse compared with source code AST nodes, the number of plugins is still reasonably high. In Eclipse Kepler SR2, the total amount of unique plugins is 2043, with an average of 609 plugins per Eclipse package and a standard deviation of 192.
Challenging:
The relation between the number of available packages in the different Eclipse releases (around 12) and the number of different features (more than 500 in the latest release) is not balanced. This makes the Eclipse case study challenging for techniques based only on static comparison (e.g., interdependent elements or FCA) because they will probably identify a few "big" blocks containing implementation elements belonging to many features. The number of available product variants has been shown to be an important factor for feature location techniques [START_REF] Fischer | Enhancing clone-and-own with systematic reuse for developing software variants[END_REF].
Friendly for information retrieval and dependency analysis: Eclipse feature and plugin providers have created their own natural language vocabulary. The feature and plugin names (and the description in the case of the features) can be categorized as meaningful names [START_REF] Rubin | A survey of feature location techniques[END_REF] enabling the use of several IR techniques. Also, the dependencies between features and dependencies between implementation elements have been used in feature location techniques. For example, in source code, program dependence analysis has been used by exploiting program dependence graphs [START_REF] Chen | Case study of feature location using dependence graph, after 10 years[END_REF]. Acher et al. also leveraged architecture and plugin dependencies [ACC + 14]. As presented in previous section, Eclipse also has dependencies between features and dependencies between plugins enabling their exploitation during feature location.
Noisy: There are properties that can be considered as "noise" that are common in real scenarios. Some of them can be considered as non-conformities in feature specification [START_REF] Iuri | On the relationship between features granularity and nonconformities in software product lines: An exploratory study[END_REF].
A case study without "noise" should be considered as an optimistic case study. In Eclipse Kepler SR2, 8 plugins do not have a name, and different plugins from the same feature are named exactly the same. There are also 177 plugins associated to more than one feature. Thereby the features' plugin sets are not completely disjoint. These plugins are mostly related to libraries for common functionalities which were not included as required plugins but as a part of the feature itself. In addition, 40 plugins present in some of the variants are not declared in any feature. Also, in few cases, feature versions are different among packages of the same release.
Friendly for customizable benchmark generation:
The fact that Eclipse releases contain few packages can be seen as a limitation for benchmarking in other desired scenarios with a larger amount of variants. For example, it would be desirable to show the relation between
the results of the technique and the number of considered variants. Apart from the official releases, software engineering practitioners have created their own Eclipse packages. Therefore, researchers can use their own packages or create variants with specific characteristics. In addition, the plugin-based architecture of Eclipse allows to implement automatic generators of Eclipse variants as we present later in Section 6.5.
Similar experiences exist:
Analysing plugin-based or component-based software system families to leverage their variability has been shown in previous works [ACC + 14].
EFLBench: Eclipse Feature Location Benchmarking framework
EFLBench is aimed to be used with any set of Eclipse packages including packages with features that are not part of any official release. Figure 6.4 illustrates, at the top, the phase for constructing the benchmark and, at the bottom part, the phase for using it. The following subsections provide more details on the two phases.
Benchmark construction
The benchmark construction phase takes as input the Eclipse packages and automatically produce two outputs, 1) a Feature list with information about each feature name, description and the list of packages where it was present, and 2) a ground truth with the mapping between the features and the implementation elements which are the plugins.
We implemented an automatic extractor of features information. The implementation elements of a feature are those plugins that are directly associated to this feature. From the 437 features of the Eclipse Kepler SR2, each one has an average of 5.23 plugins associated with, and a standard deviation of 9.67 plugins. There is one outlier with 119 plugins which is the feature BIRT Framework included in the Reporting package. From the 437 features, there are 19 features that do not contain any plugin, so they are considered abstract features which are created just for grouping other features. For example, the abstract feature UML2 Extender SDK (Software Development Kit) includes the features UML2 End User Features, Source for UML2 End User Features, UML2 Documentation and UML2 Examples.
Reproducibility is becoming quite easy by using benchmarks and common frameworks that launch and compare different techniques [START_REF] Sim | Using benchmarking to advance research: A challenge to software engineering[END_REF]. This practice, allows a valid performance comparison with all the implemented and future techniques. We integrated EFLBench and its automatic extractor in BUT4Reuse.
Benchmark usage
Once the benchmark is constructed, at the bottom of Figure 6.4 we illustrate how it can be used through BUT4Reuse where feature location techniques can be integrated as presented in Section 4.3.4. The Eclipse adapter, detailed in Section 4.4.1, is responsible for the variant abstraction phase. This will be followed by the launch of the targeted feature location techniques which takes as input the feature list and the Eclipse packages (excluding the features folder). The feature location technique produces a mapping between features and plugins that can be evaluated against the ground truth obtained in the benchmark construction phase. Concretely, EFLBench calculates the precision and recall which are classical evaluation metrics in IR studies (e.g., [START_REF] Eyal | Feature location in a collection of product variants: Combining information retrieval and hierarchical clustering[END_REF]).
We explain precision and recall, two metrics that complement each other, in the context of EFLBench. A feature location technique assigns a set of plugins to each feature. In this set, there can be some plugins that are actually correct according to the ground truth. Those are true positives (TP). TPs are also referred to as hit. On the set of plugins retrieved by the feature location technique for each feature, there can be other plugins which do not belong to the feature. Those are false positives (FP) which are also referred to as false alarms. Precision is the percentage of correctly retrieved plugins relative to the total of retrieved plugins by the feature location technique. A precision of 100% means that the ground truth of the plugins assigned to a feature and the retrieved set from the feature location technique are the same and no "extra" plugins were included. The formula of precision is shown in Equation 6.1.
precision = T P T P + F P
= plugins hit plugins hit + plugins f alse alarm (6.1)
According to the ground truth there can be some plugins that are not included in the retrieved set, meaning that they are miss. Those plugins are false negatives (FN). Recall is the percentage of correctly retrieved plugins from the set of the ground truth. A recall of 100%
Examples of EFLBench usage in Eclipse releases
means that all the plugins of the ground truth were assigned to the feature. The formula of recall is shown in Equation 6.2.
recall = T P T P + F N
= plugins hit plugins hit + plugins miss (6.2)
Precision and recall are calculated for each feature. In order to have a global result of the precision and recall we use the mean of all the features. Finally, BUT4Reuse reports the time spent for the feature location technique. With this information, the time performance of different techniques can be compared.
Examples of EFLBench usage in Eclipse releases
This section aims at presenting the possibilities of EFLBench by benchmarking four feature location techniques in official Eclipse releases. For the four techniques we use Formal Concept Analysis (FCA) as a first step for block identification. FCA is presented in Section 4.3.2. Concretely, the four feature location techniques are SFS, SFS+ST, SFS+TF, SFS+TFIDF which were detailed in Section 4.3.4.
In SFS+ST, SFS+TF, SFS+TFIDF, where we use IR and Natural Language Processing (NLP), we do not make use of the feature or plugin ids. In order to extract the meaningful words from both features (name and description) and elements (plugin names), we used two well established techniques in the IR field. We discuss them here with examples regarding the Eclipse case study:
• Parts-of-speech tags remover: These techniques analyse and tag words depending on their role in the text. The objective is to filter and keep only the potentially relevant words. For example, conjunctions (e.g., "and"), articles (e.g., "the") or prepositions (e.g., "in") are frequent and may not add relevant information. As an example, we consider the following feature name and description: "Eclipse Scout Project. Eclipse Scout is a business application framework that supports desktop, web and mobile frontends. This feature contains the Scout core runtime components.". We apply Part-of-Speech Tagger techniques using OpenNLP [START_REF]Apache. Opennlp[END_REF].
• Stemming: This technique reduces the words to their root. The objective is to unify words not to consider them as unrelated. For instance, "playing" will be considered as stemming from "play" and "tools" from "tool". Instead of keeping the root, we keep the word with greater number of occurrences to replace the involved words. As example, in the Graphiti feature name and description we find "[...]Graphiti supports the fast and easy creation of unified graphical tools, which can graphically display[...]" so graphical and graphically is considered the same word as their shared stem is graphic.
Regarding the implementation, we used the Snowball steamer [START_REF] Porter | Snowball: A language for stemming algorithms[END_REF].
Given that tf-idf is used in SFS+TFIDF, we illustrate it in the context of Eclipse features. For example "Core", "Client" or "Documentation" are more frequent words across features but "CVS" or "BIRT", being less frequent, are probably more relevant, informative or discriminating.
We used the benchmark created with each of the Eclipse releases presented in Table 6.2. The experiments were launched using BUT4Reuse at commit ce3a002 (19 December 2015) which contains the presented feature location techniques. Detailed instructions for reproducibility are available ii . We used a laptop Dell Latitude E6330 with a processor Intel(R) Core(TM) i7-3540M [email protected] with 8GB RAM and Windows 7 64-bit.
After using the benchmark, we obtained the results shown in Table 6.3. Precision and Recall are the mean of all the features as discussed at the end of Section 6.3.2. The results in terms of precision are not satisfactory in the presented feature location techniques. This suggests that the case study is challenging. Also, we noticed that there are no relevant differences in the results of these techniques among the different Eclipse releases. As discussed before, given the few amount of Eclipse packages under consideration, FCA is able to distinguish blocks which may actually correspond to a high number of features. For example, all the plugins corresponding specifically to the Eclipse Modeling package, will be grouped in one block while many features are involved.
Another example, in Eclipse Kepler SR2, FCA-based block identification identifies 60 blocks with an average of 34 plugins per block and a standard deviation of 54 plugins. In Eclipse Europa Winter, with only 4 packages, only 6 blocks are identified with an average of 80 plugins each and a standard deviation of 81. Given the low number of Eclipse packages, FCA identifies a low number of blocks. The number of blocks is specially low if we compare it with the actual number of features that we aim to locate (e.g., 60 blocks in Kepler SR2 against its 437 features). The higher the number of Eclipse packages, the most likely FCA will be able to distinguish different blocks. The first location technique (FCA+SFS) does not assume meaningful names given that no IR technique is used. The features are located in the elements of a whole block obtaining a high recall (few plugins missing). Eclipse feature names and descriptions are probably written by the same community of developers that create the plugins and decide their names. In the approaches using IR techniques, the authors expected a higher increment of precision without a loss of recall but the results suggest that certain divergence exists between the vocabulary used at feature level and at implementation level.
Regarding the time performance, Table 6. 4 shows, in milliseconds, the time spent for the different releases. The Adapt column corresponds to the time to decompose the Eclipse packages into a set of plugin elements and get their information. This adaptation step heavily rely to access the file system and we obtain better time results after the second adaptation of the same Eclipse package. The FCA time corresponds to the time for block identification.
We consider Adapt and FCA as the preparation time. Then, the following columns show the time of the different feature location techniques. We can observe that the time performance is not a limitation of these techniques as they take a maximum of around half a minute. It is out of the scope of the EFLBench contribution to propose feature location techniques that could obtain better results in the presented cases. The objective is to present the benchmark usage showing that quick feedback from feature location techniques can be obtained in the Eclipse releases case studies. In addition, we provide empirical results of four feature location techniques that can be used as baseline.
Automatic and parametrizable generator of Eclipse variants
As shown in Table 6.2, the number of official packages of an Eclipse release amounts to around 12 Eclipse variants. In order to provide a framework for intensive evaluation of feature location techniques, cases with larger number of Eclipse variants are desired. In addition, a parametrizable number of variants could serve to analyse the results of the same feature location technique under different circumstances. We extended the benchmark construction phase of EFLBench with an automatic and parametrizable generator of Eclipse variants to construct benchmarks with tailored characteristics. The approach consists in automatically creating variants from a user-specified Eclipse package.
Figure 6.5 illustrates the benchmark construction phase using the automatic generation of Eclipse variants. First, as shown on the upper left side of the figure, we take as input an Eclipse package to extract its features and feature constraints. These features and constraints define a configuration space in the sense that, by deselecting features, we can still have valid Eclipse configurations (i.e., all the feature constraints are satisfied). Then, we leverage this configuration space to select a set of configurations. The automatic selection of configurations is parametrized by a given strategy, thus, this step is extensible to different implementations. Shortly below, we present three different strategies that we have implemented. Finally, once the set of configurations are selected, we implemented an automatic method to construct the variants through the input Eclipse and the feature configurations. The constructed variants are created for preparing the benchmark construction but, if desired, given that constraints are respected, they can be executed in the same way as the packages in Eclipse releases.
Strategies for the automatic selection of configurations
We implemented three strategies to select configurations from a set of features and constraints with the final objective to construct benchmarks presenting different characteristics. Apart from the input Eclipse, the three take as input a user-specified number of variants that want to be generated. We present the three strategies and then discuss their properties:
• Random selection strategy: In this strategy, we randomly select configurations from the configuration space. The selection of random valid configurations, taking as input features and their constraints, is implemented through a functionality offered by the PLEDGE library (Product Line Editor and tests Generation tool) [HPP + 13b] which internally relies on a SAT solver [START_REF] Le | The sat4j library, release 2.2[END_REF].
• Random selection strategy trying to maximize dissimilarity: Given the specified number of variants, this strategy aims to obtain a set of configurations that maximize their global dissimilarity (i.e., different among them). For this we use a similarity heuristic between configurations which is supported by PLEDGE relying on the Jaccard distance [HPP + 14]. First, PLEDGE selects random configurations and then it applies a searchbased approach guided by a fitness function that tries to identify the most dissimilar configurations. This strategy demands to select the time allocated to the search-based algorithm. Once the allowed time is over, the set of configurations are obtained.
• Percentage-based random selection strategy: This strategy consists of two steps. First, we ignore the constraints and we go through the feature list deciding if we select or not each feature. This is automated by a user-specified percentage defining the chances of the features of being selected. Second, once some features are randomly selected, we need to guarantee that the feature constraints are satisfied. We may have included a feature that requires another one that was not included. Therefore, we repair the configuration including the missing features until obtaining a valid configuration.
The first and second strategy can be used to evaluate how a feature location technique behaves with dissimilar variants with high t-wise coverage. Empirical studies of Henard et al. showed that dissimilar configurations exhibit interesting properties in terms of t-wise coverage [HPP + 14]. They also showed that the strategy of selecting random configurations from the configuration space, without the search-based step, already obtained a median of more than 90% of pairwise coverage in 120 FMs of moderate size (i.e., less than one thousand features). The third strategy, compared to the first two, allows to have more control over the total number of selected features per configuration.
Using as input the Modeling package of Eclipse Kepler SR2, Figure 6.6 shows, in the vertical axis, the number of features in 1000 automatically selected configurations using the presented strategies. The total number of features of the input Eclipse package is 173 corresponding to the maximum value. Considering the feature constraints, the configuration space exceeds the million configurations. In the case of the random and dissimilarity strategies, as shown in Figures 6.6a and 6.6b, we can observe that only some outlier configurations reach a large number of selected features. Given that the dissimilarity strategy depends on the number of desired variants to generate, we repeated the process with different number of configurations (not only 1000) obtaining analogous results. We also observed that the time allowed for the search-based algorithm did not affect the number of selected features, at least from 10 minutes to 1 hour as shown in Figure 6.6b. On the contrary, in Figure 6.6c, we can observe how the user-specified percentage has an impact in the median of selected features. Larger percentages allow to obtain configurations with larger number of selected features and, therefore, there will be less chances to obtain dissimilar variants using this strategy compared to the ones using random selection.
Results using automatic generation of variants
We show examples of using the EFLBench strategies for automatic generation of Eclipse variants. We focus on discussing the results of evaluating the FCA+SFS feature location technique. As input for the random generation strategies, we use the Modeling package of Eclipse Kepler SR2 which is the same used to illustrate the strategies for selecting configurations in Figure 6.6.
Using percentage-based random selection of features, we aim to empirically analyse if the number of available variants has an impact on the FCA+SFS technique. First, we generated 100 variants using 40% as percentage for feature selection. By setting this percentage, the first 10 variants cover the 173 features which is the total number of features of the input Eclipse. This allows the construction of different benchmarking settings adding 10 variants each time while keeping the total number of possible features constant.
Table 6.5 shows the precision and recall obtained for FCA+SFS when considering different number of variants. We can observe how precision improves with the number of variants. From 10 to 20 variants, we have a precision improvement of around 15%. Beyond 30 variants, it seems that the included variants, with their feature combinations, are not adding more information that can be exploited by the FCA+SFS technique. As an extreme case, we can observe how we obtain the same precision with 90 and 100 variants. Regarding recall, independently of the number of variants we obtain very high levels of recall. It slightly decrease 7% from 10 to 100 variants, while precision increase, mainly because of the "noise" introduced by non-conformities in feature specification discussed in Section 6.2.2. Table 6.5 also presents time measures of one execution showing that the FCA+SFS technique scales correctly for 100 variants in this benchmark. Concretely, it took only around 15 seconds in total for FCA and SFS. If we include, as part of the feature location process, the time for adapting the variants using the Eclipse adapter (the Adapt time mentioned in Section 6.4), in the case of 100 variants it took 35 minutes which is still acceptable.
We used the same Modeling package as input to generate 100 variants with the random selection strategy. As in the previous experiment, we keep the number of features constant given that 10 variants already cover the 173 features. Then, we calculate the results by incrementally adding another 10 variants. Table 6.6 shows the results where we can observe that, with only 10 variants, we have 72.83% of precision. The result with 10 variants generated with this random selection strategy is better compared to using 100 variants generated through the percentage-based random selection which was 66.02% as shown in Table 6.5. This result empirically suggests that the FCA+SFS feature location technique performs better when the variants are more dissimilar. Then, starting with 20 variants we reach 90% precision and then from 40 to 80 variants it stays constant in 93.13%. It is worth to mention that the dissimilarity strategy obtained similar results as presented in Table 6.6. In several runs, for 10 variants we obtain around 70% of precision while for 20 variants we already reach 90%. The presented examples are intended to show the capabilities of EFLBench in creating scenarios to compare the results of feature location techniques. Concretely, we have shown how to analyse the result 1) with different number of variants and 2) with the same number of variants but with different degrees of similarity. In the case of FCA+SFS, we provided empirical evidences that having more available variants do not necessarily means better results in precision. However, dissimilar variants is an important factor for obtaining higher levels of precision.
Conclusions
We have presented EFLBench, a framework and a benchmark for supporting research on feature location in artefact variants. Existing and future techniques dealing with this activity in extractive SPL adoption can find a challenging playground which is directly reproducible. The benchmark can be constructed from any set of Eclipse packages from which the ground truth is extracted. We have shown examples of its usage with the Eclipse packages of the official releases for analysing four different feature location techniques. We also provide automatic generation of Eclipse variants using three strategies to support the creation of different benchmarking scenarios. We discussed the evaluation of one of the feature location techniques using randomly generated sets of Eclipse packages.
As further work we aim to generalize the usage of feature location benchmarks inside BUT4Reuse providing extensibility for other case studies. We also plan to use the benchmark in order to evaluate existing and innovative feature location techniques while also encouraging the research community on using it as part of their evaluation. Given the high proliferation of techniques, meta-techniques for feature location can be proposed such as voting systems where the results of several techniques could provide better results than using each of them independently. The harmonization provided by BUT4Reuse is an enabler to implement these ensemble techniques and EFLBench could be used for experimentation.
Another interesting open research question is related to the impact in extractive SPL adoption of the results obtained with feature location techniques. We need more empirical analysis of what is the actual meaning of precision and recall by measuring the time and effort required by domain experts to fully locate the features after applying these techniques (i.e., manually removing false positives and adding false negatives).
7
Feature identification in mined families of android applications
This chapter is based on the work that has been published in the following paper:
• Li Li,
Introduction
As presented in Section 2.2.2, feature identification is an important activity in extractive SPL adoption. Concretely, it is relevant when there is not enough knowledge about the features included in a set of artefact variants. Case studies are essential in order to empirically evaluate the proposed techniques for this activity, however, there is a:
Lack of realistic case studies to intensively experiment with feature identification techniques
• We should avoid artefact variants created "in the lab": Artefact variants directly derived from an SPL or from feature-based generators are optimistic scenarios for experimenting with feature identification techniques. This automatic derivation does not introduce "noise" that can be originated by manually producing the variants with ad-hoc reuse. For example, maintenance changes such as bug fixes, which are not applied to all family members, have a significant impact on extractive SPL adoption activities [START_REF] Fischer | Enhancing clone-and-own with systematic reuse for developing software variants[END_REF].
• Mining software repositories is challenging: Software repositories contain a wealth of artefacts and information that can be mined to study development processes and experimentally assess research approaches. The challenge is thus to create a method to identify families of artefact variants that can be used for experimenting with feature identification techniques.
Contributions of this chapter.
• Android Application families identification (AppVariants): We propose an approach to identify families of Android applications (hereafter apps) in large app repositories with the objective to provide realistic case studies for research on the feature identification activity of extractive SPL adoption.
• BUT4Reuse adapter for Android apps: We describe the design and implementation of an adapter to enable feature identification experimentation within the BUT4Reuse framework.
• Preliminary categorization of families after applying feature identification in selected app families: We perform feature identification to discuss the characteristics of some identified app families.
The remainder of the chapter is structured as follows: Section 7.2 defines what we mean by Android app families and includes an example on mining one family of app variants. Section 7.3 presents our solution for the family variants identification process and Section 7.4 presents and summarizes the results of feature identification in selected families. Finally, Section 7.5 concludes this chapter outlining future research directions.
Families of Android applications
The myriads of smart phones around the globe have given rise to a vast proliferation of mobile apps. Each app targets specific user tasks and there is an increasing number of targeted user profiles. In this context, Android is a leading technology for their development and on-line markets are the main means for their distribution.
Games, weather, social networking services (SNS), navigation, music or news are among the most popular app categories. In our context, it is not strictly necessary that apps within a family belong to the same app category. We focus on interesting cases for experimenting with feature identification during extractive SPL adoption, therefore, we consider a family as several app variants created by reusing assets to fit different customer needs.
We propose discovering families from on-line Android app markets. Regarding other software repositories, GitHub has offered many opportunities to the software engineering research community with its millions of projects which include Android apps. Unfortunately, there are perils in using GitHub data [KGB + 14]. Among these, it is noteworthy that a large proportion are toy projects or used for sharing code between a limited number of people. Findings on such software data are thus often not generalisable to software development.
7.2.1
An example of feature identification in an Android app family 8684 is a company specialized in travel and transportation apps which reached more than five million users. The 8684 family of variants that we manually identified motivated our interest in trying to automatically identify other families in app markets. In the 8684 portfolio we have found several apps including CityBus, LongWayBus, Metro, Train or TrainTicket.
Given the specialized nature of this company, their apps have many aspects in common such as time schedule or location management. They also have apps from other categories, for example one dedicated to FastFood restaurants which also manages locations. Figure 7.1 shows screenshots of the six mentioned mobile apps. We aim to explore the reuse that has been conducted while implementing the different apps and check if some of this reuse corresponds to distinguishable features. Our hypothesis is that, apart from the different graphical elements, there may be shared implementations of the business logic.
We perform a preliminary analysis of the six app variants using the File structure adapter within BUT4Reuse which decomposes the app in File elements as presented in Section 4.3.1. Then, we selected FCA as block identification technique (details in Section 4.3.2, it is based on separating the intersections of elements across the variants) to identify features. In the Java packages that we manually identified as related to the company source code (cn.tianqu and cn.chinabus), 270 Java files are shared by at least two variants, meaning that they are identical files. On the other hand, 470 Java files in average are specific to each variant. We only consider these two packages because the others are related to general libraries used in Android development.
Train ticket Train
Certificate-based filtering:
Unfortunately, the package naming style is not respected in every case. Concretely, it is a common practice for malicious apps to use the same package names as other legitimate apps [LLB + 16]. This is possible with a technique called repackaging where apps are disassembled and assembled again. Therefore, the validity of the package name characteristic is threatened. We complement the first characteristic with another one related to the certificate of apps. In order to make an app publicly available in a market, developers have to sign their apps through their unique certificate. Thus, if two apps are signed by a same certificate, we have reason to believe that these two apps are developed by the same provider.
Code-based comparison:
In the last step, we perform pairwise comparison on apps that are located in the same family thanks to the aforementioned two steps. Based on the comparison results, we attempt to filter out such apps that are not sharing code with others and consequently can be considered as outliers. Given a threshold t, an app family F , and an app a i ∈ F , for every a j ∈ F , where j = i, if the similarity distance a ij < t, we consider that a i is an outlier of family F , and thus we drop it from F in this step. The source code similarity is computed with the formula similarity = identicalM ethods/totalM ethods, which has already been used in other studies [START_REF] Li | An investigation into the use of common libraries in android apps[END_REF]. Because this step is more computationally expensive than the previous two, we only use it to exclude outlier apps from the app families.
Summarizing the mentioned steps, we cluster apps into the same family if they belong to the portfolio of the same app provider (step 1 and 2) and as long as they are partially similar in terms of source code similarity (step 3).
Implementation and preliminary results
We implemented AppVariants which is dedicated to validate the pertinence of the aforementioned three characteristics. It is important to clarify that we are interested in variation in space (variants) and not in variation over time (versions) [START_REF] Apel | Feature-Oriented Software Product Lines -Concepts and Implementation[END_REF]. However, we use the information of the versions to decide which versions of the variants should be simultaneously used to perform feature identification.
Prototype implementation:
AppVariants adopts a tree model to store the meta information of the identified apps. Figure 7.4 shows an illustrative example for com.baidu apps on how their meta-data are stored. Each non-leaf node of the tree is represented by a package segment (e.g., baidu) while leaf nodes are represented through the remaining package segments (e.g., BaiduMap). Furthermore, each leaf node is accompanied by a ranked list of meta-data of apps, including their certificates and assemble times. As shown in Figure 7.4, the vertical axis is a time line referring to different versions of the app indicated by the leaf node. Given a time point, the tree model also provides a way to identify family variants. For example, as shown in the dashed rectangle, we are able to collect a set of variants for com.baidu given the latest time point. In our current implementation, for the selection of the version of the apps within a family, we use the latest available versions of each of them. Experimental setup: For the purpose of providing real family variants for feature identification research, we need to apply our approach on a market scale. Thus, for this mining process, we use a part of the AndroZoo repository which is a collection of apps crawled from several markets to support research in Android [START_REF] Allix | Androzoo: Collecting millions of android apps for the research community[END_REF]. Concretely, we select around 1.5 million apps from AndroZoo belonging exclusively to the Google Play market. This data set has already been used in other analyses [LBB + 15b, LBB + 15a].
Preliminary mining results: Among the 1.5 million Android apps, based on packagebased categorization and certificate-based filtration (steps 1 and 2), we are able to collect 75,963 families of apps. The amount of variants in each family ranges from 2 to 12,702 apps. Figure 7.5 shows the distribution of the number of variants in our collected families before the third step using a boxplot. The median number of variants is three, meaning that half of the collected families have at least three variants. Given that we do not show outliers in this boxplot, only 88% of the families in the mined data set are shown. The remaining 12% are families with more than 10 variants each. Furthermore, 760 families, representing the 1% of our mined dataset, have over 100 variants. Table 7.1 shows the top 10 families regarding the number of apps in the app provider portfolio. As an example, com.andromo, the top ranked package prefix, has 12,702 variants. In the next section we will explain the reason behind this large amount of apps.
Thanks to our manual observation, we have found that, based only on the first two steps of our approach, we are already able to collect a big data set of families. However, some families do not share any common source code from one another. For example, the family com.ethereal contains four variants, where the similarity of all of them is below 5%. To this
7.4
Feature identification in selected families using BUT4Reuse
In order to perform more in-depth feature identification analyses than the one shown in Section 7.2.1 regarding the 8684 family, we implemented a BUT4Reuse adapter for Android apps. This section presents the design of the adapter and its usage in selected families obtained through AppVariants.
BUT4Reuse adapter for Android apps
Android apps are distributed in the form of apk files (Android application package) which contains all the resources for the execution of the app. In Section 4.2.2, we described the tasks needed to design an adapter. We detail the design decisions of the first task, element identification, as it is specially relevant for the feature identification activity. We decided to simultaneously decompose the app artefact variant at different levels: files, source code and meta-data defined in the Android manifest file. Therefore, to implement the Android adapter, we complement the available Java source code adapter (AST elements) and Files adapter (File elements) with Permission elements. Permission elements help in the comparison of the different privileges needed by the app for its execution. For example, accessing the information of the phone contact list, or using the camera, can be required by one variant but not in others.
The apk file need to be pre-processed before the Android adapter can decompose the app.
As part of the BUT4Reuse Android adapter, we automated a chain to unpack these files and decompile them. Figure 7.7 presents this chain and the used techniques. First, the apk file is uncompressed to obtain all the resources. Then, there are two files requiring further processing. The file classes.dex, which is a Dalvik executable used in Android, is transformed to Java byte-code with the dex2jar tool ii , and then decompiled with the Java decompiler called jd-cmd iii to obtain actual Java classes that can be parsed by the adapter. Also, AndroidManifest.xml is a binary file that can not be read without unpackaging it with a dedicated method provided by the Android asset packaging tool. From this manifest file, the adapter creates elements related to Android permissions.
Categorization of Android families after applying feature identification
In this section we present the categorization of four types of mined families and we present a representative example of each of them.
• Feature-based app generators and distributors: In Table 7.1 we presented app provider portfolios with elevated number of apps. We analysed variants from the com.andromo family to investigate the reason behind this fact. Blocks are intensively shared between variants, and they all share a common block which is mainly related to screen layout management. Preliminary feature identification within the app variants confirmed that the Andromo framework is feature-oriented providing initial components for user customization.
ii Dalvik executable to Java byte-code, dex2jar tool: https://github.com/pxb1988/dex2jar iii Java Decompiler, command line version, jd-cmd tool: https://github.com/kwart/jd-cmd By looking at their website, Andromo is a company that offers a framework aiming at hiding technical details of app development. Figure 7.8 shows examples of apps created with Andromo. They also provide services for easing app distribution. That is the reason why they all have the same unique certificate com.andromo even if they are from different developers within their community. Concretely, an example of the main package of an app is com.andromo.dev282094.app267841 where dev is the id of the user and app is the id of the app. In their website iv we can read "You can add [. Despite that user customizations complicate feature identification in our mined app variants, in-depth analysis of frameworks like this one could aim to reverse engineer their variability and architecture. Given that the business value of these frameworks is their architecture, protection from extractive approaches, which has been coined as variability-aware security [ABC + 15], is an important research direction that can highly benefit from this category of mined families.
• Content-driven variability: In this category of family we have variability at the displayed content level while the app architecture remains the same. For example, we analysed the tudou family with six variants. Tudou is a Chinese on-line video services company that broadcast users content and television series. We found three which were very similar, almost complete app clones. A manual examination showed that they reused the same source code to distribute apps targeting different television series. Figure 7.9 shows two of these apps where only the content is changing while the app structure and functionality remains constant.
• Device-driven variability: The variability in this category exists because of the targeted devices. We analysed the six variants from the baidu family, a Chinese web services provider. In two of them, browser.inter and browserhd.inter, we found that much code has been reused. They are both internet browser apps where one is iv Features provided by Andromo for app creation: http://andromo.com/features specialized for mobile phones and the other for tablets. Figure 7.10 shows a screenshot of these two apps. It is not unusual to provide the same app in terms of functionalities, but specialized for different devices. • Libraries reuse (AppVariants undesirable cases): We have found mined app families where the reuse between variants only concerns the use of the same libraries.
As an example, we analysed a mined family with four apps: FrancsEuros, StrongBox, MyShopping and BattleSpace. Figure 7.11 shows screenshots of these apps which are a currency converter, a quiz game, a shopping list manager and an arcade game respectively. The identified reuse concerns the code of the used library fr.pcsoft while slight reuse between the apps was identified in the main package. A manual examination showed that reuse in the main package belongs to technicalities of the used development environment (i.e., WinDev provided by pcsoft) but not to actual relevant code.
In the baidu family presented before, Baidu also provides an app software development kit which they extensively use in the six baidu variants. Apart from the mentioned internet browser apps, in two apps, Baidu Maps and input, we identified that they share a voice recognition feature. However, the rest of the reused elements are only related to their app development kit. As presented in Section 7.3, the third step of the mining approach related to source code similarity does not restrict the similarity analysis to the main package of the apps. The analysis is performed in the whole source code including libraries. In general, around 41% of an app source code is related to libraries [START_REF] Li | An investigation into the use of common libraries in android apps[END_REF]. In the field of apps clone detection, as well as in our approach, distinguishing actual source code from libraries is a known limitation [START_REF] Li | An investigation into the use of common libraries in android apps[END_REF]. In fact, there is no explicit meta-information about the code that corresponds to libraries in Android apps. The main technique to avoid these false positives is to use white-lists with known and frequent libraries [START_REF] Li | An investigation into the use of common libraries in android apps[END_REF]. However, this does not guarantee that all libraries will be filtered during our third step regarding the similarity calculation.
We do not intend this categorization to be exhaustive but we present representative examples of our analysis of the identified families. We have shown that the results of AppVariants are promising to enable intensive experimentation of feature identification techniques.
Conclusions
Android app development is a flourishing industry whose products are publicly available in app markets. We propose a method, called AppVariants, to identify app families with the objective to provide realistic case studies for research on the feature identification activity of extractive SPL adoption. The identified app families serve to study reuse practices and provide assessment about app families suitability for SPL adoption. We presented our implementation of the mining process and the analysis of selected families.
As further work, we can envision an automatic approach to explore the identified app families to determine the semantic cohesion of the family members. The inclusion of natural language analysis of app descriptions and user comments in the market website could help in improving this mining. The mining process of AppVariants could also be extended with approaches that could escape cases of libraries reuse or code obfuscation. Also, concerning feature identification within app families, we performed preliminary analysis of several families but more in-depth experimentation on feature identification and extractive SPL adoption could be performed.
Part IV
Assistance and visualisation in artefact variants analysis
Introduction
As discussed in Section 2.2.2, in extractive SPL adoption, identifying the features across the artefact variants is a needed activity when there is no complete information about the features within a domain. The lack of complete documentation, the amount of fields of expertise or just the complexity of artefacts created to satisfy different customer needs, can lead to the lost of the global view of the existing features. During the feature identification activity, there is the naming sub-activity where we need to unequivocally provide names to the features. However, there is a:
Lack of assistance during feature naming for domain experts.
• Domain experts need interactive visualisations to reason about possible names. We detailed in Section 3.1.2 several reported experiences in naming during feature identification showing the general lack of support for domain experts. In practice, automated comparison approaches will identify distinguishable blocks shared by the artefact variants, and would thus still require domain experts to manually map them with actual features. To that end, domain experts must look at the elements of the blocks, understand their semantics, and guess the functionality that each block provides when present in a variant.
Contributions of this chapter.
• VariClouds: A visualisation that provides support for feature naming during feature identification. This paradigm facilitates the first encounter between the domain experts and the variants so as to help in understanding the semantics hidden in the variants as well as in the variability among them. Concretely, in order to suggest names, we leverage word clouds, a widely used visualisation paradigm for textual data [START_REF] Smith | Tagging: People-powered Metadata for the Social Web[END_REF]. Blocks can be renamed by interacting with the suggested words of the word clouds.
• An automated approach for heuristically assigning the names of the blocks based on the same weighting factors used for displaying the word clouds. Domain experts can use this option for one of the blocks, or to all of them, as a previous step for manual validation or refinement.
• An integrated visualisation in the process promoted by the BUT4Reuse framework presented in Chapter 4. BUT4Reuse adapters are extended to provide meaningful words from the elements obtained when decomposing artefacts.
The remainder of this chapter is structured as follows: Section 8.2 presents VariClouds and Section 8.3 presents its phases for domain experts. We present and discuss the evaluation and the characteristics of the selected case studies in Section 8.4. Section 8.5 discusses the limitations and threats to validity we identified in our approach. Section 8.6 concludes the chapter and presents further work.
VariClouds approach
The VariClouds approach leverages word clouds to visualise variants, or parts of variants, providing a means for exploratory analysis in the context of the feature identification activity during extractive SPL adoption. More specifically, it is used to solve the problem of naming.
We first provide background information on word clouds. Then, we detail VariClouds by explaining how words are retrieved from product variants. Later, we explain the phases from the perspective of a domain expert.
Word clouds and weighting factors
Word clouds gained momentum in the Web as aggregators of activity, as a means to measure popularity, and as a mechanism for social tagging/indexing [START_REF] Smith | Tagging: People-powered Metadata for the Social Web[END_REF]. Word clouds have been also used for text summarization and analysis in several domains such as patent or opinion analysis [KBGE11, WWL + 10, SGLS07]. The principle underlying word clouds is the weighting factor of the words appearing in a document. This weighting factor is used to change the relevance of the word in the visualisation, typically by assigning larger font sizes to the more weighted words (e.g., more frequent words).
Term frequency (tf) is a metric consisting in giving more relevance to the terms appearing with more frequency in a document d. When dealing with a set D of documents d 1 , ..., d n , term frequency-inverse document frequency (tf-idf) is another metric used in IR [START_REF] Gerard Salton | A vector space model for automatic indexing[END_REF]. For a document d, tf-idf penalizes common terms that appear across most of the documents in D and emphasizes those terms that are more specific to d. There are different formulas to calculate them. In this work, we used the formulas presented in Equation 8.1, where we use raw term frequency (tf) which is calculated counting the occurrences of a given term in a document, inverse document frequency (idf) which measures how much rare or common a term is across all the documents using a logarithmic scale and, finally, tf-idf uses tf multiplied by idf to penalize or encourage a term depending on its occurrence across D.
tf
(term i , d) = f term i ,d idf (term i , D) = log |D| |{d ∈ D : term i ∈ d}| tf-idf(term i , d, D) = tf (term i , d.terms) × idf (term i , D) (8.1)
8.2.2
Retrieving the words through the adapters
As presented in Chapter 4, each adapter is associated to an artefact type and is responsible for decomposing a given artefact into a set of elements. The first task for designing an adapter is the identification of the elements that want to be considered. To support VariClouds, each adapter must be enriched to yield the words exposed for each element. A word is therefore a term that can be inferred from an element (e.g., the label of a model element or the name of a Java method). We overview below how the adapters for artefact types relevant to our case studies were implemented:
• Models adapter: Models can be decomposed in atomic model elements as we presented in Section 5.3.1. Specifically, we consider the Class, Attribute and Reference which are the mainly used concepts to define domain-specific languages (DSLs). We get the words through technicalities of the Eclipse Modeling Framework (EMF) [START_REF]Eclipse. Eclipse integrated development environment[END_REF]. To get the text of a class instance we use its EMF Item Provider. Item providers are registered for each class type and they have a label provider that return its associated text as defined by the DSL implementation. Figure 8.1a shows an excerpt of a model with the text retrieved from the label provider of each class. Concretely, it is a screenshot of the EMF reflective editor which uses these label providers. For the Attributes, we get the value of the attribute in string format. We decided to ignore the text from References and we did not evaluate the impact of adding the text of all the referenced classes. We further post process the text of Class and Attribute by tokenizing the text to obtain separated words.
• Eclipse adapter: In Section 4.4.1 we presented the Eclipse adapter for decomposing an Eclipse package in its set of plugin elements. Also, some plugin examples were shown in Section 6.2 (e.g., Eclipse CVS Client or CVS Team Provider). For each plugin, we take its name as defined by the plugin providers in the plugin metadata. We also tokenize the name to obtain the set of words.
...
(a)
Excerpt of the In-Flight Entertainment System model [START_REF] Polarsys | [END_REF] to illustrate the meaningful words retrieved using the models adapter. • Source code adapter: In source code, classes and methods usually have meaningful names provided by developers which can be exploited to retrieve relevant words regarding the implemented functionality. Automatic summarization [START_REF] Spärck | Automatic summarising: The state of the art[END_REF] is a research field extensively studied in source code artefacts to generate documentation or helping developers in program comprehension (e.g., [START_REF] Sridhara | Generating parameter comments and integrating with method summaries[END_REF]). In our context, the BUT4Reuse source code adapter decomposes the artefact in Abstract Syntax Tree (AST) elements.
To retrieve words, we use class names, method names and declared field names contained in these elements. We ignored the text in the body of the methods or in source code comments. Figure 8.1b shows, inside the dashed zone, the words retrieved from the source code adapter in an excerpt of a Java artefact.
The adapters mechanism, and its flexibility to implement the way that words are extracted, enable the genericity of VariClouds to support different artefact types.
8.3
Using VariClouds: Phases for the domain experts The two phases are explained in detail in the next two subsections. In short, in the preparation phase, during the creation of a word cloud, several word filters can be applied and tuned. Therefore, this phase aims to refine the word cloud creation by visualising the summarization of all variants, specific variants or identified blocks. In the second phase, block naming is performed through visualising the tf-idf word cloud of each block. Alternatively, the domain expert can use an automatic algorithm to name the blocks.
Each phase is explained using the running example of six variants of Vending Machine statechart models. Figure 8.3 shows the statechart diagrams of variants number 1 and 4 which cover the features available in the family: Three different beverages (coffee, tea and soda), two payment systems (cash and credit card) and ring tone alert. The objective of this running example is to show how to use VariClouds for naming during the process of identifying these features.
Phase 1: Preparation of the word clouds
The word clouds can be created using the words from any set of elements. family. Figure 8.4b displays the words which are frequent (tf) in Variant 1. Figure 8.4c shows another word cloud of Variant 1 but using the tf-idf weighting factor. That means that, being six the number of variants, the words appearing in Figure 8.4c are the words that make Variant 1 special regarding to the other five variants (i.e., management of the pin of a credit card and providing soda).
More meaningful word clouds can be obtained by pre-processing the words of the elements provided by the adapter. The domain experts can explore word clouds, as the ones in Figure 8.4, in order to include word filters and to refine filter settings. Therefore, the preparation phase is an iterative process where the domain experts create and refine the word clouds until they consider that the domain vocabulary is correctly represented in them. • CamelCase splitter: CamelCase is a de-facto naming convention for source code that assemble words to avoid white spaces. For instance, the method getPersonAddress is split in the words "get" "person" and "address".
• Parts-of-speech tags remover: These techniques analyse and tag words depending on their role in the text. For example, we can decide to remove conjunctions (e.g., "and") and articles (e.g., "the") as they may not add relevant information and they are very frequent. We apply Part-of-Speech Tagger techniques as implemented in OpenNLP [START_REF]Apache. Opennlp[END_REF]. In Figure 8.6, this filter removed the word "by" because it was tagged as a preposition.
• Synonyms: This filter calculates possible synonyms for each pair of words by using a similarity threshold. We used an implementation of WordNet [fJ15] and the WUP similarity metric [START_REF] Wu | Verb semantics and lexical selection[END_REF]. In Figure 8.6, "insert" and "enter" were automatically detected as synonyms. Users can also define their own synonyms list.
• Stemming: This filter considers as equal all words that have the same root. For example, "playing" stems from "play" and it is considered equal to "play" if this word appears. Instead of keeping the root, we keep the word with greater number of occurrences to replace the involved words. For the implementation we used the Snowball steamer [START_REF] Porter | Snowball: A language for stemming algorithms[END_REF]. In Figure 8.6, "display" and "displayed" had the same stem.
• Multiwords: Multiwords are a set of words that should not be separated [SBB + 02].
Multiwords will be treated like an unique word. In Figure 8.6, "credit card" was included by the user as a multiword. • Stop words: Stop words are words to be ignored because they do not add information.
The users can define their own stop words. In Figure 8.6, "state" and "transition" were included as stop words as they are just language concepts in statecharts which are very frequent and that do not add relevant information.
The users that select and configure some of these filters must be aware that these techniques can have positive improvements in their task but it also can represent a risk of losing information. We present examples in Section 8.4.2.
Phase 2: Block naming
Blocks are obtained using the block identification technique. Then, the domain experts use interactive word cloud visualisations constructed with the elements of each block in order to name each block. The hypothesis of the VariClouds approach for block naming is that relevant words are those that make each block special regarding the rest of the blocks. For this reason, tf-idf weighting factor is used.
Figure 8.7 shows the tf-idf word clouds of the blocks identified while automatically analysing and comparing the vending machine statechart variants of the running example. As presented in Section 8.2.1, tf-idf penalizes the words appearing frequently in most of the blocks giving more weight to the words that make this block special. For example, "code" and "enter" appear in Blocks 2, 3 and 4 and that is the reason why these words lose weight compared to the different beverage names "soda", "coffee" and "tea" which are, in this case, the actual feature names. In Block 6 we have "pin" as the largest word while "credit card", the actual feature name, is also present but smaller. The reason is that both "pin" and "credit card" are special for this block but "pin" is more frequent. The domain experts can interact on the suggested names to set the block names. VariClouds approach proposes an automatic algorithm to suggest the names of each block.
Algorithm 1 shows the pseudo-code of this automatic renaming, which is straightforward. The parameter k is the initial number of words to be used when renaming. With k = 1, each block name will initially have only the word with the highest tf-idf score. As we will present later, given the empirical results of our case studies, we suggest using at least k = 2 by default. For each block, we assign the concatenation of the k words with the highest tf-idf scores (lines 2 to 12). After renaming all blocks, it is possible that two or more blocks have the same name. Therefore, we avoid name conflicts by iteratively appending the word with the next highest score until there are no conflicts (lines 13 to 33). If there are no more remaining words in the ranking and there are still conflicts, we append different numbers at the end of the names.
Algorithm 1 Automatic renaming of blocks.
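Since only the pseudo-code is referenced above, the following hedged Java sketch reproduces the described procedure, assuming that the tf-idf rankings have already been computed as ordered word lists per block; names and structure are illustrative.

import java.util.ArrayList;
import java.util.List;

// Sketch of the renaming described above: name each block with its k
// highest-ranked tf-idf words, then resolve clashes by appending further
// words from the ranking, and finally numbers when the ranking is exhausted.
public class BlockRenamer {

    static List<String> rename(List<List<String>> rankedWordsPerBlock, int k) {
        int b = rankedWordsPerBlock.size();
        List<String> names = new ArrayList<>();
        int[] used = new int[b]; // how many ranked words each block name already consumes
        for (int i = 0; i < b; i++) {
            List<String> ranking = rankedWordsPerBlock.get(i);
            used[i] = Math.min(k, ranking.size());
            names.add(String.join(" ", ranking.subList(0, used[i])));
        }
        // Iteratively resolve duplicated names.
        int suffix = 1;
        boolean conflict = true;
        while (conflict) {
            conflict = false;
            for (int i = 0; i < b; i++) {
                for (int j = i + 1; j < b; j++) {
                    if (names.get(i).equals(names.get(j))) {
                        conflict = true;
                        List<String> ranking = rankedWordsPerBlock.get(j);
                        if (used[j] < ranking.size()) {
                            names.set(j, names.get(j) + " " + ranking.get(used[j]++));
                        } else {
                            names.set(j, names.get(j) + " " + suffix++);
                        }
                    }
                }
            }
        }
        return names;
    }
}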
Case studies and evaluation
We assess the soundness of VariClouds via experiments for answering the following research questions:
• RQ-1: To what extent can the blocks of implementation elements (e.g., source code elements) automatically provide insights that correspond to expert judgement about the semantics of these blocks?
• RQ-2: Is the word cloud visualisation paradigm effective for naming during feature identification?
The following three subsections present the case studies and the answers to the two research questions.
Requirements of selected case studies
We consider case studies that satisfy three conditions:
1. Previously published/used by the SPL research community.
2. The artefacts of the case studies (i.e., variants) are publicly available.
3. The names of the features are reported in their corresponding referenced works.
The first two conditions aim to guarantee that our approach can be easily reproduced by other researchers. The third condition provides a ground truth to evaluate the naming process given that the result of the manual naming was reported in their respective publications.
We further take care to select case studies that differ with regard to their artefact types (source code, modeling languages and plugin-based software). Table 8.1 characterizes the case studies (CS). First, the source code-based case studies are software systems written in Java. We show the number of source code compilation units (i.e., Classes), the number of methods, and the number of source lines of code (LOC) ii. For the case studies regarding modeling languages, we show the number of Classes that each model variant has. The term Classes should not be confused strictly with UML Classes, as we explained in Section 5.3.1. In fact, only CS 7 is a case study with UML model variants. Attributes (Attrs) and References (Refs) are the numbers of non-null properties of the classes. Finally, we show the characteristics of variants of Eclipse, an IDE known for its plugin-based architecture. Concretely, we focus on variants simultaneously released in Kepler which were already used in Section 4.4.1.
We leverage the source code and model case studies (CS 1 to CS 7) to evaluate RQ-1, providing quantitative results using the ground truth. The Eclipse case study (CS 8), on the other hand, is used for RQ-2, providing time measurements and qualitative results with the involvement of domain experts.
ii Lines containing characters other than white space and comments, counted using Google CodePro.
Quality of the word clouds
We use the Mean Reciprocal Rank (MRR) [START_REF] Voorhees | The TREC-8 question answering track report[END_REF] for the evaluation. MRR is a metric used in IR to measure the quality of rankings. Concretely, it captures how early the relevant result appears in the ranking. MRR is also considered a measure of users' effort, as it is related to their search length. This is the case when using word clouds, where the user looks at the largest word, then at the second largest, then at the third and so on. MRR is used when there is only one relevant result, as in known-item search (here, the feature name of the ground truth). The reciprocal rank (RR) of a given feature name in its associated block is calculated as 1/rank_i, where rank_i is the position in the ranking at which the feature name appears. Let F be the features that we want to search for; the formula of MRR is presented in Equation 8.2. For example, an MRR of 1 means that all feature names were the largest words in the word clouds of their associated blocks.
MRR = \frac{1}{|F|} \sum_{i=1}^{|F|} \frac{1}{rank_i}    (8.2)
In the worst-case scenario the name is not found at any rank position and RR is therefore undefined. In this special case we consider 1/rank_i = 0, as if rank_i tended to infinity. However, given the importance of this situation, we discuss these cases separately.
We have observed that the core components common to all variants tend to be encompassed in a feature named Core or Base. These names do not tend to be part of the emerging vocabulary and, for this reason, we define MRR2 as the MRR metric where the Core or Base feature is excluded from the set F.
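A minimal sketch of how MRR and the MRR2 variant can be computed from the observed ranks is shown below; a rank value of 0 encodes a feature name that was not found, and the index of the Core/Base feature is assumed to be known.

import java.util.ArrayList;
import java.util.List;

// Illustrative computation of MRR and MRR2 from feature ranks.
// A rank value of 0 means the feature name was not found in the word cloud.
public class RankMetrics {

    static double mrr(List<Integer> ranks) {
        double sum = 0.0;
        for (int rank : ranks) {
            sum += (rank > 0) ? 1.0 / rank : 0.0; // a not-found feature contributes 0
        }
        return ranks.isEmpty() ? 0.0 : sum / ranks.size();
    }

    // MRR2: same metric, but the rank of the Core/Base feature is excluded.
    static double mrr2(List<Integer> ranks, int coreIndex) {
        List<Integer> filtered = new ArrayList<>(ranks);
        filtered.remove(coreIndex); // removes by index
        return mrr(filtered);
    }
}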
Table 8.2 shows the results in terms of MRR2, MRR and the rank of each of the features. If a name is not found we denote it with the empty set symbol ∅. MRank shows the average rank of the features without considering the core feature and the features that were not found. For the calculation of the rank of some features, we were slightly flexible in manually deciding whether two words were the same. We present an exhaustive list of these potentially debatable cases. We considered log similar to logging, cash similar to cash payment, credit card similar to credit card payment, ring tone similar to ring tone alert, exterior-view similar to ExteriorVideo, Exception similar to ExceptionHandling, Label similar to LabelMedia, Favourites similar to Favorite and, for the names of the UML diagrams, we did not consider the word diagram (e.g., Activity diagram similar to Activity).
The mean of MRank is 1.62, which indicates that, ignoring the core features and the features that were not found, the feature names appear on average within the first two positions of the ranking. The mean of MRR2 in this set of case studies is 0.79. This result is promising and supports the soundness of VariClouds, which shows the most relevant terms as the largest words, thus reducing users' search effort. Despite the promising average results, there are some unsuccessful results that cannot be neglected. We discuss the two main reasons for unsuccessful RR results with a special focus on the two not-found feature names (rank = ∅).
• Mismatch between domain names and implementation details: In the case of the Cognitive support feature in ArgoUML, there is a complete mismatch between the feature name and the vocabulary emerging from the implementation. This is the worst case for name suggestion during feature identification using VariClouds. The largest words of its associated blocks are cr, criticized and design. Specifically, criticized corresponds to the Critics subsystem of the ArgoUML architecture [START_REF] Vinicius Couto | Extracting software product lines: A case study using conditional compilation[END_REF] which implements the Cognitive support. In the ArgoUML SPL website iii this feature is called design critics. The reference work [START_REF] Vinicius Couto | Extracting software product lines: A case study using conditional compilation[END_REF] took the feature name from the publication that explained this functionality [START_REF] Robbins | Cognitive support, UML adherence, and XMI interchange in argo/uml[END_REF].
• Undesired effects of filters: In the case of WithdrawWithoutLimit of the Banking systems, with and without, despite being prepositions, were important words which were discarded by the part-of-speech tag remover. By deactivating this filter, MRR2 can reach 0.875 if we consider without similar to WithdrawWithoutLimit (withdraw and limit are the ranking items following the first one, which is without). In the preparation phase, the CamelCase splitter was activated for all the source code based case studies, as well as for the UML model variants of the Banking systems, given that the UML model vocabulary was close to source code language. However, in the case of the Notepad case study, CamelCase is not the style followed by the developers. There are methods called finD and field declarations called findNexT, therefore the largest words for the Find feature are find, fin and d. This fact complicates the readability of the word cloud but luckily it does not affect the RR in this case.
iii ArgoUML SPL website: http://argouml-spl.stage.tigris.org/
The benefits of summarization
Textual representations were used in previous works to characterize each implementation element separately. For small artefacts, or for illustrative purposes, this can be useful, but it does not scale for human comprehension. In blocks with thousands of elements, a summarization approach can save domain experts from spending time, during naming, looking at the technicalities of the artefacts' implementation, or trying to reason using the textual representation of individual elements.
A small excerpt of the textual representation of model elements was presented in Figure 5. This subsection presents our case study to analyse the benefits of summarization with VariClouds instead of using the plain textual representation of individual elements. Before designing and implementing VariClouds, we considered the scenario of Eclipse variants presented in Section 4.4. In those experiments, we requested the expertise of three domain experts with more than ten years of experience in Eclipse development, who analysed, independently of each other, the Elements' textual representations of the 61 Blocks that were identified. This manual task took an average of 51 minutes, which provides us with a baseline.
For the evaluation of RQ-2 we consider the Eclipse case study again. We evaluated VariClouds with another three domain experts on Eclipse, who were not the same persons as in the previous experiment but had similar professional expertise in Eclipse development. We asked them, independently, to perform the feature identification tasks using the word cloud visualisation as support for the element textual representations of each block. In addition, they were asked to think aloud and report their mental process for selecting the names. During the experiment all the domain experts stated the following process for block naming:
1. Read very quickly the textual descriptions of the elements (beginning, middle and end) in order to get an initial clue about the logic and identify a word in their mind.
2. Read the largest names in the word cloud and contrast them with the one in their mind.
3. Select one from the word cloud or use the guessed one.
4. Optionally refine the selected word with an extra word.
The average time was reduced from 51 minutes to 28 minutes. Even if 23 minutes may not seem a very relevant reduction, it constitutes a 45% decrease in this case study. Qualitatively, all of them stated that the word clouds were useful for assigning the names. This was emphasized when they were not completely sure about the logic of the block. They stated that the presence of the word clouds served as reinforcement or confirmation for the final naming decision. According to the time reduction and their mental process, we can say that word clouds reduce domain experts' comprehension time and help them to be more confident with the naming decisions while accelerating the process.
Threats to validity
VariClouds is a visualisation paradigm that makes use of well established IR techniques.
We provide empirical results about its soundness. We found that the results on the case studies are promising, but we cannot guarantee that the findings generalize. However, as discussed in Section 3.1.2, past efforts were mainly focused on the technicalities of innovative and promising block identification techniques rather than on support for the final users. VariClouds fills the gap of a domain experts' process that previously had only rudimentary support.
Another threat to validity regarding the generalization of the findings is that the feature names of the ground truth obtained from the SPL literature are conditioned by human factors. Some domain experts decided on the names we use as ground truth. Human factors also determine the match between this ground truth and the words used by the developers of the artefacts' implementation. All the case studies consider variants that belong to the same developers or providers. In the case of completely independently developed artefact variants, the vocabulary emerging from the variants may use a completely different terminology that the synonym filter, or other more advanced filters, may not be able to correlate.
VariClouds claims generality in supporting different artefact types; however, it assumes the existence of an adapter as presented in Section 8.2.2. In addition, because of design or implementation details, some artefact types can have the limitation that their elements may lack meaningful names (e.g., compiled or obfuscated artefacts). In the same way, VariClouds assumes the existence of a block identification technique. As presented in Section 4.3.2, BUT4Reuse provides a set of them; however, it is accepted that there are many factors affecting the quality of the results of these techniques, such as the number of variants, their diversity, the match and diff policies, the number of features or the presence of feature interactions, among others [RC13a, ZHP + 14, FLLE14]. We agree that the results of VariClouds strongly depend on the block identification technique; however, the research conducted to propose VariClouds is complementary, as it is focused on the interaction and visualisation paradigm for the domain experts. Therefore the two research directions can evolve in parallel.
Finally, as a threat to validity regarding the Eclipse case study, even if we consider that the new domain experts have a very similar background to the previous domain experts, we cannot rule out that the observed difference is due to them having a different set of skills.
Conclusion
We contributed VariClouds, an approach that extensively uses word cloud visualisations in order to provide insights into the vocabulary and variability emerging from a set of variants. Specifically, it is designed to help domain experts in feature identification and naming. We evaluated it in several case studies dealing with different artefact types to show its soundness and genericity.
As further work we aim to evaluate the use of different weights for different Element types. For example, in source code, we can hypothesize that words belonging to a class name are more relevant than words belonging to a method name. From a visualisation perspective, rather than ordering the words alphabetically, we aim to evaluate the use of more structured clouds, such as tree clouds or force-directed graph drawings. The adapter provides information about element dependencies that can help in creating these structured clouds. We also aim to design other advanced word cloud filters where domain ontologies could be leveraged.
Feature constraints analysis with feature relations graphs
This chapter is based on the work that has been published in the following paper:
Introduction
Feature constraints discovery is an important activity in extractive SPL adoption. As described in Section 2.2.2, once the features are identified, domain experts still need to reason about the constraints that may exist among the features. In Section 3.1.4 we presented related work on automatic techniques for this activity, and Section 4.3.5 detailed the techniques integrated in BUT4Reuse. However, beyond automatic techniques, domain experts require visualisation paradigms that assist during this activity by enabling free exploration for constraints discovery.
Unfortunately, there is a:
Lack of support for manual feature constraints discovery
• It is complex to understand how features are related to each other. As we described in Section 3.3, existing visualisation paradigms in SPLE have not focused enough on solutions to reason about feature constraints and about the existing relations among the features, especially when the objective is to discover missing constraints (hereafter non-formalized constraints).
• Soft constraints discovery should be part of the process. Soft constraints, presented in Section 2.1.2, are relevant domain information, therefore, domain experts should have the means to formalize this knowledge as part of the discovery process.
• Stakeholders belonging to different fields of expertise complicate the process. Feature constraints usually exist among features belonging to different fields of expertise; thus, the stakeholders may not have an overview of how the features relevant to them are related to the others. In these cases, it is challenging to effectively communicate and visualise the constraints.
Contributions of this chapter.
• The Feature Relations Graphs (FRoGs) visualisation paradigm to represent the relations among features. For each feature, we are able to display a FRoG which shows the impact, in terms of constraints, of the considered feature on all the other features. The visualisation can be considered as a specialized radial ego network where both stakeholder diversity and soft constraints are taken into account.
It is also worth mentioning that, the usage of the FRoGs visualisation is not necessarily restricted to the context of extractive SPL adoption. It can be used in documentation, feature model maintenance or recommendation during product configuration.
The remainder of this chapter is structured as follows: Section 9.2 presents an overview of manual feature constraints discovery. Section 9.3 formalizes the data abstractions and describes our proposed visualisation paradigm. Then, Section 9.4 presents the case study and, finally, Section 9.5 concludes and outlines future work.
Manual feature constraints discovery
FRoGs is a visualisation for domain experts to discover constraints during extractive SPL adoption based on mining existing product configurations. Other constraints discovery techniques, such as the structural constraints discovery (Section 4.3.5), automatically analyze the structural dependencies among the elements of a feature. However, FRoGs only considers the information from the existing configurations that can be calculated after the feature identification and location activities. Figure 9.1 illustrates how the configurations are obtained.
Once the features are known and located, it is possible to identify, for each variant, the features that it contains. In the example of the figure, at the bottom, we can see that the elements associated to F1 and F2 features are those present in Variant 1, therefore, Configuration 1 consists of these two features.
The FRoGs visualisation enables domain experts to formulate questions regarding a given feature.
Continuing with the example of Figure 9.1 and focusing on F3: given that F3 always appears with F2 in the existing configurations, can we affirm that F3 requires F2? Is the fact that F3 never appears with F4 because F3 excludes F4, or is it just a coincidence and nothing prevents F3 from being with F4? In this example we are not using meaningful feature names but, with meaningful names, domain experts would be able to discover constraints based on their domain knowledge. The FRoGs visualisation also aims to trigger other kinds of questions regarding tendencies found in the existing configurations. This is related to enabling the discovery and formalization of soft constraints.
Given that variability in SPLs arises from different stakeholders' goals [START_REF] Yu | Configuring features with stakeholder goals[END_REF], in general, constraints are formalized by different stakeholders. Each stakeholder can thus be interested in a partial view of the variability (stakeholder perspective). FRoGs visualisation aims to facilitate this separation while enabling, at the same time, to reason about potential constraints found across stakeholder perspectives.
FRoGs visualisation can be categorized as a radial ego network representation which is used in social network analysis. The ego represents the focal node of the graph. In software engineering, ego networks have been used to visualise how the modification of a component could demand the modification of other components [START_REF] Marco D'ambros | The evolution radar: Visualizing integrated logical coupling information[END_REF]. FRoGs, for its part, is designed to deal with the specificities of feature constraints in SPLE.
Mining configurations to create feature relations graphs
In this section we present how the graphs of the FRoGs visualisation are created and how they are used. Concretely, we present the data used to create the visualisation paradigm and how this data is graphically represented. We also describe the interaction possibilities. Finally, we discuss rejected alternatives for a better understanding of our design decisions.
Apart from extractive SPL adoption, FRoGs can also be used for constraints discovery in an operative SPL to discover non-formalized constraints in the FM. The principles of the visualisation are the same, so in the next sections we explain FRoGs as a general solution for constraints discovery in SPLE.
Data abstraction and formulas
A configuration is defined as a set of selected features c_i = \{f_1, f_2, ..., f_n\}. Let S = \{c_1, c_2, ..., c_m\} be the configuration space, i.e., the set of all the possible valid configurations for a given FM. In the case of extractive SPL adoption, if no previous automatic constraints discovery techniques are applied, the FM will consist of independent optional features. Let EC ⊆ S be the set of existing configurations. Our approach assumes that all configurations in the EC set are valid. Also, if duplicated configurations exist, our approach does not take them into account. Indeed, one configuration represents one product variant independently of the number of units of that product variant that we may have.
As an example, and following the Car example [START_REF] Czarnecki | Sample spaces and feature models: There and back again[END_REF] presented in Section 2.1.2, Table 9.1 presents the set S. The Car FM was shown in Figure 2.5 containing the constraints that define the boundaries of S.
Table 9.1: Configuration space for the Car example.
          Manual   Automatic   DriveByWire   ForNorthAmerica
            f_1       f_2          f_3             f_4
Conf 1                 ✓                            ✓
Conf 2       ✓
Conf 3                 ✓
Conf 4       ✓                                      ✓
Conf 5                 ✓           ✓                ✓
Conf 6                 ✓           ✓
Let us consider that EC equals S excluding the last configuration, Conf 6. In this case EC is expressed as:
EC = \{c_1 = \{f_2, f_4\}, c_2 = \{f_1\}, c_3 = \{f_2\}, c_4 = \{f_1, f_4\}, c_5 = \{f_2, f_3, f_4\}\}
Given EC, we can reason about the feature relations in this set. First, let EC_{f_i} = \{c ∈ EC : f_i ∈ c\} be the subset of existing configurations that contain the feature f_i. For example, the set of existing configurations containing the Manual feature is EC_{f_1} = \{c_2, c_4\}. With this definition of EC_{f_i}, we can calculate the ratio of the occurrence of a feature given the occurrence of another feature. Following existing notation [START_REF] Czarnecki | Sample spaces and feature models: There and back again[END_REF] we call this operation f_i given f_j. Equation 9.1 presents the formula, which is similar to conditional probability.
f_i \text{ given } f_j = \frac{|EC_{f_i} \cap EC_{f_j}|}{|EC_{f_j}|}    (9.1)
In the Car example, the proportion of Automatic given DriveByWire (f_2 given f_3) equals 1 since, when we have DriveByWire in EC, we always have Automatic. In the same way, Manual given Automatic (f_1 given f_2) equals 0 because when we have Automatic we never have Manual. Equation 9.2 details how Automatic given ForNorthAmerica (f_2 given f_4) is calculated to obtain 0.66.
f_2 \text{ given } f_4 = \frac{|EC_{f_2} \cap EC_{f_4}|}{|EC_{f_4}|} = \frac{|\{c_1, c_3, c_5\} \cap \{c_1, c_4, c_5\}|}{|\{c_1, c_4, c_5\}|} = \frac{|\{c_1, c_5\}|}{|\{c_1, c_4, c_5\}|} = \frac{2}{3} \approx 0.66    (9.2)
These ratios, which are continuous values ranging from zero to one, are mapped to the different potential hard and soft constraints as described in Table 9.2. For example, Automatic given DriveByWire = 1 is mapped to DriveByWire requires Automatic. Regarding soft constraints, the thresholds enc_threshold and dis_threshold are defined and adjusted by domain experts based on their experience. These thresholds are also discussed and set by domain experts in previous related works [START_REF] Czarnecki | Sample spaces and feature models: There and back again[END_REF].
Table 9.2: Constraints identification while using feature relations graphs
Condition                              Constraint            Notation
f_j given f_i = 1                      f_i requires f_j      f_i ⇒ f_j
f_j given f_i = 0                      f_i excludes f_j      f_i ⇒ ¬f_j
enc_threshold ≤ f_j given f_i < 1      f_i encourages f_j    soft(f_i ⇒ f_j)
0 < f_j given f_i ≤ dis_threshold      f_i discourages f_j   soft(f_i ⇒ ¬f_j)
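To make the ratio computation and the mapping of Table 9.2 concrete, the following is a minimal Java sketch (not the actual FRoGs implementation); the threshold values and names are illustrative, and the example in main reproduces the EC of the Car example.

import java.util.List;
import java.util.Set;

// Sketch: compute "fi given fj" over the existing configurations EC and
// map the ratio to the relation types of Table 9.2.
public class FeatureRelations {

    // Ratio of configurations containing fj that also contain fi (Equation 9.1).
    static double given(String fi, String fj, List<Set<String>> ec) {
        long withFj = ec.stream().filter(c -> c.contains(fj)).count();
        long withBoth = ec.stream().filter(c -> c.contains(fj) && c.contains(fi)).count();
        return withFj == 0 ? 0.0 : (double) withBoth / withFj;
    }

    // Classification following Table 9.2; thresholds are domain-expert choices.
    static String classify(double fjGivenFi, double encThreshold, double disThreshold) {
        if (fjGivenFi == 1.0) return "fi requires fj";
        if (fjGivenFi == 0.0) return "fi excludes fj";
        if (fjGivenFi >= encThreshold) return "fi encourages fj (soft)";
        if (fjGivenFi <= disThreshold) return "fi discourages fj (soft)";
        return "independent";
    }

    public static void main(String[] args) {
        // EC of the Car example: c1..c5 as listed above.
        List<Set<String>> ec = List.of(
            Set.of("Automatic", "ForNorthAmerica"),
            Set.of("Manual"),
            Set.of("Automatic"),
            Set.of("Manual", "ForNorthAmerica"),
            Set.of("Automatic", "DriveByWire", "ForNorthAmerica"));
        // Automatic given ForNorthAmerica = 2/3, as in Equation 9.2.
        System.out.println(given("Automatic", "ForNorthAmerica", ec));
    }
}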
We define a confidence metric for the validity of the mined constraints. The confidence metric is only applicable to relations that are not explicitly formalized in the FM. The intuition of our confidence metric is related to the probability of finding a configuration that violates the constraint. Let S_{f_i} = \{c ∈ S : f_i ∈ c\} be the subset of the valid configurations that contain the feature f_i. Equation 9.3 shows how the metric is calculated.
Confidence(f_i) = \frac{|EC_{f_i}|}{|S_{f_i}|}    (9.3)
For example, if we have f_j given f_i = 1, the confidence of f_i ⇒ f_j is calculated as the percentage of the possible valid configurations containing f_i that exist in EC. The confidence will increase when new configurations containing f_i are created, as we reduce the probability of finding a configuration contradicting f_i ⇒ f_j. The confidence reaches one when EC_{f_i} = S_{f_i}.
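As a worked example with the sets above: only configurations Conf 2 and Conf 4 of the Car configuration space contain Manual, and both of them are present in EC, so

Confidence(f_1) = \frac{|EC_{f_1}|}{|S_{f_1}|} = \frac{|\{c_2, c_4\}|}{2} = 1,

which matches the 100% confidence displayed in the FRoG of Manual in Figure 9.2.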
Calculating the number of valid configurations is a well-known problem in the automated analysis of FMs [START_REF] Benavides | Automated reasoning on feature models[END_REF]. Given that S_{f_i} can be prohibitively large, the cardinality |S_{f_i}| is calculated by reasoning on the formalized feature constraints, without the need for an exhaustive enumeration of the set S_{f_i}.
Regarding the stakeholder perspectives, each of them consists of the subset of features that are "owned" by a stakeholder. These subsets must be disjoint in the current version. Stakeholder perspectives can represent conceptual units in SPLs such as domains or subdomains [BFK + 99]. Once these conceptual units are identified, the features are distributed among these subdomains [START_REF] John | A practical guide to product line scoping[END_REF]. Following the Car example, features related to the type of Gear are mainly relevant for final Customers choosing their cars. However, DriveByWire is more of a concern for Engineers during the construction of the vehicle, and the ForNorthAmerica feature is relevant for Commercials, who are interested in sales analysis. Each of these stakeholder perspectives cannot ignore how features belonging to other stakeholders impact their own features.
Graphical aspects
This subsection presents design decisions that are relevant from a visualisation point of view. A FRoG is presented as a circle where the considered feature is displayed in the center. This feature in the center will hereafter be called f_c. The rest of the features are displayed around f_c with a constant angular separation of \frac{2\pi}{|features|-1}. This separation allows all the features (except f_c) to be uniformly distributed around the circle.
The features are ordered by stakeholder perspectives and circular sectors are displayed for each of them. Before providing more details about the visualisation, Figure 9.2 shows an example of a FRoG for the Manual feature of the Car example after mining EC. Specifically, this FRoG shows the impact of Manual (f c in the center) in the rest of the features. For example, Manual excludes Automatic and DriveByWire. We can also see the sectors of the Customer, Engineer and Commercial stakeholders covering their corresponding features.
The zones of a FRoG
Without considering the stakeholder sectors, each FRoG displays five differentiated zones. These zones are associated with a specific color and related to a specific type of constraint. Figure 9.3 shows these zones and how the distance of each feature f_i from f_c towards the boundary of the circle is determined according to the f_i given f_c operation.
• Requires Zone. This zone is the closest to f_c. Features in the requires zone are "attached" to f_c, meaning that, in EC, when we have f_c we always have f_i. The requires zone is reserved for the maximum value of f_i given f_c, which is 1. The requires zone, as well as the excludes zone presented below, is reserved for a single value and its size fits exactly the diameter of the f_i nodes. The requires zone is displayed in green.
• Excludes Zone. This zone, displayed in red, is at the outer edge of the circle. It is the furthest from f_c to illustrate that there is no occurrence of f_c and f_i together in any configuration.
• Encourages Zone. This zone includes all f_i that are potentially encouraged by the presence of f_c. The color is a pale green, meaning that it is close to the requires zone without reaching it. The encourages zone fades to white in the independent zone presented below. The fading occurs at the distance defined by enc_threshold.
• Discourages Zone. This zone includes all f_i that are potentially discouraged by the presence of f_c. The color is orange, meaning that it is close to the excludes zone without reaching it. The discourages zone fades to white in the independent zone at the distance defined by dis_threshold.
• Independent Zone. This zone, located between enc_threshold and dis_threshold, is displayed in white and contains the features that are not impacted by the presence of f_c.
In the FRoG of Figure 9.2, where Manual is f_c, we can observe that both Automatic and DriveByWire are in the excludes zone. The feature ForNorthAmerica, with an f_i given f_c value of 0.5, is located halfway between the requires and excludes zones. In this case, the feature is in the independent zone.
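A possible way to compute the node placement implied by these zones is sketched below; the even angular spacing follows the 2π/(|features|-1) separation mentioned earlier, while the linear mapping from the f_i given f_c ratio to the radial distance is an assumption made for illustration and may differ from the actual implementation.

// Sketch of FRoG node placement: features are evenly spaced in angle, and the
// radial distance from f_c grows as "f_i given f_c" decreases (ratio 1 lands in
// the requires zone next to f_c, ratio 0 in the excludes zone at the edge).
// The linear radius mapping is an illustrative assumption.
public class FrogLayout {

    // index: position of f_i among the features other than f_c (0-based)
    // otherFeatures: number of features excluding f_c
    static double[] position(int index, int otherFeatures, double fiGivenFc, double maxRadius) {
        double angle = index * 2 * Math.PI / otherFeatures; // constant separation
        double radius = maxRadius * (1.0 - fiGivenFc);      // assumed linear mapping
        return new double[] { radius * Math.cos(angle), radius * Math.sin(angle) };
    }

    public static void main(String[] args) {
        // ForNorthAmerica with "f_i given f_c" = 0.5 lands halfway to the edge.
        double[] p = position(2, 3, 0.5, 100.0);
        System.out.printf("x=%.1f y=%.1f%n", p[0], p[1]);
    }
}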
Comments on colors
The usage of colors in a visualisation is an important design decision [START_REF] Silva | Using color in visualization: A survey[END_REF]. FRoG zones depend on univariate data (f_i given f_c). This data has continuous values. However, as we have shown, these values are mapped to zones in a discrete fashion. Therefore, we have defined boundaries and associated one color with each of the zones, following the color scheme shown before in Figure 9.3.
The green of the requires zone contrasts heavily with the red of the excludes zone. The separation principle, which states that close values must be represented by colors perceived to be closer, is respected with the green and the pale green for the requires and encourages zones. The same principle is respected in the red and orange colors associated with the excludes and discourages zones. The color scheme used in the zones is a diverging color scheme, given that it illustrates the progression from a central point. The central point is when f_i given f_c equals 0.5. In addition, the traffic-light metaphor, shared across most cultural contexts, is used in the FRoGs visualisation. The red color of the excludes zone has a connotation of prohibition (f_i cannot be with f_c), while the green of the requires zone has a connotation of acceptance (f_i must be with f_c).
The size of the requires and excludes zones fits exactly the size of an f_i node. For the other zones, the size depends on the percentages defined by enc_threshold and dis_threshold. The fading zone between the encourages and discourages zones is very small in size. This aims to create a visual effect recalling that these boundaries are not as restrictive as the boundaries of the hard constraints, which are represented with a black line. For example, a feature f_i with a formalized excludes hard constraint cannot appear in the discourages zone; otherwise, it would not be a hard constraint. On the contrary, a feature f_i that does not have a hard constraint could "move" over time between the encourages, independent and discourages zones when new configurations are added.
Differentiating formalized, non-formalized and inferred constraints
A FRoG displays the constraints mined by analysing EC. However, some of them (or all, in an ideal case) are normally already defined in the FM. In the FRoGs visualisation, for already formalized constraints, we add an inner circle in the node of f_i with the color associated with the type of constraint. Figure 9.4 shows this notation. For example, Manual ⇒ ¬Automatic is an already formalized constraint, as shown in the Car FM in Figure 2.5. Therefore, in the FRoG of Figure 9.2, this notation appears in the Automatic node. Concretely, it uses a red inner circle because it is an excludes constraint.
(Figure 9.4 legend: formalized requires, encourages, excludes and discourages constraints.)

It is worth noting that, in the case of formalized soft constraints, the mining process on EC could situate f_i in a FRoG zone not corresponding to the defined soft constraint. This could alert the user about a general violation of the formalized soft constraint.
An inferred constraint is a constraint that is not formalized in the FM but that exists because of logical rules. For example, when A ⇒ B and B ⇒ C, it can be inferred that A ⇒ C even if it is not explicitly formalized. Figure 9.2 shows an example of an inferred constraint in the DriveByWire node. In fact, Manual excludes Automatic, and DriveByWire requires Automatic. Given this situation, Manual excludes DriveByWire.
Figure 9.2 shows the FRoG legend at the right side of the image. In this legend, the "Relation types" category shows the notation for the feature relations. We use the triangle for inferred constraints to differentiate it from the circle of the formalized ones. The triangle metaphor contrasts with the circle and it has the connotation of an arrowhead representing the existence of a rule of inference. If it is neither a formalized nor an inferred constraint, we will refer to it as an undefined relation independently of the zone where the f i is placed. Undefined relations do not contain any symbol inside the f i node.
Stakeholder Perspectives
Each stakeholder perspective has an associated color used in the circular sectors of the FRoG. The f_c node has the color of the stakeholder perspective it belongs to. In addition, the circular sector of the stakeholder perspective of f_c has a slightly larger radius. Figure 9.2 showed that the Manual feature has the Customer color and that the Customer sector has a larger radius.
Stakeholder perspectives are nominal data and we use a qualitative color scale that does not imply order. Although default colors are provided, the stakeholder perspective colors can be changed by the users. Figure 9.5 shows the used color scheme. In the rejected alternatives (Section 9.3.4), we discuss why only grey, purple and blue variations are present in this default color scheme, to avoid interfering with the colors of the zones.
Displaying the Confidence
The confidence is displayed outside of the FRoG zones. It is presented as a percentage accompanied by a pie chart visualising this percentage. In the bottom-right side of Figure 9.2 we can see a confidence of 100% with a complete black circle. Figure 9.6 shows another example with a confidence value of 5% showing a pie chart with only 5% in black.
Interaction possibilities
FRoGs is a visualisation tool allowing free exploration. The navigation and filter capabilities are presented in the next paragraphs, focusing on the details of our implementation. To illustrate these interaction possibilities, Figure 9.6 shows a screenshot of the tool to which we added dashed sections to highlight the different parts. The "Feature List" part contains all the variable features of the FM. The "FRoG" part shows the FRoG of the selected feature with the "Confidence" and "Legend" sub parts.
Navigation
On the left side of Figure 9.6 we can see the "Feature List" part. From this list we can select the feature to be analysed. As we can see in the figure, PBSAutomatic is the one selected and displayed in the center of the FRoG. Apart from interacting with the list of features, we can interact with any feature node to show its FRoG.
Filters
Filters can be applied simultaneously to hide feature nodes belonging to given stakeholder perspectives, relation types or FRoG zones. Different filter combinations can fulfil different objectives of FRoGs usage. Some examples of usage are:
• Discovery of non-formalized constraints: We focus on potential non-formalized constraints by hiding all features in the independent zone and those constraints that are already formalized or inferred.
• Documenting constraints: We focus on constraints by hiding all features on the independent zone.
• Intra-stakeholder perspectives constraints: We hide all the features that do not correspond to a given stakeholder perspective.
• Inter-stakeholder perspectives constraints: We hide all the features that do not correspond to a given set of stakeholder perspectives.
Adjusting encourages and discourages thresholds
The enc_threshold and dis_threshold can be adjusted at will. A slider appears when selecting "Show Options" and, when interacting with it, automatic feedback is provided in the FRoG by changing the size of the selected zone. If filters are applied to hide features in a given zone, hidden nodes may appear or disappear while the thresholds are modified.
Save as image
Another interaction capability of the visualisation tool is saving the current FRoG (filtered or not) together with the legend to an image file. This image can be used directly for documentation or to communicate some phenomena in feature relations.
Notes about the current implementation
The FRoGs visualisation was implemented with Processing [CF12] and we achieved complete visual continuity. Computing f_i given f_j has complexity O(m), where m is the number of existing configurations. Therefore, displaying a FRoG is O(nm), with n being the number of features. The number of possible valid configurations is precomputed before starting the visualisation. The set of input files needed by FRoGs is exported using the SPL tool FeatureIDE [TKB + 14], and the mapping from stakeholder perspectives to features is defined in a configuration file.
Rejected alternatives
During the design of FRoGs, we asked the opinion of the industrial partner as well as of other users with knowledge of SPLE. We report in this subsection the reasons for rejecting some visualisation design alternatives.
Colors of Requires and Excludes zones
Initially, we used pale green and pale red for the requires and excludes zones. Users reported that more intense green and red colors were preferred for these zones, to draw more attention to the hard constraints.
Default colors for stakeholder perspectives
Qualitative scales for nominal data are publicly available, for example the color scheme provided by ColorBrewer 2.0 [Col]. Figure 9.7 shows this color scheme, which we used in the beginning. However, users complained about the similarities between some of the colors of the stakeholder perspectives' sectors and the red of the excludes zone. For instance, the fourth color of Figure 9.7 was a cause of confusion. We decided to exclude colors similar to red and orange, as well as those similar to green and pale green.
Complex relation types
The current version of FRoGs only offers the possibility to see the impact of the presence of f_c on the presence of the other features f_i. In order to display other phenomena in the relations of a given feature f_c, the FRoGs implementation was flexible enough to show the impact of the presence or absence of f_c on the presence or absence of the other features f_i. It was also possible to show the impact of the presence or absence of the rest of the features f_i on f_c. These aspects gave rise to eight possible FRoGs for each feature. The other four combinations could also be displayed even if they are the "inverse" of one of the previous four types (f_i given f_c = 1 - (¬f_i) given f_c). Also, it is worth commenting that the direction of the arrows from f_c to f_i changes between types one and two, and three and four.
To obtain the distances showing the impact of other features on f_c, it is enough to reverse the f_i given f_j operation: for type one it is f_i given f_c, while for type two it is f_c given f_i.
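The inverse relationship follows directly from the definition in Equation 9.1, since every configuration that contains f_c either contains f_i or does not:

(\neg f_i) \text{ given } f_c = \frac{|EC_{f_c}| - |EC_{f_i} \cap EC_{f_c}|}{|EC_{f_c}|} = 1 - (f_i \text{ given } f_c)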
This functionality in the options of the visualisation tool confused the users. It is not easy to reason about the presence and absence of features in combination with the direction of the arrows. Our case study did not need these kinds of relation visualisations for documentation or for constraint discovery, so we decided to remove them to simplify FRoGs usage.
The variability in this SPL is described by Renault according to six stakeholder perspectives that represent different subdomains of the EPB's variability. These stakeholder perspectives are Customer, Environment, Functions, Design, Behavior and Components. We present a brief description of these perspectives and their associated features:
Customer: Customer visible variability is handled by the product division. For customers, the parking brake proposes three types of services. The manual brake, implemented in the PBSManual feature, is controlled by the driver either through the classical lever or a switch. The automatic brake, PBSAutomatic, is a system that may enable or disable the brake itself depending on an automatic analysis of the situation. Finally, the assisted brake, PBSAssisted, comes with extra functions which aid the driver in other situations such as assistance when starting the car on a slope.
Environment: For vehicle environment, the variability concerns the gear box where we distinguish two possible variants: GBManual and GBAutomatic. The presence of a ClutchPedal is also an optional feature.
Functions:
The feature AuxialiaryBrakeRelease is related to an optional function of the system that includes an auxiliary brake release mechanism.
Design: There are different alternatives concerning design decisions of the EPB impacting the technical solution. For architectural design, there are four feature alternatives: PullerCable, ElectricActuator, Calipser and TraditionalPB.
Behavior: After the vehicle has stopped, braking pressure is monitored during a certain amount of time for the single DC motor, or permanently monitored for the other solutions. The Behavior perspective thus includes the TemporaryMonitor and PermanentMonitor features.
Components:
The EPB variability also concerns the physical architecture/components. This consists in the presence of different means of applying the brake force: electric actuators mounted on the calipers or a single DC motor and puller cable. The latter is the traditional mechanical parking brake. Also, the type of sensors available may vary. This perspective thus includes the following features: DCMotor, PairEActuator, TiltSensor, EffortSensor, ClutchPosition, DoorPosition, and VSpeed.
As discussed with a domain expert of the EPB SPL, FRoGs could help during the design phase to visualise the impact of different configurations of features (Customer or Environment variability) on the final solution (variability in Components). Also, it can be used before the design (product planning) to choose the possible components for future products based on several information sources (stakeholders).
Visualising the feature relations
We obtained 20 FRoGs, one per feature, after automatically mining the 200 existing configurations of the EPB case study. Regarding soft constraints, the values used for enc_threshold and dis_threshold are 25% and 75% respectively. We present the number of constraints that can be identified by visualising the FRoGs in Table 9.3. For example, the second row in the table corresponds to the FRoG of PBSAutomatic shown in Figure 9.6. We can see that this FRoG displays two hard constraints and one inferred constraint. It also displays four potential soft constraints (three discourages and one encourages). Before experimenting with domain experts, we manually checked the correctness of the visualised information. Concretely, we checked that the formalized hard constraints and the inferred constraints of the FM appear in the displayed FRoGs. We now discuss the analysis that domain experts carried out using FRoGs and how it helps in the comprehension of feature relations in the EPB SPL. For instance, analysing the FRoG associated with the EffortSensor feature (see Figure 9.10), it is observed that it has no hard constraints with any other feature. That means that it has no relevant impact on the presence or absence of any feature. However, the feature PullerCable (see Figure 9.11) has a great impact on other features, as the PullerCable FRoG shows seven features in the excludes and requires zones.
In the EPB case study, no non-formalized hard constraint was discovered in either the excludes or the requires zones. This means that, potentially, the FM is not missing hard constraints. However, thanks to FRoGs, as presented in Table 9.3, a total of 55 potential soft constraints were found while mining the existing configurations. By visualising this, domain experts are able to consider two possibilities: a) the relation is an actual soft constraint that should be formalized, or b) the FRoG displayed a fact based on the existing configurations but it is not an actual soft constraint.
The FRoGs also represent a visualisation paradigm for understanding the relations between stakeholder perspectives. Of the 47 hard or inferred constraints, 20 are hard constraints relating features belonging to different stakeholder perspectives.
Extension considering feature conditions
In discussions with our industrial partner, we identified an interest in allowing more than one feature in the center, in the form of a condition. This way, instead of knowing the relation of a single feature to the others, it is possible to visualise the relation of a condition over feature selections to the rest of the features. This allows visualising, for example, how a set of feature selections from a given stakeholder perspective affects the features of another stakeholder perspective. As discussed before, the industrial partner showed interest in visualising the impact of the Customer and Environment features on the features related to the Components stakeholder perspective.
Figure 9.12 shows a FRoG with two features in the center forming the condition PBSManual & ClutchPedal. All the principles for constructing a FRoG are maintained for this extension of the FRoGs paradigm. The only change is that, in these cases, in f_i given f_j, f_j is replaced with the condition. In the current implementation we do not show any notation for relation types (formalized, non-formalized or inferred).
Conclusions
Feature constraints discovery is an important activity in extractive SPL adoption. Concretely, feature constraints are at the core of FMs, and identifying and maintaining them is a challenging task in SPLE. We presented a paradigm called FRoGs to help domain experts visualise, in an interactive way, the feature relations mined from existing configurations. This can help to discover and formalize new hard and soft constraints. In addition, FRoGs can be used to document, at the domain level, each feature by displaying its relations with the rest of the features. We demonstrated in this chapter the usability of FRoGs on a real-world case study from a major manufacturer.
This work opens two main research directions. First, regarding visualisation, when the SPL is operative, FRoGs can be improved to consider the time dimension. Indeed, the time dimension in the creation of each existing configuration could help users understand the dynamics of feature relations and reason about feature obsolescence or usage over time. Second, independently of the visualisation aspect, providing heuristics for determining the default values of the encourages and discourages thresholds is a relevant subject.
Part V
Configuration space analysis for estimating user assessments
Introduction
In many application domains of SPLE, product variants are intended to interact intensively with humans, as presented in Section 2.3. In these Human-Computer Interaction (HCI) scenarios, product usability plays an important role in customer satisfaction. Estimating and predicting user assessments for the different products is complex but, at the same time, crucial for 1) understanding user perception along the configuration space of the SPL and 2) maximizing the chances of selecting the best products for the targeted market. However, there are:
Many barriers to understand user opinions in SPLE contexts
In Section 2.3, we mentioned the three main barriers for understanding user expectations about the products within the SPL configuration space. These barriers are:
• The users cannot assess all possible products. For instance, fewer than three hundred optional features already permit configuring more products than there are atoms in the universe (a back-of-the-envelope calculation is given after this list). Given this combinatorial explosion, evaluating all products is impossible or prohibitively expensive, as already highlighted by the SPL testing community [START_REF] Mcgregor | Testing a software product line[END_REF][START_REF] Do | On strategies for testing software product lines: A systematic literature review[END_REF].
• Human assessments are subjective by nature. Socio-cultural factors, personal preferences or previous experiences in similar systems, affect the way a user rates a product. Therefore, mechanisms for summarizing different user opinions are needed to draw conclusions about products usability or, more generally, about the user experience.
• Getting user assessments is resource-expensive. The fastest time-to-market in SPLE is achieved by eliminating human assessments as much as possible [START_REF] Mcgregor | Testing a software product line[END_REF]. However, eliminating usability evaluations in products with HCI components is not always possible. Contrary to automated evaluations, HCI evaluation requires humans to experiment with the system, which is a time-consuming task. In addition, user fatigue is a relevant issue in usability evaluations when a user is exposed to several systems. When the user is exhausted or bored, the assessment quality and confidence degrade.
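As a back-of-the-envelope check of the first barrier (assuming the commonly cited estimate of roughly 10^{80} atoms in the observable universe), n independent optional features yield 2^n configurations, so around 270 optional features already exceed that number:

2^{270} = (2^{10})^{27} \approx 1.024^{27} \times 10^{81} \approx 1.9 \times 10^{81} > 10^{80}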
Contributions of this chapter.
• Human-centered SPL Rank (HSPLRank): This chapter introduces the theoretical and practical aspects of an approach for estimating and predicting user assessments within an SPL configuration space by leveraging user assessments in a limited number of variants.
This chapter is structured as follows: Section 10.2 discusses scenarios where estimation and prediction of user assessments is relevant. Section 10.3 details the phases of HSPLRank, our approach to overcome the presented challenges. Finally, Section 10.4 presents the conclusion. The evaluation of HSPLRank in two case studies is presented in Chapter 11.
Motivating scenarios for estimation and prediction
This section presents and discusses SPL application domains where estimating and predicting is specially relevant. Also, we introduce one case study in order to show a concrete example.
Subjectivity in SPL-based computer-generated art
With the whole myriad of software alternatives that exist today, the adoption of a software product eventually depends on users' subjective perception, sometimes beyond the offered functionalities. Being able to apprehend, estimate and predict this perception of the product as a whole will be an important step towards efficient production. Unfortunately, the subjective perception of a software product is hard to formalize. It cannot even be computed with a simple formula based on the perception of its components: mixing good ingredients does not necessarily produce a good recipe. Systematically predicting the subjective appreciation of software is therefore an important research direction [START_REF] Hornby | Accelerating human-computer collaborative search through learning comparative and predictive user models[END_REF][START_REF] Li | Adaptive learning evaluation model for evolutionary art[END_REF].
An extreme case where appreciation is pertinent is when the intention of the product is just about "beauty": this is the case of computer-generated art. By studying how computer-generated art products meet users' perception of beauty, we can infer insightful techniques for estimating the perception of other types of software which may involve other HCI aspects. The practice of computer-generated art, also known as generative art, involves the use of an autonomous system that contributes to the creation of an art object, either as a whole, or in part by reusing pieces of art from a human artist, or by using predefined algorithms or transformations [START_REF] Boden | What is generative art? Digital Creativity[END_REF]. This art genre is trending in the portfolio of many artists and designers in the fields of music, painting, sculpture, architecture or literature [START_REF] Edwards | Algorithmic composition: Computational thinking in music[END_REF][START_REF] Schwanauer | Machine models of music[END_REF][START_REF] Perlin | An image synthesizer[END_REF][START_REF] William | The computational beauty of nature, computer explorations of fractals, chaos, complex systems, and adaptation[END_REF][START_REF] Greenfield | Evolutionary methods for ant colony paintings[END_REF][START_REF]The Art of Artificial Evolution: A Handbook on Evolutionary Art and Music[END_REF]. We consume computer-generated art systems in our daily life, for example in videogames, cinema effects, screen-savers or visual designs.
In general, the autonomous systems for computer-generated art rely to some degree on a randomization step in the generation algorithms. However, when no relevant stochastic component is introduced and the creation is not limited to a unique art object, computer-generated art allows deriving different art objects in a predefined and deterministic fashion, giving rise to a family of art products. In Section 3.6.2 we mentioned some SPLs of art-related products. Usually, because of the combinatorial explosion of all possible art objects, not all of them will actually be created. Besides, a number of them may not reach the desired aesthetic quality. Exploring the art product family to find the "best" products is thus challenging for computer-generated art practitioners. In this field, the paradigm of evolutionary computer-generated art has been proposed, which consists in the application of natural selection techniques to iteratively adapt the generated artworks to aesthetic preferences [START_REF]The Art of Artificial Evolution: A Handbook on Evolutionary Art and Music[END_REF].
We can take into account human feedback for ranking the art objects in a process that explores the collective understanding of beauty. This process is strongly related to the notion of group intelligence, which discusses how large numbers of people can simultaneously converge upon the same point [START_REF] Surowiecki | The Wisdom of Crowds[END_REF]. The case of art is interesting as it constitutes the worst-case scenario for the homogeneity of opinions, due to the subjective essence of art. Thus, we focus on user assessments of art product families and how they can help artists understand people's perceptions of their art products. In this scenario, the challenge is to answer the following question:
• Can we empirically study whether we can predict the like/dislike human perception of an artwork variant built by assembling perceivable components?
If the objective is to find the optimal products and rank computer-generated art product variants based on human feedback, this question leads to the following sub question:
• Given the combinatorial explosion of configurations and the limited resources, how can one identify and select the optimal subset of products that are relevant for human assessment?
One must keep in mind that, as we only consider a subset of products for human assessment, most of the products have not been assessed yet. In addition, user feedback is subjective by nature. Hence, another sub question for ranking product variants is:
• Given a subset of assessed product variants, how can one infer the user assessment of the non-assessed products? How can one aggregate user assessments to calculate estimations even for the already assessed products?
HSPLRank, which will be explained later, is an approach that leverages user assessments to rank all the possible art products based on estimations. We present an introduction to our case study in this scenario dealing with high subjectivity.
An SPL for digital landscape paintings
We have actively collaborated with Gabriele Rossi, an Italian art painter based in Paris, with whom we were able to conduct a large study on computer-generated art built by composing portions of paintings. We developed this system using SPLE techniques. In recent years, Gabriele has been drawing abstract representations of landscapes with quite a recognizable style of decomposing the canvas into different parts: a Sky part, a Middle part and a Ground part. This variability is further enhanced by the fact that the Sky and Ground are mandatory while the Middle part is optional. For the realization of the computer-generated art system, the artist created different representative paintings for each part. For the Sky part, he painted 10 representative sky paintings, hereafter noted S_i with i ∈ [1..10]. Similarly, he painted 9 paintings of the middle part (noted M_i, where M_10 denotes the absence of the middle part), and 10 paintings of the ground part (noted G_i).
Another variability dimension identified by the artist concerns the perception of the composition. Indeed, this perception can change if any instance of any part is flipped horizontally, thus adding an optional property for configurations (Flip). For example, a given ground part may have more brightness in its left or right side, adding a compositional decision for where to place this brightness in the whole painting. The artist also stated that each of the parts could take more or less space in the canvas. We therefore included an optional property for ExtraSize. For example, if a painting should have the sky visible in only a small section of the canvas, the middle and ground parts must be of larger size.
The variability in this domain can be expressed through a FM which exposes the different configurations that can be selected to yield painting variants. It was thus easy to introduce the artist to the different FM concepts and to discuss the different elements of paintings in terms of features. This led to the establishment of a FM for his painting style that is presented in Figure 10.1.
A specific configuration, i.e., a given selection for sky, middle and ground, is assembled by superposition of the different parts. In Figure 10.2 we illustrate how different paintings in the style of the artist can be obtained by flipping one or more painting parts and/or increasing the size of elements. We have implemented a compositional derivation tool with Processing [CF12] which, given a configuration from the FM of Figure 10.1, generates a digital landscape painting based on the reusable assets of painting parts provided by the artist.
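To make the superposition step concrete, the sketch below approximates it in Python with the Pillow imaging library instead of Processing; the asset file names, canvas size and layout proportions are our own illustrative assumptions and not the actual code or assets of the tool.

from PIL import Image

CANVAS_W, CANVAS_H = 800, 600  # illustrative canvas size

def derive_painting(sky, middle, ground):
    """Compose a landscape variant by superposing the selected painting parts.

    Each argument is a tuple (file_name, flip, extra_size); middle may be None
    because the Middle part is optional in the FM.
    """
    canvas = Image.new("RGB", (CANVAS_W, CANVAS_H))
    layout = [(sky, 0.0, 0.4), (middle, 0.4, 0.2), (ground, 0.6, 0.4)]
    for part, y_ratio, h_ratio in layout:
        if part is None:
            continue
        file_name, flip, extra_size = part
        img = Image.open(file_name)
        if flip:                       # Flip: mirror the part horizontally
            img = img.transpose(Image.FLIP_LEFT_RIGHT)
        if extra_size:                 # ExtraSize: the part takes more canvas space
            h_ratio *= 1.2
        img = img.resize((CANVAS_W, int(CANVAS_H * h_ratio)))
        canvas.paste(img, (0, int(CANVAS_H * y_ratio)))
    return canvas

# Example: a configuration with sky S3, middle M7 and a horizontally flipped ground G2
derive_painting(("S3.png", False, False), ("M7.png", False, False), ("G2.png", True, False)).save("variant.png")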
Using this SPL, the objective is to provide a solution for the challenges described in Section 10.2.1. Before introducing this solution we describe another relevant scenario for estimation and prediction of human assessments.
Variability and user experience in UI design
User involvement is one of the key principles of the user-centered design method [START_REF]ergonomics of human system interaction-part 210: Humancentred design for interactive systems[END_REF].
It is nearly impossible to design an appropriate UI without involving end users throughout the product design and development. Practitioners are often required to produce several variants of the same product, each of them presenting a different look and feel and different user interaction patterns. These variants are presented to end users who provide feedback for identifying the most adapted variant for their context of use.
Variability management in UIs serves to formalize the possible variants to be evaluated. Designing several UI variants from scratch is not practical when the variants share commonalities that can be leveraged in a systematic reuse process. Thus, it is essential to capture the variability of the design choices among potential variants. Many works have initiated first steps in the management of HCI-specific variability using SPL-based approaches [START_REF] Pleuss | User interface engineering for software product lines: the dilemma between automation and usability[END_REF][START_REF] Pleuss | Integrating automated product derivation and individual user interface design[END_REF].
As discussed in Section 10.1, the large amount of possible configurations presents many challenges for achieving UI variant evaluations in a feasible way: we need to reduce the number of variants to evaluate, perform the variant assessment and finally select the best one according to estimations. It is worth mentioning that by evaluation we mean not only evaluating the usability of the HCI system but also the general user experience.
We focus on early binding variability [START_REF] Kyo | Feature-oriented project line engineering[END_REF] where design alternatives are chosen at design-time. Therefore, this scenario is not focused on customization options applied by end users themselves within the UI, nor on self-adaptive systems, which correspond to run-time binding of variability. After the application of HSPLRank, the most relevant variants in a given scenario can be finally selected by the domain experts.
Variant reduction
Given that the number of possible configurations can grow exponentially with the number of features, we propose first to discard the configurations which represent variants with very poor expected quality. This quality level is based on expert judgement of the expected impact of features or feature combinations on the final product. In order to formalize this domain knowledge, we use soft constraints, presented in Section 2.1.2, to describe the viable configuration space. Therefore, viable configurations represent variants with the minimum acceptable quality to be tested by end users. Figure 10.4 shows an illustrative diagram of the configuration space. As we mentioned, the configuration space represents the set of all possible configurations that can be produced with the FM (i.e., the configuration space is the set of valid configurations which satisfy the hard constraints). The viable configuration space represents the product variants which are of sufficient expected quality and satisfy all the constraints, both hard and soft.
In a scenario of UI design, our approach aims at requiring user assessments restricted to a priori usable UIs. As soft constraints are currently based on expert judgement, usability experts are involved in order to produce them. The usability experts analyze the usability-related impact of feature combinations in a given SPL derivation scenario of UIs. Following the E-Shop example presented in Section 2.1.2 (FM shown in Figure 2.1a) and considering a scenario of an E-Shop with a large amount of items in the catalogue, the usability expert can determine that not including a Search functionality in the UI results in a poor usability of the E-Shop. Table 2.1 presented the configuration space of the E-Shop, which consists of eight possible configurations. By including the constraint soft(Search), the viable space is reduced to four configurations as only Conf 2, 4, 6 and 8 satisfy the soft constraint. For this scenario of large catalogues, only this viable space will be used as the input for the variant assessment phase.
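As a minimal sketch of this filtering step, assuming each E-Shop configuration is represented as the set of its selected features (the feature names other than Search are placeholders, since the full FM is defined in Section 2.1.2):

# Each configuration is the set of selected features of a valid E-Shop product.
configurations = [
    {"Catalogue", "Payment"},            # a configuration without Search
    {"Catalogue", "Payment", "Search"},  # a configuration with Search
    # ... the remaining valid configurations of Table 2.1
]

# A soft constraint is any predicate over a configuration, here soft(Search).
soft_constraints = [lambda conf: "Search" in conf]

def viable_space(configs, soft_rules):
    """Keep only the configurations that satisfy every soft constraint."""
    return [c for c in configs if all(rule(c) for rule in soft_rules)]

# With the full list of eight configurations, only Conf 2, 4, 6 and 8 would remain.
print(viable_space(configurations, soft_constraints))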
Soft constraints define the boundaries of what is supposed to be usable in a given scenario. As a consequence, the characteristics of the case study are decisive because the soft constraints are not necessarily the same in all cases. For instance, for an E-Shop with only two items, the Search functionality is possibly counterproductive. In this case, the soft constraint would be soft(¬Search). Soft constraints can be unsatisfied in a final product but, according to the usability expert, it will most probably be a non-usable variant. In other words, soft constraints exclude, a priori, most of the non-viable products, but we can still produce UIs that may result, a posteriori, in bad user experiences or in unexpectedly good solutions.
Given that the viable space could remain very large, we still require a reasonable way to perform variant assessment with end users. The next section explains the assessment phase and how it benefits from this variant reduction phase.
Variant assessment
For user assessment, HSPLRank proposes the use of search-based techniques. Concretely, we have chosen to implement an evolutionary approach in order to select the variants for user assessment. Evolutionary algorithms apply the principles of natural selection to approximate optimal solutions in multi-dimensional search spaces [START_REF] Eiben | Introduction to Evolutionary Computing[END_REF]. In the context of Human-centered SPLs, a solution is a configuration, the dimensions are the features, and we measure the adaptation to the environment by exposing the derived product to user evaluation. Traditionally, in genetic algorithms, the fitness function is automatically calculated using a formula or an automatic computation. In our case we rely on an IGA [START_REF] Takagi | Interactive evolutionary computation: Fusion of the capabilities of EC optimization and human evaluation[END_REF] where the fitness function, which is the genetic algorithm operator that drives the evolution, is not automatically calculated but provided interactively by users. Each generation of the evolution consists of a population of configurations. After a given number of generations, the evolution favours the better adapted solutions, which have more chances to propagate their chromosomes (i.e., feature selections). Therefore, the genetic algorithm tries to guide the search to regions of the configuration space with relevant variants.
The initialization of the first generation has a great impact on the results of a genetic algorithm [START_REF] Fraser | The seed is strong: Seeding strategies in searchbased software testing[END_REF][START_REF] Meadows | Evaluating the seeding genetic algorithm[END_REF]. HSPLRank proposes, for the first generation, to include only viable products, as delimited in the previous variant reduction phase. In the genetic algorithms field, this technique is known as seeding. Population seeding techniques consist in initializing the population with solutions that are expected to lie in promising regions of the search space. We further propose to foster the diversity of the seeded initial population by random selection of individuals from the viable space. Figure 10.5 illustrates an example of implementing an IGA for HSPLRank. We can observe the pool of users and the population through the generations. Figure 10.5 corresponds to the settings used in the case study that will be presented in Section 11.2.
For the generations following the initial population, we propose two alternatives: 1) we assume that soft constraints, stated by usability experts, must always be respected, or 2) we allow the exploration of configurations beyond the viable space. If we assume that the usability expert constraints are correct, we can strictly restrict the new generations to the viable space by avoiding non-viable products in all generations. However, we can also consider unexpected cases by allowing the exploration (starting from generation number two) of a priori non-viable configurations. End users may eventually find that a combination which does not satisfy some soft constraint is well adapted for them. Therefore, in the realization of HSPLRank one can decide between restricting the exploration phase of the genetic algorithm to the viable space or using the viable space only for seeding.
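The following small sketch summarizes how the viable space can be used for seeding and, optionally, for constraining later generations; the function names are ours and the sketch only illustrates the policy described above.

import random

def seed_population(viable_space, population_size):
    """Seeding: draw a diverse initial population at random from the viable space."""
    return random.sample(viable_space, population_size)

def is_acceptable(configuration, viable_space, restrict_to_viable=True):
    """Acceptance policy for offspring in generations after the first one.

    With restrict_to_viable=True the soft constraints are enforced in every
    generation; with False, the viable space is used only for seeding and the
    IGA may explore a priori non-viable configurations.
    """
    return (not restrict_to_viable) or (configuration in viable_space)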
Ranking computation
The ranking computation phase takes as input the data set produced by the previous evolutionary phase and produces as output an estimated or predicted assessment score for any possible configuration. Through this, we create a ranking of all possible configurations, including those that are not in the initial data set.
First, a similarity measure for computing a similarity distance between two configurations in the domain must be used. We then design a protocol for aggregating feedback from different users for a set of similar products. This protocol works by defining a similarity radius which is relied on to compute the weighted mean score for each possible configuration. We further investigate the suitability of different methods for defining the radius as well as of different approaches for computing the weighted mean. Finally, with the hypothesis that the computed score of each configuration should approximate the expected human perception, we create a ranking of all possible configurations. We also provide confidence metrics for each of the ranked items. We detail the operators of the ranking computation phase:

Similarity distance: Given two configurations $C_i$ and $C_j$, we aim at formally computing a value for the similarity distance between them. The notion of similarity distance was already studied in the software engineering literature, especially by the SPL testing community [HAB13, HPP + 14]. One example discussed in these works is the use of the Hamming distance [START_REF] Hamming | Error Detecting and Error Correcting Codes[END_REF]. The space defined by the set of SPL features can be considered as a binary string where each position corresponds to a given feature. In this binary string, 1 stands for a selected feature and 0 for a non-selected feature. Then, the Hamming distance can be calculated for any pair of configurations by counting the minimum number of substitutions required to change one string into the other. The Hamming distance for calculating configuration similarity has previously been used in the SPL domain. Another example is the Jaccard distance, which is a set-based similarity distance where we consider that a configuration is a set of selected and not selected features. Then, the Jaccard similarity is calculated as the size of the intersection of the sample sets divided by the size of the union [HPP + 14].
It is worth mentioning that the use of similarity metrics such as Hamming or Jaccard, in the context of HSPLRank, implies the assumption that when two products are similar in the way they were assembled, they will be appreciated similarly. Apart from these generic configuration similarity metrics, other approaches can be based on ad hoc domain-specific similarity functions between configurations or between the products derived from these configurations. HSPLRank does not impose a similarity distance method. In the next chapter, in Section 11.2 dealing with the Paintings SPL, we use an ad hoc distance function to measure the distance between two paintings. In Section 11.3, regarding the case study in UI design, we use the Hamming distance. In the ad hoc distance function, we give different weights to the features, meaning that differences in the selection of some features are considered less relevant for the similarity of two configurations while other features are considered more relevant. The Hamming approach results in values from 0 (the two configurations are the same) to a maximal value equal to the total number of features (completely different configurations).
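As an illustration of the two generic metrics, the sketch below computes them over configurations represented as sets of selected features; note that, for brevity, the Jaccard variant here operates on selected features only, whereas the text above considers both selected and not selected features as set elements.

def hamming_distance(conf_a, conf_b, all_features):
    """Number of features whose selection status differs between the two configurations."""
    return sum(1 for f in all_features if (f in conf_a) != (f in conf_b))

def jaccard_distance(conf_a, conf_b):
    """1 minus the Jaccard similarity of the two sets of selected features."""
    union = conf_a | conf_b
    if not union:
        return 0.0
    return 1.0 - len(conf_a & conf_b) / len(union)

# Illustrative feature list and configurations (loosely based on the Contact List FM of the next chapter)
features = ["MasterDetail", "Index", "Filter", "Photo", "Name"]
c1 = {"MasterDetail", "Index", "Name"}
c2 = {"Index", "Filter", "Name"}
print(hamming_distance(c1, c2, features))  # 2: they differ on MasterDetail and Filter
print(jaccard_distance(c1, c2))            # 0.5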
Similarity radius: To be part of the neighborhood of a given configuration, any configuration must be inside a similarity radius. This radius restricts the products that will be considered similar enough for inferring information from one to the other. Figure 10.6 illustrates the example of a configuration $C_c$ (small white central point) and all configuration instances of the data set (black points) placed according to their relative distance to $C_c$. When we consider a distance r to define the neighborhood (circumference line), we note that some instances fall within the radius of $C_c$. The scores of the assessments of the instances which are inside the threshold of the similarity radius (the numbers on the black points) are used for the calculation of the weighted mean that represents the inferred score for $C_c$. The ones that are outside the similarity radius are considered dissimilar and no information can be inferred from them regarding the white central point. This figure corresponds to a real configuration of the Paintings case study that will be presented in Section 11.2.
Weighted mean score: In order to assess the collective agreement on the score of a given configuration, we consider its neighborhood and compute the mean score within the similarity radius. HSPLRank considers this computed score as the expected level of appreciation by users. For computing reliable mean values, a weight is assigned to each instance of the data set. This weight depends on its proximity to the configuration $C_c$ whose score we want to estimate. Figure 10.7 illustrates four approaches for assigning a weight to the scores of configurations in the similarity radius. In all these weighting approaches, we consider $w_i = 0$ when $d_{c,i} > r$, where $d_{c,i}$ is the similarity distance between configurations $C_c$ and $C_i$, and r is the value of the similarity radius. In the first approach (Figure 10.7a), all scores within the radius are weighted equally, independently of the distance between configurations; it is equivalent to a standard average computation. In the second approach (Figure 10.7b), a linear distribution of the weights is implemented. The third approach (Figure 10.7c) exponentially maximizes the weights for configurations that are close to $C_c$. Finally, the fourth approach (Figure 10.7d) modifies the standard average approach by slightly reducing the weights of configurations that are farthest from $C_c$.

Empirical selection of the approach settings: In order to select the radius and the weighting approach, we explore different combinations to identify the one that minimizes the error rate. We rely on a 10-fold cross-validation scenario. We shuffle the instances in the data set and then split it into 10 folds. 9 of these folds are used as configurations with real scores based on user feedback, and 1 fold is used for testing the inference of scores for the missing ones. The error rate is computed based on the difference between the expected value (the computed weighted mean score) and the actual user feedback score for each instance of the test set. The evaluation is a numeric prediction, so we selected the mean absolute error (MAE) as the error rate metric. Equation 10.2 shows how MAE is calculated, where T is the number of instances in the test set, $\hat{s}_i$ is the weighted mean score of $C_i$ computed with the training set (which represents the expected score) and $s_i$ is the score of $C_i$ in this instance of the test set (the actual score).
Figure 10.7 weighting approaches: a) Standard average: $w_i = 1$; b) Linear weighting: $w_i = 1 - \frac{d_{c,i}}{r}$; c) Exponential weighting: $w_i = \frac{a^{1 - d_{c,i}/r} - 1}{a - 1}$, with $a > 1$; d) Modified standard average: $w_i = 1 - \frac{a^{d_{c,i}/r} - 1}{a - 1}$, with $a > 1$.
mean absolute error $= \frac{\sum_{i=1}^{T} |\hat{s}_i - s_i|}{T}$ (10.2)
Based on the empirical investigation of the different radius values and weighting approaches using the 10-fold cross-validation, we tune the ranking computation parameters to reliable values. Concretely, we aim at minimizing MAE rates while maximizing the coverage of configurations. Regarding the latter, besides the MAE values, we are also interested in knowing how the choice of different radius values impacts the coverage of the test set. If the radius is small, there is a possibility that, for some configurations, the neighborhood is empty, thus preventing the computation of any mean score. Because such instances are not taken into account for the computation of the MAE, it is important to know the average percentage of the test sets that is covered. We calculate the test set coverage for each radius as the number of configurations from the test set for which we can estimate a score (i.e., the training set has assessments in the neighborhood), divided by the size of the test set.
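A compact sketch of the estimation step, combining the similarity radius with the standard, linear and exponential weighting functions listed above (the function and parameter names are ours):

def weight(d, r, scheme="linear", a=2.0):
    """Weight of a neighbor at distance d for radius r; 0 outside the radius."""
    if d > r:
        return 0.0
    if scheme == "standard":       # standard average: every neighbor counts equally
        return 1.0
    if scheme == "linear":         # weight decreases linearly with the distance
        return 1.0 - d / r
    if scheme == "exponential":    # favors the closest neighbors, with a > 1
        return (a ** (1.0 - d / r) - 1.0) / (a - 1.0)
    raise ValueError("unknown weighting scheme")

def estimate_score(target, data_set, distance, r, scheme="linear"):
    """Weighted mean of the assessed scores within the similarity radius of target.

    data_set is a list of (configuration, score) pairs collected by the IGA;
    None is returned when the neighborhood is empty, in which case the
    configuration is ignored in the final ranking.
    """
    weights, scores = [], []
    for conf, score in data_set:
        w = weight(distance(target, conf), r, scheme)
        if w > 0.0:
            weights.append(w)
            scores.append(score)
    if not weights:
        return None
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)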
Ranking creation:
We exhaustively compute the weighted mean for all possible configurations. In the cases where a configuration $C_c$ has no neighbor within its similarity radius, no score is computed and it is ignored in the final ranking. In our case studies, an exhaustive calculation for estimating all configurations was computationally feasible. In other cases, where exhaustive calculation might not be feasible, other approaches will be needed to limit the number of configurations to analyze in order to make the approach more scalable at this phase. For instance, one research direction could be the use of non-Euclidean centroid calculations.
Confidence levels for ranking items
The obtained ranking is based solely on the weighted mean scores computed with the data set instances within the similarity radius. However, to yield a comprehensible ranking, it is important to assign a confidence level to each computed score. We define three metrics for measuring confidence: one focuses on the average distance between a given configuration and its neighbors, another on the number of neighbors that were relied upon for computing the mean, and a third combines the two.
Neighbors similarity confidence:
The first metric explores the average distance to the neighbors. Figure 10.8 illustrates the importance of this metric: on the left, a unique neighbor, with a score value of 5 (e.g., let 5 be the maximum score), is shown to be very close to the center ($C_c$). On the right, we see another case with a unique neighbor which is farther from the center. In both cases, the weighted mean is 5; however, intuitively, one is more confident about the accuracy of the mean score for the left case than for the right case. The instance that was assessed in the left case is more similar to the configuration that we want to estimate. We define a neighbors similarity confidence as in Equation 10.3, where N is the number of data instances within the radius of $C_c$ (i.e., neighbors). $\mathit{nsimc}_c = 1$ when all data instances are in the center, i.e., all configurations in the radius are identical to the configuration for which the score is inferred. Also, when N = 0, this metric is not applicable. For the calculation of nsimc in the case studies of the next chapter, we will use the linear weighting approach. For the example scenarios of Figure 10.8 and using a similarity radius of 1.5, the $C_i$ on the left of the figure is at $d_{c,i} = 0.2$ from $C_c$, and the nsimc is thus 87%. On the right of the figure, the $C_i$ is at $d_{c,i} = 1.4$, so the nsimc is 7%.
$\mathit{nsimc}_c = \frac{\sum_{i=1}^{N} w_i}{N}$ (10.3)
Neighbors density confidence:
The second metric concerns the density of neighbors in the similarity radius. Figure 10.9 illustrates the importance of this metric: on the left side, a unique data instance, with a score value of 5, is shown within the radius; on the right side a different case is presented where several data instances are present in the similarity radius, all with a score value of 5. In both cases, the weighted mean has a value of 5 but, intuitively again, the confidence is greater in the second case as we have more data to support the estimation. The neighbors density confidence is computed as in Equation 10.4, where N represents the number of data instances within the radius of $C_c$, max is the highest value of N found over all the possible configurations, and the function withNeighbors(i) returns the number of instances from the data set containing exactly i neighbors within their radius.
$\mathit{ndenc}_c = \frac{\sum_{i=1}^{N} \mathit{withNeighbors}(i)}{\sum_{i=1}^{\max} \mathit{withNeighbors}(i)}$ (10.4)

With this metric we obtain a neighbors density confidence of 0.5 when the number of data instances within the radius of the configuration corresponds to the median of withNeighbors(i).
Global confidence: Finally we define a global confidence metric, shown in Equation 10.5, that takes into account the previous metrics by assigning weights to them. For the case studies of the next chapter, we decided to put more weight on ndenc, giving emphasis to the number of instances used to compute the mean. We made the design decision of setting $w_{\mathit{ndenc}}$ to 0.75 and $w_{\mathit{nsimc}}$ to 0.25 in the computations presented in the next chapter.
$\mathit{gconf}_c = w_{\mathit{ndenc}} \cdot \mathit{ndenc}_c + w_{\mathit{nsimc}} \cdot \mathit{nsimc}_c$ (10.5)
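The three confidence metrics can be sketched directly from Equations 10.3 to 10.5; the linear weighting for nsimc and the 0.75/0.25 weights follow the choices stated above, while the data structures are our own illustration.

def nsimc(weights):
    """Neighbors similarity confidence (Eq. 10.3): mean of the linear weights of
    the neighbors; equals 1 when every neighbor is identical to the configuration."""
    return sum(weights) / len(weights) if weights else None

def ndenc(n_neighbors, with_neighbors):
    """Neighbors density confidence (Eq. 10.4).

    with_neighbors maps i to withNeighbors(i), i.e., how many data set instances
    have exactly i neighbors within their radius; n_neighbors is N for the
    configuration at hand."""
    max_n = max(with_neighbors)
    total = sum(with_neighbors.get(i, 0) for i in range(1, max_n + 1))
    covered = sum(with_neighbors.get(i, 0) for i in range(1, n_neighbors + 1))
    return covered / total if total else None

def gconf(ndenc_value, nsimc_value, w_ndenc=0.75, w_nsimc=0.25):
    """Global confidence (Eq. 10.5) combining both metrics with fixed weights."""
    return w_ndenc * ndenc_value + w_nsimc * nsimc_value

# Example of Figure 10.8 (left): a single neighbor at distance 0.2 with radius 1.5
print(nsimc([1 - 0.2 / 1.5]))  # about 0.87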
Conclusions
Leveraging human perceptions in the context of SPLE is an emerging challenge dealing with an important aspect of product success: user expectations on product variants. We present the HSPLRank approach for estimating and predicting human assessments on variants towards the creation of a ranking of all the possible configurations. This ranking, enhanced with confidence metrics for each ranking item, aims to serve as input for selecting the most relevant variants. HSPLRank comprises three phases in which we use 1) the injection of extra domain constraints (soft constraints) to enclose the viable variants, 2) an interactive genetic algorithm for the initial data set creation and 3) a tailored data mining interpolation technique for reasoning about the data set to infer the ranking and the confidence metrics.
We have presented two scenarios for HSPLRank. The first one relates to a scenario that can be considered the worst case for a software project dealing with the subjectivity of human assessments: the production of computer-generated artworks. The second one relates to variability in UI design. A computer-generated art SPL was presented and will be evaluated in the next chapter, together with the presentation and evaluation of another case study on UI design.
Introduction
In the previous chapter, Chapter 10, we introduced HSPLRank, an approach to rank the configurations of an SPL in order to identify relevant products for a given end user context. In Section 10.2 we presented two scenarios where its usage is relevant: SPL-based computer-generated art and UI design. The objective of this chapter is to detail the results of the case studies that we conducted, as well as to present a detailed discussion of our experiences with HSPLRank and its threats to validity.
Contributions of this chapter.
• The Paintings case study: A large-scale case study in SPL-based computer-generated art in collaboration with a professional painter, as described in Section 10.2.2.
• The Contact List case study: A case study in UI design concerning the implementation of an SPL for Contact List systems.
This chapter is structured as follows: Section 11.2 presents the results of the paintings case study. Section 11.3 introduces the Contact List case study and the results of applying HSPLRank. Then, Section 11.4 discusses important general topics of our experience in the two case studies. Section 11.5 details the threats to validity and Section 11.6 concludes this chapter.
Paintings case study
This section explains the realization of HSPLRank and evaluates the results of the case study presented in Section 10.2.2, dealing with an SPL for digital landscape paintings.
Objectives and Settings
The objectives for this case study are:
• Study the error rates of the estimations provided by HSPLRank to evaluate its soundness.
• Qualitatively study the opinion of the artist regarding the relevant variants obtained through HSPLRank.
We validated the feasibility of HSPLRank by applying it to a real-world installation for computer-generated art. The installation for collecting user feedback was available to the public as part of an art festival at Théâtre de Verre in Paris in 2014. This provided the location for the HSPLRank variant assessment phase with the attendees. The artistic project of evolutionary paintings is called AdherentBeauty. For further evaluations, the experiment was repeated, but instead of involving a collective of persons it was operated with one person at a time (individualized experiments).
HSPLRank realization
Phase 1: Variant reduction
In this scenario, the artist was the domain expert, and he decided not to introduce any restriction to the viable space. That means that the configuration space is equal to the viable space and that the IGA will not be seeded. The motivation was to avoid adding prejudgements about what the attendees should like or dislike in the paintings. In contrast, in Section 11.3.2, which concerns the case study on UI design, the variant reduction phase will be used.
Phase 2: Variant assessment
In order to address the combinatorial explosion of possible paintings, as proposed by HSPLRank, we rely on an IGA which explores the possible configuration space trying to reach optimal or suboptimal solutions. The IGA creates more data set instances in the regions that are better adapted to the fitness function. With more density in these regions, we have more confidence about user expectations regarding the most appreciated configurations.
In our case study, the fitness function for the IGA is based on the assessment captured using the device shown at the bottom of Figure 11.1. This device implements a physical 5-point scale. The values range from 1 (strong dislike) to 5 (strong like). When a user votes, the displayed painting (as shown at the top of Figure 11.1) vanishes and the next painting of the genetic algorithm population is displayed. When all paintings from the population have been assessed, a new population is yielded based on the calculations of the genetic algorithm, and the exploration towards optimal paintings continues until it is manually stopped at the end of the session. Thus, in our installation, the data set is constructed during a unique session with users. Algorithm 2 shows the implementation of the IGA for this case study. In genetic algorithms terminology, this algorithm is a non-elitist, generational, panmictic IGA, and the Data section of the algorithm shows how the genotype of a landscape painting phenotype was designed. Concretely, the sky, middle and ground positions are assigned values representing all possible parts. In the case of the middle part, which is optional, value 9 represents its absence in the composed painting. Further, for this special case, the flip and extra size features are irrelevant and thus receive a special treatment in the operators of the genetic algorithm.
At line 1, the initialization operator creates a pseudo-random initial population in which we force all sky, middle and ground parts to appear. The evolution starts at line 2 until it is manually stopped. During the evolution, from line 3 to line 6, each member of the population is evaluated using an evaluation operator based on user assessment.
The parent selection operator is then based on fitness proportionate selection (line 7). At line 8, the crossover operator is based on one-point crossover, with the peculiarity that the last two positions cannot be selected as crossover point, in order to force the crossover to exchange the sky, middle and ground parts. The mutation operator used at line 9 is uniform with p = 0.1. Such a high mutation factor is meant to prevent a loss of motivation from users (user fatigue) by reducing the likelihood that they will keep assessing similar products from the population, while also enabling us to explore new regions. Finally, at line 10, the survivor selection operator is based on a complete replacement of the previous generation with the new generation. The installation was operative for 4 hours and 42 minutes and 1620 votes were collected. The IGA and its exploration process towards optimal products led to 1490 paintings being voted once, 62 paintings being voted twice and only 2 being voted three times. No data was gathered to make distinctions about different user profiles, nor was any control mechanism used to limit the number of votes per person. On average we were able to register 5.74 votes per minute from around 150 people of different ages and sociocultural backgrounds who voted for one or more paintings.
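Both case studies use fitness proportionate selection as the parent selection operator; the sketch below shows the textbook roulette-wheel form of this operator, not the exact code of Algorithm 2.

import random

def fitness_proportionate_selection(population, fitnesses, n_parents):
    """Roulette-wheel selection: each individual is picked with a probability
    proportional to its fitness (here, the user score of the derived painting)."""
    total = float(sum(fitnesses))
    parents = []
    for _ in range(n_parents):
        pick = random.uniform(0.0, total)
        cumulative = 0.0
        for individual, fitness in zip(population, fitnesses):
            cumulative += fitness
            if cumulative >= pick:
                parents.append(individual)
                break
    return parents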
Phase 3: Ranking creation
General approaches for the calculation of the similarity distance between two configurations were presented in Section 10.3.3. Instead of using a general approach, we designed a domain-specific ad hoc distance calculated through the algorithm that we present in Algorithm 3; other approaches might have been used. In our case study, to compare two configurations, we start by assuming that they are the same (distance = 0). The distance between them increases by 1 point for each different painting part. If a part is the same in both configurations, we check the flip feature and, in case of dissimilarities, we increase the distance by 0.2. Finally, and whether the parts are identical or not, we increase the distance by 0.2 if a part has an extra size in one configuration and not in the other: extra size is independent of the part as it has an impact on the whole composition. To account for the fact that the optional middle part may not be present, we check that part P is not null. Using this algorithm, when the three parts are different and every part has extra size in one configuration and not in the other, we reach the maximum distance between two configurations, which is 3.6. Thanks to this definition of distance, we can now reason about the neighborhood of a configuration in the space.
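The sketch below reimplements the described distance following the textual description of Algorithm 3 (it is not the original listing); each configuration is represented as three (part, flip, extra_size) tuples for the sky, middle and ground positions.

def painting_distance(conf_a, conf_b):
    """Ad hoc distance between two painting configurations (maximum value: 3.6)."""
    distance = 0.0
    for (part_a, flip_a, extra_a), (part_b, flip_b, extra_b) in zip(conf_a, conf_b):
        if part_a != part_b:
            distance += 1.0                          # different painting part
        elif part_a is not None and flip_a != flip_b:
            distance += 0.2                          # same part, but flipped differently
        if extra_a != extra_b:
            distance += 0.2                          # extra size differs, whatever the parts
    return distance

# Two configurations differing only in the ground part and its extra size: 1.0 + 0.2
a = [("S1", False, False), ("M3", False, False), ("G2", False, False)]
b = [("S1", False, False), ("M3", False, False), ("G5", False, True)]
print(painting_distance(a, b))  # 1.2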
After the variant assessment phase, we took advantage of the user assessments in the data set to create the ranking. Using the 10-fold cross-validation explained in Section 10.3.3, Figure 11.2 shows the performance of different combinations of radius values and weighting approaches. The graph reveals that we start covering all instances of the test set with a similarity radius of 1.3 in this case. In Figure 11.2, we marked our selected parameters with a circle. We set the radius to 1.5 and selected the standard average weighting approach for the weighted mean score computation. A radius of 1.5 further fits the artist's intuition of the maximum distance for two paintings to be considered similar. We computed the weighted mean score for all 59,200 possible configurations of our case study and ranked them accordingly. Figure 11.3 depicts the paintings that were derived from the top 10 configurations with the highest weighted mean scores (wmean). The highest wmean was established at 4.75, for only one configuration. None of the configurations in the top 10 were part of the data set created during the variant assessment phase when collecting user feedback. For example, the score of the third best configuration was obtained based on the wmean of 11 different configurations with scores computed from actual user feedback. Figure 11.4 depicts the bottom 5 configurations of the ranking. We observed that bottom configurations in the ranking had in general lower confidence, given that the IGA tried to avoid these regions. We also used gconf to filter and reorder the ranking items. Figure 11.5a shows the product that was derived for the configuration with the highest gconf (82%). This configuration got a wmean of 3.27 and holds rank 14,532. The configuration with the highest gconf among configurations that are liked (i.e., score > 4) holds position 102 and the corresponding painting is shown in Figure 11.5b.
Evaluation
We present a quantitative evaluation using a collective of persons and a qualitative evaluation from the artist's perspective. We completed the evaluation with another quantitative evaluation, repeating the whole approach with separate individuals.
Controlled assessment
The objective of the controlled assessment is to study the error rates of the score estimations. In Section 11.2.1, we already performed a 10-fold cross-validation to empirically select an optimal value for the similarity radius and the weighting approach. Now we also estimate the error rates using the whole data set of real user assessments to draw both training and test sets. The goal is to predict the score of a configuration and compare it against the actual scores provided by users. We also compute the resubstitution error, which is an optimistic case for evaluating classification approaches. Table 11.1 provides the MAE for 10-fold and resubstitution to evaluate the accuracy of the approach. The results suggest that any prediction has a margin of error of around 1. On a scale of five this is a substantial number; however, it means that if the actual score of the painting is 4 (like), the estimation typically lies between 3 (normal) and 5 (strong like). We consider this, in conjunction with the confidence metrics, to be a good performance when attempting to capture collective understanding of beauty within the boundaries of our FM.
Artist perspective
The artist analyzed the ranking and the confidence metrics obtained with the approach. He claimed that the collective as a whole stated their scores in a very coherent fashion with respect to traditional and classical painting principles of perspective and contrast. For example, in Figure 11.5b, the brightness in the sky finds its counterpart in the brightness of the sea, but only on the left side because of the mountain on the right. People understood that they were dealing with landscapes and they disliked the ones that tended to be flat or that did not respect some of these principles.
The objective of the installation was to explore his painting style in a feasible way by leveraging user feedback. The resulting ranking was very interesting for understanding people's sensibility about the possible configurations. The ranking showed him liked configurations with high global confidence that he had never considered and that he liked too. The results of HSPLRank exposed him to novel compositions that, as added value, he considered to have some guarantee of acceptance when exposed to the public. For example, before this exercise, he had painted mountains or sea but never together in the same composition. He considered that he had learnt many things about his own painting style as well as about people's perception of it.
Individualized evaluation
Regarding the previous controlled assessment, it was not feasible to bring the whole collective back for evaluating the created ranking. However, by conducting the experiment with only one person, we can create a ranking with his or her own assessments and then evaluate the validity of our estimations with the person on site. The objective was to evaluate if the ranking successfully discriminated between liked and disliked paintings according to the perception of each user. We selected 10 persons for this evaluation. Each user voted in a session of 20 minutes. On average this duration corresponded to a data set of 16 populations (320 paintings).
Once the ranking was created, we took 10 liked and 10 disliked paintings that were not shown during the evolutionary phase. Specifically, we took the first 10 paintings with a weighted mean score from 4.5 to 5 that had the highest global confidence. In the same way, we took the first 10 with a weighted mean score from 1 to 1.5 that had the highest global confidence.
After randomly shuffling these 20 paintings, we obtained an average of 91% accuracy in the prediction between like and dislike. The results suggest that the predictions at the extremes of the ranking are accurate in the case of individualized estimation.
Contact List case study
Contact Lists are widely used HCI applications to obtain personal information such as telephone numbers or email addresses. We can find them on mobile phones for personal use, in communication systems for elderly people, on corporate intranets or on web sites. Despite sharing the same objective, the final UI implementations are very diverse. In this case study we focus on corporate contact information. The Master Detail Interface is an optional feature that splits the screen into two parts: the master and the detail. The master contains the list while the details interface, after a selection in the master, shows the corresponding contact information. There is variability concerning the Ratio of the screen split and the Position of the master interface. Finally, the Details Grid feature represents different alternatives to organize the contact information on the screen (e.g., telephone number, address, etc.), for example including all information in one column or determining the position of the textual information with respect to the photo.
An SPL for Contact List design alternatives
We implemented the SPL using the Variability-aware Model-Driven UI design framework [START_REF] Sottet | Variability management supporting the model-driven design of user interfaces[END_REF] based on AME (Adaptive Modeling Environment) [START_REF] García Frey | AME: an adaptive modelling environment as a collaborative modelling tool[END_REF] which is able to derive, through source code generation, any configuration of the presented FM. The target framework for the derived products is the JQueryMobile web framework ii .
ii JQueryMobile web framework: https://jquerymobile.com
We present screenshots to show the diversity of UIs that can be obtained. We decided to anonymize these screenshots to avoid displaying personal information. One of the screenshots shows a UI variant whose configuration does not have a master detail. It only displays either the master or the detail on the screen (note the presence of the back button at the top left of the screenshot for coming back to the master). In the case of master detail, the ratio indicates whether we have a big master interface with a small details interface (e.g., Figure 11.8c) or vice-versa. Alternatively, we can have the split into two equal parts (Figures 11.7 and 11.8a).
The Position variability is related to a horizontal or vertical split of the screen and whether the master is on one side or the other (in Figure 11.8b the master and detail have been swapped). If the Master Detail Interface feature is not selected in a configuration, the split window is replaced by navigation: a first window for selecting the person to be displayed and another one for seeing the details. Finally, the Details Grid feature represents different alternatives to organize the contact information on the screen. For instance, in Figure 11.8a the grid has four columns and two rows whereas in Figure 11.8d it has two columns and four rows.
Objectives and settings
In order to know if HSPLRank enables the selection of relevant configurations from user assessments, we evaluated two hypotheses:
1. The variant assessment selects better configurations than a randomized algorithm for a given number of iterations. We quantitatively evaluated the improvement of the user scores through the IGA compared to random selection within the configuration space. We further investigated the diversity of the population along the generations to show the convergence of the IGA towards relevant UI designs.
2. The top positions of the ranking created with HSPLRank are configurations that usability experts confirm as relevant UI designs. We qualitatively discussed the findings with a usability expert and we checked if the top ranked configurations are close to configurations elicited by usability experts.
We deployed HSPLRank using the Contact List SPL in two organizations: LIST and the Interdisciplinary Centre for Security, Reliability and Trust (SnT) of the University of Luxembourg. The objective was to design their web-based corporate contact lists. We set up independent experiments in these two organizations in order to have more than one experimental result. In order to make the Contact List more realistic to the participants' context, we fed the contact list database with the corresponding real contact information publicly available on the organization websites. We excluded persons without a photo. The contact list at LIST contained 276 persons and the contact list at SnT contained 154 persons. The difference in the number of persons is not significant for the tasks at hand since we did not measure the users' time performance.
HSPLRank realization
Phase 1: Variant reduction
In order to reduce the number of variants, a usability expert from LIST was involved in defining the soft constraints. These soft constraints are used in both the LIST and SnT organizations to define the viable space. The usability expert has 5 years of experience in usability analysis but no knowledge about SPLs. We explained to him the Contact List FM and the variability-aware UI design models, as well as the formalisms and the concept of soft constraint. Then we let him try our configuration interface and SPL derivation in order to get a first grasp of the UI variability and the representation of the different features in JQueryMobile. He then discussed the possible usability problems and established the following soft constraints for targeting these corporate contact lists:
• soft(¬DropDownList): The drop-down list does not seem appropriate as it introduces an additional interaction step, which is the expansion of the list items.
• soft(Index ∨ Filter): At least one functionality to facilitate the search is required. Searching for persons in a list composed only of names or photos is easier if the elements are ordered (indexed) or if we can search for a person's name.
• soft(TileList ⇒ Photo): By essence, tiles are made for the graphical representation of elements. As a result, having only a name in a tile list is not recommended.
From an initial configuration space of 1365 possible configurations, the introduction of the presented soft constraints reduced the viable configuration space to 715. This represents a 48% reduction of the possible configurations. Even with such a reduction of configurations, it is still too expensive to perform user assessments for all the 715 remaining configurations.
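Expressed as predicates over the set of selected features, the three soft constraints above could be encoded as follows; this is our own illustration, since the study expressed them in the constraint notation of the variability framework.

# Each soft constraint is a boolean predicate over the set of selected features.
contact_list_soft_constraints = [
    lambda c: "DropDownList" not in c,              # soft(¬DropDownList)
    lambda c: "Index" in c or "Filter" in c,        # soft(Index ∨ Filter)
    lambda c: "TileList" not in c or "Photo" in c,  # soft(TileList ⇒ Photo)
]

def is_viable(configuration):
    return all(rule(configuration) for rule in contact_list_soft_constraints)

# Applied to the 1365 valid configurations, this filter keeps the 715 viable ones.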
Phase 2: Variant assessment
We followed the recommendations of Nielsen regarding the minimal number of end users to involve in iterative user tests [START_REF] Nielsen | A mathematical model of the finding of usability problems[END_REF]. As such, we decided to run user tests with 5 participants in order to collect their feedback. According to the predictions of the Poisson model [START_REF] Nielsen | A mathematical model of the finding of usability problems[END_REF], involving 5 users gives rise to an expected probability of reporting 85% of the usability issues.
Given that usability problems impact their assessments, we consider that 5 users allow evolving the UIs at an optimal cost in terms of the number of involved users.
At LIST, the age of the participants was between 26 and 36 with a mean of 32. Their roles in the organization ranged from PhD students and R&D engineers to researchers. 4 out of 5 declared previous experience in UI design and 3 out of 5 had already designed UIs in an industrial context. At SnT, the age of the participants was between 25 and 33 with a mean of 31. Their roles in the organization were 4 PhD students and 1 post-doc. They had limited knowledge of HCI design, except one who had also been confronted with UI design in an industrial context for a short period of time. Participants from both organizations did not have enough years of practical experience to be considered professional UI designers.
However, they had already been confronted with real-world problems in UI design and they had extensive experience of interacting with computers as end users.
We decided to use 10 configurations per generation of the IGA. Figure 10.5, presented in the previous chapter, illustrates the iterations of the IGA to produce the different generations with 5 users and 10 configurations per generation. A larger number of configurations would provide more diversity in the initial population but would require a greater number of user assessments before obtaining the benefits of an evolutionary approach. Conversely, a smaller population would provide very limited diversity. We considered 10 configurations per generation to be a number that balances diversity and effort. Therefore, in each generation, each of the 5 users is provided with 2 variants to evaluate.
To carry out the study, we split the evaluation protocol into two phases common to both groups. In the first phase, we introduced the test to the users, explained the main steps of the test and gave them a questionnaire to evaluate their profile and skills regarding UI design. In the second phase, each user had access to a web application that was driving the tests. For each UI assessment, the user received a task to accomplish, which consisted in obtaining information about a particular person (randomly selected by our system) in the contact list. Then, the UI was displayed and the user was able to interact with it. At the end, the user finished the test by filling in a satisfaction form. This form, apart from standard usability-related questions [START_REF] Brooke | SUS -A quick and dirty usability scale[END_REF], included a global satisfaction score on a scale from 1 to 7 to capture the global impression. An extra free-text field was included to report comments and usability issues. When the test of one UI was completed, the system proposed the second user interface to test.
One important decision when implementing the genetic algorithm is how to represent the individuals.
We used an array, as in the previous case study presented in Section 11.2.1, but this time we used a binary array. Figure 11.9 shows an example of the chromosome of an individual that conforms to the Contact List SPL. The phenotype is determined by the non-abstract features of the FM: concretely, the leaves of the FM and the Master Detail Interface feature (see Figure 11.6) are coded on a binary string of 20 bits. The features are mapped to fixed indexes of the array, where the value 1 means that the feature is activated and 0 that it is not. Representing a FM configuration chromosome as an array of bits is a common practice in the use of genetic algorithms in SPLE [EBG12, HPP + 14]. The details of the implemented IGA are shown in Algorithm 4. First, the population is randomly initialized at line 1 taking into account the defined soft constraints (i.e., only viable configurations). After this, the evolution starts at line 2 until the stop condition is satisfied. In our case, the termination condition is reaching a fixed number of generations. We set it to 6 generations, which corresponds to a total of 60 variant assessments (12 UI variants per user). We decided that this amount would be sufficient and that more generations would not be feasible in terms of user fatigue and time consumption. These 60 configurations correspond to only 4.4% of the configuration space.
From lines 3 to 5, each member of the population is assigned to one user for assessment. In our case study, each of the 5 users is assigned 2 members so as to cover the whole population.
The user feedback for the whole population is obtained from lines 6 to 9. Once the fitness of the whole population is set, we can proceed to the parent selection for the next generation. The parent selection operator is based on fitness proportionate selection (line 10). At line 11, the crossover operator is based on the half uniform crossover scheme [START_REF] Syswerda | Uniform crossover in genetic algorithms[END_REF]. The crossover, as well as the mutation operator of the GA, can produce invalid configurations because of FM constraints. Existing works have addressed this by penalizing the fitness function or by recovering the configuration to a valid state. In our case, at lines 12 and 14 we repair the offspring if hard constraints are violated. The mutation operator used at line 13 is uniform with p = 0.1. This mutation factor is meant to prevent a loss of motivation from users (i.e., user fatigue) by reducing the likelihood that they will keep assessing very similar UI configurations from the population, while also enabling us to explore new regions of the configuration space. Finally, at line 15, the survivor selection operator is based on a complete replacement of the previous generation with the new generation.
The variant assessment phase was conducted independently in the two organizations, collecting two separate data sets. The results of this phase will be presented in the next section.
Algorithm 4
Interactive genetic algorithm for data set creation in the Contact List case study.
input: Genetic representation of a configuration = 20 bits, Population = 10 configurations, Users = 5
output: Data set of user assessments
1: population ← initializePopulation()
2: while stopConditionNotSatisfied() do
3:   for confi ∈ population do
4:     confi.assignedUser ← assignUser(users, confi)
5:   end for
6:   for confi ∈ population, in parallel do
7:     confi.fitness ← getUserFeedback(confi)
8:     registerDataInstance(confi)
9:   end for
10:  parents ← parentSelection(population)
11:  offspring ← crossover(parents)
12:  offspring ← repair(offspring)
13:  offspring ← mutate(offspring)
14:  offspring ← repair(offspring)
15:  population ← survivorSelection(offspring)
16: end while
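Algorithm 4 does not detail the repair operator; one simple strategy, sketched below under our own assumptions, is to enforce each violated hard constraint directly on the 20-bit chromosome (real FMs also contain alternative groups, which would need an analogous dedicated rule).

def repair(chromosome, hard_constraints, feature_index):
    """Illustrative repair of a chromosome that violates FM hard constraints.

    hard_constraints is a list of ('requires', a, b) or ('excludes', a, b)
    tuples over feature names, and feature_index maps a feature name to its
    position in the binary chromosome.
    """
    genes = list(chromosome)
    for kind, a, b in hard_constraints:
        ia, ib = feature_index[a], feature_index[b]
        if kind == "requires" and genes[ia] == 1 and genes[ib] == 0:
            genes[ib] = 1            # selecting a implies selecting b
        elif kind == "excludes" and genes[ia] == 1 and genes[ib] == 1:
            genes[ib] = 0            # keep a and drop b to resolve the exclusion
    return genes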
Phase 3: Ranking creation
For the similarity distance between configurations, we used the Hamming distance, presented in Section 10.3.3. Figures 11.10 and 11.11 show the results of the 10-fold scheme in the two organizations. A similarity radius with values less than 4.1 fails to cover the whole test set in both cases, and the exponential weighting approach outperforms the other weighting approaches. In this case study, we therefore decided to use 4.1 as the similarity radius and exponential weighting for the estimation of the configurations, allowing the ranking creation.
Evaluation
This section presents the results after applying HSPLRank and discusses the objectives of this case study presented in Section 11.3.1. Figure 11.13 presents the results of the IGA for the two organizations. On the horizontal axis we have the different generations and on the vertical axis the mean of the user assessment scores for each generation. We show the score mean along the six generations, including the standard deviations. An ascendant progression means that, for each new generation, the UI variants are globally better appreciated by the pool of users. In Figure 11.13a we observe a quick ascension until generation four, while in Figure 11.13b we observe the ascendant progression starting at generation two. Although we do not have a definitive explanation for the descending effect in the LIST case for generations five and six, we consider that it is caused by the experience that the users had in UI design. The score mean improved quickly from generation one to four by filtering really inappropriate variants, and then decreased slightly because of the users' capacity to criticize the proposed variants: they may not have evaluated the variant itself but its capability to be different from what they had already evaluated. Furthermore, some of these criticisms were related to non-variability issues, as we will discuss in Section 11.4. Another possible explanation could be user fatigue. In order to observe if the IGA tends to converge, Figure 11.14 shows the progression of the generations in the two-dimensional space of mean score and diversity. We calculated the genotype diversity along the different generations (g1 to g6) as the average of the Hamming distance over all pairs of configurations in the generation. The diversity decreases as we approach the left side of the horizontal axis. For example, we can observe how the diversity never increases beyond its value at g1, which is the randomly created population. For LIST, as shown in Figure 11.14a, g4 has both the lowest diversity and the maximum mean score.
In the case of SnT, as shown in Figure 11.14b, the last generation (g6) has both the lowest diversity and the maximum mean score. In the LIST case, the user pool was able to reach variants better suited to them earlier.
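The genotype diversity used in Figures 11.14 and 11.16 can be computed exactly as described, i.e., as the average Hamming distance over all pairs of chromosomes of a generation; a small sketch follows.

from itertools import combinations

def hamming(chrom_a, chrom_b):
    """Number of positions at which the two bit-string chromosomes differ."""
    return sum(1 for x, y in zip(chrom_a, chrom_b) if x != y)

def generation_diversity(population):
    """Average Hamming distance over all pairs of chromosomes in one generation."""
    pairs = list(combinations(population, 2))
    return sum(hamming(a, b) for a, b in pairs) / len(pairs)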
UI quality improvement
Regarding the first hypothesis, we evaluate if our process based on evolutionary techniques selects better variants than a randomized algorithm for a given number of iterations. We repeated the experiments with the same participants using random selection. In this approach, for each generation, 10 configurations were automatically selected from the viable space, which is the same population size that we used for the IGA. Basically, for the random selection, we used the same operator as the one used for seeding in the IGA. Although we still call each group of 10 random configurations a generation, no genetic information was propagated from one generation to the next. Figure 11.15 shows the results of the genetic algorithm and the random selection in order to compare them. The most important observation is that random selection failed to obtain a global score mean greater than 5 in any of the generations while the genetic algorithm did achieve it. We can see how the genetic algorithm outperforms the random selection approach except in the first two generations, where evolution is still searching for relevant regions of the configuration space. Table 11.2 presents the improvements obtained in the two independent experiments by comparing the global score mean. The global score mean is the mean of the assessment scores over all the generations. The genetic algorithm approach has a global score mean which is around 0.5 points better (i.e., 0.45 points in LIST and 0.55 points in SnT). We have seen that the proposed genetic algorithm got better results over the generations than the random approach, and now we show that the algorithm tries to converge in this search for better UI configurations. To show this, we calculated the diversity of the members of each generation. If the diversity has a tendency to decrease, it is a sign of convergence. Figure 11.16 shows the results at LIST and SnT for both the genetic algorithm and the random process. We can see how the random approaches in both organizations do not decrease the diversity while, for these 6 generations, the genetic algorithm performs better than the random approach at reducing the diversity. The random approach failed to decrease the diversity to values lower than 5 while this was achieved by the genetic algorithm. As a result, we can conclude that, compared to the random approach, we both increase the global mean score and reduce the diversity along the generations. These two aspects allow the genetic algorithm to converge towards optimal or suboptimal solutions, that is, towards relevant UI designs.
Analysis of the relevant variants by a usability expert
In order to confirm our second hypothesis, we asked a usability expert with nine years of experience to assess whether the best variants found by HSPLRank satisfy usability criteria. The expert is independent so as to provide an impartial assessment: he does not belong to the team that developed the considered project, did not participate in the variant assessment, and did not define the soft constraints used in the variant reduction. We summarize his qualitative evaluation of the three HSPLRank relevant variants shown in Figure 11.12:
• The first variant, shown in Figure 11.12a, is the simplest list with no master detail. It satisfies many usability criteria [START_REF] Dominique | Ergonomic criteria for evaluating the ergonomic quality of interactive systems[END_REF] such as low workload, explicit control, homogeneity/consistency or compatibility with traditional contact applications. The search bar and the simplicity of the UI allow the end user to go directly to what he/she is looking for. However, as a drawback, it is not possible to browse through the contacts' photos or to perform a visual search if the name of the person is unknown.
• The second variant, shown in Figure 11.12b, has a better appearance (aesthetic consideration) and shows more information (i.e., photo and index). It also complies well with Scapin and Bastien's usability criteria [START_REF] Dominique | Ergonomic criteria for evaluating the ergonomic quality of interactive systems[END_REF]. Notably, the adaptability criterion is well implemented here: the application can adapt to different situations of use (e.g., large and small screen displays). However, it seems visually overloaded; reducing the number of persons displayed in the list could be an option.
Another important point is that users can simply play with the UI (e.g., browse through colleagues' photos) and be distracted from the prescribed task.
• The third variant, shown in Figure 11.12c, is very close to the previous one except for the master/detail pattern. It also complies with most of the usability criteria. The list of persons is more compact than in the previous variant (Figure 11.12b), giving a better impression. The information is directly accessible without the need to navigate, which is a plus for large screens but not necessarily the best solution in general. In a configuration with a Master Detail interface the layout is important, and in this variant the vertical grid fits this layout perfectly.
The usability expert concluded that the relevant variants that emerged from applying the HSPLRank approach satisfy most of the usability criteria.
Comparing HSPLRank results with usability expert choices
We also asked the usability expert to create configurations of the Contact List using strictly expert judgement. The HSPLRank approach is intended to explore design alternatives with the users as the stakeholders who make the final decisions about the most suitable alternatives. Nevertheless, we wanted to compare the alternatives that a usability expert would determine against the relevant variants obtained with our approach.
Regarding the different configuration possibilities, the usability expert suggested, in his opinion, the two best possible configurations with respect to global user experience (i.e., including usability but also aesthetic and functional aspects). The configurations of these two variants are the following: 1) TileList, Indexed, Filter, Photo, Name and 2) ListView, Indexed, Filter, Name. The first variant is a complete UI that puts aesthetic aspects first, while the second is simpler and efficient in its design. For the contact information detail layout (Details Grid), two options are possible: the Vertical Grid, which gives the best organization of information for large screens, and the Photo Left grid, which gives the best organization for smaller screens.
As we can observe, the configurations elicited by the expert are very similar to the relevant ones shown in Figure 11.12. We calculated the Hamming distance between each of the best configurations obtained by HSPLRank and those provided by the usability expert. The minimal distance between the expert variants and the two best HSPLRank configurations is 1. For the third one, the distance is 7, which can be explained by the inclusion of the MasterDetail feature in this configuration, which alone increases the Hamming distance by 3. In conclusion, the results of HSPLRank do not diverge from the usability expert's judgement, but in our case the expertise emerges from the assessments of a set of potential end users.
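This comparison can be sketched as follows, assuming each configuration is encoded as the set of its selected feature names (equivalent to a Hamming distance over binary feature vectors); the concrete feature sets are illustrative and do not reproduce the exact configurations of the study.

```python
def hamming_distance(conf_a, conf_b):
    """Hamming distance between configurations encoded as sets of selected
    features: the number of features present in one configuration but not the other."""
    return len(conf_a ^ conf_b)  # symmetric difference

# Expert-chosen configurations (illustrative feature sets).
expert = [{"TileList", "Indexed", "Filter", "Photo", "Name"},
          {"ListView", "Indexed", "Filter", "Name"}]

# A hypothetical top-ranked HSPLRank configuration.
ranked = {"TileList", "Indexed", "Filter", "Photo", "Name", "VerticalGrid"}

# Minimal distance between the ranked variant and the expert choices (1 here).
print(min(hamming_distance(ranked, e) for e in expert))
```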
Discussion
This section discusses general aspects of our experiences with HSPLRank in the paintings and contact list case studies.
Quality of the soft constraints
In the paintings case study, it was decided that no soft constraints would be introduced. In UI design, however, trying to formalize usability concerns as variability constraints is a difficult manual task.
The variants reduction phase performed by usability experts seems to need further support.
In our current approach this is a manual task, which may not scale to very large FMs. The usability expert defined the soft constraints presented in Section 11.3.2 but, according to an a posteriori analysis of the results, the inclusion of other soft constraints could have prevented features or feature combinations that were highly undesired. For example, a soft constraint setting the feature Name as mandatory: when the feature Name was not present, 83% of the users were not able to complete the task because they needed either to know the person already or to guess the person's name from the photo. As a result, they could not finish the task, which had a strongly negative impact on their assessment.
The inclusion of the soft constraint soft(Name) would have reduced the viable configuration space from 715 to 585 possible configurations, which is an 18% reduction. In addition, the initial population of the IGA would have excluded these configurations. Other relevant factors that negatively affected the user scores were the absence of Filter and layout issues. If both soft(Name) and soft(Filter) had been added in the variants reduction phase, the viable configuration space would have been reduced to 390 configurations, which is a 45% reduction.
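The effect of such soft constraints on the viable space can be sketched as a simple filter over the enumerated configurations; the set-based encoding and the helper name are assumptions for illustration.

```python
def apply_soft_constraints(viable_space, required_features):
    """Keep only configurations that include all soft-required features.
    A soft constraint such as soft(Name) is modelled here as a required feature."""
    return [conf for conf in viable_space if required_features <= conf]

# Hypothetical usage, with viable_space as the list of 715 configurations,
# each encoded as a set of selected feature names:
# reduced = apply_soft_constraints(viable_space, {"Name", "Filter"})
# len(reduced)  # 390 according to the numbers reported above
```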
Non-variability factors during assessment
Figure 11.17 presents a categorization of the user comments on usability issues found in the contact list. The reported usability issues, at both LIST and SnT, are those that conditioned the scores in the variant assessment process. We observe that 21% of the comments in the LIST experiment are related to non-variability factors; these segments are shown in black in Figure 11.17. We use the term non-variability factors for elements of the variants that cannot change because they are not included in the FM and, therefore, cannot be presented differently in the product implementations. Given that the users assess the variants as a whole, they are not aware of which elements can change. However, these elements can impact the results of the evolution, as users can negatively assess a variant because of elements that have no possibility to change in the next generations.
Among this 21% of non-variability-related issues, one example is the size of the pictures in the list: users considered that the pictures should be smaller. Another example is that they would have liked auto-focus on the search field when the Contact List UI is opened. Some of these reported issues can be fixed independently of variability, like the auto-focus for the Filter feature. Others could be registered and added as variability for further evaluation (e.g., a big or small picture size). The usability expert also suggested some enhancements that were not present in the planned variability (e.g., the order of the telephone and mail fields in the detail view).
At SnT, all the reported usability issues are related to variability. In the current evaluation of HSPLRank, we cannot state whether non-variability factors had an impact. How to manage the reported issues is out of the scope of HSPLRank; however, we suggest that future work applying IGAs to Human-centered SPLs considers and studies the impact of non-variability-related issues.
The importance of feature combinations
Thanks to the actual user feedback, the IGA favoured some features that appeared frequently in products with high scores. Figure 11.18 illustrates the impact of the IGA on the middle parts of the paintings case study. For example, we obtained fewer user assessments for configurations containing M5 because the evolution found it less suitable, while we have more data set instances containing the other middle parts. However, as we can see in the figure, the score distributions for these Middle parts are quite similar. This also occurs for the Sky and Ground features, as well as for the Flip and ExtraSize features of each part, which shows that no feature by itself has a great impact on the voting. We applied data mining attribute selection algorithms in an attempt to discriminate between relevant and irrelevant features. Concretely, we evaluated the worth of each feature by computing the chi-squared statistic with respect to the class (i.e., a score in the range of 1 to 5). The ranked features showed that the alternatives for the Ground, Sky and Middle parts were more relevant than MiddleFlip, GroundExtraSize, GroundFlip, SkyFlip, MiddleExtraSize or SkyExtraSize. After applying the Apriori association rule discovery algorithm to find relations between features and scores, we found no rules even with very low confidence thresholds. We then used a classification model to explore the issue, attempting to predict the score of a combination of features. We relied on decision trees, which incorrectly classified 80% of the instances in the test sets. Linear regression implementations also performed poorly.
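A minimal sketch of this kind of analysis, assuming the assessments are exported as a table with one binary column per feature and the 1-5 score as the class; the scikit-learn calls and the file name are illustrative of the analysis described above, not necessarily the exact tooling used in the study.

```python
import pandas as pd
from sklearn.feature_selection import chi2
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical export: one row per assessed variant, one binary column per
# feature (1 = feature selected) and a 'score' column with the 1-5 class.
df = pd.read_csv("assessments.csv")
X, y = df.drop(columns=["score"]), df["score"]

# Rank features by the chi-squared statistic with respect to the score class.
chi2_scores, _ = chi2(X, y)
ranking = sorted(zip(X.columns, chi2_scores), key=lambda item: -item[1])

# Attempt to predict the score of a feature combination with a decision tree.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print(ranking[:5])                 # most discriminative features
print(tree.score(X_test, y_test))  # low accuracy would mirror the reported results
```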
The findings of these analyses suggest that the collective understanding of relevant variants in this context was truly built upon highly subjective opinions and that only combinations of features were relevant in the user assessments. In contrast, in the contact list case study, as discussed in Section 11.4 regarding the importance of appropriate soft constraints, there were some features, like Name or Filter, that had a great impact on the user assessments.
Threats to validity
In this section we categorize the threats to the validity of HSPLRank.
Dealing with subjective assessments
The inherent subjectivity of ratings is an important threat. As presented by Martinez et al. [START_REF] Perez Martínez | Don't classify ratings of affect; rank them! T[END_REF], one threat is the non-linearity of the rating scale. HSPLRank treats the ratings as numbers in order to summarize the variant assessments when calculating the score means. In rating scales, for example from one to five, depending on the person, the distance from one to two may not be the same as the distance from two to three; this is related to personal and cultural factors. Converting the numerical values to nominal values is an alternative for the ranking creation phase that is worth exploring and comparing [START_REF] Perez Martínez | Don't classify ratings of affect; rank them! T[END_REF].
Also, using the contact list case study as an example, one user may argue that photos distract him from the task of finding someone, while another user may find them useful or appealing; such opinions are contradictory. Even the same person assessing the same UI at two different moments can report different scores. Another problem is the already mentioned user fatigue. These threats could be mitigated by increasing the number of users in our tests and by better distributing the testing workload among users. Embedding the IGA in an online crowdsourcing platform can be envisioned.
HSPLRank operators
During the variant assessment and ranking creation phases of HSPLRank there are automatic operators that condition the results of the approach. We should investigate different similarity distance metrics between two configurations. For example, for the paintings case study, algorithms based on image difference metrics or on distance matrices for each of the features could be explored. In our opinion, general similarity distance metrics like the Hamming distance can be used, but it is worth exploring domain-specific ones. We should also investigate different genetic algorithm operators for the same case studies to try to find the optimal settings, and compare our results to approaches not relying on genetic algorithms. Finally, other methods for empirically selecting the similarity radius and other approaches for weighting the scores should be investigated.
The principles and techniques of the presented approach are repeatable for any case study dealing with user feedback on SPL variants. However, the approach will not scale in the ranking creation phase for FMs that can produce large amounts of possible configurations. To solve this, instead of calculating the weighted mean score for all the possible configurations, we will investigate non-Euclidean centroid-based approaches or filtering mechanisms to restrict the calculation to a feasible number of configurations.
Generalization of the findings
Another threat to validity is the need for more experiments that can further support our claims and confirm our hypotheses. Notably, we still need to understand the impact of non-variable elements. Variability in UI design can be manifold, and in our case study we only tackled some facets of this variability (i.e., layout, widgets) that are related to design alternatives. However, variability can be defined for different interaction devices, interaction contexts, graphical frameworks, etc. We consider that our approach can be used for any kind of UI variability, but more experiments are needed in this direction. Also, our case studies may be considered medium-size experiments in terms of the size of the configuration space.
Conclusion
HSPLRank can be applied in SPLE application domains where HCI components play an important role in the final products. In an SPL-based computer-generated art scenario, artists can use HSPLRank to understand people's perception of their work, to inspire and refine their style, and eventually to support the decision-making process of selecting the variants with the best chances of collective acceptance. In a UI design scenario, design alternatives are assessed in order to select the most suitable one. We applied and validated the approach on two Human-centered SPLs: an SPL-based computer-generated art system dealing with landscape paintings, and a contact list SPL where we aim to find the best UI design alternative for two different institutions.
During the case studies we have shown that 1) the interactive genetic algorithm performs better than random selection of the variants to be assessed, 2) the estimations and predictions provided by HSPLRank are acceptable, as evaluated with a 10-fold cross-validation and by asking individuals to assess variants that were not in the data set, and 3) domain experts confirm that the variants exposed by HSPLRank are actually relevant.
The results of the presented case studies are promising and, as further work, enhanced versions of the approach could be applied to other case studies. There are also open research directions to extend HSPLRank. For example, it would be interesting to learn user preferences during the execution of an IGA: when the prediction of HSPLRank can estimate a user assessment with high confidence, we can skip that assessment and move to the next artefact variant. There are already some works in this direction: Liapis et al. [START_REF] Liapis | Adaptive game level creation through rank-based interactive evolution[END_REF] focused on creating computational models of user preferences, and Hornby and Bongard [START_REF] Hornby | Accelerating human-computer collaborative search through learning comparative and predictive user models[END_REF] and Li [START_REF] Li | Adaptive learning evaluation model for evolutionary art[END_REF] proposed to learn to predict user aesthetic preferences. Such techniques would reduce the need for user involvement, or at least speed up the evolutionary process.
Part VI
Conclusions
In Part V, we tackled an important topic in SPLE not directly related to SPL adoption. We presented HSPLRank, an approach to predict and estimate end-user assessments of artefact variants in order to rank the configuration space and help in selecting the most relevant artefacts. We conducted experiments with an SPL-based computer-generated art system dealing with high subjectivity in assessments, as well as with an SPL of user interface design alternatives.
Open research directions
In the conclusions of each chapter we already suggested research directions which can be considered short-term goals. In this section, we present the long-term goal which considers the synergies among the different parts of this thesis. The challenge is to advance towards:
Extractive SPL adoption in the wild assisted by visualisation and end user assessment
In Chapter 7, we presented the identification of families of artefacts in the wild. These families are interesting candidates for extractive SPL adoption because of their shared implementation elements. However, with more advanced techniques, we should be able to consider not only similarly implemented artefacts, but also artefacts that belong to the same domain while having been implemented independently of each other. We showed in Section 4.5 that semantic comparisons have limitations. The analysis and unification of artefacts in overlapping domains has been studied, e.g., in the field of domain-specific languages [DCB + 15].
Even if semantic comparisons find interesting software parts, there are still many technical difficulties in their integration (e.g., different interfaces or architectures). To cope with these challenging scenarios, we can bring to extractive SPL adoption the advances in automated software transplantation [BHJ + 15]. The objective would be to transplant feature implementations from a donor into the SPL by extracting reusable assets. This might be tackled through a combination of extractive and reactive SPL adoption [AMC + 07, Kru01]. In this context, finding and analysing previous integrations of a similar feature in other products could help during the integration of this feature into the SPL. Such techniques were already explored for advising on library migration in single systems [START_REF] Teyton | A study of library migrations in java[END_REF]. Mining version control systems, which has been proposed to identify and locate features [START_REF] Li | Semantic slicing of software version histories (T)[END_REF], can also help to analyse the changes needed in previous integrations of features. This envisioned process is complex; as a consequence, providing dedicated interactive visualisation paradigms for domain experts is a challenging task.
In industrial scenarios, when merging companies of the same domain, each one may have its own software product, which gives rise to long integration projects. In some cases, as in the reported case of eight companies in the home rental market [START_REF] Krueger | Homeaway's transition to software product line practice: Engineering and business results in 60 days[END_REF], they may be willing to extract an SPL instead of building a "one-size-fits-all" solution. Apart from this scenario, we can also leverage artefact variants from public software repositories to extract an SPL targeting a specific domain. We consider that mining repositories to start extractive SPL adoption in a given domain creates opportunities to provide highly customizable products.
As an illustrative example, GitHub or mobile app markets host hundreds of task list ("to-do list") products showing great diversity in terms of features. Apart from adding tasks, some offer features to mark tasks as repeatable (e.g., weekly), add a map location, add a date, assign colors, associate alarms, or create tasks via speech-to-text, to name a few. Extracting a to-do list SPL would enable the creation of tailored solutions. Finally, it could also happen that, having an operative SPL, we want to analyse the domain in the wild to discover and integrate features that might be of interest to us. Remarkably, this opens new research directions in terms of SPL scoping [START_REF] Schmid | A comprehensive product line scoping approach and its validation[END_REF].
In this vision, user assessments are important both at the beginning of the extractive SPL adoption and during the process. Firstly, we need to gather information about the products that will be used for extracting the features. Concretely, user assessments of these products will give us confidence about the quality of the mined artefacts [KGB + 14] and will help in prioritizing feature donors; in this direction, we can analyse user reviews or project development activity. Secondly, automated software transplantation relies strongly on the existence of automatic tests, which are sometimes difficult to obtain. Transplantation was evaluated from one product to another but, in SPLE, feature interactions must be taken into account, representing new challenges for SPL testing. Because of this complexity, during the extractive SPL adoption process, user assessments can be exploited through crowdsourcing campaigns, with interactive evolutionary approaches in the background to guide the creation of valid products. Assuming that valid products emerge from the evolutionary process, the user assessments would then focus on usability issues. Therefore, in our opinion, the present dissertation has dealt with all the pieces towards the extractive SPL adoption of the future, i.e., harmonization, experimentation, visualisation and estimation.
• Jabier Martinez, Tewfik Ziadi, Tegawendé F. Bissyandé, Jacques Klein, and Yves Le Traon. Automating the extraction of model-based software product lines from model variants (T). In 30th IEEE/ACM International Conference on Automated Software Engineering, ASE 2015, Lincoln, NE, USA, November 9-13, 2015, pages 396-406. IEEE Computer Society, 2015
• Jabier Martinez, Tewfik Ziadi, Jacques Klein, and Yves Le Traon. Identifying and visualising commonality and variability in model variants. In Modelling Foundations and Applications - 10th European Conference, ECMFA 2014, Held as Part of STAF 2014, York, UK, July 21-25, 2014. Proceedings, pages 117-131, 2014

Contents
5.1 Introduction
5.2 Extraction of Model-based Product Lines
5.3 Designing the Model Adapter
  5.3.1 Elements identification: A meta-model independent approach
  5.3.2 Structural dependencies identification
  5.3.3 Similarity metric definition: Relying on extensible techniques
  5.3.4 Reusable assets construction: Generating a CVL model
5.4 Experimental Assessment
  5.4.1 BUT4Reuse settings for the case studies
  5.4.2 ArgoUML case study
  5.4.3 In-Flight Entertainment Systems case study
  5.4.4 Discussions about MoVa2PL
5.5 Conclusion
Table 5.1: Number of Atomic Model Elements of ArgoUML UML model variants and number of dependencies between them.
Variant AMEs Class Attr Ref Depend
ActivityDisabled 157,896 51,235 77,707 28,954 180,623
CollabDisabled 158,535 51,418 78,046 29,071 181,338
DeployDisabled 157,314 51,033 77,450 28,831 179,949
Original 159,771 51,820 78,667 29,284 182,738
SequenceDisabled 155,231 50,349 76,417 28,465 177,646
StateDisabled 156,193 50,699 76,805 28,689 178,785
UsecaseDisabled 157,504 51,056 77,547 28,901 180,184
Table 5.2: Number of Atomic Model Elements of the blocks identified in the ArgoUML case study.
Block AMEs Class Attr Ref
Block 0 -Core 143,894 46,724 70,696 26,474
Block 1 -UseCase 2,260 760 1,117 383
Block 2 -Sequence 4,509 1,461 2,233 815
Block 3 -Collaboration 1,204 392 604 208
Block 4 -State 3,499 1,095 1,818 586
Block 5 -Deployment 2,457 787 1,217 453
Block 6 -Activity 1,796 559 916 321
Block 7 1 0 0 1
Block 8 1 0 0 1
. . . . . . . . . . . . . . .
Block 40 4 2 2 0
We manually analysed the blocks in order to identify the features. The meaning of the blocks from zero to six was easily recognisable by using the visualisation presented in Chapter 8. Block 0 corresponds to the Core of ArgoUML, Block 1 to UseCase diagrams edition, Block 2 to Sequence, Block 3 to Collaboration, Block 4 to State, Block 5 to Deployment and Block 6 to Activity diagrams. These blocks were the bigger ones in terms of number of AMEs, given that the rest of the blocks only contain few of them, as shown in the table (e.g., Block 7 only contains one reference AME).
Table 5.3: Number of Atomic Model Elements of IFE system model variants and number of dependencies between them.
Variant AMEs Class Attr Ref Depend
Original 16,624 5,345 4,081 7,198 24,205
LowCost1 16,551 5,321 4,066 7,164 24,098
LowCost2 16,594 5,335 4,075 7,184 24,161
Table 5.4: Number of Atomic Model Elements of the features identified in the In-Flight Entertainment model variants.
Block AMEs Class Attr Ref
Block0 -Core 16,521 5,311 4,060 7,150
Block1 -Wi-Fi 73 24 15 34
Block2 -ExteriorVideo 30 10 6 14
Then, MoVa2PL automatically discovers the structural constraints among the different features. Thus, we discover that both Wi-Fi and ExteriorVideo require the Core feature. With the gathered information, the CVL models are automatically constructed, obtaining an MSPL for the IFE models. From this MSPL, we are able to generate a new variant that contains neither Wi-Fi nor ExteriorVideo.
The Eclipse community, with the support of the Eclipse Foundation, provides integrated development environments (IDEs) targeting different developer profiles. The IDEs cover the development needs of Java, C/C++, JavaEE, Scout, Domain-Specific Languages, Modeling, Rich Client Platforms, Remote Applications Platforms, Testing, Reporting, Parallel Applications or Mobile Applications. Following Eclipse terminology, each of the customized Eclipse IDEs is called an Eclipse package.
As the Eclipse project evolves over time, new packages appear and some other ones disappear depending on the interest and needs of the community. For instance, in 2012, one package for Automotive Software developers appeared and, recently, in 2016, another package appeared for Android mobile applications development. The Eclipse Packaging Project (EPP) is technically responsible for creating entry level downloads based on defined user profiles. Continuing with Eclipse terminology, a simultaneous release (release hereafter) is a set of packages which are public under the supervision of the Eclipse Foundation. Every year, there is one main release, in June, which is followed by two service releases for maintenance purposes: SR1 and SR2, usually around September and February. For each release, the platform version changes and traditionally celestial bodies are used to name the releases, for example Luna for version 4.4 and Mars for version 4.5.
Table 6.1: Eclipse feature example. The Eclipse CVS Client feature and its associated plugins.
Feature id: org.eclipse.cvs; name: Eclipse CVS Client; description: Eclipse CVS Client (binary runtime and user documentation).
Plugin id Plugin name
org.eclipse.cvs
Table 6.2: Eclipse releases and their number of packages, features and plugins.
Year Release Packages Features Plugins
2008 Europa Winter 4 91 484
2009 Ganymede SR2 7 291 1,290
2010 Galileo SR2 10 341 1,658
2011 Helios SR2 12 320 1,508
2012 Indigo SR2 12 347 1,725
2013 Juno SR2 13 406 2,008
2014 Kepler SR2 12 437 2,043
2015 Luna SR2 13 533 2,377
Eclipse packages have already been considered in the context of SPL adoption [START_REF] Grünbacher | Model-based customization and deployment of eclipse-based tools: Industrial experiences[END_REF][START_REF] Shatnawi | Recovering architectural variability of a family of product variants[END_REF]. For instance, experiences in an industrial case study were reported by Grünbacher et al., where they performed manual feature location in Eclipse packages to extract an SPL involving more than 20 package customizations per year [START_REF] Grünbacher | Model-based customization and deployment of eclipse-based tools: Industrial experiences[END_REF].
Table 6.3: Precision and recall of the different feature location techniques.
SFS SFS+ST SFS+TF SFS+TFIDF
Release Precision Recall Precision Recall Precision Recall Precision Recall
Europa Winter 6.51 99.33 11.11 85.71 12.43 58.69 13.07 53.72
Ganymede SR2 5.13 97.33 10.36 87.72 11.65 64.31 12.80 52.70
Galileo SR2 7.13 93.39 10.92 82.01 11.82 60.50 12.45 53.51
Helios SR2 9.70 91.63 16.04 80.98 25.97 63.70 29.46 58.39
Indigo SR2 9.58 92.80 15.72 82.63 19.79 59.72 22.86 57.57
Juno SR2 10.83 91.41 19.08 81.75 25.97 61.92 24.89 60.82
Kepler SR2 9.53 91.14 16.51 83.82 26.38 62.66 26.86 57.15
Luna SR2 7.72 89.82 13.87 82.72 22.72 56.67 23.73 51.31
Mean 8.26 93.35 14.20 83.41 19.59 61.02 20.76 55.64
ii https://github.com/but4reuse/but4reuse/wiki/Benchmarks
Table 6.4: Time performance in milliseconds for feature location.
Preparation Concrete techniques
Release Adapt FCA SFS SFS+ST SFS+TF SFS+TFIDF
Europa Winter 2,397 75 6 2,581 2,587 4,363
Ganymede SR2 7,568 741 56 11,861 11,657 23,253
Galileo SR2 10,832 1,328 107 17,990 17,726 35,236
Helios SR2 11,844 1,258 86 5,654 5,673 12,742
Indigo SR2 12,942 1,684 100 8,782 8,397 16,753
Juno SR2 16,775 2,757 197 7,365 7,496 14,002
Kepler SR2 16,786 2,793 173 8,586 8,776 16,073
Luna SR2 17,841 3,908 233 15,238 15,363 33,518
Mean 12,123 1,818 120 9,757 9,709 19,493
Table 6.5: Precision, recall and time measures in milliseconds of the FCA+SFS feature location technique in sets of randomly generated Eclipse packages using the percentage-based random strategy.
Percentage-based FCA+SFS Time
random using 40% Precision Recall FCA SFS
10 variants 33.40 96.55 122 84
20 variants 47.91 96.02 415 320
30 variants 55.62 95.41 502 630
40 variants 58.60 95.41 1,268 905
50 variants 61.01 93.10 2,168 1,105
60 variants 62.57 90.73 2,455 1,382
70 variants 64.78 90.63 2,636 1,717
80 variants 65.40 90.02 4,137 4,049
90 variants 66.02 89.57 6,957 7,774
100 variants 66.02 89.57 7,515 7,251
Table 6.6: Precision, recall and time measures in milliseconds of the FCA+SFS feature location technique in sets of randomly generated Eclipse packages using the random strategy.
Random FCA+SFS Time
Precision Recall FCA SFS
10 variants 72.83 86.33 328 190
20 variants 90.49 84.97 400 260
30 variants 91.81 84.97 451 394
40 variants 93.13 84.97 802 603
50 variants 93.13 84.97 1,122 905
60 variants 93.13 84.97 1,485 866
70 variants 93.13 84.97 1,878 2,961
80 variants 93.13 84.97 3,692 1,637
90 variants 93.80 84.97 4,539 1,567
100 variants 93.80 84.97 7,967 2,177
Table 7.1: Top 10 families in number of variants.
Package Prefix Variants
com.andromo 12,702
com.conduit 11,766
sk.jfox 9,055
com.reverbnation 3,932
air.com 2,732
com.jb 2,203
com.appsbar 2,143
com.skyd 1,922
com.appmakr 1,787
com.gau 1,764
To this end, we leverage code-based comparison (step 3) in order to mitigate those potential false positives. As our pairwise code comparison between apps is not scalable to families with a large number of variants, to evaluate the soundness of this step we randomly selected 100 families of fewer than 10 variants. Figure 7.6 illustrates the distribution of similarities between pairs of variants within the families. The median value is around 77%, showing that variants of most families actually share a lot of common code. This fact makes them good samples for exploring and assessing feature identification techniques.
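The pairwise code comparison can be approximated, for illustration purposes, by a simple Jaccard similarity over the sets of code fragments (e.g., file or method hashes) of two app variants; the actual comparison used in this work may differ, and the function and sample data below are only a hedged sketch.

```python
def jaccard_similarity(fragments_a, fragments_b):
    """Similarity between two apps given as sets of code fragment hashes."""
    if not fragments_a and not fragments_b:
        return 1.0
    return len(fragments_a & fragments_b) / len(fragments_a | fragments_b)

# Hypothetical usage: variants sharing most of their code score close to 1.
app1 = {"h1", "h2", "h3", "h4"}
app2 = {"h1", "h2", "h3", "h5"}
print(jaccard_similarity(app1, app2))  # 0.6
```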
Table 8.1: Characteristics of the case studies for the VariClouds approach, separated by artefact types (source code, models and component-based systems). For each case study, the reference works in the literature and an enumeration of the variants are included.
Source code case studies Model case studies
Classes Methods LOC Classes Attrs Refs
CS 1: ArgoUML [CVF11a, ASH + 13c, ZFdSZ12] CS 5: In-Flight Entertainment Systems [MZB + 15a], Section 5.4.3: Capella models [Pol15]
Original 1,773 14,904 148,715 Original 5,345 4,081 7,198
No cognitive support 1,553 13,567 132,397 Low Cost 1 5,321 4,066 7,164
No activity diagram 1,756 14,710 146,428 Low Cost 2 5,335 4,075 7,184
No state diagram 1,738 14,508 144,799
No collab. diagram 1,754 14,783 147,137 CS 6: Vending Machine Statecharts
No sequence diagram 1,720 14,483 143,337 [IBK11, MZKT14], Secion 8.3: SCT models [Ite14]
No use case diagram No deploy. diagram No logging All disabled 1,736 1,740 1,773 1,357 14,684 146,002 14,667 145,569 14,901 146,206 11,934 110,933 Variant 1 Variant 2 Variant 3 Variant 4 18 22 18 23 17 21 17 22 15 19 15 20
CS 2: Notepad [TKB + 14, ZHP + 14] Variant 5 Variant 6 18 21 17 20 15 18
Notepad 1 7 44 600 CS 7: Banking Systems
Notepad 2 7 47 652 [MZB + 15a, MZKT14], Section 5.2: UML models
Notepad 3 Notepad 4 Notepad 5 Notepad 6 7 7 9 9 46 49 50 53 643 695 689 740 Bank 1 Bank 2 Bank 3 62 63 47 60 62 47 28 29 19
Notepad 7 9 52 730
Notepad 8 9 55 782
CS 3: Draw application Component-based case study
[FLLE14, LLE16] Plugins
P1 P2 3 3 25 28 134 174 Eclipse Kepler packages [MZB + 15b], Section 4.4.1
P3 P4 P5 P6 P7 P8 P9 P10 P11 P12 4 3 3 3 3 3 3 4 4 4 36 26 26 27 29 29 30 37 37 38 219 170 147 183 210 187 223 257 233 271 Standard Java Java EE C/C++ Scout Java and DSL Modeling Tools RCP and RAP Testing Java and Reporting
CS 4: Mobile media [YM05, FCS + 08] Parallel Applications Automotive Software
MobileMedia R1 15 92 831
MobileMedia R2 24 124 1,159
MobileMedia R3 25 140 1,314
MobileMedia R4 25 143 1,363
MobileMedia R5 30 160 1,555
MobileMedia R6 37 200 2,051
MobileMedia R7 46 239 2,523
MobileMedia R8 50 271 3,016
Table 8.2: Evaluation of the quality of the word clouds.
MRR2 MRR MRank Rank of each feature, ∅ for not found
CS 1 0.71 0.63 1.57 Core (∅), Logging (1), Activity diagram (1), State diagram (1),
Collaboration diagram (1), Sequence diagram (4),
Use case diagram (1), Deployment diagram (2),
Cognitive support (∅)
CS 2 0.83 0.62 1.33 Base (∅), Cut-Copy-Paste (1), Find (1), Undo-Redo (2)
CS 3 1.00 0.80 1.00 Base (∅), Line(1), Rect (1), Color (1), Wipe (1)
CS 4 0.63 0.57 3.33 Core (∅), ExceptionHandling (1), LabelMedia (2), Sorting (6),
Favourites (1), Photo (2), Music (2), Video (1),
Sms (1), CopyMedia (14)
CS 5 1.00 0.66 1.00 Core (∅), Wi-Fi (1), ExteriorVideo (1)
CS 6 0.76 0.69 1.83 Main (4), Soda (1), Coffee (1), Tea (1),
Cash payment (4), Credit card payment (3), Ring tone alert (1)
CS 7 0.62 0.50 1.33 BankCore (∅), CurrencyConverter (2), WithdrawWithLimit (1),
Consortium (1), WithdrawWithoutLimit (∅)
Mean: 0.79 0.63 1.62
Other examples can be found in the literature of extractive SPL adoption. For instance, Al-Msie'deen et al. [ASH + 13b] refer to a Java class named ImagePath in the package Drawing.Shapes.Image using the textual representation Class(ImagePath_Drawing.Shapes.Image). The same authors [ASH + 13c], referring to a Java method named DatabaseState in the Database class, used Method(DatabaseState()_Database). Ziadi et al. use a textual notation where each element is represented as a construction primitive [ZHP+ 14]. For example, CreateTerminal(deposit, method, Account) refers to adding, in the AST, a method named deposit in a class named Account.
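Building on such textual representations, a word-cloud-based naming step boils down to tokenizing the element names of a block and weighting the resulting terms. The sketch below is a simplified, hypothetical illustration (plain camel-case/underscore tokenization and raw term frequency); VariClouds itself relies on the OpenCloud library and may weight terms differently.

```python
import re
from collections import Counter

def tokenize(element_repr):
    """Split a textual element representation such as
    'Class(ImagePath_Drawing.Shapes.Image)' into lowercase words."""
    words = []
    for part in re.split(r"[^A-Za-z]+", element_repr):
        # split camelCase: ImagePath -> Image, Path
        words += re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", part)
    return [w.lower() for w in words if w]

def block_word_cloud(block_elements, top=5):
    """Return the most frequent words of a block as naming suggestions."""
    counts = Counter(w for e in block_elements for w in tokenize(e))
    return counts.most_common(top)

block = ["Class(ImagePath_Drawing.Shapes.Image)",
         "Method(DatabaseState()_Database)",
         "CreateTerminal(deposit, method, Account)"]
print(block_word_cloud(block))
```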
• Jabier Martinez, Tewfik Ziadi, Raúl Mazo, Tegawendé F. Bissyandé, Jacques Klein, and Yves Le Traon. Feature relations graphs: A visualisation paradigm for feature constraints in software product lines. In Second IEEE Working Conference on Software Visualization, VISSOFT 2014, Victoria, BC, Canada, September 29-30, 2014, pages 50-59, 2014
Table 9.3: Number of the identified constraints using FRoGs.
FRoG central feature Hard Inferred Soft
PBSManual 2 0 2
PBSAutomatic 2 1 4
PBSAssistance 2 1 3
ClutchPedal 1 1 4
GBAutomatic 2 0 4
GBManual 1 0 4
AuxiliaryBrakeRelease 0 0 4
PullerCable 5 2 0
ElectricActuator 5 2 0
Calipers 4 1 0
Traditional 4 2 0
TemporaryMonitor 1 2 2
PermanentMonitor 1 1 1
DCMotor 1 1 3
PairEActuator 1 1 3
TiltSensor 0 0 5
EffortSensor 0 0 4
ClutchPosition 0 0 4
DoorPosition 0 0 4
VSpeed 0 0 4
Total 32 15 55
Average 1.6 0.75 2.75
Standard Deviation 1.67 0.79 1.68
Algorithm 2 Interactive genetic algorithm for data set creation in the Paintings case study.
input: population = 20 members. Genetic representation of a member = 9 positions: SkyType, SkyFlip, SkyExtraSize, MiddlePart, MiddleFlip, MiddleExtraSize, GroundType, GroundFlip, GroundExtraSize; Type values from 0 to 9, Flip and ExtraSize values from 0 to 1. Example: 801510210
output: data set of user assessments
1: population ← initializePopulation()
2: while ManualStopNotPerformed do
3:   for member_i ∈ population do
4:     member_i.fitness ← getUserFeedback(member_i)
5:     registerDataInstance(member_i)
6:   end for
7:   parents ← parentSelection(population)
8:   offspring ← crossover(parents)
9:   offspring ← mutate(offspring)
10:  population ← offspring
11: end while
Algorithm 3 Distance function for formalizing the similarity between two painting configurations.
input: Two configurations of paintings Ci and Cj
output: The distance di,j between Ci and Cj
1: di,j ← 0
2: for P ∈ {S, M, G} do
3:   if Pi ≠ Pj then
4:     di,j ← di,j + 1
5:   else if P ≠ null ∧ isFlipped(Pi) ≠ isFlipped(Pj) then
6:     di,j ← di,j + 0.2
7:   end if
8:   if P ≠ null ∧ isEnlarged(Pi) ≠ isEnlarged(Pj) then
9:     di,j ← di,j + 0.2
10:  end if
11: end for
12: return di,j
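For readability, here is a direct Python transcription of Algorithm 3, under the assumption (implied by the fact that identical configurations must be at distance zero) that the comparisons are inequality tests; the tuple encoding of a configuration and the example values are hypothetical, and S, M, G stand for the sky, middle and ground parts.

```python
def distance(ci, cj):
    """Distance between two painting configurations.
    A configuration maps each part ('S', 'M', 'G') to
    (part_id_or_None, flipped, enlarged)."""
    d = 0.0
    for part in ("S", "M", "G"):
        pid_i, flip_i, big_i = ci[part]
        pid_j, flip_j, big_j = cj[part]
        if pid_i != pid_j:
            d += 1
        elif pid_i is not None and flip_i != flip_j:
            d += 0.2
        if pid_i is not None and pid_j is not None and big_i != big_j:
            d += 0.2
    return d

# Hypothetical configurations: same sky flipped differently, different ground
c1 = {"S": (8, False, True), "M": (5, True, False), "G": (2, True, False)}
c2 = {"S": (8, True, True), "M": (5, True, False), "G": (1, True, False)}
print(distance(c1, c2))  # 0.2 (sky flip) + 1 (different ground) = 1.2
```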
Table 11.1: Controlled assessment results.
Resubstitution 10-Folds average
Mean absolute error 1.0523 1.0913
Table 11.2: Global Score Mean evaluation.
GA Random
LIST 4.65 4.20
SnT 4.40 3.95
i http://diversify-project.eu/data/
i OpenCloud library: http://sourceforge.net/projects/opencloud
i Video explaining the art installation: https://vimeo.com/139153572
Acknowledgements
Thanks Julia, for your encouragement and love. Thanks Tewfik. Girum. Thanks to all. We spent uncountable hours and coffees together.
Thanks to the Luxembourg National Research Fund for the PhD grant. Thanks to the administrative staff of both universities. Thanks to my collaborators at the Luxembourg Institute of Science and Technology. Thanks to Gabriele. Thanks to the European Software Institute (Tecnalia) and Thales, places where I first got inspired to follow this research line.
List of abbreviations
AME: Atomic Model Element
Java Source Code Adapter [ZHP + 14]: A folder containing source code in Java. Elements: FSTNonTerminalNode and FSTTerminalNode, obtained with the FeatureHouse [START_REF] Apel | FEATUREHOUSE: Languageindependent, automated software composition[END_REF] Java source code visitor. Similarity: Feature Structure Tree (FST) [START_REF] Apel | FEATUREHOUSE: Languageindependent, automated software composition[END_REF] positions and names comparison. Dependencies: FST node dependencies and source code dependencies detected with Puck [START_REF] Girault | Puck: an architecture refatoring tool[END_REF]. Construct: FeatureHouse extraction creating code fragments.
C Source Code Adapter [ZHP + 14]: Same as the Java Source Code Adapter but for the C language.
EMF Models Adapter [MZB + 15a]: MOF compliant model [START_REF] Omg | Meta Object Facility (MOF) Core Specification[END_REF]. Chapter 5 presents all the details.
We ignore whether the reuse was performed using copy-paste-modify or whether more advanced reuse techniques were applied. However, this analysis of artefact variants concludes that reuse is being performed. We analysed the size of the Java files in terms of lines of code (LOC). The variant-specific Blocks 8 to 11 are 20 KLOC on average, and Blocks 12 and 13 are 3 KLOC and 5 KLOC respectively. Block 0, shared by all apps except one, consists of 10 files with 333 LOC. The other shared Blocks 1 to 7 have an average of around 2 KLOC. In Figure 7.2, we included for each block relevant words present in the file names. Concretely, for obtaining these words, we used the VariClouds approach for block naming during feature identification, which will be explained in Chapter 8. Block 0 seems to be related to error handling, while the other blocks can potentially be related to different features shared among the apps. From a research perspective, one objective is to apply and validate techniques for feature identification towards the extraction of an SPL that will be able to, at least, derive the same six apps by selecting their features. Nevertheless, in this chapter, our objective is to automatically identify a large number of other relevant families within app markets to support experimentation.
Identifying families of Android apps in app markets
Thanks to manual observation, we found three essential and simple characteristics which can be leveraged to identify app families in app markets: 1) the unique package name of apps, 2) the certificate signed by app developers and 3) code similarity among apps. Figure 7.3 illustrates the working process of our approach, which takes as input a set of apps and outputs the families of apps. We provide details about these three steps.
1. Package-based categorization: In Android, each app is uniquely specified through a full Java-language-style package name. This package can be recognized because it is declared in the app meta-data. As officially recommended by Google i, to avoid conflicts with other apps, developers should use internet domain ownership as the basis for their package names (in reverse). As an example, apps published by Adobe should start with com.adobe. Thus, if two apps start with the same company domain in their package names, these two apps are likely developed by the same provider.
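As a minimal sketch of step 1 (hypothetical helper names, not the actual tool), apps can be grouped by a developer-owned package prefix, e.g., the first two segments of the reversed-domain package name; the subsequent certificate and code-similarity steps then refine these candidate families.

```python
from collections import defaultdict

def candidate_families(apps, prefix_segments=2):
    """apps: dict mapping apk id -> full package name (e.g. 'com.adobe.reader').
    Groups apps whose package names share the same leading segments."""
    families = defaultdict(list)
    for apk, package in apps.items():
        prefix = ".".join(package.split(".")[:prefix_segments])
        families[prefix].append(apk)
    # Only prefixes with several apps are candidate families of variants.
    return {p: members for p, members in families.items() if len(members) > 1}

apps = {"apk1": "com.adobe.reader", "apk2": "com.adobe.photoshop",
        "apk3": "org.example.todo"}
print(candidate_families(apps))  # {'com.adobe': ['apk1', 'apk2']}
```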
Electric Parking Brake case study
We present FRoGs evaluation on an industrial case study in the automotive industry concerning the Renault's Electric Parking Brake (EPB) SPL [START_REF] Dumitrescu | Bridging the gap between product lines and systems engineering: An experience in variability management for automotive model based systems engineering[END_REF][START_REF] Mazo | Recommendation heuristics for improving product line configuration processes[END_REF]. In this section, we will first describe the case study and subsequently discuss the results. We will focus on constraints discovery when an available FM might have non-formalized constraints.
Introduction to the domain
The EPB system is a variation of the classical, purely mechanical, parking brake, which ensures vehicle immobilization when the driver brings the vehicle to a full stop and leaves the vehicle. The case study is focused on the variability in the Bill of Materials (BOM). The BOM is related to the logical and physical components of the EPB. Figure 9.9 presents its FM containing 20 variable features. The possible valid configurations of this FM (S) amount to 2976 while the existing configurations that were provided (EC) are 200.
In order to ensure that final product variants are of good quality, in this scenario we should answer the following question:
• How to plan evaluations for a limited number of users and in a reasonable amount of time against a large number of viable products in order to select relevant variants?
Consequently, this leads to two research sub questions:
• How to reduce the number of evaluations each user is involved in?
• How to rapidly converge with a rather small number of evaluations according to a given context/situation?
In Section 11.3 of the next chapter, we present a specific case study in this scenario dealing with the design of a Contact List UI.
The HSPLRank approach
In order to apply HSPLRank for the estimation and prediction of user assessments, we assume the existence of an operative SPL. Concretely, the input to apply our approach is an SPL where the derived products include HCI components. Figure 10.3 presents an overview of HSPLRank. On top of the SPL, HSPLRank is built on three phases: variant reduction, variant assessment and ranking creation. Each of them will be presented in the next subsections but, before we get into the details, we summarize them:
• Variant reduction is a phase driven by domain experts aiming to reduce the configuration space of the SPL. This is possible by injecting soft constraints regarding usability aspects. Soft constraints were presented in Section 2.1.2. The subset of valid configurations satisfying soft constraints is called the viable configuration space (i.e., configurations which have sufficient usability).
• Variant assessment is a phase driven by a pool of users to explore the configuration space during a predefined amount of time or for a finite number of assessments. This phase relies on an Interactive Genetic Algorithm (IGA) [START_REF] Eiben | Introduction to Evolutionary Computing[END_REF][START_REF] Takagi | Interactive evolutionary computation: Fusion of the capabilities of EC optimization and human evaluation[END_REF] which is seeded with configurations from the viable space. This algorithm evolves a set of configurations using the result of the assessments of the users pool. The user assessment is performed using the product from the configuration assigned by the IGA. All assessments obtained during the exploration of the configuration space are used to feed a data set which is used in the next phase. Data set items are composed of a pair of a configuration and the score of the user assessment.
• Ranking creation. Using a data mining technique, HSPLRank estimates and predicts the score of any possible configuration enriched with confidence metrics about the estimation. This technique is leveraged to produce a ranking.
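The ranking-creation phase can be illustrated with a simple distance-weighted estimator over the data set of (configuration, score) pairs. This is only one plausible instantiation sketched here for illustration — the concrete data mining technique and confidence metrics used by HSPLRank are detailed later — and the distance function is assumed to be a configuration similarity such as the one defined for the Paintings case study (Algorithm 3).

```python
def estimate_score(config, dataset, distance, k=5):
    """Estimate the score of `config` from the k nearest assessed
    configurations, weighting each score by inverse distance.
    dataset: list of (configuration, user score) pairs."""
    neighbours = sorted(dataset, key=lambda item: distance(config, item[0]))[:k]
    weights = [1.0 / (1.0 + distance(config, c)) for c, _ in neighbours]
    estimate = sum(w * s for w, (_, s) in zip(weights, neighbours)) / sum(weights)
    # A simple confidence proxy: closer neighbours -> higher confidence.
    confidence = sum(weights) / k
    return estimate, confidence

# The full ranking is then obtained by estimating every viable configuration
# and sorting by the estimated score.
```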
Summary
Software Product Line Engineering (SPLE) is a mature approach to manage a family of product variants with proven benefits in terms of quality and time-to-market. Unfortunately, SPLE demands a profound culture shift to a scenario where products should not be developed with a single-product vision. SPLs promote a centralized system that manages variability and exploits reusable assets to satisfy a wide range of customers. Before considering the high up-front investment of SPL adoption, it is often the case that companies have artefacts created with ad-hoc reuse practices, such as copy-paste-modify, to quickly respond to different customers at the expense of future maintenance costs for the artefact family as a whole. With the objective to mine and leverage artefact variants, Parts II, III and IV contributed to the advancement of extractive SPL adoption.
In Part II we presented Bottom-Up Technologies for Reuse (BUT4Reuse), which is a generic and extensible extractive SPL adoption framework that helps in chaining the technical activities for obtaining a feature model and reusable assets. Given that SPLs are being used in many application domains beyond source code artefacts, we presented the principles of adapters to support different artefact types. We described the available adapters and detailed the use of BUT4Reuse in a scenario with model variants, which is considered a relevant domain in software engineering. Our approach, named Model Variants to Product Lines (MoVa2PL), enables the extraction of model-based SPLs.
Part III is dedicated to helping researchers in the field of extractive SPL adoption by providing the means for intensive experimentation. Concretely, we focused on providing a benchmark for feature location techniques using Eclipse variants (EFLBench). Therefore, we provided a method to obtain the ground truth from any set of Eclipse variants. The benchmark integration in the extensible BUT4Reuse framework makes it easy to launch and compare feature location techniques. In addition, we provided a method for automatic generation of variants enabling the parametrization of some characteristics of the benchmarks. We also presented a method to identify families of variants of Android applications in large repositories (AppVariants) for experimentation with feature identification techniques. By applying our method we were able to discuss recurrent cases found when performing feature identification, such as library reuse, feature-based generated applications, content-driven variability and device-driven variability.
In Part IV we focused on assistance for domain experts by presenting two visualisation paradigms to support extractive SPL adoption activities. The first one is related to feature naming where word clouds are leveraged to summarize the implementation elements inside the identified blocks (VariClouds). The second one is a visualisation to help during feature constraints discovery that is built by mining the existing configurations (FRoGs). The usage of this visualisation can go beyond extractive SPL adoption and can be used for analysing feature relations (e.g., soft constraints) or as support for product configuration.
List of papers, tools & services
Papers included in the dissertation:
"960553"
] | [
"405336"
] |
Jerry Lonlac
email: [email protected]
Engelbert Mephu Nguifo
Towards Learned Clauses Database Reduction Strategies Based on Dominance Relationship
Clause learning is one of the most important components of a conflict driven clause learning (CDCL) SAT solver that is effective on industrial instances. Since the number of learned clauses is proved to be exponential in the worst case, it is necessary to identify the most relevant clauses to maintain and delete the irrelevant ones. As reported in the literature, several learned clauses deletion strategies have been proposed. However, the diversity in both the number of clauses to be removed at each step of reduction and the results obtained with each strategy creates confusion when determining which criterion is better. Thus, the problem of selecting which learned clauses are to be removed during the search remains very challenging. In this paper, we propose a novel approach to identify the most relevant learned clauses without favoring or excluding any of the proposed measures, but by adopting the notion of dominance relationship among those measures. Our approach bypasses the problem of the diversity of results and reaches a compromise between the assessments of these measures. Furthermore, the proposed approach also avoids another non-trivial problem, which is the amount of clauses to be deleted at each reduction of the learned clause database.
Introduction
The SAT problem, i.e., the problem of checking whether a Boolean formula in conjunctive normal form (CNF) is satisfiable or not, is central to many domains in computer science and artificial intelligence including constraint satisfaction problems (CSP), automated planning, non-monotonic reasoning, VLSI correctness checking, etc. Today, SAT has gained a considerable audience with the advent of a new generation of solvers able to solve large instances encoding real-world problems. These solvers, often called modern SAT solvers [START_REF] Moskewicz | [END_REF][START_REF] Eén | Niklas Eén and Niklas Sörensson. An extensible sat-solver[END_REF] or CDCL (Conflict Driven Clause Learning) SAT solvers, have been shown to be very efficient at solving real-world SAT instances. They are built by integrating four major components into the classical (DPLL) procedure [Davis et al., 1962]: lazy data structures [START_REF] Moskewicz | [END_REF], activity-based variable selection heuristics (VSIDS-like) [START_REF] Moskewicz | [END_REF], restart policies [Gomes et al., 1998], and clause learning [START_REF] Silva | [END_REF][START_REF] Moskewicz | [END_REF]. Although a nice combination of these components contributes to improving the efficiency of modern SAT solvers [START_REF] Katebi | [END_REF], clause learning remains known as the most important component [START_REF] Pipatsrisawat | Knot Pipatsrisawat and Adnan Darwiche. On the power of clause-learning sat solvers with restarts[END_REF]. The global idea of clause learning is that, during the unit propagation process, when the current branch of the search tree leads to a conflict, modern SAT solvers learn a conflict clause that helps unit propagation to discover one of the implications missed at an earlier level. This conflict clause expresses the causes of the conflict and is used to prune the search space. Clause learning, also known in the literature as Conflict Driven Clause Learning (CDCL), now refers to the most known and used First UIP learning scheme, first integrated in the SAT solver Grasp [START_REF] Silva | [END_REF] and efficiently implemented in zChaff [START_REF] Moskewicz | [END_REF]. Most SAT solvers integrate this strong learning scheme. Since at each conflict CDCL solvers learn a new clause that is added to the learned clauses database, and the number of learned clauses is proved to be exponential in the worst case, it is necessary to remove some learned clauses to maintain a database of polynomial size. Therefore, removing too many clauses can make learning inefficient, and keeping too many clauses can also alter the efficiency of unit propagation.
Managing the learned clauses database has been the subject of several studies [START_REF] Moskewicz | [END_REF][START_REF] Silva | [END_REF][START_REF] Eén | Niklas Eén and Niklas Sörensson. An extensible sat-solver[END_REF][START_REF] Audemard | [END_REF][START_REF] Audemard | On freezing and reactivating learnt clauses[END_REF][START_REF] Guo | [END_REF]. These strategies were proposed with the objective of maintaining a learned clauses database of reasonable size by eliminating clauses deemed irrelevant to the subsequent search. The general principle of these strategies is that, at each conflict, an activity is associated with the learned clauses (static strategy). Such a heuristic-based activity aims to weight each clause according to its relevance to the search process. In the case of dynamic strategies, such clause activities are dynamically updated. The reduction of the learned clauses database consists in eliminating inactive or irrelevant clauses. Although all the learned clause deletion strategies proposed in the literature are shown to be empirically efficient, identifying the most relevant clauses to maintain during the search process remains a challenging task. Our motivation in this work comes from the observation that the use of different relevance-based deletion strategies gives different performances. Our goal is to take advantage of several relevant learned clauses deletion strategies by seeking a compromise between them through a dominance relationship.
In this paper, we integrate a user-preference point of view in the SAT process. To this end, we integrate into the SAT process the idea of skyline queries [START_REF] Börzsönyi | [END_REF], dominant patterns [Soulet et al., 2011], undominated association rules [START_REF] Bouker | [END_REF] in order to learn clauses in a threshold-free manner. Such queries have attracted considerable attention due to their importance in multi-criteria decision making. Given a set of clauses, the skyline set contains the clauses that are not dominated by any other clause.
Skyline processing does not require any threshold selection function, and the formal property of domination satisfied by the skyline clauses gives the clauses a global interest with semantics easily understood by the user. This skyline notion has been developed for database and data mining applications; however, it had not been used for SAT purposes. In this paper, we adapt this notion to the learned clauses management process.
The paper is organized as follows. We first present some effective relevant-based learned clauses deletion strategies used in the literature. Then, our learned clauses deletion strategy based on the dominance relationship between different strategies is presented in section 3. Finally, before the conclusion, experimental results demonstrating the efficiency of our approach are presented.
On the learned clauses database management strategies
In this section, we present some efficient learned clauses relevance measures exploited in most SAT solvers of the literature.
The most popular CDCL SAT solver Minisat [START_REF] Eén | Niklas Eén and Niklas Sörensson. An extensible sat-solver[END_REF] considers as relevant the clauses most involved in recent conflict analysis and removes the learned clauses whose involvement in recent conflict analysis is marginal. Another strategy, called LBD for Literal Block Distance, was proposed in [START_REF] Audemard | [END_REF]. The LBD-based measure is also exploited by most of the best state-of-the-art SAT solvers (Glucose, Lingeling [Biere, 2012]) and its efficiency has been proved empirically. The LBD-based measure uses the number of different levels involved in a given learned clause to quantify the quality of the learned clauses. Hence, the clauses with smaller LBD are considered as more relevant. In [START_REF] Audemard | On freezing and reactivating learnt clauses[END_REF], a new dynamic management policy of the learned clauses database is proposed. It is based on a dynamic freezing and activation principle of the learned clauses. At a given search state, using a relevant selection function based on progress saving (PSM), it activates the most promising learned clauses while freezing irrelevant ones. In [START_REF] Guo | [END_REF], a new criterion to quantify the relevance of a clause using its backtrack level, called BTL for BackTrack Level based clause, was proposed. From experiments, the authors observed that the learned clauses with small BTL values are used more often in the unit propagation process than those with higher BTL values. More precisely, the authors observed that the learned clauses with a BTL value less than 3 are always used much more than the remaining clauses. Starting from this observation, and motivated by the fact that a learned clause with smaller BTL contains more literals from the top of the search tree, the authors deduce that relevant clauses are those allowing a higher backtracking in the search tree (having a small BTL value). More recently, several other learned clauses database strategies were proposed in [START_REF] Jabbour | [END_REF]Ansótegui et al., 2015]. In [START_REF] Jabbour | [END_REF], the authors explore a number of variations of learned clause database reduction strategies, and the performance of the different extensions of the Minisat solver integrating their strategies is evaluated on the instances of the SAT competitions 2013/2014 and compared against other state-of-the-art SAT solvers (Glucose, Lingeling) as well as against default Minisat. From the performances obtained in [START_REF] Jabbour | [END_REF], the authors have shown that the size-bounded learning strategies proposed more than fifteen years ago [START_REF] Silva | [END_REF][START_REF] Bayardo | [END_REF][START_REF] Bayardo | [END_REF] are not obsolete and remain a good measure to predict the quality of learned clauses. They show that adding randomization to size-bounded learning is a nice way to achieve controlled diversification, allowing to favor short clauses while maintaining a small fraction of large clauses necessary for deriving resolution proofs on some SAT instances. This study opens many discussions about learned clauses database strategies and raises questions about the effectiveness proclaimed by other strategies of the state of the art [START_REF] Eén | Niklas Eén and Niklas Sörensson. An extensible sat-solver[END_REF][START_REF] Audemard | [END_REF]. In [Ansótegui et al., 2015], the authors use the community structure of industrial SAT instances to identify a set of highly useful learned clauses.
They show that augmenting a SAT instance with the clauses learned by the solver during its execution does not always make the instance easier. However, the authors show that augmenting the formula with a set of clauses based on the community structure of the formula improves the performance of the solver in many cases. The different performances obtained by each strategy suggest that the question of how to efficiently predict the "best" learned clauses is still open and deserves further investigation.
On the other hand, it is important to note that the efficiency of most of these state-of-the-art learned clauses management strategies heavily depends on the cleaning frequency and on the amount of clauses to be deleted each time. Generally, all the CDCL SAT solvers using these strategies delete exactly half of the learned clauses at each learned clauses database reduction step. For example, the CDCL SAT solvers Minisat [START_REF] Eén | Niklas Eén and Niklas Sörensson. An extensible sat-solver[END_REF] and Glucose [START_REF] Audemard | [END_REF] delete half of the learned clauses at each cleaning. However, the efficiency of this amount of learned clauses to delete (e.g., the half) at each cleaning step of the learned clauses database has not been demonstrated theoretically, but only experimentally. To our knowledge, there are not many studies in the literature on how to determine the amount of clauses to be deleted each time. This paper proposes an approach to identify the relevant learned clauses during the resolution process without favoring any of the best reported relevance measures and which does not depend on a fixed amount of clauses to be removed each time: the amount of learned clauses to delete corresponds, at each cleaning, to the number of learned clauses dominated by one particular learned clause of the current set of learned clauses, called in the following sections the reference learned clause.
Detecting undominated learned Clauses
We present now our learned clauses relevant measure based on dominance relationship. We first motivate this approach with a simple example, and then propose an algorithm allowing to identify the relevant clauses with some technical details.
Motivating example
Let us consider the following relevant strategies: LBD [Audemard and Simon, 2009], SIZE (which considers as relevant the clauses of short size) and the relevance measure used by Minisat [START_REF] Eén | Niklas Eén and Niklas Sörensson. An extensible sat-solver[END_REF] that we denote here CVSIDS. Suppose that we have in the learned clauses database the clauses c1, c2 and c3 with:
• SIZE(c1) = 8, LBD(c1) = 3, CVSIDS(c1) = 1e100,
• SIZE(c2) = 6, LBD(c2) = 5, CVSIDS(c2) = 1e200,
• SIZE(c3) = 5, LBD(c3) = 4, CVSIDS(c3) = 1e300.
The question we ask is the following: which one is relevant? In [START_REF] Audemard | [END_REF], the authors consider the clause c 1 which has the most small LBD measure as the most relevant. In contrast, the authors of [START_REF] Jabbour | [END_REF] and [START_REF] Goldberg | [END_REF] prefer the clause c 3 while the preference of the authors of Minisat [START_REF] Eén | Niklas Eén and Niklas Sörensson. An extensible sat-solver[END_REF] leads to the clause c 3 . Our approach copes with the particular preference at one measure by finding a compromise between the different relevant measures through the dominance relationship. Hence, for the situation described above, only the clause c 2 is irrelevant because it is dominated by the clause c 3 on the three given measures.
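The dominance reasoning of the example can be made explicit with a small helper (an illustrative sketch, not solver code): for LBD and SIZE smaller values are preferred, while for CVSIDS larger values are preferred; c3 then dominates c2 on all three measures, whereas neither c1 nor c3 dominates the other.

```python
# For each measure: True if smaller values are preferred.
SMALLER_IS_BETTER = {"SIZE": True, "LBD": True, "CVSIDS": False}

def dominates(ca, cb):
    """ca, cb: dicts of measure values. ca dominates cb iff ca is at least
    as preferred on every measure and strictly better on at least one."""
    at_least = all(
        (ca[m] <= cb[m]) if smaller else (ca[m] >= cb[m])
        for m, smaller in SMALLER_IS_BETTER.items())
    strictly = any(ca[m] != cb[m] for m in SMALLER_IS_BETTER)
    return at_least and strictly

c1 = {"SIZE": 8, "LBD": 3, "CVSIDS": 1e100}
c2 = {"SIZE": 6, "LBD": 5, "CVSIDS": 1e200}
c3 = {"SIZE": 5, "LBD": 4, "CVSIDS": 1e300}
print(dominates(c3, c2))                      # True: c2 is irrelevant
print(dominates(c1, c3), dominates(c3, c1))   # False False
```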
Formalization
During the search process, the CDCL SAT solvers learn a set of clauses which are stored in the learned clauses database ∆ = {c1, c2, ..., cn}. At each cleaning step, we evaluate these clauses with respect to a set M = {m1, m2, ..., mk} of relevant measures. We denote by m(c) the value of the measure m for the clause c, c ∈ ∆, m ∈ M. Since the evaluation of learned clauses varies from one measure to another, using several measures could lead to different outputs (relevant clauses with respect to a measure). For example, in the motivating example, c1 is the best clause with respect to the LBD measure, whereas this is not the case according to the SIZE measure, which favors c3. This difference of evaluations is confusing for any learned clauses selection process. Hence, we can utilize the notion of dominance between learned clauses to address the selection of relevant ones. Before formulating the dominance relationship between learned clauses, we need to define it at the level of measure values. To do that, we define the dominance value as follows:
Definition 1 (dominance value) Given a learned clauses relevant measure m and two learned clauses c and c′, we say that m(c) dominates m(c′), denoted by m(c) ⪰ m(c′), iff m(c) is preferred to m(c′). If m(c) ⪰ m(c′) and m(c) ≠ m(c′), then we say that m(c) strictly dominates m(c′), denoted m(c) ≻ m(c′).
Definition 2 (dominance clause) Given two learned clauses c and c′, the dominance relationship according to the set of learned clauses relevant measures M is defined as follows: c dominates c′, denoted c ⪰ c′, iff m(c) ⪰ m(c′) for all m ∈ M. If c dominates c′ and there exists m ∈ M such that m(c) ≻ m(c′), then c strictly dominates c′, denoted c ≻ c′.
To discover the relevant learned clauses, a naive approach consists in comparing each clause with all the other ones. However, the number of learned clauses is proved to be exponential, which makes pairwise comparisons costly. In the following, we show how to overcome this problem by defining, at each cleaning step of the learned clauses database, a particular learned clause denoted by τ that we call the current reference learned clause, which is an undominated clause of ∆ according to the set of learned clauses relevant measures M. At each cleaning step, all the learned clauses dominated by τ are considered as irrelevant and thus deleted from the learned clauses database.
To define the current reference learned clause, we need a new relevant measure based on all the learned clauses relevant measures of M. We call this new measure the degree of compromise, in short DegComp, defined as follows:
Definition 3 (Degree of compromise) Given a learned clause c, the degree of compromise of c with respect to the set of learned clauses relevant measures M is defined by

DegComp(c) = ( Σ_{i=1..|M|} m̃_i(c) ) / |M|,

where m̃_i(c) corresponds to the normalized value of the clause c on the measure m_i.
In fact, in practice, measures are heterogeneous and defined within different scales. For example, the values of the learned clauses relevant measure in [START_REF] Eén | Niklas Eén and Niklas Sörensson. An extensible sat-solver[END_REF] are very high, of exponential order, while the values of the relevant measure in [START_REF] Audemard | [END_REF] are much smaller. Hence, in order to avoid that the measures with higher values marginalize the measures with smaller values in the computation of the compromise degree of a given learned clause, it is recommended to normalize the measure values. In our case, we choose to normalize all the measures into the interval [0, 1]. More precisely, each value m(c) of any learned clause c must be normalized into a value m̃(c) within [0, 1]. The normalization of a given measure m is performed depending on its domain and the statistical distribution of its active domain. We recall that the active domain of a measure m is the set of its possible values. It is worth mentioning that the normalization of a measure does not modify the dominance relationship between two given values. If we consider the learned clause c1 given in the motivating example of Section 3.1, with its three values:

DegComp(c1) = ( C̃VSIDS(c1) + L̃BD(c1) + S̃IZE(c1) ) / 3,

with nVars() the number of variables of the Boolean formula.
After giving the necessary definitions (current reference learned clause and degree of compromise), the following lemma offers a swifter solution than pairwise comparisons to find relevant clauses based on the dominance relationship.
Lemma 1 Let c be a learned clause having the minimal degree of compromise with respect to the set of learned clauses relevant measures M; then c is an undominated clause.
Proof 1 Let c be a learned clause having the minimal degree of compromise with respect to the set of learned clauses relevant measures M, and suppose that there exists a learned clause c′ that strictly dominates c, which means that ∀m ∈ M, m(c′) ⪰ m(c) and ∃m′ ∈ M, m′(c′) ≻ m′(c). Hence, we have DegComp(c′) < DegComp(c). The latter inequality contradicts our hypothesis, since c has the minimal degree of compromise with respect to M.
Property 1 Let M be the set of learned clauses relevant measures. For all learned clauses c, c′, c″: if c ⪰ c′ and c′ ⪰ c″ then c ⪰ c″.
During the search process, at each cleaning step of the learned clauses database, we first find the learned clause cMin having the minimal degree of compromise with respect to M. Then, we delete from the learned clauses database all the clauses dominated by cMin.
Searching for all undominated clauses during each cleaning step can be time-consuming, so we only compute the clauses undominated with respect to the reference learned clause during each reduction step.
Algorithm
In this section, after presenting the general scheme of a deletion strategy of learned clauses (reduceDB(∆)) adopted by most of the reported solvers, we propose an algorithm allowing to discover relevant learned clauses by using dominance relationship.
Algorithm 1 depicts the general scheme of a learned clause deletion strategy (reduceDB(∆)). This algorithm first sorts the set of learned clauses according to the defined criterion and then deletes half of the learned clauses. In fact, this algorithm takes a learned clauses database of size n and outputs a learned clauses database of size n/2. This is different from our approach, which first searches the learned clause having the smallest degree of compromise (called the reference learned clause) and then removes all the learned clauses that it dominates. Algorithm 2 depicts our learned clause deletion strategy. It is important to note that the clauses whose size (number of literals) and LBD are less than or equal to 2 are not concerned by the dominance relationship. These learned clauses are considered as more relevant and are maintained in the learned clauses database. Hence, the minDegComp function of our Algorithm 2 looks for the learned clause of minimal degree of compromise among the learned clauses of size and LBD greater than 2. For our experiments, we use three relevant measures for the dominance relationship in order to assess the efficiency of our approach. Notice that the user can choose to combine different other measures. We use the SIZE [START_REF] Goldberg | [END_REF], LBD [START_REF] Audemard | [END_REF] and CVSIDS [START_REF] Eén | Niklas Eén and Niklas Sörensson. An extensible sat-solver[END_REF] measures. All these measures have been proved effective in the literature [Eén and Sörensson, 2003;[START_REF] Audemard | [END_REF][START_REF] Jabbour | [END_REF]. It is possible to use more relevant measures, but it should be noted that by adding a measure to M, the number of relevant learned clauses maintained may decrease or increase. The decrease can be explained by the fact that a learned clause can be undominated with respect to M and dominated with respect to M′, with M ⊂ M′: for example, if two learned clauses c and c′ have equal values on every measure of M, adding a measure on which one of them is strictly preferred creates a dominance. The increase can be explained by the fact that a learned clause can be dominated with respect to M and undominated with respect to M′, with M ⊂ M′: for example, consider a learned clause c which dominates another learned clause c′ with respect to M; by adding a measure m to M such that m(c′) ≻ m(c), c′ is no longer dominated by c.
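To summarize the whole reduction step, the following Python sketch (illustrative only — the actual implementation lives inside a C++ CDCL solver) normalizes each measure to [0, 1] with a simple min-max scaling so that smaller always means more relevant, computes the degree of compromise, picks the reference clause with the minimal degree, and removes every clause it dominates; clauses of size or LBD at most 2 are kept unconditionally. The min-max normalization is only one possible choice and is an assumption here.

```python
def reduce_db(clauses, measures):
    """clauses: list of dicts {measure name: raw value} (one dict per clause).
    measures: dict {measure name: True if smaller raw values are preferred}.
    Returns the clauses kept after the dominance-based reduction."""
    # Min-max normalize every measure into [0, 1], 0 being the preferred end.
    norm = {}
    for m, smaller in measures.items():
        vals = [c[m] for c in clauses]
        lo, hi = min(vals), max(vals)
        span = (hi - lo) or 1.0
        norm[m] = (lambda v, lo=lo, hi=hi, span=span, s=smaller:
                   (v - lo) / span if s else (hi - v) / span)

    def deg_comp(c):                       # Definition 3
        return sum(norm[m](c[m]) for m in measures) / len(measures)

    def dominated_by(ref, c):              # Definition 2 (strict dominance)
        no_worse = all(norm[m](ref[m]) <= norm[m](c[m]) for m in measures)
        strictly = any(norm[m](ref[m]) < norm[m](c[m]) for m in measures)
        return no_worse and strictly

    # Clauses of size or LBD at most 2 are always kept (assumed keys).
    protected = [c for c in clauses if c["SIZE"] <= 2 or c["LBD"] <= 2]
    candidates = [c for c in clauses if not (c["SIZE"] <= 2 or c["LBD"] <= 2)]
    if not candidates:
        return clauses
    ref = min(candidates, key=deg_comp)    # current reference learned clause
    kept = [c for c in candidates if not dominated_by(ref, c)]
    return protected + kept

clauses = [{"SIZE": 8, "LBD": 3, "CVSIDS": 1e100},
           {"SIZE": 6, "LBD": 5, "CVSIDS": 1e200},
           {"SIZE": 5, "LBD": 4, "CVSIDS": 1e300}]
print(len(reduce_db(clauses, {"SIZE": True, "LBD": True, "CVSIDS": False})))  # 2
```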
We run the SAT solvers on the 300 instances taken from the last SAT-RACE 2015 and on the 300 instances taken from the last SAT competition 2016. All the instances are preprocessed by SatElite [START_REF] Eén | Niklas Eén and Armin Biere. Effective preprocessing in sat through variable and clause elimination[END_REF] before running the SAT solver. The experiments are made using Intel Xeon quad-core machines with 32GB of RAM running at 2.66 GHz. For each instance, we used a timeout of 1 hour of CPU time for the SAT-RACE, and 10000s for the SAT Competition. We integrate our approach in Glucose and make a comparison between the original solver and the one enhanced with the new learned clause deletion strategy using the dominance relationship, called DegComp-Glucose.
Number of solved instances and CPU time
Table 1 presents results on SAT-RACE. We use the source code of Glucose 3.0 with the measure LBD (written LBD-Glucose or Glucose in what follows). We then replace LBD by each of the other measures: SIZE-Glucose, which considers the shortest clauses as the most relevant, CVSIDS-Glucose, which maintains the learned clauses most involved in recent conflict analysis, and finally our proposal DegComp-Glucose. Table 1 shows the comparative experimental evaluation of the four measures as well as Minisat 2.2. In the second column of Table 1, we give the total number of solved instances (#Solved). We also mention the number of instances proven satisfiable (#SAT) and unsatisfiable (#UNSAT) in parentheses. The third column shows the average CPU time in seconds (total time on solved instances divided by the number of solved instances). On the SAT-RACE 2015, our approach DegComp-Glucose is more efficient than the others in terms of the number of solved instances (see also Figure 1). In fact, the original solver Glucose solves 236 instances, while enhanced with our dominance approach it solves 12 more instances. Solving such an additional number of instances is clearly significant in practical SAT solving. The CVSIDS-Glucose solver solves 4 more instances than Glucose 3.0. Minisat 2.2 is the worst solver among the five solvers.
Table 2 shows 5 instances of the SAT-RACE 2015 solved by our approach but not solved by LBD-Glucose, SIZE-Glucose, nor CVSIDS-Glucose. The time used to solve those instances may also explain the increase of the average running time of DegComp-Glucose. In addition, we also find that there is no instance solved by all the other solvers and not solved by our approach (as detailed later). This shows, on the one hand, that the application of dominance between different relevant measures does not degrade the performance of the solvers but instead takes advantage of the performance of each relevant measure, considering the SAT-RACE dataset. Figure 1 shows the cumulated time results, i.e., the number of instances (x-axis) solved under a given amount of time in seconds (y-axis). This figure gives for each technique the number of solved instances (#instances) in less than t seconds. It confirms the efficiency of our dominance relationship approach. From this figure, we can observe that DegComp-Glucose is generally faster than all the other solvers, even if the average running time of LBD-Glucose is the lowest one (see Table 1). Although DegComp-Glucose needs additional time to compute the dominance relationship, the quality of the remaining clauses on SAT-RACE helps to improve the time needed to solve the instances. Table 3 presents results on the instances of the SAT Competition 2016. Here LBD-Glucose and CVSIDS-Glucose solve one more instance than DegComp-Glucose, which remains competitive and solves the greatest number of satisfiable instances. Figure 2 presents the cumulated time results on the instances of the SAT competition 2016. It comes out from this second dataset that LBD-Glucose is more efficient than the others, including our approach, which remains competitive with respect to the number of solved instances.
This outcome gives credit to the No Free Lunch theorem [START_REF] Wolpert | [END_REF]. We also think that the aggregated function may not be unique for all the datasets, such that it is necessary to explore the efficient combination of the preferred measures.
Common solved instances
In Table 4, the intersection between two relevant measures gives the number of common instances solved by both. For example, LBD and SIZE solve 219 instances in common, while 234 instances are solved by both LBD and DegComp. We can see that our approach solves the largest number of instances in common with each of the aggregated measures. More precisely, the number of common instances solved by two of the other measures is always lower than the number of common instances each of them solves with our approach. To get more details, Table 5 gives the number of instances commonly solved by the considered relevant measures. This table allows us to see the number of common instances solved by one, two, three or four measures. For example, there are 218 common instances solved by the four deletion strategies, while 44 instances are not solved by any of them. We can observe that 1, 1, 5, and 5 are the numbers of instances solved alone by LBD, CVSIDS, SIZE and DegComp, respectively. Moreover, there is no instance solved by the three strategies (LBD, SIZE and CVSIDS) and not solved by our approach DegComp. Table 5: Details of the common instances with SAT-RACE.
Combined measures
Table 6 gives the number of instances solved with our dominance approach wrt the measures used in the dominance relations. From this table, we can see that the number of instances solved by using two measures (instead of three) in the dominance relationship is always lower than the number of instances solved (248) by using three measures.
Conclusion and Future Works
In this paper, we propose an approach that addresses the learned clauses database management problem. We have shown that the idea of a dominance relationship between relevant measures is a nice way to take advantage of each measure. This approach is not hindered by the abundance of relevant measures, which has been the issue of several works. The proposed approach also avoids another non-trivial problem, which is the amount of learned clauses to be deleted at each reduction step of the learned clauses database. The experimental results show that exploiting the dominance relationship improves the performance of a CDCL SAT solver, at least on the SAT-RACE 2015. For the case of the SAT Competition, we still have to find a good dominance relation. The instance categories might also be an issue that should be explored.
To the best of our knowledge, this is the first time that a dominance relationship has been used in the satisfiability domain to improve the performance of a CDCL SAT solver. Our approach opens interesting perspectives. In fact, any new relevant measure of learned clauses can be integrated into the dominance relationship.
Algorithm 1: Deletion Strategy: reduceDB function
Input: ∆: the learned clauses database of size n
Output: ∆: the new learned clauses database of size n/2
1: sortLearntClauses();  /* by the defined relevance criterion */
2: delete the last half of the sorted learned clauses

Algorithm 2: reduceDB-Dominance-Relationship
Input: ∆: the learned clauses database; M: a set of relevant measures
Output: ∆: the new learned clauses database
1: cMin = minDegComp(M);  /* the reference learned clause */
2: delete from ∆ all the learned clauses dominated by cMin
Figure 1: Evaluation on SAT-RACE-2015
Figure 2 :
2 Figure 2: Evaluation on SAT competition 2016
Table 1: Comparative evaluation on SAT-RACE-2015.
We find that there is no instance solved by all the other solvers and not solved by our approach (as detailed later). This shows, on the one hand, that applying dominance between different relevant measures does not degrade the performance of the solver, but instead takes advantage of the performance of each relevant measure, considering the SAT-RACE dataset.
Solvers            #Solved (#SAT - #UNSAT)   Average Time
Minisat 2.2        209 (134 - 75)            585.19 s
SIZE-Glucose       230 (131 - 99)            533.86 s
CVSIDS-Glucose     240 (140 - 100)           622.23 s
LBD-Glucose        236 (136 - 100)           481.66 s
DegComp-Glucose    248 (146 - 102)           571.31 s
Instances LBD SIZE CVSIDS DegComp
jgiraldezlevy.2200.9086.08.40.8 - - - 93.71
manthey_DimacsSorterHalf_37_3 - - - 2642.88
14packages-2008seed.040-NOTKNOWN - - - 1713.46
manthey_DimacsSorter_37_3 - - - 2673.39
jgiraldezlevy.2200.9086.08.40.2 - - - 3195.03
Table 2: Instances solved by DegComp-Glucose and not solved by the others on SAT-RACE.
Table 3 presents results on the instances of the SAT Competition 2016. Here LBD-Glucose and CVSIDS-Glucose solve the largest number of instances (165 each), while DegComp-Glucose solves 164.
Solvers            #Solved (#SAT - #UNSAT)   Average Time
Minisat 2.2        138 (65 - 73)             1194.85 s
SIZE-Glucose       156 (67 - 89)             1396.73 s
CVSIDS-Glucose     165 (67 - 98)             1368.99 s
LBD-Glucose        165 (68 - 97)             1142.33 s
DegComp-Glucose    164 (69 - 95)             1456.34 s
Table 3: Comparative evaluation on SAT Competition 2016.
Table 4: Common solved instances from SAT-RACE-2015.
Measures     LBD    SIZE   CVSIDS   DegComp
LBD          236                    234
SIZE         219    230             225
CVSIDS       233    221    240      238
DegComp                             248
Table 6: Combining two measures on SAT-RACE-2015.
Measures     LBD    SIZE   CVSIDS   DegComp
LBD          236
SIZE         223    230
CVSIDS       239    242    240
DegComp                             248 | 31,459 | [
"744387",
"15514"
] | [
"857",
"810",
"857"
] |
01478523 | en | [
"math"
] | 2024/03/04 23:41:46 | 2016 | https://hal.science/hal-01478523/file/Bordes_ChauveauJSM2016.pdf | Laurent Bordes
email: [email protected]
Didier Chauveau
email: [email protected]
Stochastic EM-like Algorithms for Fitting Finite Mixture of Lifetime Regression Models Under Right Censoring
Keywords: Right censoring, EM algorithm, proportional hazards model, semiparametric mixture models
Finite mixtures of models based on the proportional hazards or the accelerated failure time assumption lead to a large variety of lifetime regression models. We present several iterative methods, based on EM and Stochastic EM methodologies, that allow fitting parametric or semiparametric mixtures of lifetime regression models for randomly right-censored lifetime data including covariates. Their identifiability is briefly discussed, and in the semiparametric case we show that simulating the missing data coming from the mixture allows the ordinary partial likelihood inference method to be used in an EM algorithm's M-step. The effectiveness of the new proposed algorithms is illustrated through simulation studies.
Introduction
In survival analysis it is frequent that the duration of interest is observed with covariates influencing its probability distribution. The semiparametric proportional hazards model (PHM) is probably the most famous lifetime regression model since [START_REF] Cox | Regression models and life-tables (with discussion)[END_REF] introduced the partial likelihood function that allows estimating the Euclidean regression parameter, considering that the baseline hazard rate function is a nuisance parameter. When the duration of interest depends on several explanatory variables and quantitative ordinal explanatory variables are missing, then the associated survival function is simply a finite mixture of survival functions potentially dependent on the observed covariates. In the parametric case there is a huge number of papers dealing with inference methods for finite mixture models taking into account the fact that often the lifetime is incompletely observed due to censoring or truncation. See e.g. [START_REF] Chauveau | A stochastic EM algorithm for mixtures with censored data[END_REF], [START_REF] Beutner | Estimators based on data-driven generalized weighted Cramer-von Mises distances under censoring -with applications to mixture models[END_REF], Balakrishnan and Mitra (2011, 2014), [START_REF] Bordes | Comments: EM-based likelihood inference for some lifetime distributions based on left truncated and right censored data and associated model discrimination[END_REF] for contributions. However, very few papers deal with semiparametric finite mixtures of lifetime models. Recently, Bordes and [START_REF] Bordes | Stochastic EM algorithms for parametric and semiparametric mixture models for right-censored lifetime data[END_REF] proposed to fit a semiparametric two-component mixture model under right censoring using a stochastic EM-like algorithm. Nevertheless, it is worth noting that there are very special kinds of two-component semiparametric mixture models that are common in lifetime data analysis, namely the mixture of a nonparametric lifetime model and a mass at 0 (zero-inflated model) or at infinity (cure model). The latter model has motivated important developments with and without explanatory variables (see for instance [START_REF] Yin | Cure rate models: A unified approach[END_REF]).
Over the last decades, mixture models have considerably expanded from both a theoretical and an applied point of view (see for example [START_REF] Mclachlan | Finite mixture models[END_REF]), as have specific estimation methods, especially those based on the EM algorithm (see [START_REF] Mclachlan | The EM Algorithm and Extensions[END_REF]) and their stochastic versions (see e.g., [START_REF] Celeux | The SEM algorithm: A probabilistic teacher algorithm derived from the EM algorithm for the mixture problem[END_REF][START_REF] Celeux | Stochastic versions of the EM algorithm: An experimental study in the mixture case[END_REF]), for which there are few theoretical results (see [START_REF] Nielsen | The stochastic EM algorithm: Estimation and asymptotic results[END_REF]). Some of the estimation methods developed to fit the proportional hazards model with missing covariates are also close to the estimation methods required to fit mixture models (see for instance [START_REF] Chen | Proportional hazards regression with missing covariates[END_REF]).
In this paper we first briefly introduce the general framework of finite mixtures of lifetime regression models, right-censored data, and the semiparametric estimation method for the PHM. Then, in Section 2, we introduce several classes of parametric and semiparametric finite mixture models based on the PHM. Section 3 is devoted to a genuine EM algorithm in the parametric setup, while Section 4 deals with an adaptation of the stochastic EM-like algorithm for semiparametric models. Several numerical illustrations are given in Section 5 and a discussion ends the paper in Section 6.
Data and Cox regression under right censoring
Let {(X_1, Z_1), …, (X_n, Z_n)} be n i.i.d. copies of (X, Z) ∈ [0, +∞) × R^p, where the conditional pdf of the lifetime X given Z = z is g(x|z, θ). We assume that these lifetime data come from a finite mixture of m components
$$g(x \mid z, \theta) = \sum_{j=1}^{m} \alpha_j f_j(x \mid z), \qquad (1)$$
where $\theta = (\alpha, f)$ with $\alpha = (\alpha_1, \dots, \alpha_m) \in [0,1]^m$, $\sum_{j=1}^{m} \alpha_j = 1$, the component weights, and $f = (f_1, \dots, f_m)$ the component conditional pdfs. Considering the cdf we can write
$$G(x \mid z, \theta) = \sum_{j=1}^{m} \alpha_j F_j(x \mid z), \qquad (2)$$
or
$$\bar G(x \mid z, \theta) = \sum_{j=1}^{m} \alpha_j \bar F_j(x \mid z), \qquad (3)$$
where Ḡ = 1 − G and F̄_j = 1 − F_j are conditional survival functions. In addition we assume that the lifetimes X_i are possibly right censored by censoring times C_i, so that instead of observing X_i we observe T_i = X_i ∧ C_i and D_i = I(X_i ≤ C_i) for i ∈ {1, …, n}. We assume that {(X_1, C_1, Z_1), …, (X_n, C_n, Z_n)} are i.i.d. copies of (X, C, Z), that conditionally on Z = z, X and C are independent, and that the conditional pdf of C is written h(c|z); H(c|z) and H̄(c|z) are the corresponding conditional cdf and survival functions. We write v(z) the pdf of Z on R^p. Finally we observe {t, d, z} = {(t_1, d_1, z_1), …, (t_n, d_n, z_n)} where t_i = x_i ∧ c_i and d_i = I(x_i ≤ c_i) for 1 ≤ i ≤ n. The observed covariates are not time-dependent here, but considering time-dependent covariates would be possible.
We assume that for all z, G(·|z) and H(·|z) are absolutely continuous with respect to the Lebesgue measure; thus with probability one we have T_i ≠ T_j for all i ≠ j. Let (i_1, …, i_n) be the permutation of (1, …, n) such that t_{i_1} < t_{i_2} < ⋯ < t_{i_n}. For simplicity, from now on we rewrite (t_k, d_k, z_k) ≡ (t_{i_k}, d_{i_k}, z_{i_k}) for 1 ≤ k ≤ n.
Let us recall that a duration X follows a proportional hazards rate model if conditionally on Z = z its hazard rate function (or risk function) is defined by
$$\lambda_{X|Z}(x \mid z) = e^{\beta^T z} \lambda_0(x),$$
where β ∈ R^p is an unknown regression parameter and λ_0 is an unknown baseline hazard rate function. By the Cox partial likelihood principle, β can be estimated by $\hat\beta = \arg\max_{\beta \in \mathbb{R}^p} L_n(\beta)$ where
$$L_n(\beta) = \prod_{i=1}^{n} \left( \frac{e^{\beta^T z_i}}{\sum_{j \geq i} e^{\beta^T z_j}} \right)^{d_i}.$$
The cumulative hazard rate function $\Lambda_0(x) = \int_0^x \lambda_0(s)\,ds$ is estimated by
$$\hat\Lambda_0(x) = \sum_{i:\, t_i \leq x} \frac{d_i}{\sum_{j \geq i} e^{\hat\beta^T z_j}},$$
and the conditional survival function $S_{X|Z}(x \mid z)$ is estimated by $\hat S_{X|Z}(x \mid z) = \exp\left(-e^{\hat\beta^T z} \hat\Lambda_0(x)\right)$. In addition, if K is a kernel function and $b = b_n$ a bandwidth such that $(b_n)_{n\geq 1} \downarrow 0$ and $(n b_n)_{n\geq 1} \to +\infty$, then $\lambda_0(x)$ is estimated by
$$\hat\lambda_0(x) = \frac{1}{b} \sum_{i=1}^{n} K\!\left(\frac{x - t_i}{b}\right) \frac{d_i}{\sum_{j \geq i} e^{\hat\beta^T z_j}}.$$
Note that:
1. Maximizing L_n(β) with respect to β is generally done using derivative-based optimization methods, since β ↦ log L_n(β) belongs to C^∞(R^p) and is concave, so the maximization is a convex optimization problem.
2. S X|Z (x|z) can also be estimated using a product-limit type estimator.
3. The package survival [START_REF] Therneau | survival: Survival analysis, including penalised likelihood[END_REF] for the R statistical software (R Core Team, 2013) gives all the previous quantities except λ0 .
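As an illustration of the partial likelihood principle above, a minimal numerical sketch (ours, not taken from the paper) of its maximization could look as follows; it assumes the observations are already sorted by increasing t_i and, for brevity, omits the log-sum-exp stabilization one would use in practice.

```python
# Minimal sketch (ours): negative log partial likelihood of the Cox model for
# right-censored data, assuming observations are sorted by increasing t_i.
# z is the (n, p) covariate matrix, d the vector of censoring indicators.
import numpy as np
from scipy.optimize import minimize

def neg_log_partial_likelihood(beta, z, d):
    eta = z @ beta                                   # linear predictors beta^T z_i
    # log of the risk-set sums sum_{j >= i} exp(beta^T z_j), via a reversed cumsum
    log_risk = np.log(np.cumsum(np.exp(eta)[::-1])[::-1])
    return -np.sum(d * (eta - log_risk))

def fit_cox(z, d, p):
    # unconstrained smooth maximization of the log partial likelihood
    return minimize(neg_log_partial_likelihood, x0=np.zeros(p), args=(z, d)).x
```

In practice one would rather rely on a dedicated routine such as coxph() from the survival package mentioned in point 3.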
Some finite mixtures of the proportional hazards model
We describe in this section four possible models, denoted M1-M4, for which each component in (3) follows a semiparametric Proportional Hazards Model (PHM).
M1: Common covariate effect with dependent baseline risk functions.
For 1 ≤ j ≤ m we have $\bar F_j(x \mid z, \theta) = \{S_0(x)\}^{\exp(\beta^T z + \gamma_j)}$, then θ = (S_0(·), α, β, γ) where γ = (γ_2, …, γ_m) (γ_1 = 0 for identifiability reasons), hence θ ∈ S × R^{2m+p−2},
where S denotes the set of survival functions.
M2: Common baseline risk function with independent covariate effects.
For 1 ≤ j ≤ m we have $\bar F_j(x \mid z, \theta) = \{S_0(x)\}^{\exp(\beta_j^T z)}$, then θ = (S_0(·), α, β) where β = (β_1, …, β_m), hence θ ∈ S × R^{m(p+1)−1}.
M3: Common covariate effect with independent baselines (NP).
For 1 ≤ j ≤ m, $\bar F_j(x \mid z, \theta) = \{S_{0j}(x)\}^{\exp(\beta^T z)}$, then θ = (S_{01}(·), …, S_{0m}(·), α, β), hence θ ∈ S^m × R^{m+p−1}.
M4: Independent covariate effects and baselines.
For 1 ≤ j ≤ m, $\bar F_j(x \mid z, \theta) = \{S_{0j}(x)\}^{\exp(\beta_j^T z)}$, then θ = (S_{01}(·), …, S_{0m}(·), α, β) where β = (β_1, …, β_m), hence θ ∈ S^m × R^{m(p+1)−1}.
Note that we have some hierarchy for these models: Model 1 ⊂ Model 3 ⊂ Model 4, and Model 2 ⊂ Model 4.
Genuine EM-algorithm in the parametric set-up
In the parametric situation the complete data pdf $f^c$ is defined by
$$f^c_{T,D,Z,J}(t, d, z, j \mid \theta) = \alpha_j \left[ f(t \mid \gamma_j, \beta, z)\, \bar H(t \mid z) \right]^{d} \left[ \bar F(t \mid \gamma_j, \beta, z)\, h(t \mid z) \right]^{1-d} v(z)$$
where v does not depend on θ = (α, γ, β) and J ∼ Mult(1, α) is the missing data, independent of (T, D, Z). In the sequel we write $f^c$ for $f^c_{T,D,Z,J}$. The complete data log-likelihood function $\ell^c$ is defined by
$$\ell^c_{t,d,z,j}(\theta) = \log \prod_{i=1}^{n} f^c(t_i, d_i, z_i, j_i \mid \theta) = \sum_{i=1}^{n} \log\!\left[ (\bar H(t_i \mid z_i))^{d_i} (h(t_i \mid z_i))^{1-d_i} v(z_i) \right] + \sum_{i=1}^{n} \log\!\left[ \alpha_{j_i} \left(f(t_i \mid \gamma_{j_i}, \beta, z_i)\right)^{d_i} \left(\bar F(t_i \mid \gamma_{j_i}, \beta, z_i)\right)^{1-d_i} \right]$$
where in the right hand side of the last equality the first term does not depend on θ and j = (j 1 , . . . , j n ) is the unobserved realization of (J 1 , . . . , J n ).
The genuine EM algorithm consists in producing iterates $(\theta^k)_{k \geq 0}$ by iteratively maximizing $Q(\theta \mid \theta^k)$ where
$$Q(\theta \mid \theta^k) = \sum_{i=1}^{n} E\left[ \log f^c(T_i, D_i, Z_i, J_i \mid \theta) \,\middle|\, t_i, d_i, z_i, \theta^k \right].$$
Calculating the above conditional expectation requires computing the posterior probabilities
$$\alpha^k_{ij} = \Pr\left(J_i = j \mid t_i, d_i, z_i, \theta^k\right) = \frac{f^c(t_i, d_i, z_i, j \mid \theta^k)}{f^c(t_i, d_i, z_i \mid \theta^k)} = \frac{\alpha^k_j \left(f(t_i \mid \gamma^k_j, \beta^k, z_i)\right)^{d_i} \left(\bar F(t_i \mid \gamma^k_j, \beta^k, z_i)\right)^{1-d_i}}{\sum_{l=1}^{m} \alpha^k_l \left(f(t_i \mid \gamma^k_l, \beta^k, z_i)\right)^{d_i} \left(\bar F(t_i \mid \gamma^k_l, \beta^k, z_i)\right)^{1-d_i}}. \qquad (4)$$
The important point here is that the posterior probabilities in (4) neither depend on the censoring distribution nor on the covariate distribution. Thus we obtain
$$Q(\theta \mid \theta^k) = \sum_{i=1}^{n} \sum_{j=1}^{m} \alpha^k_{ij} \left[ \log \alpha_j + d_i \log f(t_i \mid \gamma_j, \beta, z_i) + (1 - d_i) \log \bar F(t_i \mid \gamma_j, \beta, z_i) \right] + R(t, d, z, h, v, \theta^k),$$
where R does not depend on θ. Thus we delete R in the definition of Q(θ|θ k ).
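A minimal sketch of this E-step (ours), under the assumption that the component density and survival functions are supplied as callables dens and surv (hypothetical names), could be:

```python
# Minimal sketch (ours): E-step computing the posterior probabilities of (4).
# dens(t, z, j, theta) and surv(t, z, j, theta) are hypothetical callables for
# the j-th component conditional pdf and survival function.
import numpy as np

def e_step(t, d, z, theta, dens, surv):
    n, m = len(t), len(theta["alpha"])
    post = np.empty((n, m))
    for i in range(n):
        for j in range(m):
            contrib = dens(t[i], z[i], j, theta) if d[i] == 1 else surv(t[i], z[i], j, theta)
            post[i, j] = theta["alpha"][j] * contrib
        post[i, :] /= post[i, :].sum()               # normalize over the m components
    return post
```

As noted above, neither the censoring distribution nor the covariate distribution enters this computation, which is why they can be omitted from the code.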
Example 1 We consider a finite mixture model where the j-th component survival function is defined by F (t|γ j , z) = exp(-γ j te β T z ), corresponding to a parametric proportional hazard rate model with exponential (thus constant) baseline hazard rate function. Setting γ j = e ξ j -ξ 1 we remark that this model belongs to the M1 family of lifetime regression models with S 0 (t) = exp(-e ξ 1 t). The identifiability of this model parameters can be proved using [START_REF] Teicher | Identifiability of mixtures of product measures[END_REF] and assuming that the covariates vectors z generate R p . The j-th component pdf is therefore defined by
$f(t \mid \gamma_j, \beta, z) = \gamma_j \exp(\beta^T z - \gamma_j t e^{\beta^T z})$, leading to
$$Q(\theta \mid \theta^k) \propto \sum_{i=1}^{n} \sum_{j=1}^{m} \alpha^k_{ij} \left[ \log \alpha_j + d_i \left( \log \gamma_j + \beta^T z_i - \gamma_j t_i e^{\beta^T z_i} \right) - (1 - d_i)\, \gamma_j t_i e^{\beta^T z_i} \right] \propto \sum_{i=1}^{n} \sum_{j=1}^{m} \alpha^k_{ij} \left[ \log \alpha_j + d_i \left( \log \gamma_j + \beta^T z_i \right) - \gamma_j t_i e^{\beta^T z_i} \right],$$
where ∝ means "equal to, up to a term that does not depend of the parameter of interest". By solving normal equations with respect to α j for j = 1, . . . , m we obtain
$$\alpha^{k+1}_j = \frac{\sum_{i=1}^{n} \alpha^k_{ij}}{\sum_{i=1}^{n} \sum_{l=1}^{m} \alpha^k_{il}}.$$
Then we write the normal equations for γ j :
$$\frac{\partial Q(\theta \mid \theta^k)}{\partial \gamma_j} = \sum_{i=1}^{n} \alpha^k_{ij} \left( \frac{d_i}{\gamma_j} - t_i e^{\beta^T z_i} \right) = 0$$
for 1 ≤ j ≤ m. Considering β as known we solve the above equations by setting
$$\gamma^{k+1}_j(\beta) = \frac{\sum_{i=1}^{n} \alpha^k_{ij} d_i}{\sum_{i=1}^{n} \alpha^k_{ij} t_i e^{\beta^T z_i}}, \quad 1 \leq j \leq m.$$
Thus, profiling the remaining part of $Q(\cdot \mid \theta^k)$ as a function of β, we estimate β by $\beta^{k+1} = \arg\max_{\beta \in \mathbb{R}^p} Q^{(\beta)}(\beta \mid \theta^k)$ where
$$Q^{(\beta)}(\beta \mid \theta^k) \propto \sum_{i=1}^{n} \sum_{j=1}^{m} \alpha^k_{ij} d_i \left[ \beta^T z_i - \log\left( \sum_{l=1}^{n} \alpha^k_{lj} t_l e^{\beta^T z_l} \right) \right].$$
Finally $\gamma^{k+1} = (\gamma^{k+1}_1(\beta^{k+1}), \dots, \gamma^{k+1}_m(\beta^{k+1}))$.
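Putting the updates of Example 1 together, a minimal vectorized sketch of one EM iteration (ours; it assumes d is coded 0/1 and z is an (n, p) array) could be:

```python
# Minimal vectorized sketch (ours) of one EM iteration for Example 1.
# alpha, gamma are length-m arrays, beta a length-p array.
import numpy as np
from scipy.optimize import minimize

def em_iteration_example1(t, d, z, alpha, gamma, beta):
    eta = np.exp(z @ beta)                                        # exp(beta^T z_i)
    te = (t * eta)[:, None]                                       # t_i exp(beta^T z_i)
    # E-step: posterior probabilities (4) for the exponential-baseline PHM
    dens = gamma[None, :] * eta[:, None] * np.exp(-gamma[None, :] * te)
    surv = np.exp(-gamma[None, :] * te)
    post = alpha[None, :] * np.where(d[:, None] == 1, dens, surv)
    post /= post.sum(axis=1, keepdims=True)
    # M-step: closed-form weights, then profile gamma_j(beta) and maximize over beta
    alpha_new = post.mean(axis=0)
    wd = post * d[:, None]                                        # alpha_ij^k d_i
    def profiled_negQ(b):
        e = np.exp(z @ b)
        denom = post.T @ (t * e)                                  # sum_i alpha_ij^k t_i e^{b'z_i}
        return -(np.sum(wd * (z @ b)[:, None]) - np.sum(wd.sum(axis=0) * np.log(denom)))
    beta_new = minimize(profiled_negQ, x0=beta).x
    gamma_new = wd.sum(axis=0) / (post.T @ (t * np.exp(z @ beta_new)))
    return alpha_new, gamma_new, beta_new
```

The sketch follows the order of the derivation above: closed-form weight update, profiling of γ_j(β), numerical maximization of the profiled criterion, and then the plug-in update of γ.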
Stochastic EM-like algorithms for semiparametric models
Hereafter we consider the semiparametric models 1-4. While parameter identifiability is generally well studied for parametric finite mixture models (see e.g. [START_REF] Teicher | Identifiability of mixtures of product measures[END_REF]), the identifiability of semi- or nonparametric finite mixture model parameters is generally a difficult task for which there are few general tools, with the exception of [START_REF] Allman | Identifiability of parameters in latent structure models with many observed variables[END_REF]. Even if this point is not discussed in detail here, we can briefly say that we obtained a partial identifiability result for θ in the model
$$\left\{ (x, z) \mapsto \bar G(x \mid z; \theta) = \alpha\, (\bar F(x))^{e^{\beta^T z}} + (1 - \alpha)\, (\bar F(x))^{\gamma + e^{\beta^T z}};\ \theta = (\alpha, \gamma, \beta, \bar F(\cdot)) \in \Theta = [0,1] \times (1, +\infty) \times \mathbb{R}^p \times S \right\}$$
where S is the class of continuous survival functions.
Stochastic EM-like principle
The missing data are the component numbers $J_1, \dots, J_n$, whose common distribution is Mult(1, α). Conditionally on Z = z, the survival function of X is defined by
$$S(x \mid z) = \sum_{j=1}^{m} \alpha_j \bar F_j(x \mid z, \theta)$$
where the survival functions $\bar F_j(x \mid z, \theta)$ are defined by one of the formulas for the models M1-M4. In the parametric set-up, the general principle of the Stochastic EM (St-EM) algorithm is to produce a sequence of iterates $\theta^k$ (a Markov chain) such that its ergodic mean converges to the unknown value of the Euclidean parameter θ (see [START_REF] Nielsen | The stochastic EM algorithm: Estimation and asymptotic results[END_REF]). In the semiparametric set-up there is only empirical evidence that the St-EM algorithm performs well (see, e.g., [START_REF] Bordes | Stochastic EM algorithms for parametric and semiparametric mixture models for right-censored lifetime data[END_REF]). Given the value of the parameter $\theta^k$ at the k-th iteration, the general St-EM algorithm proceeds through the following steps.
Step 1. For each item i ∈ {1, …, n} and j ∈ {1, …, m} calculate
$$\alpha^k_{ij} = \frac{\alpha^k_j \left(\lambda^k_j(t_i \mid z_i)\right)^{d_i} \bar F^k_j(t_i \mid z_i)}{\sum_{l=1}^{m} \alpha^k_l \left(\lambda^k_l(t_i \mid z_i)\right)^{d_i} \bar F^k_l(t_i \mid z_i)}.$$
Step 2. For each item i ∈ {1, …, n} simulate a realization $j^k_i$ of Mult(1, (α^k_{i1}, …, α^k_{im})) and define the m sets
$$\mathcal{X}^k_l = \{ i \in \{1, \dots, n\};\ j^k_i = l \}, \quad 1 \leq l \leq m.$$
We have $\cup_{l=1}^{m} \mathcal{X}^k_l = \{1, \dots, n\}$.
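A minimal sketch (ours) of this stochastic classification step, taking as input the matrix of posterior probabilities computed in Step 1, could be:

```python
# Minimal sketch (ours): from the posterior probabilities of Step 1 to a
# simulated label vector (j_i^k) and the partition X_1^k, ..., X_m^k.
import numpy as np

def stochastic_classification(post, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n, m = post.shape
    labels = np.array([rng.choice(m, p=post[i]) for i in range(n)])   # the j_i^k
    partition = [np.flatnonzero(labels == l) for l in range(m)]       # the sets X_l^k
    return labels, partition
```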
Step 3. Update the Euclidean parameters: for j ∈ {1, …, m}, $\alpha^{k+1}_j = \mathrm{Card}(\mathcal{X}^k_j)/n$. The update of the regression parameters depends on the model under consideration. We only detail the situation for the first two models M1 and M2; the other models can be handled similarly:
(3.1) for Model 1: Calculate
$$(\beta^{k+1}, \gamma^{k+1}) = \arg\max_{\beta \in \mathbb{R}^p,\ \gamma \in \mathbb{R}^{m-1}} L^{(1,k)}(\beta, \gamma),$$
where
$$L^{(1,k)}(\beta, \gamma) = \prod_{i=1}^{n} \left( \frac{\exp(\beta^T z_i + \gamma_{j^k_i})}{\sum_{l \geq i} \exp(\beta^T z_l + \gamma_{j^k_l})} \right)^{d_i}.$$
(3.2) for Model 2: First method, for j ∈ {1, . . . , m}
$$\beta^{k+1}_j = \arg\max_{\beta \in \mathbb{R}^p} L^{(2,k)}_j(\beta)$$
where
$$L^{(2,k)}_j(\beta) = \prod_{i \in \mathcal{X}^k_j} \left( \frac{\exp(\beta^T z_i)}{\sum_{l \geq i:\ l \in \mathcal{X}^k_j} \exp(\beta^T z_l)} \right)^{d_i}.$$
Second method
$$(\beta^{k+1}_1, \dots, \beta^{k+1}_m) = \arg\max_{(\beta_1, \dots, \beta_m) \in \mathbb{R}^{pm}} L^{(2,k)}(\beta_1, \dots, \beta_m),$$
where
$$L^{(2,k)}(\beta_1, \dots, \beta_m) = \prod_{i=1}^{n} \left( \frac{\exp(\beta^T_{j^k_i} z_i)}{\sum_{l \geq i} \exp(\beta^T_{j^k_l} z_l)} \right)^{d_i}.$$
This second method is based on a profile likelihood approach.
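For Model 1, the maximization of $L^{(1,k)}$ is an ordinary Cox partial likelihood fit in which the simulated component labels enter as m−1 dummy covariates (γ_1 = 0). A sketch reusing the fit_cox function from the earlier snippet (our notation, not the authors' code) could be:

```python
# Sketch (ours): Model 1 update as a standard Cox fit with m-1 component dummies
# (gamma_1 = 0). Reuses the fit_cox function sketched earlier.
import numpy as np

def m_step_model1(t, d, z, labels, m, fit_cox):
    order = np.argsort(t)                          # partial likelihood assumes sorted times
    dummies = np.eye(m)[labels][:, 1:]             # one-hot labels, first column dropped
    design = np.hstack([z, dummies])[order]
    coef = fit_cox(design, d[order], design.shape[1])
    beta = coef[: z.shape[1]]
    gamma = np.concatenate(([0.0], coef[z.shape[1]:]))
    return beta, gamma
```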
Step 4. Update the functional parameters: here as well we just detail the situation for M1 and M2.
(4.1) for Model 1:
$$\Lambda^{k+1}_0(t) = \sum_{i:\, t_i \leq t} \frac{d_i}{\sum_{l=i}^{n} \exp(z^T_l \beta^{k+1} + \gamma_{j^k_l})}.$$
(4.2) for Model 2:
$$\Lambda^{k+1}_0(t) = \sum_{i:\, t_i \leq t} \frac{d_i}{\sum_{l=i}^{n} \exp(z^T_l \beta^{k+1}_{j^k_l})}.$$
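A minimal sketch of this Breslow-type update (ours), assuming the data are sorted by increasing t_i and that risk contains the fitted relative risks of the model at hand, could be:

```python
# Sketch (ours): Breslow-type update of the baseline cumulative hazard (Step 4),
# assuming data sorted by increasing t_i and risk[l] = exp(z_l^T beta + gamma_{j_l})
# for Model 1, or exp(z_l^T beta_{j_l}) for Model 2.
import numpy as np

def breslow_update(d, risk):
    risk_set = np.cumsum(risk[::-1])[::-1]         # sum_{l >= i} risk_l
    return np.cumsum(d / risk_set)                 # Lambda_0^{k+1} evaluated at the t_i
```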
Step 5. Kernel estimators: for j ∈ {0, 1, . . . , m}
$$\lambda^{k+1}_j(t) = \sum_{i=1}^{n} \frac{1}{b} K\!\left(\frac{t - t_i}{b}\right) \Delta\Lambda^{k+1}_j(t_i),$$
where the bandwidth has to be tuned following, e.g., the rules proposed in [START_REF] Bordes | Stochastic EM algorithms for parametric and semiparametric mixture models for right-censored lifetime data[END_REF], and $\Delta\Lambda^{k+1}_j(t_i) = \Lambda^{k+1}_j(t_i) - \Lambda^{k+1}_j(t_i-)$.
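A sketch of this kernel smoothing step (ours), with a Gaussian kernel chosen arbitrarily for illustration, could be:

```python
# Sketch (ours): kernel smoothing of the jumps of the estimated cumulative hazard
# (Step 5); the Gaussian kernel is an arbitrary illustrative choice.
import numpy as np

def kernel_hazard(t_grid, t_obs, jumps, b):
    u = (t_grid[:, None] - t_obs[None, :]) / b     # jumps[i] = Lambda(t_i) - Lambda(t_i-)
    K = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return (K * jumps[None, :]).sum(axis=1) / b
```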
Remark. It is easy to check that for Models 1 and 2, since the baseline hazard rate is shared by all components, the baseline hazard rate can be factorized in the numerator and in the denominator of $\alpha^k_{ij}$, and hence cancels out. The consequence is that for these two models the above Step 5 can be skipped.
Numerical study and real data analysis
M1 in a parametric case, genuine EM algorithm
We propose here an experiment in the situation of Example 1, i.e. when the j-th component survival function is defined by $\bar F(t \mid \gamma_j, z) = \exp(-\gamma_j t e^{\beta^T z})$, corresponding to a parametric proportional hazard rate model with exponential (thus constant) baseline hazard rate function. In other words, the j-th component, given the covariate z, comes from the exponential distribution $\mathcal{E}(\gamma_j e^{\beta^T z})$. We choose here p = 2 independent binary covariates Z = (Z_1, Z_2), each Bernoulli B(0.5) distributed. We simulate an m = 2-component mixture with parameters α_1 = 30%, γ = (0.5, 0.1), β = (0.5, -0.5). The corresponding conditional survival functions are displayed in Fig. 1.
The EM algorithm requires, as always, initialization values for the parameter, θ (0) . For this m = 2 rather simple case, we defined a data-driven initialization: From Fig. 1 we can notice that the two component are somehow separated, whatever the values of the covariates. It is possible from an histogram of the data or prior expert opinion, to define a cutpoint τ and two sub-samples t 1 and t 2 defined by the non censored t k 's that are below or above τ . Then we set α (0) 1 as the proportion of non-censored observations belonging to t 1 , γ (0) j = 1/mean{t j }, and a non informative initialization β (0) = (1, 1). For m > 2, we suggest the common procedure consisting in exploring the parameter space, running EM algorithms from several (random) initializations, and optimizing in the maximum of the log-likelihood.
Fig. 2 shows a typical result in terms of the empirical distribution of the estimates, for 300 Monte-Carlo replications of samples of size n = 500, with a censoring distribution achieving an average censoring rate of 27%. The stopping criterion here is based on the numerical stabilization of the log-likelihood, as for any genuine EM. In the present case the EM's required an average of 330 iterations.
We now turn to a semiparametric example based on Model M1. The regression parameters are β = (0.5, -0.5) and γ_2 = 3 (γ_1 = 0 for identifiability). Hence the model parameters are (α, γ_2, β, F̄_0(·)). Fig. 3 shows the corresponding conditional densities over the range of the possible values for the covariates, together with a typical sample distribution of non-censored data from this model. The simulation of sample data is done by simulating the covariates, computing the conditional scales given each i and component j as $s_j(i) = b_0 \exp\left(-(\beta^T z_i + \gamma_j)/a_0\right)$, and simulating each duration
$(x_i \mid J = j, Z = z_i) \sim \mathcal{W}(a_0, s_j(i))$, a Weibull distribution with shape $a_0$ and the conditional scale. Then censoring is applied, for an average of 10% censored observations. As in Example 1 of Section 5.1, the algorithm requires an initialization, and this case is trickier than the previous one; in particular a "non informative" initialization of the parameters β as in Example 1 does not work well for this more complex model. We experiment here with a new data-driven initialization procedure. First, from Fig. 3 we can notice that the two components are somehow separated, whatever the values of the covariates, so that we start by defining a cutpoint τ and two sub-samples t_1 and t_2 from a histogram of the data or prior expert opinion, as in Section 5.1. A cutpoint τ = 2 has been chosen here. Then the procedure involves the following steps (a minimal sketch of the censored Weibull fit used in steps 1 and 2 is given after the list):
1. Fit a single Weibull distribution to the sample (t, d) to get initial values for step 2. This fit can be done by calling standard MLE packages for right-censored data from standard distributions. We use the survreg() function from the survival package [START_REF] Therneau | survival: Survival analysis, including penalised likelihood[END_REF] for the R statistical software (R Core Team, 2013).
2. Fit separately two Weibull distributions to each subsample (t_1, d_1) and (t_2, d_2), where d_ℓ are the censoring indicators corresponding to the lifetimes t_ℓ of the ℓ-th subsample. This is done by applying survreg() again, but with initial values provided by step 1.
3. Fit a two-component mixture of Weibull distributions with censored data to the whole sample (t, d) using the specific St-EM algorithm from [START_REF] Bordes | Stochastic EM algorithms for parametric and semiparametric mixture models for right-censored lifetime data[END_REF]. This St-EM itself requires initial parameters for the weight, shape and scale per component. The initial weight α^0_1 is defined as the proportion of observations belonging to t_1, and the shape and scale per component are the estimates obtained in step 2.
4. Using the posterior probabilities obtained by the St-EM algorithm in step 3, simulate a starting vector J^0 of component origin for each individual. This is similar to Step 2 of the Stochastic EM-like algorithm described in Section 4.1.
5. Fit a Cox PHM by applying the function coxph() to a model with covariates (z, J^0), i.e. (t, d) ∼ z_1 + z_2 + J^0. This gives initial values β^0, γ^0_2 and F̄_0(·).
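Since the original procedure relies on R's survreg() and coxph(), the following is only a minimal Python sketch (ours) of the censored Weibull maximum likelihood fit used in steps 1 and 2; the function name and parametrization are assumptions.

```python
# Minimal Python sketch (ours) of a censored Weibull maximum likelihood fit,
# the analogue of survreg() used in steps 1 and 2; parametrization (shape a,
# scale s) and names are our assumptions.
import numpy as np
from scipy.optimize import minimize

def fit_weibull_censored(t, d, shape0=1.0, scale0=None):
    scale0 = float(np.mean(t)) if scale0 is None else scale0
    def negloglik(par):
        a, s = np.exp(par)                          # log-parametrization enforces positivity
        logf = np.log(a / s) + (a - 1.0) * np.log(t / s) - (t / s) ** a
        logS = -(t / s) ** a
        return -np.sum(d * logf + (1.0 - d) * logS)
    res = minimize(negloglik, x0=np.log([shape0, scale0]))
    return np.exp(res.x)                            # estimated (shape, scale)
```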
We applied the above procedure to Monte-Carlo replications and several sample sizes from n = 500 to n = 5000, with good results suggesting "empirical convergence". An example is displayed in Fig. 4, which shows the empirical distribution of the estimates for the scalar parameters, and the estimates of the baseline F0 over replications, in the case of a sample of size n = 1000.
Discussion
We have proposed several iterative methods based on EM and Stochastic EM methodologies, for parametric and semiparametric PHM's designed for randomly right censored lifetime data. In particular, we have illustrated the behavior of these algorithms for a parametric model allowing for a genuine EM, and a more complex semiparametric model requiring a St-EM algorithm.
For both strategies, we defined data-driven automated initialization procedures that perform in a satisfactory manner. This question of initialization can indeed be delicate, as illustrated by the semiparametric model and St-EM algorithm, for which a multiple stage procedure involving itself several simpler models and algorithms has been designed.
Asymptotic variance of the St-EM estimates is only available for parametric models [START_REF] Nielsen | The stochastic EM algorithm: Estimation and asymptotic results[END_REF], but in the situations experimented through Monte-Carlo simulations, our algorithms provide good estimates and decreasing MSE's when the sample size increases, suggesting numerical evidence of convergence of these algorithms. All the algorithms shown here are implemented -and will be publicly available -in an upcoming version of the mixtools package [START_REF] Benaglia | mixtools: An R package for analyzing finite mixture models[END_REF] for the R statistical software [START_REF] Core | R: A Language and Environment for Statistical Computing[END_REF].
Figure 1: Example 1, true survival functions $\bar F(t \mid \gamma_j, z) = \exp(-\gamma_j t e^{\beta^T z})$ for each possible covariate value and each component.
Figure 2: Example 1, empirical distribution of EM estimates based on 300 replications of samples of size n = 500. Green dotted lines are true values, red lines are estimates averaged over replications.
Figure 4: Example 2, semiparametric model: empirical distributions of St-EM estimates based on 100 replications of a sample of size n = 1000. Green (dotted) lines or curves are true values, red lines are estimates averaged over replications.
Table 1 below gives numerical results for two sample sizes.
Figure 3: Semiparametric example: empirical distribution of a sample of size n = 1000 of non-censored data and the true density functions for each component over the range of covariates z.
Table 1: Estimated means, standard deviations and MSE's from 100 replications of the semiparametric St-EM algorithm. | 24,658 | [
"170008",
"743456"
] | [
"87850",
"98"
] |
01478776 | en | [
"shs"
] | 2024/03/04 23:41:46 | 2009 | https://hal.univ-lorraine.fr/hal-01478776/file/Corinne%20MARTIN_Camera%20Phone%20and%20Photography%20among%20French%20young%20Users_ICA_Chicago_2009.pdf | Corinne Martin
email: [email protected]
Camera Phone and Photography Among French Young Users
The cell phone now being firmly established among French young people (voice/SMS), what are the uses of the camera phone? Is a radical transformation of the social function of photography taking place that would entail a specificity of the uses of the cell phone device? Unquestionably, those everyday life photos, more spontaneous, more intimate and emotional too, are user-generated content that take part in the construction of personal and social identity in real time -it can indeed be noted that temporality in the act of taking pictures has changed. However, most of what used to be at stake in traditional photography is still important, for instance the issue of the trace, of the authentication and evidence of the past reality. But a new question arises: could it be that those pictures are less worthy of becoming images? Are they more precarious? For two constraints remain: one is economic, the other one is technical. What young people do, therefore, is set up a real rationality of the uses in order to arbitrate between the various devices available. The manufacturers/carriers' wish to have the cell phone become the one and ultimate device does not seem to have been fulfilled in France as yet.
The methodology applied is combined: one quantitative part contains a survey including short descriptions of almost 500 photos, and one qualitative part, which is based on semi-directive interviews among 20 persons between 18 and 24 years about their uses of camera phones compared with those of digital cameras.
Introduction
The cell phone has a special place within ICT's (Information and Communication Technologies), with an equipment rate that has soared, in hardly ten years, from 10 to 80% of the French population. Various explanations can be put forward to explain such a success. Within the context of the sociology of uses and the sociology of the family, we have brought out several categories of voice/SMS uses, among teenagers as well as among their parents (Martin, 2007a).
Teenagers use the cell phone, first of all, as a mediation tool with friends. It is also a means of expressing identity and developing autonomy, its logic of uses being part of the general process of individualization at work in contemporary families. It is, finally, a personal and personalized object that can somehow become a part of its owner, an embedded object. And young people, especially girls, enjoy using it. In June 2008, 99% of the 18 to 24-year-olds, and 76% of the 12-to-17-year-olds DRAFT precarious? Whereas Pierre Bourdieu regards photography as a way of transforming reality -since it is highly coded and codified 5 (Dubois, 1990) -Roland Barthes (1980), in La chambre Claire: note sur la photographie, offers another perspective, in which photography becomes a trace of reality. What is most important indeed, in such a perspective, is the overwhelming feeling of evidence provided by photography, with its famous "That-has-been" acting as proof of a past reality whose existence it testifies to, and which makes photography, by essence (because it is mechanically generated, i.e. the imprint resulting from a physico-chemical process), "a message without a code". Yet, of course, it will be coded and codified later, when received. From its production to its reception: Philippe Dubois (1990) lays as a basic fact the impossibility to think out a photograph without considering whatever action calls it into existence. Any photo is "consubstantially an action-picture, it being understood that the word 'action' here doesn't merely stand for the very gesture that actually produces the picture (when the photo is taken), but can also include the action of receiving and that of contemplating it [italics his]" (op. cit., p. 10). We thought it extremely important, therefore, to think out those pictures all along the whole process of photography -from posing and shooting to storing, contemplating, showing, and, finally, circulating and sharing them within sociability networks. Of course, Philippe Dubois's analysis of this process originally applied to traditional photography, but we chose to believe that, for this very reason, his analysis would help us make out the changes brought about by camera phone photography (and more generally by digital photography), in relation to temporality especially.
We have used combined methodology: at first, 252 questionnaires were dealt to bachelor year 1 and bachelor year 2 marketing students and to bachelor year 3 information and communication students (80% of them are 18 to 21, 60% of them are female). This sample does not claim to be of any statistical value; it aimed, first of all, at collecting names and phone numbers for the next stage of our investigation. Some of the questions nevertheless could be used from a qualitative point of view. The qualitative stage then was based on 20 semi directive 60 minute-interviews about the uses of digital camera (DC)/camera phone photography and video. And, finally, we were able to make use of some elements from an August 2007 TNS Sofrès survey 6 . We will at first examine the changes in the social function of camera phone photography -that is to say the specificity or non specificity of its social uses -by analyzing each stage of the process of photography, using digital camera photography as another point of comparison. We will thus explore and question the commonly held hypothesis that those pictures are 'precarious' pictures. Our second part will deal with the rationality of the uses. Then 5 After the first doctrine, the mimesis, which, from the very start of the XIX th century, saw photography as a true reflection of reality 6 Survey ordered by the AFOM (association of French mobile carriers). We would like to express our special thanks to Eric de Branche, communication manager, for allowing us to use data. Thanks to Laurence Bedeau, too, group manager at TNS Sofrès. This survey comes after another one entitled "The mobile phone today. Uses and social behaviours, 2 nd edition, June 2007" carried out for the AFOM by Joëlle Menrath and Anne Jarrigeon, available at www.afom.fr DRAFT in the third part we will evoke some of the most outstanding characteristics of those circulating pictures, studying self-staging videos more particularly.
A RADICAL CHANGE IN THE SOCIAL FUNCTION OF PHOTOGRAPHY?
We must start from this unquestionable fact: camera phone photography is a mass practice among the young. According to TNS Sofrès, 74% of those between 18 and 24 are concerned, and in our sample the figure goes up to 95%, among which 70% do it from 1 to 5 times a week (i.e. less than once a day). It is well under the number of SMS sent, which is between 30 and 48 a week 7 . We are now going to examine the successive stages in the process of photography, that is to say what is at stake in the issue of the memory, at first, and then in the shooting, posing, contemplating and showing, keeping and storing of those pictures, and their circulation and sharing. But let us start with the themes: what do they shoot?
Friends come first
The respondents in the TNS Sofrès survey were asked to evoke the various themes in their photos through a multiple-choice questionnaire. The "portraits or a group portraits (including family photos)" category comes easily first, ticked by 81% of the respondents -but no distinction is made here between family and friends. As for us, we asked the students to describe in detail the two photos they like best on their camera phone. We were thus able to collect a sample of 484 photos that we classified according to their subjects, and it turns out that "my friends", "me and my friends", "my boyfriend/girlfriend" and "my boyfriend/girlfriend and me" put together represent 50% of the favourite pictures. "Family" comes second, with nearly 20%. It is also to be noticed that the respondents themselves appear on one third of their favourite photos. A striking change has obviously taken place since Bourdieu's sociological analysis of the practice of photography. First, there has been a general increase of photography as a practice (cf. above): it is more intensive, and has expanded beyond special occasion photography 8 as described by Bourdieu (1965). Secondly, the individualization process at work in contemporary families (brought to light by the sociology of the family), together with the secularization of private/family events, have transformed the way people see and take pictures. Irène Jonas (2008) shows how family portraits have evolved toward more and more "naturalness": what we may call "affective" photos (those seeking to create and share intimate and authentic moments 9 ) have become just as important as more "traditional" pictures. This evolution is also deeply connected with technological evolution, that is to say with the appearance of digital 7 Credoc, 2008: 30 SMS a week for the young of 18-24 years old and 48 SMS a week for the 12-17 years old. 8 A majority (55%) of the favourite photos are what could be called "everyday" pictures, while 20% were taken during parties, 15% on holidays and 10% for family events. 9 In the same way, the representation of children (who have now acquired the status of fully-fledged individuals) has become central.
photography (for instance, more and more pictures are taken: 4 or 5 times as many as with traditional cameras, (Jonas, 2008). With the portable phone used as a personal object and autonomy tool by teenagers (Martin, 2007a), the individualization of photography practices10 has reached its logical outcome: it is therefore not surprising that friends and acquaintances come first in those young students' favourite themes, although such an evolution is unquestionably a real one, considering the other uses they make of the mobile phone. Another and last change is the appearance of the comic, that is to say of humorous or even burlesque situations, spectacularly staged and often gag-like, in a great number of pictures (even more so in videos, cf. below): we are now far away from the posed situations described by Pierre Bourdieu (1965), in which the individuals were dressed in their social clothes. The evolution toward less formal and less ritualized photos is undeniable. Yet there are two things we must not disregard. First, the fact that, along with everyday commonplace pictures, some of those friends-related pictures still act as a sort of consecration of friendly sociability, not a solemn one of course, but a consecration nevertheless. Let us listen to Lauriane describing the photos in her portable phone: "the party celebrating the end of academic year", or "my first romantic week-end with my boyfriend in Disneyland". Then, the fact that the family has not completely disappeared, far from it. Clara, among others, evokes one of her two favourite photos: "a family gathering with several generations sitting round a table". All those events, although secular, can be considered as socially important since they still provide benchmarks both for friend life and family life. And it is important to have them on pictures so that a trace of them can be preserved.
The memory, the trace
"In short, the referent adheres", says Roland Barthes (1980, p. 18), and the noema of the photograph, its essence, is nothing but the famous "That-has-been" (op. cit., p. 120), like an emanation from past reality -thus acting as an evidence of reality as much as an evidence of the past. It becomes obvious here that the power to authenticate, the assertion of existence, the trace, as it were, prevails over representation -which allows Barthes to state that a photograph is "a message without a code" (even if, subsequently, symbolization does take place, of course). As for Philippe Dubois (1990), he shows that a photo, as a sign, falls within the category of the index (such as defined in Ch. Peirce's semiotics), since the index is physically connected with the referent (i.e., in traditional photography, the physico-chemical imprint) 11 . The photo therefore becomes a proof of existence, and does so within the pragmatic register. We are told that the main evolution brought about by digital photography is that one can have immediate access to the pictures (they appear on the screen as soon as taken 12 ): its power of representation directly serves reality and thereby is part of its construction; the three key-DRAFT moments, event/capture/reception, become one (Jonas, 2008; Rivière, 2005). This is true indeed, and yet, even though those digital pictures are immediately available, the issue of the trace and the past seemed to definitely strike a deep chord in the respondents: whether in their answers to the questionnaire or in the interviews, they all evoked, like a leitmotiv, their desire to "keep a trace" of this or that event in their lives. And while talking about camera phone pictures they would often end up talking about those they take with their DC, no longer distinguishing between the two things. For instance, Aurélien says that "what it's really all about is helping memory to remember"-quite a nice phrase… For Arnaud, nicknamed "the paparazzi" by his friends because he is a compulsive "shooter" (with his phone as well as his DC):
You could say that, if you only go through it once, the event is just OK, but at the same time you take pictures, so it's kind of cool, you can recall it several times afterward, so you go through it again two or three or four times and then you really have the feeling of going through it, whereas if you only go through it once (he insists), and you don't take any pictures, then, with years going by, well, it just sort of fades away… "It just sort of fades away…" confirms the index-like quality of the photo: the physical connection between the event/referent and the picture appears clearly. Without the picture, indeed, the event will fade away, and therefore disappear, sinking into the past ineluctably. Therefore we can affirm that photos, even when digital and with no other material existence than the device itself, do work as "traces". Merely by being contemplated, they allow the event to be gone through again and again, indefinitely (endlessly?), so that it won't disappear, be wiped out by time. "The fact that photography can endlessly reproduce what 'actually' happened only once and will never happen again brings out what is particular and contingent" (Macmillan, 2008, p. 43). What we can see at work here is the pleasure principle, within and through repetition, completely cut off from reality -which allows one to dodge the unpleasant issues of death and the passing of time. "Enjoyment comes through pictures: this is the great change", says Roland Barthes (1980, p. 182). Those pictures are taken to share the present moment, therefore, but also in anticipation of the pleasure they will provide later on by allowing the viewers to look back into the past. But let's go back to the shooting and the posing.
The shooting, the posing
The cell phone often truly becomes part of its owner (Martin, 2007b), an embedded object that is always at hand and that its owner almost never parts with. This is why we all tend to think it will encourage spontaneity in the shooting. In the evolution of family portraits such as described by Irène Jonas (2008), "real life" pictures, showing unique moments of spontaneity and authenticity, already tend to replace posed pictures such as described by Bourdieu (1965). We could even add that this very issue of the singularity, or the uniqueness, of the moment, is not a new one, it has been recurring from the beginnings and throughout the history of photography. Doesn't Walter Benjamin (1998) evoke the DRAFT desire shared by all photographers to take snapshots and capture the present moment? And this issue remains during the all history of the photography (Batchen, 2008). It is no surprise, therefore, that this way of talking about photography should still be very often found among a majority of our respondents, so much so that it even becomes a stereotype: they all mention the possibility afforded by the camera phone to photograph "real life", "whenever the occasion is favourable", thereby showing to what extent this "real life" shooting has become the new standard way of taking pictures. We could also imagine that the camera phone will contribute to an even greater increase of picture-taking opportunities. It probably will -yet traditional photography, as Philippe Dubois analyzed it, already encouraged what he called compulsive practice, whose power "stems from its initial connection with the referent situation" (1990, p. 80): a presence asserting absence, an absence asserting presence. And the moment of the shooting, makes that presence/absence issue especially palpable, with the ultimate paradox of the photographed object disappearing in that very moment 13 , when it is thus saved from disappearing (since the picture will become a memory, be a substitute for absence) through its very disappearance… Compulsive practices, therefore, turn out to be inherent to photography, and if they tend to increase nowadays the reason is to be found in digital technology rather than in the camera phone itself. Let us now analyze that other moment related to the shooting: the posing. Philippe Dubois (1990) refers to Medusa: all those whose eyes meet hers are instantly turned to stone. Thus things should be simplified: with real life pictures, in which the camera phone is supposed to make photography much more spontaneous, posing shouldn't even be mentioned. Yet it is frequently mentioned by the respondents themselves, often when a comparison is made with videos: many of them find videos more lively, because of the very presence of sound, voice and movement. Nadia, for instance, says that "a video conveys more emotions. A photo destroys emotions because it's frozen". It seems that this idea of photos being frozen still affects digital photos. As for Lauriane, who describes herself as "very keen on taking pictures", she explains how, during parties:
Everybody says I'm bothering them, because, yes, I often take pictures and I have to admit it does cut things off somehow --you're having fun and then "Stay still, I'm taking the picture!"
We have mentioned the evolution of family portraits towards more naturalness, as described by Irène Jonas (2008), who points out "the farewell to 'Watch the birdie'" in all those new affective photos. It seems to us here that those real life pictures of spontaneous and authentic individuals are actually nothing else but a reflection of the new standards governing relationships within contemporary families: "Be yourself and be authentic" (however paradoxical, or even tiring, such an 13 Philippe Dubois (1990, p. 5) evokes the myth of Orpheus, who "can no longer stand it and, his resistance pressed by desire, finally breaks the taboo: taking all the risks, he turns to face his Eurydice, sees her, and in the very moment when his eyes recognize and catch her, she suddenly disappears. Thus any photo, as soon as taken, sends its object back into the dark kingdom of the dead, forever. Dead for having been seen."
injunction to invent and develop oneself may be 14 ). In short, what we can say is that if some real life party photos are indeed taken by those young people -stolen pictures of friend partiers (often, as a matter of fact, under the influence of alcohol, cf. what we said above about the comic and spectacular dimension) -some also still resort to posing, whenever it is important to gather the whole group and immortalize the moment 15 . This shows how pregnant the issue of posing remains, in spite of the appearance of camera phone photography. But what happens next? What do young people do with those pictures?
Storing and preserving the photos
Only 22% of the youngsters (TNS Sofrès) store 16 those pictures in their computers, and only 3% of them print them. During the interviews, it seemed that technical difficulties were massively the reason for this, since transferring the pictures from the phone into the computer remains a real problem. And in the case of those who have succeeded (through cables or Bluetooth), it is interesting to remark that they distinguish camera phone pictures from DC pictures in their classification, camera phone pictures being most often kept as such, in one "camera phone" file, without any specific order, while those taken with the DC are classified in files and sub-files, according to the event and the date.
Can we say that, on the whole, technical constraint is the only factor at work here, or is there some sort of specificity behind those practices, with camera phone photos being taken with no filing intentions (Rivière, 2005), contrary to what Pierre Bourdieu (1965) had found out about traditional photography? Whatever the answer is, this paradox remains: some of the respondents suggested an impossibility to erase them, like Lauriane, who says:
It would feel like erasing a memory, maybe… I don't know, it's all in the mind.
It looks as if the referent adheres as much as ever. Within our little sample, we found out that the standard way consisted in transferring the photos into the computer as often as possible, but rarely in erasing them, except for less valued ones (cf. infra). Yet we can still wonder, with Irène Jonas (2008) what will become of those computer-stored pictures and what those people will do with them. Will they last just as long as the device (computer or phone)? And what will become of memory (which is the result of sorting out, classifying, selecting)? In any case, the disposed of/disposable pictures hypothesis seems quite an interesting one to be explored in the future. As for now, those photos remain in the camera phone and can therefore be contemplated and shown around. 14 Cf. Ehrenberg A.: La fatigue d'être soi, Paris: O. Jacob, 1998 [Tired of Being Onself, O. Jacob, 2008]. 15 Observing people, in any tourist-attracting place, taking pictures of friend or family groups, is enough to realize that posing is still the norm -it being understood that such photos belong to the category of traditional pictures (such as analyzed by Bourdieu, 1965), along with more affective photos. 16 While hyper-storing (more than 60% of the photos taken, cf. API barometer 2007) is the norm with DC photos.
Contemplating and showing the photos
The cell phone being a very personal object and a self-reassuring tool (Martin, 2007a), the constant presence on oneself of one's photos in one's camera phone is obviously an important thing for the respondents: it provides reassurance and pleasure at the same time. Is it a sort of digital photo album 17 ? Daisuke Okabe (2004) identifies a first kind of camera phone usage pattern that he calls "personal archiving", which constitutes a resource for personal identity construction. In our survey, young people indeed do look at their photos now and then (52% of our respondents), or often (25%), but, during the interviews, the pleasure they take in contemplating the pictures was always evoked as related to printed photos in an album, or sometimes to those that can be viewed on the computer, very seldom to those on the screen of the camera phone. For Roland Barthes "The photo in itself is not in the least moving […], it moves me: which is what any adventure does" (1980, p. 39), and this happens because of the punctum, that stinging detail 18 . It is precisely that power of metonymical extension concentrated in the punctum that will restore the physical presence of the object/the person in the very picture. Now, it must be said that the poor quality of camera phone photos goes against them. This was a recurrent remark in all the interviews: although in technical terms they never go beyond the pixel explanation, the respondents complain about poor picture quality, using mere common sense arguments, like Arnaud: "You just have to transfer them into the computer to see the difference !". Amy Voida and Elizabeth D. Mynatt (2005) also evoke some usability complaints related to the bad quality of the camera phone photos. Therefore we shall lay the hypothesis that camera phone pictures have a sort of intrinsic flaw in them that makes them fail, as far as the punctum is concerned, to really affect the viewer's self. Let us listen to Cannelle:
Well, to me, it's not the same thing at all, I have no pleasure in looking at a camera phone photo, I can't say "Wow!", I really can't, because it's not good enough. I may look at it and say "OK, I understand what you're talking about, yeah, I can see what kind of place you live in", but I don't think I'd say "This is a splendid picture", no, never. [Question: Why? What is missing?] Well, it has to do with the size, the colours; it doesn't cause any click in me whatsoever, nothing that would make me think "Wow, this is a great picture!" Not with a camera phone picture, no way…
The "click": here Cannelle offers a splendid example of how the punctum works -that stinging detail generally absent from camera phone pictures. What is present, on the other hand, is the proof of existence -related to the authentication power of photography -that explains or shows that "one was there". Such a proof seems one of the basic motives for those 54% of the 18 to 24-year-olds (TNS 17 Digital photo albums sales have soared in the last few years. 18 As opposed to the studium -which evokes knowledge and culture -the punctum has to do with emotions "for punctum also means pinprick, little hole, little spot, little cut -and throw of the dice, too. The punctum of a photo is whatever in it chances to sting me (and wound me, and break my heart)" (Barthes, p. 49). "What the punctum of a photograph hits is the self of the viewer" (Macmillan, p. 41).
Sofrès) who show their photos to close relations -although to us it rather corresponds to the studium as described by Roland Barthes. And, though they do show their photos, close relations are but a small, not to say very small number of people. What is more, the respondents often say that if they do, it is because they are asked to, "I show them if people ask me to": could this be prudish reserve on their part when being confronted with what we may call a form of extimity? Anyhow, this phenomenon seems to us, for the time being, much less important than the sudden emergence of portable phones (and private conversations) as a new social object into public space in the years 94-97 (Martin, 2008). Now, after evoking the showing around of the photos in face-to-face interactions 19 , it is time for us to study their remote circulation within social networks. A circulation which, at first, seems like a new dimension of those camera phone photos, and yet…
Sharing and circulating the photos
Barbara Scifo (2005) speaks about MMS as a gift. Like SMS, they are indeed affectionate winks that young people address to each other, meant to reassert the emotional bond by referring to a common past and providing the possibility to convey emotions in the very moment when they are felt, almost instantaneously. This endows those photos with the function of communicating and representing reality instantaneously (Rivière, 2005). This is exactly what Angélique talks about:
For example, my cousin and I were once crazy about ficus, because my aunt has one at home and we just love it, and it had that little "ficus forever" label on it that we found funny, so we scanned it and put it on our cars, it became a sort of slogan, our slogan. And a few months later I was a trainee for some time at the Regional Council and there was a ficus in my office, so I took a photo and sent it to her, just for fun, as a sort of reminder, you see…
We can clearly see complicity here, at work through an almost coded language. But an important economic constraint remains: Richard Ling (2008) asserts that the pricing of the service is a significant barrier to general use 20 , and MMS are a "poor alternative", which is only used by young people in situations where there is a need for immediacy. From his part, Daisuke Okabe (2004) describes a second kind of camera phone usage pattern that he calls "intimate sharing", which creates a sense of "distributed co-presence" with close friends, family and loved ones who are not physically co-present. We can indeed agree but we also notice the photos are more shared in face-to-face interactions, by Bluetooth and we think that the photos taken with a DC circulate much more. With the computer -which remains at the centre -as a starting basis, various media intertwine, through e- 19 Julien Morel and Marc Relieu (2007) show how looking at those pictures can make conversation start (after one has picked up the other one's camera phone on the table to take a look at his/her photos). 20 In France, MMS are not a big success, since only 283 millions were sent in 2007, compared with 18.7 billion SMS, available at www.arcep.fr DRAFT mailing, blogs or internet Websites. What those young people want is precisely to share the pictures of shared moments, a party, for instance, or a holiday, as Charline explains:
From our holiday in Italy last year, we [2 couples of friends] brought back 1000 photos, so it is true we actually often have the same, with the four of us shooting the Leaning Tower of Pisa from the same angle, it's almost stupid somehow, but then you can store as many as you want and it costs nothing, so…
To us, this is where digital technology really changes things, since what people can do and try to do is to collect and possess everything (cf. above: hyper-storing "costs nothing"), that is to say all the photos, without exception, taken (with their DC) by those who were part of the event. So on the whole, now that we have examined all the moments in the action of photographing, drawing whatever final conclusion on the issue of the specificity or non specificity of the uses of camera phone pictures remains difficult. But let us go on and proceed with the comparison between camera phone photos and DC photos -a comparison made by young users themselves -and see how a real rationality of the uses is being set up.
A Real Rationality of the Uses
What first emerged as a major element from the interviews was the recurring remark made by the respondents about using their phones to take pictures only "if I don't have my DC with me", "temporarily" -for which there can be many reasons (forgetting the DC, fearing that it might be stolen or broken or lost, etc.). Actually, it seems that the camera phone is used as a camera only by default 21 .
The notion of a hierarchy can be made out in what the interviewees say. We may wonder, therefore, if it could be that those camera phone photos have less value. And if so, then why? Does it mean they are less worthy of becoming images? Or that they are acquiring a new status, as evoked in our disposed of/disposable pictures hypothesis? This issue probably holds quite a number of surprises in store for us, since we never heard such remarks made as far as voice/SMS uses of the cell phone were concerned: the respondents all mentioned the advantage of being reachable everywhere, all the time. What can actually be observed is the fact that every user sets up a real rationality of the uses with the various devices available, this rationality aiming at defining a general direction for practice and being the result of a personal analysis of all the criteria we have examined so far: picture quality, at first, depending on what one intends to do with the pictures (store them in the computer or not), material constraints related to the object (size of the DC and therefore possibility/impossibility to carry it constantly 22), technical constraints (format compatibility), economic constraints (free e-mails vs expensive MMS; economic value of the phone vs that of the DC), etc. And such arbitration is not absolute. It is, on the contrary, relative, since it is constantly updated according to whatever other devices are available, and the criteria constantly re-examined. As Lauriane puts it:
21 More than two thirds of our 252 respondents own a personal DC.
22 Compact DC can also become part of their owners.
The use of the cell phone depends on what other device is available.
Here is her story: a few years ago, for Christmas, she was offered a DC by her parents, and she used it, even though she now describes it as "not fantastic, really, when there's not enough light the pictures are horrible", but nevertheless "better than the phone I had back then, which was the first camera phone to be found, in fact". So when it was time for her to buy a new mobile phone, she took the photo criterion very attentively into account, and that is how she came to use her camera phone to take pictures, somewhat neglecting her DC. Then she went to university -an important time in her life since it meant new important friendships (cf. the "photos of end of academic year parties" or "the strike", two intense moments of her student life) -and met her "boyfriend" on campus, something she will later immortalize, since one of her favourite camera phone pictures is "my first romantic week-end with my boyfriend in Disneyland". At that time she thought of offering her boyfriend a DC which of course takes much better pictures than her camera phone, and it very quickly turned out that she would become the main user of her boyfriend's DC, always reminding him to take it with him whenever they went to a party. She admits she used her camera phone less and less, mainly because of technical reasons and the quality of the photos:
The problem with the camera phone is that it takes about 5 seconds before picture stabilizes, so it's quite hard to get a sharp picture.
Her parents, having finally got the message (she never uses her old DC any more), have just offered her a new one which of course is better than anything she has had so far. We can easily imagine that she has dropped all her other devices and now only uses that new DC ("so now I have it with me all the time") -which has truly become part of her by now. This particular case is meant to show how rationality of the uses sets in and can only be understood through considering other available devices as well as the user's singular story. We may add that such usage rules can also be set to work within the group of friends. Let us listen to Lucile, who doesn't own a DC herself, explaining why, in some specific situation, she did not think of using her camera phone:
Using the camera phone to shoot them [friend partiers] just doesn't occur to me, and then the viewfinder is so small, and you have to stand so far away just to have everybody in the frame […].
As a matter of fact, some time ago we had a party, and I simply didn't think of using my phone to take a picture, I mean… [question: why?] some of us had their DC, they think of taking it with them and using it, so knowing they will send us the photos, we don't take pictures.
It is clear here that those usage rules are most often born from technical problems that have to do with the poor quality of camera phone pictures, especially when the photos are meant to be preserved in order to remember some evening spent together. The photos taken with the DC are therefore those that will be important and will circulate, be exchanged and shared within the group of friends. The rationality of the uses therefore goes as far as taking into account elements from current situations and contexts, such as those related to the availability of other devices (and their characteristics) within the sociability network. Before reaching a conclusion, we have to briefly examine videos - but then why do it separately? Because, contrary to a "photo [which] stops movement" (Dubois, 1990, p. 173), video, like the cinema, is made of motion pictures and therefore allows narrativity to set in.
And this is precisely where most new things are to be found, it seems to us, as far as uses are concerned.
Videos: Staging Oneself?
According to TNS Sofrès, more than 43% of the 18 to 24-year-olds make videos with their camera phone. Among our 252 students, 73% do so. And in the interviews, some said they liked video because it is more lively than frozen photographs. Let us listen to Floriane:
You can see reality, what really took place before and after, whereas a photo is frozen. 'Cause a video may last a few minutes, which means nothing will be lost of that very moment you're going through […]. Video is closer to life, it allows you to go through the memory again and get as close to it as possible.
The issue of how to immortalize the memory is just as important here as it is with photography, but it seems that movements, duration, and, some will add, sound and voice, are what create that more lively, "more expressive" dimension (Romain) that "conveys more emotions" (Nadia). You can "see reality" (Emeline), which is no mean feat, and thereby get "as close as possible" to the memory. The tone here gets almost pompous, which shows how keen young people are on video. And the humorous dimension, that we have already spotted in photography, is almost consubstantial here with the forming of such slices of life. That is, on the one hand, parties between friends, very often on the "school-kid pranks" mode:
Very funny, with everybody singing, we all were slightly drunk (Caroline). A completely drunk friend who falls off his chair while crying with laughter (Emilien) Also in this category, are the videos of various feats, in which risk, of course, plays an important part23 . On the other hand, real staging and directing can be found, that nevertheless often pretends to be improvisations. Most of them are sketches, in which parody plays an important part.
The wildlife report I made on the Island of the Saulcy [campus] about the copulation of ducks. An incongruous scene caught in the act that deserved a little video with National Geographic-style commentaries.
Young people themselves are very often part of it, subject/object of such staging. Can we say that this is self-staging continuing reality TV shows? (Rivière, 2005). The comparison might be extreme -we shall have to keep an eye on the evolution of such a trend. However, after evoking those new dimensions of videos, we would like to make a few remarks on the limits that remain (thereby showing we are here in the middle of an unfinished process). On the one hand, those videos are still few. This is confirmed by the TNS Sofrès survey, since half of those who carry videos in their camera phones actually own less than five. On the other hand, camera phone videos are not that much circulated. Here again, the technical aspect prevails, as it is the case with photos: which accounts for the fact that concert videos -the most likely type of videos to be broadcast on Youtube or Dailymotion -will rarely be shot with a camera phone, the respondents say, because of poor quality… sound 24 .
From all this, it follows that even though camera phone photography is indeed a massive practice among the young, we must avoid drawing hasty conclusions about the specificity of the uses, keeping in mind the fact that we are faced with a long use-forming process. Koskinen and Kurvinen (2002), in one of the first surveys of mobile phone photos, published in 2002, speak of practices "repeating traditional practices". Amy Voida and Elizabeth D. Mynatt (2005) also notice the existence of inertial forces: in general, their participants just wanted to take the same kind of photos they had always taken 25. Daisuke Okabe (2004), by contrast, concludes that the function of the camera has shifted, by embodying the characteristics of the mobile phone as a "personal, portable, pedestrian" device. From our own survey, we can conclude that those photos are not all as spontaneous or 'real life' as one might have thought at first, and often do raise the issue of the memory and the trace. Of course, young people's themes unquestionably focus more on friendly sociability, including when they aim at sharing the present moment, but the family sphere is not absent for all that. It seems important to them to have those photos constantly at hand, but the pleasure they have in contemplating them on the phone itself is weakened by the flaw that characterizes those pictures as far as the punctum is concerned; and, finally, those photos circulate much less than DC photos and, above all, tend to acquire a precarious, not to say ephemeral, status - since they seem destined to last just as long as the artefact. That is why it seems obvious to us that a transformation of the social function of photography toward less ritualized and formal picture taking is indeed taking place - but to us such an evolution is much more the result of digital technology than of the camera phone itself. It would be easier, in any case, to say that there is a sort of continuum from camera phone to DC, in which a complex series of likenesses and differences interplay throughout the whole process of picture taking. Real dialectic tensions are at work between a global similarity of camera phone/DC uses, on the one hand, and, on the other hand, specialization depending on which of the artefacts is being used: we were indeed able to make out the tendency to use the DC in a thought-out, anticipated way, in parties or other events seen as socially important (this including affective photos), while the camera phone, on the other hand, tends to be reserved for everyday use - which could account for its lesser value. This is why it seems clear to us that the disposed of/disposable pictures hypothesis must be further looked into through some new research work. It seems that the newest and most specific characteristics of those pictures are to be found on the video side, but here we must insist on the fact that whatever the respondents say on the subject is still full of contradictions, as a result of the unfinished process of the forming of uses. It would be interesting, therefore, to build exhaustive corpora of all the pictures taken with and stored in the camera phone. And, at last, another research work could examine amateur photo reporting (a number of mainstream press and television media internet Websites are currently appealing for amateur pictures, agencies are being set up, go-between people are starting to collect those snapshots/instantaneous pictures taken by witnesses and sell them to the media, etc.).
But it turned out that our respondents, focusing on everyday uses, never spontaneously evoked such uses.
Finally, the technical dimension of the artefact (such as expressed in terms of picture quality, that is) is obviously a very important one, having a direct influence upon the practice and uses of photography whereas it remained somewhat secondary as far as voice/SMS uses of the cell phone were concerned. Pictures are indeed more difficult to deal with -which, for the time being, allows us to have reservations about manufacturers' and carriers' great hope: the camera phone as the one and ultimate device. In any case, the camera phone is still a device used to phone or send SMS. Maurizio Ferraris (2006) tries to tell the future and sees the cell phone, the only artefact that is both hand and tabula, as the absolute symbol of the return of the written word that will therefore fulfil the recording function that no human society could do without. But let him alone be responsible for such a prophecy…
Credoc, 2008, The diffusion of ICT in French society, CGTI (General Council of ICT's)/Arcep (electronic communication regulation authority), available at www.arcep.fr. Credoc is the Research Center for the Study of Living Conditions).
A survey ordered by the API, an association for the promotion of images, summary available at www.sipec.fr, the SIPEC being the union for companies dealing with pictures, photography and communication.
Among which 1 out of … owns more than one DC. From now on we will be using DC for "digital camera", as opposed to "camera phone".
The growing phenomenon of families owning several DC also contributes to individualization.
Whereas it is through likeness that the icon is related to the referent, the way the symbol relates to the referent is defined through general convention.
It must not be forgotten, though, that the Polaroid camera, the first instant-print photocamera, appeared as soon as the middle of the XX th century in the USA.
Cf. the works of David Le Breton, particularly: "Between Jackass and happy slapping, an erasing of shame", Adolescences, 2007, n°61, 3. But our respondents are way off happy slapping.
And whenever they are, they will be shown around to close relations to say "I was there", cf. supra.
Nearly two-thirds of the participants' photos were of classic 'Kodak Culture' subjects. | 45,091 | [
"11991"
] | [
"226789"
] |
01478968 | en | [
"info"
] | 2024/03/04 23:41:46 | 2017 | https://minesparis-psl.hal.science/hal-01478968/file/ICCAE_2017_Sportillo_et_al.pdf | Daniele Sportillo
Alexis Paljic
Mehdi Boukhris
Philippe Fuchs
Luciano Ojeda
Vincent Roussarie
An immersive Virtual Reality system for semi-autonomous driving simulation: a comparison between realistic and 6-DoF controller-based interaction
CCS Concepts: Human-centered computing → Human computer interaction. Keywords: Virtual Reality, Driving Simulation, HMD, Interaction devices, Semi-Autonomous vehicles, self-driving cars
This paper presents a preliminary study of the use of Virtual Reality for the simulation of a particular driving task: the control recovery of a semi-autonomous vehicle by a driver engaged in an attention-demanding secondary activity. In this paper the authors describe a fully immersive simulator for semi-autonomous vehicles and present the pilot study that has been conducted for determining the most appropriate interface to interact with the simulator. The interaction with the simulator is not only limited to the actual car control; it also concerns the execution of a secondary activity which aims to put the driver out of the loop by distracting him/her from the main driving task. This study evaluates the role of a realistic interface and a 6-DoF controllerbased interaction on objective and subjective measures. Preliminary results suggest that subjective indicators related to comfort, ease of use and adaptation show a significant difference in favor of realistic interfaces. However, task achievement performances do not provide decisive parameters for determining the most adequate interaction modality.
INTRODUCTION
Autonomous and semi-autonomous vehicles are likely to become a reality in the coming years. Progress toward full self-driving automation has already started with the introduction of systems able to automate some simple driving tasks. In the near future it is likely that systems able to perform a complete journey without human intervention will be introduced [START_REF] West | Moving forward: Self-driving vehicles in China, Europe, Japan, Korea, and the United States[END_REF]. Since autonomous cars will not require constant supervision, the driver will be free to undertake a secondary activity, such as talking to passengers, reading a book, or using a smartphone or tablet. These scenarios therefore require an interface within the vehicle to switch control (Transfer of Control) when the automated driving system notifies to the human driver that s/he should promptly begin or resume performance of the dynamic driving task (Take-Over) [START_REF] Blanco | Automated Vehicles: Take-Over Request and System Prompt Evaluation[END_REF] or when the human driver wishes to leave the control to the system (driving delegation).
In this context, Virtual Reality technologies can be deeply exploited. VR systems can be used not only as testing and validation environments but also as a training environment for people who are coming in contact with this kind of interface for the first time. The use of VR for this last purpose is the subject of this work. In this paper, we evaluate the use of Light Virtual Reality systems for the acquisition of skills for the Transfer of Control (ToC) between the human driver and the semiautonomous vehicle. Light refers to VR systems that are easy to setup and manage, not cumbersome and preferably low-cost. In this study, the intention to develop a simulator accessible anywhere for training a large number of people in a fast and reliable way suggests the need for light systems. The system will be used as a training environment where users can become familiar with the novel equipment in the vehicle and can learn how to properly interact to gain or release the driving control in a variety of everyday driving situations.
In this study this system and two simulation interfaces are presented, and the following research topic is addressed: the interaction in Virtual Reality driving environments with a particular focus on the interaction with a semi-autonomous vehicle. For this study an HMD is used as the display device for our Light Virtual Reality system. By wearing the HMD the user loses his/her capability to see the external world. This provides a high sense of immersion, while also preventing the use of traditional interaction devices such as keyboards and conventional joysticks. Considering the need to deploy a light and easy to set up system, it is important to investigate which kind of interaction device is the most adequate to simulate a control recovery task in a highly automated driving scenario.
RELATED WORKS
The use of Virtual Reality for driving simulation has been widely addressed by researchers. Several studies have been conducted with the purpose of evaluating the usability [START_REF] Schultheis | Examining the Usability of a Virtual Reality Driving Simulator[END_REF] and the physiological responses [START_REF] Deniaud | An Investigation into Physiological Responses in Driving Simulators: An Objective Measurement of Presence[END_REF] of a VR driving simulator, as well as the driving differences between the real and the virtual experience [START_REF] Milleville-Pennel | Driving for Real or on a Fixed-Base Simulator: Is It so Different? An Explorative Study[END_REF].
Regarding the simulation of autonomous vehicles, there are only a few ongoing studies about the simulation of critical scenarios [START_REF] Gechter | Towards a Hybrid Real/Virtual Simulation of Autonomous Vehicles for Critical Scenarios[END_REF] and the design of the interface [START_REF] Sadigh | User Interface Design and Verification for Semi-Autonomous Driving[END_REF][START_REF] Politis | Language-Based Multimodal Displays for the Handover of Control in Autonomous Cars[END_REF] for a complete driving simulation in a fully immersive virtual reality system. Most of the research in this field is aimed at evaluating driver behavior [START_REF] Dogan | Evaluating the Shift of Control between Driver and Vehicle at High Automation at Low Speed: The Role of Anticipation[END_REF][START_REF] Merat | Transition to Manual: Driver Behaviour When Resuming Control from a Highly Automated Vehicle[END_REF] and the cognitive load [START_REF] Johns | Effect of Cognitive Load in Autonomous Vehicles on Driver Performance during Transfer of Control[END_REF][START_REF] Hettinger | Virtual and Adaptive Environments: Applications, Implications, and Human Performance Issues[END_REF] during the Transfer of Control from the vehicle to the human driver.
Although to date, literature lacks studies on the use of Virtual Reality for training purposes on (semi-)autonomous vehicles, there are a few studies which have addressed the problem of regaining control of a semi-autonomous vehicle for drivers engaged in a secondary task [START_REF] Gold | Take over!" How long does it take to get the driver back into the loop?[END_REF][START_REF] Funkhouser | Reaction Times When Switching From Autonomous to Manual Driving Control A Pilot Investigation[END_REF]. In [START_REF] Gold | Take over!" How long does it take to get the driver back into the loop?[END_REF], the authors evaluated the point in time in which the driver's attention must be directed back to the driving task. In particular, they examined the take-over process of inattentive drivers engaged in an interaction with a tablet computer. Our pilot study is based mainly on this work; however, it differs by two aspects: the type of simulator (full vehicle mockup vs VR headset) and the secondary activity. In [START_REF] Funkhouser | Reaction Times When Switching From Autonomous to Manual Driving Control A Pilot Investigation[END_REF], the authors investigated reaction times with relation to the duration of autonomous driving before regaining control. They found that the longer the time disengaged from the driving task, the longer the reaction time.
Concerning interaction in a virtual environment, the literature includes several examples of work that evaluate the effects of realism on the user. In [START_REF] Mcmahan | Evaluating display fidelity and interaction fidelity in a virtual reality game[END_REF], the authors explore the differences in performance with respect to very high and very low levels of both display and interaction fidelity.
No previous work has attempted to determine the impact of different interaction devices for a Transfer of Control scenario in semi-autonomous vehicles. Therefore, we focus our study on this matter and present a new HMD-based simulator to investigate control recovery in a driving task.
USER STUDY
Ten subjects participated in the experiment that took place in our immersive simulator: they were asked to react to a request of control to avoid an obstacle on the road. For each subject, the experimental study consisted of two parts, executed in random order, which differed for the mode of interaction with the virtual environment.
The purpose of the experiment was to determine the most adequate interaction interface to be used in an HMD-based simulator to recover control of a semi-autonomous vehicle for drivers focused on an attention demanding secondary activity.
Simulator and Virtual Environment
The immersive simulator consists of a system for visually and acoustically presenting the virtual environment, and several devices for interacting with it. The simulator is able to display the virtual environment on a variety of systems, from simple screens to VR headsets and CAVEs. In this study an HTC Vive, which provides a 90 Hz refresh rate as well as high-frequency and low-latency orientation and position tracking, is used as the visualization system, and headphones are used as the acoustical system for playing 3D spatialized audio. For the driving task the simulator provides different interfaces with different levels of realism. In fact, it is possible to drive using a gaming steering wheel as well as a joystick and a smartphone (running an appropriate application).
In the proposed experiment, the users wear the HMD while situated inside a virtual environment resembling the interior of a car with which they have to interact [Fig. 1]. The driver is free to move inside the car, and s/he can control the longitudinal and lateral speed. A button on the dashboard allows the user to delegate the vehicle control to the autonomous system.
The simulated vehicle is able to perform simple automated driving tasks such as line-keeping and static and dynamic obstacle avoidance. Additionally, the system provides real-time data collection of relevant vehicle and user data.
The virtual environment is developed in Unity 3D. Graphically, inside the virtual environment, the vehicle is placed on a two-lane dual-carriageway road. Three guardrails delimit the carriageways (two for the outer limits and one in the middle) and props, such as trees, buildings and power-poles populate the roadsides. Moderate fake traffic is simulated in the two directions.
Secondary activity
Fig. 2 -The secondary activity on the tablet
In order to simulate a non-driving secondary activity, a 9.4 inch virtual tablet computer was placed on the right of the driver. During the autonomous driving phase, the subjects were asked to perform a non-driving activity involving interaction with the virtual tablet: they played some rounds of the memory skill game "Simon" [Fig. 2]. In each round of the game the device lights up one or more colored squares in a random order: the player must then reproduce that order. As the game progresses, the number of buttons that must be pressed increases. To implement the game, the tablet screen was split into 4 colored squares (red, green, yellow and blue), each of which represented one of the 4 game buttons. Simon was chosen as the non-driving activity because the game requires constant attention and fixed gaze in order to advance.
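As a rough illustration of the round logic described above, the sketch below mocks up the game in Python; the actual secondary task runs on the virtual tablet inside the Unity 3D environment, so the function names and the simulated player (with an arbitrary error probability) are our own illustrative assumptions, not part of the simulator.

```python
import random

COLORS = ["red", "green", "yellow", "blue"]  # the four squares of the virtual tablet


def play_simon(max_rounds=10, error_prob=0.05, seed=0):
    """Simulate the Simon rounds: the sequence grows by one colour per round
    and the player must reproduce it in order. Returns the completed rounds."""
    rng = random.Random(seed)
    sequence = []
    for round_idx in range(1, max_rounds + 1):
        sequence.append(rng.choice(COLORS))          # light up one more square
        # Mock player: reproduces the sequence, with a small chance of a wrong tap.
        answer = [c if rng.random() > error_prob else rng.choice(COLORS)
                  for c in sequence]
        if answer != sequence:
            return round_idx - 1                     # rounds completed before the mistake
    return max_rounds


if __name__ == "__main__":
    print("completed rounds:", play_simon())
```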
Interaction interfaces
In this experiment two different modes of interaction with a virtual reality driving environment are compared by evaluating objective and subjective criteria. The interaction consists of both controlling the longitudinal and lateral speed of the car and playing some rounds of the Simon game using a virtual tablet. The following describes the two interaction modalities chosen in the study: the first modality makes use of a steering wheel and direct user hand manipulation; the second uses the HTC tracked controllers. This selection was motivated by the following reasons:
- Steering wheel and pedals are the most realistic interfaces for driving tasks. They allow users to perform the driving task as they normally would in real life.
- Controllers are a general purpose device, but they are specifically designed for interaction in HMD-based Virtual Environments.
6-DoF controller-based interaction
Fig. 3 -6-DoF controller-based interaction
The first mode of interaction makes use of the two 6-DoF controllers provided by the HTC Vive [Fig. 3]. Inside the Virtual Environment the controllers are tracked in position and orientation via Lighthouse, the HTC Vive's tracking system. The controllers are used both to interact with the virtual tablet and to drive the vehicle in manual mode. To start driving the vehicle, the subject must join the controllers together [Fig. 3]. The longitudinal speed is then controlled with two trigger buttons on the controllers: the right trigger is used to increase the speed, while the left one is used to decrease the speed. The touchpad on the controller is used to interact with the virtual tablet. More precisely, the subject touches the pad to move a pointer on the virtual screen, and s/he clicks the pad to fire a click event at that point.
Realistic interaction
Fig. 4 -Realistic interaction
In the second mode, the participants use their hands, a gaming steering wheel and pedals to interact with the environment [Fig. 4]. During the manual driving phase the steering wheel is used to control the lateral speed of the vehicle, and the throttle and brake pedals are used to adjust the longitudinal speed. The real and the virtual steering wheel have the same size and position with respect to the user. In addition, the angle of the virtual steering wheel matches the angle of the real one. For the secondary task execution, we use a Leap Motion controller placed on the front face of the HMD to retrieve the relative position and orientation of the user's hands as well display a graphical representation. The contact between the index fingers of the user hands and the virtual tablet screen fires a click event in the contact point.
Take Over Request
Fig. 5 -Take Over Request displayed on the HUD
To communicate the Take Over Request (TOR), the automation system alerts the user with a sound and a visual message. The sound consists of a looped "beep" emitted through the vehicle speakers, while the visual message "TAKE OVER" [Fig. 5] is displayed on an HUD in front of the user with a ten second countdown; as soon as the driver takes back control, the TOR ends and the HUD displays the message "MANUAL". If after this period the user has not yet taken the control of the vehicle, the system employs an emergency brake.
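To make the timing explicit, the following Python sketch reproduces the TOR logic described above (10 s countdown, switch to MANUAL on take-over, emergency brake on timeout); the callbacks passed to the function are hypothetical stand-ins for the simulator's internals, which are implemented in Unity 3D.

```python
import time

TOR_TIMEOUT_S = 10.0  # countdown displayed on the HUD


def run_take_over_request(driver_has_control, emergency_brake, hud_show, beep):
    """Schematic TOR loop: alert the driver, count down, and trigger the
    emergency brake if control is not regained within the timeout.
    The four arguments are hypothetical callbacks into the simulator."""
    start = time.monotonic()
    while True:
        elapsed = time.monotonic() - start
        remaining = TOR_TIMEOUT_S - elapsed
        if driver_has_control():
            hud_show("MANUAL")            # driver took over: the TOR ends
            return "manual", elapsed      # elapsed approximates the reaction time
        if remaining <= 0.0:
            emergency_brake()             # driver did not react within 10 s
            hud_show("EMERGENCY BRAKE")
            return "emergency", elapsed
        hud_show(f"TAKE OVER  {remaining:4.1f} s")
        beep()                            # looped audio alert
        time.sleep(0.1)                   # polling period of this sketch


if __name__ == "__main__":
    t0 = time.monotonic()
    outcome, reaction = run_take_over_request(
        driver_has_control=lambda: time.monotonic() - t0 > 2.0,  # mock driver reacting after ~2 s
        emergency_brake=lambda: None,
        hud_show=lambda msg: None,
        beep=lambda: None,
    )
    print(outcome, f"{reaction:.2f} s")
```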
Design and variables
In a within-subject design we chose to study the impact of the interaction interface on the following sets of variables.
Objective measures:
- Response time: time needed to take back control of the vehicle after the alert notification.
- Driving stability after regain of control, in terms of the number of steering turns while avoiding the obstacle on the road.
Subjective measures:
A post-experience questionnaire designed to assess physical realism and comfort as well as ease of use and adaptation.
Participants
Ten subjects aged between 22 and 37 (mean = 27.6) years old participated in the experiment. All participants had normal or corrected-to-normal vision, and all the subjects except one had a valid driving license with 2 to 18 (mean = 8.22) years of driving experience.
Seven participants regularly play video games. Three participants did not have any previous virtual reality experience, while seven had previous virtual reality experiences (one of them had more than 10).
Procedures
Fig. 6 -Experiment timeline
The experiment contains 5 parts: (1) the pre-experience questionnaire to collect demographic data and information about driving skills and habits and previous experiences in Virtual Reality; (2)-(3) two simulator sessions, one for each interaction mode, executed in random order; (4)-(5) the post-experience questionnaire after each session to collect information about physical comfort, realism and acceptability. For this particular experiment, the maximum speed for the car was set to 80 km/h for the autonomous mode and 130 km/h during manual driving. After an acclimatization phase in which the subjects became familiar with the simulator, the virtual environment and the given interaction interface, they were asked to perform the following sequence of steps three times (cf. Fig. 6). Avoid obstacle: the subjects change lane, adjusting the longitudinal speed if needed, in order to avoid the obstacle on the road; after doing this, they return to the right lane.
Results
To evaluate the impact of the interaction interface on the driving task, relevant data such as position and orientation of the vehicle in the lane, and its longitudinal speed and steering angle were collected in real time during the experiment.
Based on these data we defined the following set of variables to describe the quality of the control recovery:
- Reaction time: time between the notification of the TOR and the actual regain of control.
- Number of steering oscillations: how many times the steering angle changes sign.
Tab. 1 shows the results.
Tab. 1 -Objective measures results
Variable                          6-DoF   Realistic
Reaction time (mean, [s])         2.17    2.67
Num. of steering turns (median)   8.5     5
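For illustration, the two objective indicators can be extracted from the logged time series along the lines of the following sketch; the log layout, the variable names and the small dead band used to ignore near-zero steering angles are our own assumptions rather than part of the simulator described above.

```python
import numpy as np


def reaction_time(t, manual_flag, t_tor):
    """Time between the TOR notification (t_tor) and the first logged sample
    in which the driver is back in manual control."""
    t = np.asarray(t, dtype=float)
    manual = np.asarray(manual_flag, dtype=bool)
    after_tor = (t >= t_tor) & manual
    return t[after_tor][0] - t_tor if after_tor.any() else np.inf


def steering_oscillations(steering_angle, dead_band=0.5):
    """Number of sign changes of the steering angle; samples within the dead
    band are dropped to avoid counting sensor noise around zero."""
    a = np.asarray(steering_angle, dtype=float)
    signs = np.sign(a[np.abs(a) > dead_band])
    return int(np.count_nonzero(np.diff(signs)))


if __name__ == "__main__":
    t = np.arange(0.0, 10.0, 0.1)
    manual = t >= 4.17                        # toy log: control regained 2.17 s after the TOR
    steer = 10 * np.sin(2 * np.pi * 0.5 * t)  # toy steering trace
    print(reaction_time(t, manual, t_tor=2.0))
    print(steering_oscillations(steer))
```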
The reaction time is better when the user interacts with the simulator using the 6-DoF controllers. However, since the number of steering turns is lower in the realistic condition, it appears that the subjects were able to control the vehicle in a more stable way using the steering wheel and pedals. The trajectories followed by the users are shown in Fig. 7 (steering wheel) and Fig. 8 (controllers). These images provide a qualitative representation of the concept of stability. In fact, we can observe from the trajectories that the use of the controllers to regain control produces a higher number of lane departures (pink zone) with respect to the use of the steering wheel and pedals.
With respect to the subjective measures, the participants expressed a preference for the realistic interface according to all the indicators. Fig. 9 shows the results of the post-experience questionnaire.
Fig. 9 -Subjective measures results
Considering the objective performance criterion, it is not possible to determine which of the two interaction modalities is the most adequate. This is because even if we have a lower reaction time with 6-DoF controllers, the stability of control recovery is better with the realistic interface. On the other hand, the indicators related to comfort and ease of use and adaptation provide us a clear predilection for the realistic interface.
CONCLUSION AND FUTURE WORK
This paper presents a highly immersive HMD-based simulator for semi-autonomous vehicles and a pilot study to evaluate the most adequate interface to drive and to perform a secondary activity during autonomous driving. The task proposed to the subjects was to recover control of the vehicle from autonomous driving while they were focused on a non-driving activity.
This paper analyzed two different interaction modalities. The first was based on a realistic interface and uses a steering wheel and pedals for driving and a finger-tracking device for performing the secondary activity (Simon game). The second interaction modality uses two hand-held 6-DoF controllers for the driving task and the embedded touchpad for the game activity. The subjective criteria, such as physical comfort and ease of use and adaptation, show that the subjects prefer the realistic way of interaction. With respect to objective measures, it is observed that realistic interaction elicits more stable trajectories, but the controller-based interface provides better reaction times.
Future work will aim at addressing the question of semiautonomous driving skills acquisition by end users. On the basis of this study, we will implement our Virtual Reality learning environment using realistic interfaces.
Fig. 1 - The driver POV
[Fig. 6]:
- Delegate control: the subjects press the button on the dashboard to delegate control of the vehicle to the autonomous system. The vehicle starts the autonomous journey with a maximum speed of 80 km/h.
- Perform the secondary activity: the subjects interact with the virtual tablet to perform the secondary activity, the Simon game.
- Regain control: the subjects continue the secondary activity until the TOR alerts them after 4, 5 or 6 completed rounds. The subjects react to the TOR, stopping the execution of the secondary activity and taking back control of the vehicle.
Fig. 7 - Steering wheel and pedals trajectories
Fig. 8 - 6-DoF controllers trajectories
ACKNOWLEDGMENTS
This research was supported by the French Foundation of Technological Research under grant CIFRE 2015/1392 for the doctoral work of D. Sportillo at PSA Group. | 21,913 | [
"12710"
] | [
"27997",
"136342",
"27997",
"27997",
"27997",
"136342",
"136342"
] |
01479163 | en | [
"math"
] | 2024/03/04 23:41:46 | 2016 | https://hal.science/hal-01479163/file/25-2016-1.pdf | Davide Baroli
email: [email protected]
Cristina Maria Cova
email: [email protected]
Simona Perotto
email: [email protected]
Lorenzo Sala
email: [email protected]
Alessandro Veneziani July
Hi-POD solution of parametrized fluid dynamics problems: preliminary results
Keywords: model reduction, POD, advection-diffusion-reaction problems, Navier-Stokes equations, modal expansion, finite elements
. These methods -under the name of Hi-Mod approximation -construct a solution as a finite element axial discretization, completed by a spectral approximation of the transverse dynamics. It has been demonstrated that Hi-Mod reduction significantly accelerates the computations without compromising the accuracy. In view of variational data assimilation procedures (or, more in general, control problems), it is crucial to have efficient model reduction techniques to rapidly solve, for instance, a parametrized problem for several choices of the parameters of interest. In this work, we present some preliminary results merging Hi-Mod techniques with a classical Proper Orthogonal Decomposition (POD) strategy. We name this new approach as Hi-POD model reduction. We demonstrate the efficiency and the reliability of Hi-POD on multiparameter advection-diffusion-reaction problems as well as on the incompressible Navier-Stokes equations, both in a steady and in an unsteady setting.
Introduction
The growing request of efficient and reliable numerical simulations for modeling, designing and optimizing engineering systems in a broad sense, challenges traditional methods for solving partial differential equations (PDEs). While general purpose methods like finite elements are suitable for high fidelity solutions of direct problems, practical applications often require to deal with multi-query settings, where the right balance between accuracy and efficiency becomes critical. Customization of methods to exploit all the possible features of the problem at hand may yield significant improvements in terms of efficiency, possibly with no meaningful loss in the accuracy required by engineering problems.
In this paper we focus on parametrized PDEs to model incompressible fluid dynamic problems in pipes or elongated domains. In particular, we propose to combine the Hi-Mod reduction technique, which is customized on problems featuring a leading dynamics triggered by the geometry, with a standard POD approach for a rapid solution of parametrized settings.
A Hi-Mod approximation represents a fluid in a pipe as a one-dimensional mainstream, locally enriched via transverse components. This separate description of the dynamics leads to the construction of "psychologically" 1D models, yet able to switch locally to a higher fidelity [START_REF] Ern | Hierarchical model reduction for advection-diffusion-reaction problems[END_REF][START_REF] Perotto | A survey of Hierarchical Model (Hi-Mod) reduction methods for elliptic problems[END_REF][START_REF] Perotto | Hierarchical local model reduction for elliptic problems: a domain decomposition approach[END_REF][START_REF] Perotto | Coupled model and grid adaptivity in hierarchical reduction of elliptic problems[END_REF][START_REF] Perotto | Hierarchical model reduction: three different approaches[END_REF]. The rationale behind a Hi-Mod approach is that a classical 1D model can be effectively improved by a spectral approximation of the transverse components. In fact, the high accuracy of spectral methods guarantees, in general, that a low number of modes suffices to obtain a reliable approximation, yet with contained computational costs.
POD is a popular strategy in design, assimilation and optimization contexts, and relies on the so-called offline-online paradigm [START_REF] Gunzburger | Perspectives in Flow Control and Optimization[END_REF][START_REF] Hinze | Error estimates for abstract linear-quadratic optimal control problems using proper orthogonal decomposition[END_REF][START_REF] Kahlbacher | Galerkin proper orthogonal decomposition methods for parameter dependent elliptic system[END_REF][START_REF] Rozza | Certified Reduced Basis Methods for Parametrized Partial Differential Equations[END_REF][START_REF] Volkwein | Proper Orthogonal Decomposition: Theory and Reduced-Order Modelling[END_REF]. The offline stage computes the (high fidelity) solution to the problem at hand for a set of samples of the selected parameters. Then, an educated basis (called POD basis) is built by optimally extracting the most important components of the offline solutions (called snapshots), collected in the so-called response matrix, via a singular value decomposition. Finally, in the online phase, the POD basis is used to efficiently represent the solution associated with new values of the parameters of interest, a priori unknown.
In the Hi-POD procedure, the Hi-Mod reduction is used to build the response matrix during the offline stage. Then, we perform the online computation by assembling the Hi-Mod matrix associated with the new parameter and, successively, by projecting such a matrix onto the POD basis. As we show in this work, Hi-POD demonstrates to be quite competitive on a set of multiparameter problems, including linear scalar advection-diffusion-reaction problems and the incompressible Navier-Stokes equations.
The paper is organized as follows. In Section 2, we detail the Hi-POD technique and we apply it to an advection-diffusion-reaction problem featuring six parameters, pinpointing the efficiency of the procedure. Section 3 generalizes Hi-POD to a vector problem, by focusing on the steady incompressible Navier-Stokes equations, while the unsteady case is covered in Section 4. Some conclusions are drawn in Section 5, where some hints for a possible future investigation are also provided.
2 Hi-POD reduction of parametrized PDEs: basics
Merging of Hi-Mod and POD procedures for parametrized PDEs has been proposed in [START_REF] Lupo Pasini | HI-POD: HIerarchical Model Reduction Driven by a Proper Orthogonal Decomposition for Advection-Diffusion-Reaction Problems[END_REF][START_REF] Lupo Pasini | [END_REF], in what we call the Hi-POD method. We briefly recall the two ingredients separately. Then, we illustrate a basic example of the Hi-POD technique.
The Hi-Mod setting
Let $\Omega \subset \mathbb{R}^d$ be a $d$-dimensional domain, with $d = 2, 3$, that makes sense to represent as $\Omega \equiv \bigcup_{x \in \Omega_{1D}} \{x\} \times \Sigma_x$, where $\Omega_{1D}$ is the 1D horizontal supporting fiber, while $\Sigma_x \subset \mathbb{R}^{d-1}$ represents the transverse section at $x \in \Omega_{1D}$. The reference morphology is a pipe, where the dominant dynamics occurs along $\Omega_{1D}$. We generically consider an elliptic problem in the form
$$\text{find } u \in V:\quad a(u, v) = F(v) \quad \forall v \in V, \qquad (1)$$
where $V \subseteq H^1(\Omega)$ is a Hilbert space, $a(\cdot,\cdot): V \times V \to \mathbb{R}$ is a coercive, continuous bilinear form and $F(\cdot): V \to \mathbb{R}$ is a linear and continuous form. Standard notation for the function spaces is adopted [START_REF] Lions | Non Homogeneous Boundary Value Problems and Applications[END_REF]. We refer to $u$ in (1) as the full solution. The solution to this problem is supposed to depend on some parameters, which we will highlight in our notation later on.
In the Hi-Mod reduction procedure, we introduce the space
$$V_m^h = \Big\{ v_m^h(x, y) = \sum_{k=1}^{m} \tilde v_k^h(x)\, \varphi_k(y), \ \text{with } \tilde v_k^h \in V_{1D}^h,\ x \in \Omega_{1D},\ y \in \Sigma_x \Big\},$$
where $V_{1D}^h \subset H^1(\Omega_{1D})$ is a discrete space of size $N_h$ and $\{\varphi_k\}_{k \in \mathbb{N}^+}$ is a basis of $L^2$-orthonormal modal functions used to describe the dynamics in $\Sigma_x$, for $x$ varying along $\Omega_{1D}$. For more details about the choice of the modal basis, we refer to [START_REF] Aletti | Educated bases for the HiMod reduction of advection-diffusion-reaction problems with general boundary conditions[END_REF][START_REF] Mansilla-Alvarez | Pipe-oriented hybrid finite element approximation in blood flow problems[END_REF][START_REF] Perotto | Hierarchical local model reduction for elliptic problems: a domain decomposition approach[END_REF], while $V_{1D}^h$ may be a classical finite element space [START_REF] Ern | Hierarchical model reduction for advection-diffusion-reaction problems[END_REF][START_REF] Perotto | Hierarchical local model reduction for elliptic problems: a domain decomposition approach[END_REF][START_REF] Perotto | Coupled model and grid adaptivity in hierarchical reduction of elliptic problems[END_REF][START_REF] Perotto | Hierarchical model reduction: three different approaches[END_REF] or an isogeometric function space [START_REF] Perotto | HIGAMod: a Hierarchical IsoGeometric Approach for MODel reduction in curved pipes[END_REF].
The modal index m ∈ N + determines the level of detail of the Hi-Mod reduced model. It may be fixed a priori, driven by some preliminary knowledge of the phenomenon at hand as in [START_REF] Ern | Hierarchical model reduction for advection-diffusion-reaction problems[END_REF][START_REF] Perotto | Hierarchical local model reduction for elliptic problems: a domain decomposition approach[END_REF], or automatically chosen via an a posteriori modeling error analysis as in [START_REF] Perotto | Coupled model and grid adaptivity in hierarchical reduction of elliptic problems[END_REF][START_REF] Perotto | Space-time adaptive hierarchical model reduction for parabolic equations[END_REF]. Index m can be varied along the domain to better capture local dynamics [START_REF] Perotto | Hierarchical model reduction: three different approaches[END_REF][START_REF] Perotto | Space-time adaptive hierarchical model reduction for parabolic equations[END_REF]. For simplicity, here we consider m to be given and constant along the whole domain (uniform Hi-Mod reduction).
For a given modal index m ∈ N + , the Hi-Mod formulation reads as
$$\text{find } u_m^h \in V_m^h:\quad a(u_m^h, v_m^h) = F(v_m^h) \quad \forall v_m^h \in V_m^h. \qquad (2)$$
The well-posedness of formulation (2) as well as the convergence of $u_m^h$ to $u$ can be proved under suitable assumptions on the space $V_m^h$ [START_REF] Perotto | Hierarchical local model reduction for elliptic problems: a domain decomposition approach[END_REF]. In particular, after denoting by $\{\vartheta_j\}_{j=1}^{N_h}$ a basis of the space $V_{1D}^h$, for each element $v_m^h \in V_m^h$ the Hi-Mod expansion reads
$$v_m^h(x, y) = \sum_{k=1}^{m} \sum_{j=1}^{N_h} \tilde v_{k,j}\, \vartheta_j(x)\, \varphi_k(y).$$
The unknowns of (2) are the $mN_h$ coefficients $\{\tilde u_{k,j}\}_{j=1,k=1}^{N_h,\,m}$ identifying the Hi-Mod solution $u_m^h$. The Hi-Mod reduction yields a system of $m$ coupled, "psychologically" 1D problems. For $m$ small (i.e., when the mainstream dominates the dynamics), the solution process competes with purely 1D numerical models, and the accuracy of the model can be improved locally by properly setting $m$. From an algebraic point of view, we solve the linear system $A_m^h \mathbf{u}_m^h = \mathbf{f}_m^h$, where $A_m^h \in \mathbb{R}^{mN_h \times mN_h}$ is the Hi-Mod stiffness matrix, $\mathbf{u}_m^h \in \mathbb{R}^{mN_h}$ is the vector of the Hi-Mod coefficients and $\mathbf{f}_m^h \in \mathbb{R}^{mN_h}$ is the Hi-Mod right-hand side.
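To fix ideas, the sketch below evaluates such a Hi-Mod expansion numerically, combining piecewise linear finite element functions on the supporting fiber with sinusoidal modes on a unit transverse section and a mode-wise ordering of the coefficients; the specific modal basis and section used here are illustrative assumptions rather than the only admissible choice.

```python
import numpy as np


def hat_functions(x, nodes):
    """Piecewise linear FE basis {theta_j} on the 1D fiber, evaluated at x."""
    x = np.atleast_1d(x)
    theta = np.zeros((len(nodes), len(x)))
    h = nodes[1] - nodes[0]                      # uniform partition of Omega_1D
    for j, xj in enumerate(nodes):
        theta[j] = np.clip(1.0 - np.abs(x - xj) / h, 0.0, None)
    return theta


def sine_modes(y, m):
    """L2-orthonormal sinusoidal modes {phi_k} on the section (0, 1)."""
    y = np.atleast_1d(y)
    return np.array([np.sqrt(2.0) * np.sin((k + 1) * np.pi * y) for k in range(m)])


def himod_eval(u_coeff, x, y, nodes, m):
    """Evaluate u_m^h(x, y) = sum_k sum_j u_{k,j} theta_j(x) phi_k(y),
    with u_coeff of length m * N_h ordered mode-wise."""
    N_h = len(nodes)
    theta = hat_functions(x, nodes)              # shape (N_h, n_x)
    phi = sine_modes(y, m)                       # shape (m, n_y)
    U = u_coeff.reshape(m, N_h)                  # row k: coefficients of mode k
    return np.einsum("kj,jx,ky->xy", U, theta, phi)


if __name__ == "__main__":
    nodes = np.linspace(0.0, 5.0, 122)           # 121 subintervals on Omega_1D
    m = 20
    u = np.random.default_rng(0).normal(size=m * len(nodes))
    vals = himod_eval(u, np.linspace(0, 5, 50), np.linspace(0, 1, 30), nodes, m)
    print(vals.shape)
```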
POD solution of parametrized Hi-Mod problems
Let us denote by α a vector of parameters the solution of problem (1) depends on. We reflect this dependence in our notation by writing the Hi-Mod solution as
$$u_m^h(\alpha) = u_m^h(x, y, \alpha) = \sum_{k=1}^{m} \sum_{j=1}^{N_h} \tilde u_{k,j}^{\alpha}\, \vartheta_j(x)\, \varphi_k(y), \qquad (3)$$
corresponding to the algebraic Hi-Mod system
$$A_m^h(\alpha)\, \mathbf{u}_m^h(\alpha) = \mathbf{f}_m^h(\alpha). \qquad (4)$$
The Hi-Mod approximation to problem (1) will be denoted interchangeably via (3) or by the vector $\mathbf{u}_m^h(\alpha)$. The goal of the Hi-POD procedure that we describe hereafter is to rapidly estimate the solution to (1) for a specific set $\alpha^*$ of data, by exploiting Hi-Mod solutions previously computed for different choices of the parameter vector. The rationale is to reduce the computational cost of the solution to (4), yet preserving reliability.
According to the POD approach, we exploit an offline/online paradigm, i.e.,
- we compute the Hi-Mod approximation associated with different samples of the parameter α to build the POD reduced basis (offline phase);
- we compute the solution for α* by projecting system (4) onto the space spanned by the POD basis (online phase).
The offline phase
We generate the reduced POD basis relying on a set of available samples of the solution computed with the Hi-Mod reduction. Even though offline costs are not usually considered when evaluating the advantage of a POD procedure, this stage may also introduce a computational burden when many samples are needed, as in multiparametric problems. The generation of snapshots with the Hi-Mod approach, already demonstrated to be significantly faster [START_REF] Mansilla-Alvarez | Pipe-oriented hybrid finite element approximation in blood flow problems[END_REF], mitigates the costs of this phase. The pay-off of the procedure is based on the expectation that the POD basis is of considerably smaller size than the order $mN_h$ of the Hi-Mod system. We will discuss this aspect in the numerical assessment.
Let S be the so-called response matrix, collecting p Hi-Mod solutions to (1), for p different values α i of the parameter, with i = 1, . . . , p. Precisely, we identify each Hi-Mod solution with the corresponding vector in (4),
$$\mathbf{u}_m^h(\alpha_i) = \big[\tilde u_{1,1}^{\alpha_i}, \ldots, \tilde u_{1,N_h}^{\alpha_i}, \tilde u_{2,1}^{\alpha_i}, \ldots, \tilde u_{2,N_h}^{\alpha_i}, \ldots, \tilde u_{m,N_h}^{\alpha_i}\big]^T \in \mathbb{R}^{mN_h}, \qquad (5)$$
the unknown coefficients being ordered mode-wise. Thus, the response (or snapshot) matrix S ∈ R (mN h )×p reads
$$S = \big[\mathbf{u}_m^h(\alpha_1), \mathbf{u}_m^h(\alpha_2), \ldots, \mathbf{u}_m^h(\alpha_p)\big] =
\begin{bmatrix}
\tilde u_{1,1}^{\alpha_1} & \tilde u_{1,1}^{\alpha_2} & \cdots & \tilde u_{1,1}^{\alpha_p}\\
\vdots & \vdots & & \vdots\\
\tilde u_{1,N_h}^{\alpha_1} & \tilde u_{1,N_h}^{\alpha_2} & \cdots & \tilde u_{1,N_h}^{\alpha_p}\\
\tilde u_{2,1}^{\alpha_1} & \tilde u_{2,1}^{\alpha_2} & \cdots & \tilde u_{2,1}^{\alpha_p}\\
\vdots & \vdots & & \vdots\\
\tilde u_{2,N_h}^{\alpha_1} & \tilde u_{2,N_h}^{\alpha_2} & \cdots & \tilde u_{2,N_h}^{\alpha_p}\\
\vdots & \vdots & & \vdots\\
\tilde u_{m,N_h}^{\alpha_1} & \tilde u_{m,N_h}^{\alpha_2} & \cdots & \tilde u_{m,N_h}^{\alpha_p}
\end{bmatrix} \in \mathbb{R}^{(mN_h)\times p}. \qquad (6)$$
The selection of representative values of the parameter is clearly critical for the effectiveness of the POD procedure. The more the snapshots cover the entire parameter space, the more effective the model reduction will be. This is a nontrivial, generally problem-dependent issue. For instance, in [START_REF] Huanhuan | Efficient estimation of cardiac conductivities via POD-DEIM model order reduction[END_REF] the concept of domain of effectiveness is introduced to formalize the region of the parameter space accurately covered by a snapshot in a problem of cardiac conductivity. In this preliminary work, we do not dwell on this aspect since we work on more general problems. A significant number of snapshots is anyhow needed to construct an efficient POD basis, the Hi-Mod procedure providing an effective tool for this purpose (with respect to a full finite element generation of the snapshots).
To establish a correlation between the POD procedure and statistical moments, we enforce the snapshot matrix to have null average by setting
$$R = S - \frac{1}{p}\sum_{i=1}^{p}
\begin{bmatrix}
\tilde u_{1,1}^{\alpha_i} & \tilde u_{1,1}^{\alpha_i} & \cdots & \tilde u_{1,1}^{\alpha_i}\\
\vdots & \vdots & & \vdots\\
\tilde u_{1,N_h}^{\alpha_i} & \tilde u_{1,N_h}^{\alpha_i} & \cdots & \tilde u_{1,N_h}^{\alpha_i}\\
\tilde u_{2,1}^{\alpha_i} & \tilde u_{2,1}^{\alpha_i} & \cdots & \tilde u_{2,1}^{\alpha_i}\\
\vdots & \vdots & & \vdots\\
\tilde u_{2,N_h}^{\alpha_i} & \tilde u_{2,N_h}^{\alpha_i} & \cdots & \tilde u_{2,N_h}^{\alpha_i}\\
\vdots & \vdots & & \vdots\\
\tilde u_{m,N_h}^{\alpha_i} & \tilde u_{m,N_h}^{\alpha_i} & \cdots & \tilde u_{m,N_h}^{\alpha_i}
\end{bmatrix} \in \mathbb{R}^{(mN_h)\times p}. \qquad (7)$$
By Singular Value Decomposition (SVD), we write
$$R = \Psi\,\Sigma\,\Phi^T, \quad \text{with } \Psi \in \mathbb{R}^{(mN_h)\times(mN_h)},\ \Sigma \in \mathbb{R}^{(mN_h)\times p},\ \Phi \in \mathbb{R}^{p\times p}.$$
Matrices $\Psi$ and $\Phi$ are unitary and collect the left and the right singular vectors of $R$, respectively. Matrix $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_q)$ is pseudo-diagonal, $\sigma_1, \sigma_2, \ldots, \sigma_q$ being the singular values of $R$, with $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_q$ and $q = \min\{mN_h, p\}$ [START_REF] Golub | Matrix Computations[END_REF]. In the numerical assessment below, we take $q = p$.
The POD (orthogonal) basis is given by the $l$ left singular vectors $\{\psi_i\}$ associated with the $l$ most significant singular values, with $l \ll mN_h$. Different criteria can be pursued to select those singular values. A possible approach is to select the first $l$ ordered singular values such that $\sum_{i=1}^{l}\sigma_i^2 \,/\, \sum_{i=1}^{q}\sigma_i^2 \ge \varepsilon$ for a positive user-defined tolerance $\varepsilon$ [START_REF] Volkwein | Proper Orthogonal Decomposition: Theory and Reduced-Order Modelling[END_REF]. The reduced POD space then reads $V_{POD}^l = \mathrm{span}\{\psi_1, \ldots, \psi_l\}$, with $\dim(V_{POD}^l) = l$. Equivalently, we can identify the POD basis by applying the spectral decomposition to the covariance matrix $C \equiv R^T R$ (being $mN_h \ge p$). As is well known, the right singular vectors of $R$ coincide with the eigenvectors $c_i$ of $C$, with eigenvalues $\lambda_i = \sigma_i^2$, for $i = 1, \ldots, p$. Thus, the POD basis functions read $\psi_i = \lambda_i^{-1} S c_i$ [26].
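In matrix terms, the offline stage thus amounts to a few lines of dense linear algebra; the following NumPy sketch (with the snapshots stored column-wise and the tolerance-based choice of l described above) is meant only as an illustration of the procedure, the random snapshots in the demo being a stand-in for actual Hi-Mod solutions.

```python
import numpy as np


def build_pod_basis(S, eps=0.999):
    """Offline phase: center the snapshot matrix, take its SVD and keep the
    smallest l such that sum_{i<=l} sigma_i^2 / sum_i sigma_i^2 >= eps.
    S has shape (m*N_h, p), one Hi-Mod snapshot per column."""
    R = S - S.mean(axis=1, keepdims=True)           # matrix R in (7)
    Psi, sigma, _ = np.linalg.svd(R, full_matrices=False)
    energy = np.cumsum(sigma**2) / np.sum(sigma**2)
    l = int(np.searchsorted(energy, eps) + 1)
    return Psi[:, :l], sigma, l


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S = rng.normal(size=(20 * 122, 30))             # p = 30 snapshots of size m*N_h
    Psi_l, sigma, l = build_pod_basis(S, eps=0.97)
    print("retained POD modes:", l, "basis shape:", Psi_l.shape)
```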
The online phase
We aim at rapidly computing the Hi-Mod approximation to problem (1) for the parameter value α * not included in the sampling set {α i } p i=1 . For this purpose, we project the Hi-Mod system (4), with α = α * , onto the POD space V l POD , by solving the linear system
$$A_{POD}^{\alpha^*}\, \mathbf{u}_{POD}^{\alpha^*} = \mathbf{f}_{POD}^{\alpha^*}, \quad \text{with } A_{POD}^{\alpha^*} = (\Psi_{POD}^l)^T A_m^h(\alpha^*)\, \Psi_{POD}^l \in \mathbb{R}^{l\times l},\ \ \mathbf{f}_{POD}^{\alpha^*} = (\Psi_{POD}^l)^T \mathbf{f}_m^h(\alpha^*) \in \mathbb{R}^{l}\ \text{ and }\ \mathbf{u}_{POD}^{\alpha^*} = [u_{POD,1}^{\alpha^*}, \ldots, u_{POD,l}^{\alpha^*}]^T \in \mathbb{R}^{l},$$
where $A_m^h(\alpha^*)$ and $\mathbf{f}_m^h(\alpha^*)$ are defined as in (4), and $\Psi_{POD}^l = [\psi_1, \ldots, \psi_l] \in \mathbb{R}^{(mN_h)\times l}$ is the matrix collecting, by column, the POD basis functions.
By exploiting the POD basis, we write
$$\mathbf{u}_m^h(\alpha^*) \approx \sum_{s=1}^{l} u_{POD,s}^{\alpha^*}\, \psi_s.$$
The construction of $A_{POD}^{\alpha^*}$ and $\mathbf{f}_{POD}^{\alpha^*}$ requires the assembly of the Hi-Mod matrix and right-hand side for the value $\alpha^*$, subsequently projected onto the POD space. Thus, even in the basic POD online phase we need, in general, to assemble the full problem, and the Hi-Mod model, featuring a lower size than a full finite element problem, gives a computational advantage. In addition, the final solution is computed by solving an $l \times l$ system as opposed to the $mN_h \times mN_h$ Hi-Mod system, with a clear overall computational advantage, as we verify hereafter.
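Algebraically, the online stage then reduces to a Galerkin projection onto the l retained modes, as in the following sketch; the stand-in matrix and right-hand side in the demo replace the actual Hi-Mod assembly for α*, which is problem-specific and not reproduced here.

```python
import numpy as np


def hipod_online(A_hm, f_hm, Psi_l):
    """Online phase: project the Hi-Mod system assembled for alpha* onto the
    POD space and recover the reduced-order approximation of u_m^h(alpha*)."""
    A_pod = Psi_l.T @ A_hm @ Psi_l                  # l x l reduced matrix
    f_pod = Psi_l.T @ f_hm                          # l-dimensional right-hand side
    u_pod = np.linalg.solve(A_pod, f_pod)
    return Psi_l @ u_pod                            # Hi-POD approximation, length m*N_h


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, l = 20 * 122, 6                              # m*N_h and number of POD modes
    A = np.eye(n) + 0.01 * rng.normal(size=(n, n))  # stand-in for the Hi-Mod matrix
    f = rng.normal(size=n)
    Psi_l, _ = np.linalg.qr(rng.normal(size=(n, l)))
    print(hipod_online(A, f, Psi_l).shape)
```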
Numerical assessment
In this preliminary paper, we consider only 2D problems, the 3D case being a development of the present work. We consider the linear advection-diffusion-reaction (ADR) problem [START_REF] Temam | Navier-Stokes Equations. Theory and Numerical Analysis[END_REF]
$$\begin{cases}
-\nabla\cdot\big(\mu(x)\nabla u(x)\big) + b(x)\cdot\nabla u(x) + \sigma(x)\,u(x) = f(x) & \text{in } \Omega\\
u(x) = 0 & \text{on } \Gamma_D\\
\mu(x)\,\dfrac{\partial u}{\partial n}(x) = 0 & \text{on } \Gamma_N,
\end{cases} \qquad (8)$$
with $f(x) = f_1\,\chi_{C_1}(x) + f_2\,\chi_{C_2}(x)$ for $f_1, f_2 \in [5, \ldots]$, and where the function $\chi_\omega$ denotes the characteristic function associated with the generic domain $\omega$, with $C_1 = \{(x, y) : (x - 1.5)^2 + 0.4\,(y - 0.25)^2 < 0.01\}$ and $C_2 = \{(x, y) : (x - 0.75)^2 + 0.4\,(y - 0.75)^2 < 0.01\}$ identifying two ellipsoidal areas in $\Omega$. According to the notation in (1), we therefore set $V \equiv H^1_{\Gamma_D}(\Omega)$, $a(u, v) \equiv (\mu\nabla u, \nabla v) + (b\cdot\nabla u + \sigma u, v)$ for any $u, v \in V$, and $F(v) = (f, v)$ for any $v \in V$, with $(\cdot,\cdot)$ denoting the $L^2(\Omega)$-scalar product.
In the offline phase, we select p = 30 problems, by randomly varying the coefficients $\mu_0$, $\sigma$, $b_1$, $b_2$, $f_1$ and $f_2$ in the corresponding ranges, so that $\alpha \equiv [\mu_0, \sigma, b_1, b_2, f_1, f_2]^T$.
We introduce a uniform partition of Ω 1D into 121 subintervals, and we Hi-Mod approximate the selected p problems, combining piecewise linear finite elements along the 1D fiber with a modal expansion based on 20 sinusoidal functions along the transverse direction.
In the online phase, we aim at computing the Hi-Mod approximation to problem (8) for $\alpha = \alpha^* = [\mu_0^*, \sigma^*, b_1^*, b_2^*, f_1^*, f_2^*]^T$, with $\mu_0^* = 2.4$, $\sigma^* = 0$, $b_1^* = 5$, $b_2^* = 1$, $f_1^* = f_2^* = 10$.
Figure 1 shows a Hi-Mod reference solution, u R,h m , computed by directly applying Hi-Mod reduction to (8) for α = α * , with the same Hi-Mod discretization setting used for the offline phase.
This test is intended to demonstrate the reliability of Hi-POD in constructing an approximation of the Hi-Mod solution (that, in turn, approximates the full solution $u$), with a contained computational cost. Figure 2 shows the spectrum of the response matrix $R$ in (7). As highlighted by the vertical lines, we select four different values for the number $l$ of POD modes, i.e., $l = 2, 6, 19, 29$. For these choices, the ratio $\sum_{i=1}^{l}\sigma_i^2 / \sum_{i=1}^{q}\sigma_i^2$ assumes the value 0.780 for $l = 2$, 0.971 for $l = 6$, and 0.999 for $l = 19$ (and, clearly, 1 for $l = 29$). The singular values for this specific problem decay quite slowly. This is due to the presence of many (six) parameters, so that the redundancy of the snapshots (which triggers the decay) is quite limited.
Nevertheless, despite this slow decay, we observe that the Hi-POD solution still furnishes a reliable and rapid approximation of the solution for the value α*. Precisely, Figure 3 shows the Hi-Mod approximation provided by Hi-POD, for l = 2, 6, 19 (top-bottom). We stress that six POD modes are enough to obtain a Hi-Mod reduced solution which, qualitatively, exhibits the same features as u R,h m . Moreover, the contribution of the singular vectors for l > 19 brings no further improvement. We also notice that the results for l = 6 are excellent, since six scalar parameters influence the solution. Table 1 provides more quantitative information. We collect the L 2 (Ω)- and the H 1 (Ω)-norm of the relative error obtained by replacing the Hi-Mod reference solution with the one provided by the Hi-POD approach. As expected, the error diminishes as the number of POD modes increases.
Hi-POD reduction of the Navier-Stokes equations
We generalize the Hi-POD procedure in Section 2.2 to the incompressible Navier-Stokes equations [START_REF] Temam | Navier-Stokes Equations. Theory and Numerical Analysis[END_REF]. We first consider the stationary problem
$$\begin{cases}
-\nabla\cdot\big(2\nu D(\mathbf{u})\big)(x) + (\mathbf{u}\cdot\nabla)\,\mathbf{u}(x) + \nabla p(x) = \mathbf{f}(x) & \text{in } \Omega\\
\nabla\cdot\mathbf{u}(x) = 0 & \text{in } \Omega\\
\mathbf{u}(x) = \mathbf{0} & \text{on } \Gamma_D\\
\big(D(\mathbf{u}) - pI\big)(x)\,\mathbf{n} = g\,\mathbf{n} & \text{on } \Gamma_N,
\end{cases} \qquad (9)$$
with $\mathbf{u} = [u_1, u_2]^T$ and $p$ the velocity and the pressure of the flow, respectively, $\nu > 0$ the kinematic viscosity, $D(\mathbf{u}) = \frac{1}{2}\big(\nabla\mathbf{u} + (\nabla\mathbf{u})^T\big)$ the strain rate, $\mathbf{f}$ the force per unit mass, $\mathbf{n}$ the unit outward normal vector to the domain boundary $\partial\Omega$, $I$ the identity tensor, $g$ a sufficiently regular function, and where $\Gamma_D$ and $\Gamma_N$ are defined as in (8). We apply a standard Picard linearization of the nonlinear term:
$$\begin{cases}
-\nabla\cdot\big(2\nu D(\mathbf{u}^{k+1})\big) + \big(\mathbf{u}^{k}\cdot\nabla\big)\,\mathbf{u}^{k+1} + \nabla p^{k+1} = \mathbf{f} & \text{in } \Omega\\
\nabla\cdot\mathbf{u}^{k+1} = 0 & \text{in } \Omega\\
\mathbf{u}^{k+1} = \mathbf{0} & \text{on } \Gamma_D\\
\big(D(\mathbf{u}^{k+1}) - p^{k+1} I\big)\,\mathbf{n} = g\,\mathbf{n} & \text{on } \Gamma_N,
\end{cases}$$
where $\{u^j, p^j\}$ denotes the unknown pair at iteration j. The stopping criterion of the Picard iteration is based on the increment between two consecutive iterations. Problem (9) is approximated via a standard Hi-Mod technique, for both the velocity and the pressure, where a modal basis constituted by orthogonal Legendre polynomials, adjusted to include the boundary conditions, is used. Finite elements are used along the centerline. The finite-dimensional Hi-Mod spaces for velocity and pressure obtained by the combination of different discretization methods need to be inf-sup compatible. Unfortunately, no proof of compatibility is currently available, even though some empirical strategies based on the Bathe-Chapelle test are available [START_REF] Mansilla-Alvarez | Pipe-oriented hybrid finite element approximation in blood flow problems[END_REF][START_REF] Guzzetti | Hierarchical Model reduction solvers for the incompressible Navier-Stokes equations in cylindrical coordinates[END_REF]. In particular, here we take piecewise quadratic velocity/linear pressure along the mainstream, and the numbers $m_p$, $m_u$ of pressure and velocity modes are set such that $m_u = m_p + 2$. Numerical evidence suggests this to be an inf-sup compatible choice [START_REF] Aletti | Educated Bases for Hierarchical Model Reduction in 2D and 3D[END_REF][START_REF] Guzzetti | Hierarchical Model reduction solvers for the incompressible Navier-Stokes equations in cylindrical coordinates[END_REF]. Finally, the same number of modes is used for the two velocity components, for the sake of simplicity.
We denote by V h,u 1D ⊂ H 1 (Ω 1D ) and by V h,p 1D ⊂ L 2 (Ω 1D ) the finite element space adopted to discretize u 1 , u 2 and p, respectively along Ω 1D , with dim(V h,u 1D ) = N h,u and dim(V h,p 1D ) = N h,p . Thus, the total number of degrees of freedom involved by a Hi-Mod approximation of u and p is N u = 2m u N h,u and N p = m p N h,p , respectively.
From an algebraic viewpoint, at each Picard iteration we solve the linear system (we omit the index $k$ for ease of notation)
$$S^h_{\{m_u,m_p\}}\,\mathbf{z}^h_{m_u,m_p} = \mathbf{F}^h_{\{m_u,m_p\}}, \qquad (10)$$
where
$$S^h_{\{m_u,m_p\}} = \begin{bmatrix} C^h_{\{m_u,m_u\}} & [B^h_{\{m_u,m_p\}}]^T \\[2pt] B^h_{\{m_u,m_p\}} & 0 \end{bmatrix} \in \mathbb{R}^{(N_u+N_p)\times(N_u+N_p)},$$
with $C^h_{\{m_u,m_u\}} \in \mathbb{R}^{N_u\times N_u}$, $B^h_{\{m_u,m_p\}} \in \mathbb{R}^{N_p\times N_u}$ the Hi-Mod momentum and divergence matrices, respectively, $\mathbf{z}^h_{m_u,m_p} = [\mathbf{u}^h_{m_u}, \mathbf{p}^h_{m_p}]^T \in \mathbb{R}^{N_u+N_p}$ the vector of the Hi-Mod solutions, and where $\mathbf{F}^h_{\{m_u,m_p\}} = [\mathbf{f}^h_{m_u}, \mathbf{0}]^T \in \mathbb{R}^{N_u+N_p}$, with $\mathbf{f}^h_{m_u}$ the Hi-Mod right-hand side of the momentum equation.
When coming to the Hi-POD procedure for problem (9), we follow a segregated procedure, where a basis function set is constructed for the velocity and another one for the pressure. The effectiveness of this reduced basis in representing the solution for a different value of the parameter is higher than with a monolithic approach, where a unique POD basis is built. We will support this statement with numerical evidence. Still referring to (6)-(7), we build two response matrices, $R_u \in \mathbb{R}^{N_u\times p}$ and $R_p \in \mathbb{R}^{N_p\times p}$, which gather, by column, the Hi-Mod approximation for the velocity, $\mathbf{u}^h_{m_u}(\alpha) \in \mathbb{R}^{N_u}$, and for the pressure, $\mathbf{p}^h_{m_p}(\alpha) \in \mathbb{R}^{N_p}$, solutions to the Navier-Stokes problem (9) for p different choices $\alpha_i$, with $i = 1, \ldots, p$, of the parameter that, in this case, is $\alpha = [\nu, \mathbf{f}, g]^T$. A standard block-Gaussian procedure resorting to the pressure Schur complement is used to compute velocity and pressure separately [START_REF] Elman | Finite Elements and Fast Iterative Solvers: with Applications in Incompressible Fluid Dynamics[END_REF].
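For readers unfamiliar with the block elimination just mentioned, a dense, purely illustrative version of it reads as follows (real solvers work with sparse matrices and approximate Schur complements; this sketch only shows the algebra).

```python
import numpy as np

def schur_complement_solve(C, B, f):
    """Block-Gaussian solve of a saddle-point system like (10) (schematic, dense).

    [ C  B^T ] [u]   [f]
    [ B   0  ] [p] = [0]
    """
    Cinv_f = np.linalg.solve(C, f)
    Cinv_Bt = np.linalg.solve(C, B.T)
    S = B @ Cinv_Bt                      # pressure Schur complement B C^{-1} B^T
    p = np.linalg.solve(S, B @ Cinv_f)   # pressure first
    u = Cinv_f - Cinv_Bt @ p             # then velocity
    return u, p
```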
Following a segregated SVD analysis of the two unknowns, after identifying the two indices $l_u$ and $l_p$, separately, we construct a unique reduced POD space $V^l_{\mathrm{POD}}$, with $l = \max(l_u, l_p)$, by collecting the first l singular vectors of $R_u$ and of $R_p$. More precisely, for a new value $\alpha^*$ of the parameters, with $\alpha^* \neq \alpha_i$ for $i = 1, \ldots, p$, at each Picard iteration, we project the linearized Navier-Stokes problem onto the space $V^l_{\mathrm{POD}}$. Another possible approach is to keep the computation of the velocity and pressure separate on the two basis function sets with size $l_u$ and $l_p$, by resorting to an approximation of the pressure Schur complement, followed by the computation of the velocity, similar to what is done in algebraic splittings [START_REF] Quarteroni | Factorization methods for the numerical approximation of Navier-Stokes equations[END_REF][START_REF] Elman | Finite Elements and Fast Iterative Solvers: with Applications in Incompressible Fluid Dynamics[END_REF][START_REF] Veneziani | ALADINS: an ALgebraic splitting time ADaptive solver for the Incompressible Navier-Stokes equations[END_REF][START_REF] Veneziani | Algebraic splitting methods for the steady incompressible Navier-Stokes equations[END_REF]. More generally, the treatment of the nonlinear term in the Navier-Stokes problem can follow approximation strategies with a specific basis function set and empirical interpolation strategies [START_REF] Rozza | Certified Reduced Basis Methods for Parametrized Partial Differential Equations[END_REF]. At this preliminary stage, we do not follow this approach and we simply assess the performance of the basic procedure. However, this topic will be considered in the follow-up of the present work in view of real applications.
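A schematic version of this segregated construction is sketched below; it only illustrates the bookkeeping (one SVD per unknown and a common truncation size l = max(l_u, l_p)) and assumes the response matrices are available as NumPy arrays.

```python
import numpy as np

def segregated_pod_bases(R_u, R_p, energy=0.99):
    """Segregated Hi-POD for Navier-Stokes (schematic): one SVD per unknown.

    R_u: (N_u, p) velocity snapshot matrix; R_p: (N_p, p) pressure snapshot matrix.
    Returns velocity and pressure bases truncated at the common size l = max(l_u, l_p).
    """
    def rank(sigma):
        ratio = np.cumsum(sigma**2) / np.sum(sigma**2)
        return int(np.searchsorted(ratio, energy)) + 1

    Phi_u, s_u, _ = np.linalg.svd(R_u, full_matrices=False)
    Phi_p, s_p, _ = np.linalg.svd(R_p, full_matrices=False)
    l = max(rank(s_u), rank(s_p))
    return Phi_u[:, :l], Phi_p[:, :l]
```

At each Picard iteration the linearized system (10) assembled for the new value of the parameter is then projected onto these bases, exactly as in the scalar case.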
It is also worth noting that no inf-sup compatibility is guaranteed for the POD basis functions. Numerical evidence suggests that we do have inf-sup compatible basis functions, however a theoretical analysis is still missing.
A benchmark test case
We solve problem (9) on the rectangular domain Ω = (0, 8) × (-2, 2), where Γ D = {(x, y) : 0 ≤ x ≤ 8, y = ±2} and Γ N = ∂Ω \ Γ D . Moreover, we assume the analytical representation
$$\mathbf{f} = \begin{bmatrix} f_{0,x} + f_{xx}\,x + f_{xy}\,y \\ f_{0,y} + f_{yx}\,x + f_{yy}\,y \end{bmatrix} \qquad (11)$$
for the forcing term f involved in the parameter α.
In the offline stage, we Hi-Mod approximate p = 30 problems, by varying the coefficients $f_{st}$, for $s = 0, x, y$ and $t = x, y$, in (11), the kinematic viscosity ν and the boundary value g in (9). In particular, we randomly sample the coefficients $f_{st}$ on the interval [0, 100], whereas we adopt a uniform sampling for ν on [30, 70] and for g on [START_REF] Aletti | Educated bases for the HiMod reduction of advection-diffusion-reaction problems with general boundary conditions[END_REF]80]. Concerning the adopted Hi-Mod discretization, we partition the fiber $\Omega_{1D}$ into 80 uniform sub-intervals to employ quadratic and linear finite elements for the velocity and the pressure, respectively. Five Legendre polynomials are used to describe the transverse trend of u, while three modal functions are adopted for p.
In the online phase, we compute the Hi-POD approximation to problem (9) with parameters $\alpha^* = [\mathbf{f}^*, \nu^*, g^*]^T$, with $\mathbf{f}^* = [82.6, 12.1]^T$, $\nu^* = 51.4$ and $g^* = 24.2$, $f_{xx} = f_{yy} = f_{xy} = f_{yx} = 0$. Figure 4 (left) shows the contour plots of the two components of the velocity and of the pressure for the reference Hi-Mod solution $\{u^{R,h}_{m_u}, p^{R,h}_{m_p}\}$ (from top to bottom: horizontal velocity, vertical velocity, pressure), with $u^{R,h}_{m_u} = [u^{R,h}_{m_u,1}, u^{R,h}_{m_u,2}]^T$. For the sake of completeness, we display the results of a monolithic approach in Figure 4 (center and right), where the POD basis is computed on a unique response matrix for the velocity and pressure. While the velocity results are quite accurate, the pressure approximation is poor, suggesting that, probably, a lack of inf-sup compatibility of the reduced basis leads to unreliable pressure approximations, independently of the dimension of the POD space.
When we turn to the segregated approach, Figure 5 shows the distribution of the singular values of the response matrices R u and R p , respectively. Again, the decay of the singular values is not rapid enough to pinpoint a clear cut-off value (at least for significantly small dimensions of the reduced basis), as a consequence of the multiple parametrization that inhibits the redundancy of the snapshots. However, when we compare the Hi-Mod solution identified by three different choices of the POD spaces, V l,u POD and V l,p POD , with the reference approximation in Figure 4 (left), we notice that the choice l = 6 is enough for a reliable reconstruction of the approximate solution (see Figure 6 (center)). The horizontal velocity component - being the most predominant dynamics - is captured even with a lower size of the reduced spaces V l,u POD , while the pressure still represents the most challenging quantity to be correctly described. In Table 2, we quantify the accuracy of the Hi-POD procedure. We compare the relative errors between the Hi-Mod reference solution {u R,h mu , p R,h mp } and the Hi-POD approximation {u h mu (α * ), p h mp (α * )} generated by different Hi-POD schemes, with u h mu (α * ) = [u h mu,1 (α * ), u h mu,2 (α * )] T . As for the computational time (in seconds) 1 , we found that the segregated Hi-POD requires 0.13s, compared with the 0.9s demanded by the standard Hi-Mod approximation. This highlights the significant computational advantage attainable by Hi-POD, in particular for a rapid approximation of the incompressible Navier-Stokes equations when estimating one or more parameters of interest.
Towards more realistic applications
We extend the Hi-POD segregated approach to the unsteady Navier-Stokes equations
$$\begin{cases}
\dfrac{\partial \mathbf{u}}{\partial t}(x,t) - \nabla\cdot\big(2\nu D(\mathbf{u})\big)(x,t) + (\mathbf{u}\cdot\nabla)\,\mathbf{u}(x,t) + \nabla p(x,t) = \mathbf{f}(x,t) & \text{in } Q\\
\nabla\cdot\mathbf{u}(x,t) = 0 & \text{in } Q\\
\mathbf{u}(x,t) = \mathbf{0} & \text{on } G_D\\
\big(D(\mathbf{u}) - pI\big)(x,t)\,\mathbf{n} = g(x,t)\,\mathbf{n} & \text{on } G_N\\
\mathbf{u}(x,0) = \mathbf{u}_0(x) & \text{in } \Omega,
\end{cases} \qquad (12)$$
where $Q = \Omega\times I$ with $I = (0, T)$ the time window of interest, $G_D = \Gamma_D\times I$, $G_N = \Gamma_N\times I$, $\mathbf{u}_0$ the initial value, and where all the other quantities are defined as in (9). After introducing a uniform partition of the interval $I$ into $M$ subintervals of length $\Delta t$, we resort to the backward Euler scheme and approximate the nonlinear term via a classical first-order semi-implicit scheme. The semi-discrete problem reads: for each $0 \le n \le M-1$, find $\{\mathbf{u}^{n+1}, p^{n+1}\} \in V \equiv [H^1_{\Gamma_D}(\Omega)]^2\times L^2(\Omega)$ such that
$$\begin{cases}
\dfrac{\mathbf{u}^{n+1}-\mathbf{u}^{n}}{\Delta t} - \nabla\cdot\big(2\nu D(\mathbf{u}^{n+1})\big) + \big(\mathbf{u}^{n}\cdot\nabla\big)\,\mathbf{u}^{n+1} + \nabla p^{n+1} = \mathbf{f}^{n+1} & \text{in } \Omega\\
\nabla\cdot\mathbf{u}^{n+1} = 0 & \text{in } \Omega\\
\mathbf{u}^{n+1} = \mathbf{0} & \text{on } \Gamma_D\\
\big(D(\mathbf{u}^{n+1}) - p^{n+1} I\big)\,\mathbf{n} = g^{n+1}\,\mathbf{n} & \text{on } \Gamma_N,
\end{cases} \qquad (13)$$
with $\mathbf{u}^0 = \mathbf{u}_0(x)$, $\mathbf{u}^{n+1} \simeq \mathbf{u}(x, t^{n+1})$, $p^{n+1} \simeq p(x, t^{n+1})$ and $t^i = i\Delta t$, for $i = 0, \ldots, M$.
For the Hi-Mod approximation, we replace space V in (13) with the same Hi-Mod space as in the steady case.
When applied to unsteady problems, POD procedures are generally used for estimating the solution at a generic time by taking advantage of precomputed snapshots [START_REF] Volkwein | Proper Orthogonal Decomposition: Theory and Reduced-Order Modelling[END_REF]. In our specific case, we know the Hi-Mod solution for a certain number of parameters α i , and we aim at rapidly estimating the solution over a time interval of interest for a specific value α * of the parameter, with α * ≠ α i . The procedure we propose here is the following one:
1. we precompute offline the steady Hi-Mod solution for p samples α i of the parameter, i = 1, . . . , p;
2. for a specific value α * of the parameter, we compute online the Hi-Mod solution to (12) at the first times t j , for j = 1, . . . , P ;
3. we juxtapose these Hi-Mod snapshots to the steady response matrix obtained offline;
4. we perform the Hi-POD procedure to estimate the solution to (12) at times t j , with j > P .
In the absence of a complete analysis of this approach, we present here some preliminary numerical results in a non-rectilinear domain. Hi-Mod reduction has already been applied to curvilinear domains [START_REF] Perotto | Hierarchical model (Hi-Mod) reduction in non-rectilinear domains[END_REF][START_REF] Perotto | HIGAMod: a Hierarchical IsoGeometric Approach for MODel reduction in curved pipes[END_REF]. In particular, in [START_REF] Perotto | HIGAMod: a Hierarchical IsoGeometric Approach for MODel reduction in curved pipes[END_REF] we exploit the isogeometric analysis to describe a curvilinear centerline Ω 1D , by replacing the 1D finite element discretization with an isogeometric approximation.
Here, we consider a quadrilateral domain with a sinusoidal-shaped centerline (see Figure 7). We adopt the same approach as in [START_REF] Perotto | Hierarchical model (Hi-Mod) reduction in non-rectilinear domains[END_REF] based on an affine mapping of the bent domain into a rectilinear reference one. During the offline phase, we Hi-Mod solve problem (9) for p = 5 different choices of the parameter α = [ν, f , g] T , by uniformly sampling the viscosity ν in [1.5, 7], g in [START_REF] Aletti | Educated bases for the HiMod reduction of advection-diffusion-reaction problems with general boundary conditions[END_REF]80], and [START_REF] Kahlbacher | Galerkin proper orthogonal decomposition methods for parameter dependent elliptic system[END_REF]. Domain Ω 1D is divided into 80 uniform sub-intervals. We approximate u and p with five and three Legendre polynomials along the transverse direction combined with piecewise quadratic and linear functions along Ω 1D , respectively. The corresponding Hi-Mod approximations constitute the first p columns of the response matrices R u and R p .
Then, we solve the unsteady problem (12). We pick u 0 = 0, T = 10, and we introduce a uniform partition of the time interval I, with ∆t = 0.1. The data α * for the online phase are ν * = 2.8, g * = 30 + 20 sin(t) and f * = [5.8, 1.1] T . Matrices R u and R p are augmented with the first P = 5 Hi-Mod approximations {u h,j mu (α * ), p h,j mp (α * )}, for j = 1, . . . , 5, so that R u ∈ R Nu×10 and R p ∈ R Np×10 , where N u = 2 × 5 × N h,u , N p = 3 × N h,p with N h,u and N h,p the dimension of the one-dimensional finite element space used along Ω 1D for u and p, respectively.
Figure 7 compares, at four different times, a reference Hi-Mod solution {u R,h mu , p R,h mp }, with u R,h mu = [u R,h mu,1 , u R,h mu,2 ] T , computed by hierarchically reducing problem (12), with the Hi-POD solution {u h mu (α * ), p h mp (α * )}, with u h mu (α * ) = [u h mu,1 (α * ), u h mu,2 (α * )] T , for l = 6. The agreement between the two solutions is qualitatively very good, in spite of the fact that no information from the Hi-Mod solver on the problem after time t 5 is exploited to construct the Hi-POD solution. The pressure still features larger errors, as in the steady case.
We make this comparison more quantitative in Table 3, where we collect the L 2 (Ω)- and the H 1 (Ω)-norm of the relative error between the Hi-Mod reference solution and the Hi-POD one, at the same four times as in Figure 7. We notice that the error does not grow significantly with time. This suggests that the Hi-POD approach can be particularly viable for reconstructing asymptotic solutions in periodic regimes, as in computational hemodynamics. As for the computational efficiency, the Hi-POD solution requires 103s vs 287s for the Hi-Mod one, with a significant reduction of the computational time.
Conclusions and future developments
The preliminary results in Sections 2.3, 3.1 and 4 yielded by the combination of the model/solution reduction techniques, Hi-Mod/POD, are very promising in view of modeling incompressible fluid dynamics in pipes or elongated domains. We have verified that Hi-POD enables a fast solution of parametrized ADR problems and of the incompressible, steady and unsteady, Navier-Stokes equations, even in the presence of many (six) parameters. In particular, using Hi-Mod in place of a traditional discretization method applied to the reference (full) problem accelerates the offline phase and also the construction of the reduced problem projected onto the POD space.
Clearly, there are several features of this new approach that need to be investigated. First of all, we plan to migrate to 3D problems within a parallel implementation setting (in the library LifeV, www.lifev.org). Moreover, we aim at further accelerating the computational procedure by using empirical interpolation methods for possible nonlinear terms [START_REF] Rozza | Certified Reduced Basis Methods for Parametrized Partial Differential Equations[END_REF]. Finally, an extensive theoretical analysis is needed to estimate the convergence of the Hi-POD solution to the full one; moreover, the inf-sup compatibility of the Hi-Mod bases deserves to be rigorously analyzed. As a reference application we are interested in computational hemodynamics, in particular in estimating blood viscosity from velocity measures in patients affected by sickle cell diseases [START_REF] Rivera | Sickle cell anemia and pediatric strokes: computational fluid dynamics analysis in the middle cerebral artery[END_REF].
Figure 1: ADR problem. Hi-Mod reference solution.
Figure 2: ADR problem. Singular values of the response matrix R.
Figure 3: ADR problem. Hi-Mod approximation provided by the Hi-POD approach for l = 2 (top), l = 6 (center), l = 19 (bottom).
Figure 4: Steady Navier-Stokes equations. Hi-Mod reference solution (left), Hi-Mod approximation yielded by the monolithic Hi-POD approach for l = 11 (center) and l = 28 (right): horizontal (top) and vertical (middle) velocity components; pressure (bottom).
Figure 5: Steady Navier-Stokes equations. Singular values of the response matrices R_u (left) and R_p (right).
Figure 6: Steady Navier-Stokes equations. Hi-POD approximation yielded by the segregated Hi-POD approach for l = 4 (left), l = 6 (center), l = 10 (right): horizontal (top) and vertical (middle) velocity components; pressure (bottom).
Figure 7: Unsteady Navier-Stokes equations. Reference Hi-Mod solution (left) and Hi-Mod approximation yielded by the Hi-POD approach for l = 6 (right), at t = 2 (first row), t = 4 (second row), t = 6 (third row) and t = T (fourth row): horizontal (top) and vertical (middle) velocity components; pressure (bottom).

Table 1: ADR problem. Relative errors for different Hi-POD reconstructions of the Hi-Mod solution (relative L2(Ω)- and H1(Ω)-norms of u^{R,h}_m - u^h_m(α*) with respect to u^{R,h}_m).

            l = 2      l = 6      l = 19     l = 29
  L2(Ω)     3.52e-01   3.44e-02   9.71e-04   4.38e-04
  H1(Ω)     4.54e-01   6.88e-02   2.21e-03   8.24e-04

Table 2: Steady Navier-Stokes equations. Relative errors for different Hi-POD reconstructions of the Hi-Mod solution (columns: relative H1(Ω) error on u_1, relative H1(Ω) error on u_2, relative L2(Ω) error on p).

  l = 4     7.1e-03    3.9e-01    4.8e-01
  l = 6     3.8e-04    4.3e-02    3.9e-01
  l = 10    1.1e-04    8.6e-03    1.3e-03

Table 3: Unsteady Navier-Stokes equations. Relative error associated with the Hi-Mod approximation provided by Hi-POD at different times (columns: relative H1(Ω) error on u_1, relative H1(Ω) error on u_2, relative L2(Ω) error on p).

  t = 2     5.4e-04    4.5e-04    3.4e-02
  t = 4     2.4e-03    2.1e-03    1.0e-01
  t = 6     2.3e-03    2.2e-03    6.2e-02
  t = T     2.6e-03    2.4e-03    7.7e-02
1 All the experiments have been performed using MATLAB R2010a 64-bit on a Fujitsu Lifebook T902 equipped with a 2.70 GHz i5 (3rd generation) vPro processor and 8 GB of RAM.
Acknowledgments
The third and the fifth author gratefully acknowledge the NSF project DMS 1419060 "Hierarchical Model Reduction Techniques for Incompressible Fluid-Dynamics and Fluid-Structure Interaction Problems" (P.I. Alessandro Veneziani).
"761101",
"177127"
] | [
"366875",
"125443",
"91810",
"93707",
"147953"
] |
00147919 | en | [
"math"
] | 2024/03/04 23:41:46 | 1998 | https://hal.science/hal-00147919/file/newoct06.pdf | Luis Dieulefait
Galois realizations of families of Projective Linear Groups via cusp forms
Introduction
Let S k be the space of cusp forms of weight k for SL 2 (Z) and S 2 (N ) the one of cusp forms of weight 2 for Γ 0 (N ).
We are going to consider the Galois representations attached to eigenforms in these spaces, whose images have been determined by Ribet and Momose (see [Ri 75] for S k and [Mo 82], [Ri 85] for S 2 (N ) ).
Our purpose is to use these representations to realize as Galois groups over Q some linear groups of the following form: P SL 2 (p r ) if r is even and P GL 2 (p r ) if r is odd. In order to ease the notation, we will call both these families of linear groups P XL 2 (p r ), so that P XL stands for P SL if r is even and P GL if r is odd.
Extending the results in [Re-Vi], where it is shown that for r ≤ 10 these groups are Galois groups over Q for infinitely many primes p, we will cover the cases r = 11, 13, 17 and 19, using the representations attached to eigenforms in S k and again the cases 11 and 17 using the ones coming from S 2 (N ).
We will give the explicit criterion for the case r = 3 : for every prime p > 3 such that p ≡ 2, 3, 4, 5 (mod 7) the group P GL 2 (p 3 ) is a Galois group over Q . Assuming the following conjecture: "The characteristic polynomial P 2,k of the Hecke operator T 2 acting on S k is irreducible over Q , for all k " , we will prove that for every prime exponent r ≥ 3 , P GL 2 (p r ) is a Galois group over Q for infinitely many primes p.
Finally, applying results of [Br 96] we will prove that there exist infinitely many exponents r for which P XL 2 (p r ) are Galois groups over Q for infinitely many primes p.
Remark: This article was written in 1998 and it corresponds to a Research Project advised by Nuria Vila that the author did as part of his PhD at the Universitat de Barcelona, previous to his thesis.
Galois representations attached to eigenforms in S k
Generalizing the result of [Ri 75] for r = 2 , in [Re-Vi] sufficient conditions are given for P XL 2 (p r ) to be a Galois group over Q. They are the following:
Criterion 2.1 : Let k be such that dim C S k = r. Let P 2,k be the characteristic polynomial of the Hecke operator T 2 acting on S k . Let d 2,k be its discriminant and λ one of its roots.
Let p be a prime such that p / ∈ Σ k,λ , where Σ k,λ is a finite set of primes that can be computed in terms of k and λ.
Then if P 2,k is irreducible modp ,(which implies, in particular, that it is irreducible over Q) P XL 2 (p r ) is a Galois group over Q.
Remark 2.2 : The condition P 2,k irreducible modp implies that there are infinitely many inert primes in Q(λ) (besides, Q(λ) = Q f for some eigenform f ). From this, P XL 2 (q r ) is realized as a Galois group over Q for infinitely many primes q. ([Re-Vi]).
Corollary 2.3 : Suppose that there is a prime p 0 such that P 2,k is irreducible mod p 0 . Then there are infinitely many primes p not in Σ k,λ satisfying this and for all of them P XL 2 (p r ) is a Galois group over Q, where r = dim C S k .
Remark 2.4 : The existence of such a prime for r = 2, 3, 4, ..., 10 is verified in [Re-Vi], thus 2.3 applies to these exponents.
In [Bu 96] it is proved that for r = 11, 13, 17, 19, P 2,12r is irreducible mod 479, 353, 263, 251 respectively. Then applying 2.3 we obtain: Corollary 2.5 : P GL 2 (p r ) is a Galois group over Q for r = 11, 13, 17, 19, for infinitely many primes p in each case.
The following conjecture is widely believed:
Conjecture 2.6 : For every k, the characteristic polynomial P 2,k of the Hecke operator T 2 acting on S k is irreducible over Q.
Even assuming 2.6 we are not in condition of applying 2.3 for other values of r. However, in case r is prime we can use the following : Lemma 2.7 : Let K be a number field of prime degree p over Q. Then there exist infinitely many rational primes inert in K.
Proof: Let N be the normal closure of K and G = Gal(N/Q). It is clear that #G = [N : Q] satisfies: p | #G, #G | p! (2.1)
Let H be a p-Sylow subgroup of G, whose order is p, and let L be its fixed field, so that H = Gal(N/L). Being N/L a cyclic extension of degree p, we can apply class field theory ( [Ne], pag. 85) to see that there are infinitely many primes Q of L inert in N/L. Applying this fact, together with the multiplicativity of the residual degree and (2.1), we see that there are infinitely many inert primes q in K.
Theorem 2.8 : Assume the truth of 2.6. Then for every prime exponent r, there exist infinitely many primes p such that P GL 2 (p r ) is Galois over Q.
Proof: Let k = 12r. If P 2,k is irreducible over Q, calling λ one of its roots we have:
Q(λ) = Q f , for some eigenform f and dim C S k = r = [Q f : Q].
The previous lemma implies that there are infinitely many inert primes in Q f / Q and we can apply 2.3 .
3 Galois representations attached to newforms in S 2 (N )
Let f be a newform of weight 2 in Γ 0 (N ). We apply the following theorem, ([ Ri 85], [Re 95]):
Theorem 3.1 : Let N be squarefree and P 2 be the characteristic polynomial of T 2 acting on S 2 (N ). Let λ be a simple root of P 2 such that there exists a newform f ∈ S 2 (N ) verifying Q(λ) = Q f (this always holds in the case of prime level). Then for every rational prime p outside a finite set
Σ N,λ inert in Q f , P XL 2 (p r ) is a Galois group over Q, where r = [Q f : Q].
Remark 3.2 : The fact that the nebentypus ε = 1 and N is squarefree implies that f has neither complex multiplication nor inner twists. This is used to obtain the surjectivity of the Galois representations.
In [Wa 73], a table of P 2 polynomials, we see that for N = 229, 239 there are simple factors of degree 11, 17 respectively.
These are prime levels, so that there are no old forms around. Invoking again 2.7 we can apply 3.1 and conclude: Corollary 3.3 : P GL 2 (p r ) is Galois over Q for r = 11, 17 and infinitely many primes p in both cases.
In the case of prime level N the exceptional set Σ N,λ can be 'removed' by using the following new result [ Ri 97]: Proposition 3.4 : Let T be the Hecke ring, the ring of endomorphisms over Q of J 0 (N ), for prime N . Let m be a maximal ideal of T of residual characteristic p ≥ 5, and with T/m = F p r . Then if m is not Eisenstein, P XL 2 (p r ) is Galois over Q. Remark 3.5 : 1-In the prime level case there are no oldforms, and we have the identification:
$$T \otimes Q \cong \prod_{f \in \Sigma} Q_f$$
where Σ is a set of representatives of all newforms modulo the action of Gal(Q/Q).
Corollary 3.10 : P GL 2 (p 3 ) is Galois over Q, for every p ≡ 2, 3, 4, 5 (mod 7), p ≥ 5.
Proof: Consider level N = 97. We found in [Wa 73] that for this level there is a newform f with Q f equal to the splitting field of the polynomial:
$$x^3 + 4x^2 + 3x - 1.$$
This field is the real cyclotomic field $Q(\zeta_7 + \zeta_7^{-1})$. The inert primes in this field are the primes that, when reduced mod 7, give a generator of the group (Z/7Z)* or the square of such a generator, corresponding to the cases of residual degree 6 and 3 in Q(ζ 7 ), respectively. These are the following: p ≡ 2, 3, 4, 5 (mod 7). Applying 3.4 and 3.5-1 (and once again the fact that Eisenstein primes are not inert) we obtain the desired result.
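As a quick sanity check of the congruence condition in this proof, the following self-contained snippet (illustrative only, not part of the paper) verifies that a prime p ≠ 7 is inert in Q(ζ_7 + ζ_7^{-1}) exactly when p ≡ 2, 3, 4, 5 (mod 7), by computing the multiplicative order of p modulo 7.

```python
def order_mod7(p):
    """Multiplicative order of p in (Z/7Z)*, assuming p % 7 != 0."""
    k, x = 1, p % 7
    while x != 1:
        x = (x * p) % 7
        k += 1
    return k

# p is inert in Q(zeta_7 + zeta_7^{-1}) iff its Frobenius generates the
# degree-3 Galois group, i.e. iff ord_7(p) is 3 or 6.
for p in [2, 3, 5, 11, 13, 17, 19, 23, 29, 31]:
    inert = order_mod7(p) in (3, 6)
    assert inert == (p % 7 in (2, 3, 4, 5))
    print(p, p % 7, "inert" if inert else "not inert")
```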
We now consider the case of arbitrary level N, again with trivial nebentypus. We still need the assumption that f is a newform without CM (complex multiplication). In this general case, we can apply the surjectivity result of [ Ri 85], after replacing Q f by the field F f ⊆ Q f defined as follows (we give three equivalent definitions, see [ Ri 80]): Definition 3.11 : Let A f be the abelian variety associated to f, and E = End A f ⊗ Q its algebra of endomorphisms. We define F f to be the centre of E. Equivalently, if Γ is the set of all immersions γ : Q f → C such that there exists a Dirichlet character χ with: γ(a p ) = χ(p)a p for almost every p, where a n q n = f ; then F f = Q Γ f , the fixed field of Γ. This coincides with the field generated over Q by the a 2 p , p ranging through almost every prime.
The following theorem can be deduced from the results in [ Ri 85]:
Theorem 3.12 : Let f be a newform in S 2 (N ) without CM. Let p be a rational prime outside a finite set Σ N,f and let i be the residual degree in F f /Q of some P | p. Then P XL 2 (p i ) is Galois over Q.
In order to obtain Galois realizations, we need some information about the fields F f . The best result is the following ([Br 96]):
Corollary 3.17 : Let p > 3 be a prime. Then there exists a positive integer l =l(p) such that for infinitely many primes q : P XL 2 (q l(p-1)/2 ) is a Galois group over Q.
The fact that 3.17 holds for every prime p > 3 implies the following: Corollary 3.18 : There exist infinitely many positive integers n such that for every one of them there are infinitely many primes q with P XL 2 (q n ) being a Galois group over Q. Moreover, an infinite number of these exponents n is even.
2-The Eisenstein ideal I ⊆ T is the one generated by the elements 1 + l − T l (l ≠ N), 1 + ω. An Eisenstein prime is a prime ideal β ⊆ T in the support of I. Let n = num((N − 1)/12). We have a one-to-one correspondence between Eisenstein primes β and prime factors p of n, given by: prime factors of n ↔ Eisenstein primes, p ↔ (I, p).
Besides, for every Eisenstein prime β, T/β = F p . In particular, whenever the inertia is nontrivial (and these are the cases we are interested in) the involved prime will not be Eisenstein.
) the inert primes are the p ≡ ±2 (mod 5). The result follows from 3.4.
Remark 3.7 : The same result is obtained in [Me 88] by a different approach.
Corollary 3.8 : P SL 2 (p 2 ) is Galois over Q, for every p ≡ ±3 (mod 8), p ≥ 5.
The inert primes are the p ≡ ±3 (mod 8). Apply 3.4. Remark 3.9 : This same result is obtained in [Re 95] using 3.1 where the exceptional set is explicitated and proved to be disjoint from the set of inert primes.
Theorem 3.13 : Let f be as in 3.12. Suppose that p rp N . Let
As a particular case, let p > 3, p 3 N. Then
, that is to say: q (mod p) generates the multiplicative group F * p , and if Q is a prime over q in O F f ,the ring of integers of F f , we have:
All these residual degrees are bounded by [F f : Q], then between the infinitely many primes congruent with generators of F * p we can pick out an infinite subset of primes {q i } i∈N such that for all of them there exists a prime Q i in O F f over q i with:
Combining this with 3.12 we get:
Theorem 3.14 : Let p > 3 be a prime and let N be such that p 3 N. Then if there is a newform f ∈ S 2 (N ) without CM, there exists a number l = l(p) such that for infinitely many primes q : P XL 2 (q l(p-1)/2 ) is a Galois group over Q. The primes q can all be chosen congruent modp to generators of F * p .
Remark 3.15 : 1-There is always an exceptional set Σ N,f that has to be eluded in order to apply 3.12.
2-The values of l = l(p) are bounded by:
Remark 3.16 In order to apply this result we need to ensure that: "For every odd prime p there exist a positive integer N with p 3 N and such that every newform f ∈ S 2 (N ) does not have complex multiplication." Taking N = p 3 • t, t odd prime, t = p, it is well-known that this holds, in fact if a newform f of this level has CM the corresponding abelian variety A f , factor of J 0 (N ), would also have CM ,contradicting the fact that it has multiplicative reduction at t, as can be seen looking at its Néron model.
We see from the remark above that 3.14 applies for every prime p > 3 (take N = p 3 • t ) so that we have: | 11,518 | [
"839713"
] | [
"35765"
] |
01187210 | en | [
"shs",
"math"
] | 2024/03/04 23:41:46 | 2013 | https://hal.univ-reunion.fr/hal-01187210/file/tournes_2013a_owr_10.pdf | Dominique Tournès Lim
email: [email protected]
Ballistics during 18th and 19th centuries: What kind of mathematics?
Two recent papers ( [START_REF] Aubin | I'm just a mathematician': Why and how mathematicians collaborated with military ballisticians at Gâvre[END_REF], [START_REF] Gluchoff | Artillerymen and mathematicians: Forest Ray Moulton and changes in American exterior ballistics, 1885-1934[END_REF]) have studied the scientific and social context of ballistics during and around the First World War, and have put in evidence the collaborations and tensions that have been existing between two major milieus, the one of artillerymen, that is engineers and officers in the military schools and on the battlefield, and the other one of mathematicians that were called to solve difficult theoretical problems. My aim is to give a similar survey for the previous period, that is to say during the second half of the 18th century and the 19th century.
The main problem of exterior ballistics -I won't speak of interior ballistics, which is nearer to physics and chemistry than mathematics -is to determine the trajectory of a projectile launched from a cannon with a given angle and a given velocity. The principal difficulty encountered here is that the differential equations of motion involve the air resistance F (v), which is an unknown function of the velocity v. In fact, the problem is more complex because we must take into account other factors like the variations of the atmospheric pressure and temperature, the rotation of the Earth, the wind, the geometric form of the projectile and its rotation around its axis, etc. However these effects could be often neglected in the period considered here, because the velocities of projectiles remained small.
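For concreteness, the planar point-mass model underlying this whole discussion can be written, in a standard modern form (given here as an illustration, not a quotation from the historical sources), as
$$m\,\frac{dv}{dt} = -F(v) - m g \sin\theta, \qquad m\,v\,\frac{d\theta}{dt} = -m g \cos\theta, \qquad \frac{dx}{dt} = v\cos\theta, \quad \frac{dy}{dt} = v\sin\theta,$$
where $v$ is the speed, $\theta$ the inclination of the velocity over the horizon and $F(v)$ the unknown air-resistance law; every method surveyed below is, in one way or another, an attempt to integrate this system.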
For a long time, artillerymen assumed that the trajectory was parabolic, but this was not in agreement with the experiments. Newton was the first to research this topic taking into account the air resistance. In his Principia of 1687, he solved the problem with the hypothesis of a resistance proportional to the velocity, and he got quite rough approximations when the resistance is proportional to the square of the velocity. After Newton, Jean Bernoulli discovered the general solution in the case of a resistance proportional to any power of the velocity, but his solution, published in the Acta Eruditorum of 1719, was not convenient for numerical computation. After Bernoulli, many attempts were made to treat the ballistic equation mathematically. We may organize these attempts along two main strategies, one analytical and one numerical.
The analytical strategy consists in integrating the differential equation in finite terms or, alternatively, by quadratures. Reduction to an integrable equation can be achieved in two ways: 1) choose an air resistance law so that the equation can be solved in finite form, leaving it to the artillerymen to decide after if this law can satisfy their needs; 2) if a law of air resistance is imposed through experience, change the other coefficients of the equation to make it integrable, with of course the risk that modifying the equation could modify also the solution in a significant way.
In 1744, D'Alembert restarts the problem of integrability of the equation. Acting here as a geometer, concerned only with progress of pure analysis, he finds four new cases of integrability. His work went relatively unnoticed at first: Legendre in 1782, and Jacobi in 1842 have found again certain of the same cases of integrability, but without quoting D'Alembert.
During the 19th century, we can observe a parallelism between the increasing velocities of bullets and cannonballs, and the appearance of new instruments to measure these velocities [START_REF] Bashforth | A Mathematical Treatise on the Motion of Projectiles, founded chiefly on the results or experiments made with the author's chronograph[END_REF]. Ballisticians have therefore felt the necessity of proposing new air resistance laws for certain intervals of velocity [START_REF] Cranz | Balistique extérieure[END_REF]. Thus, certain previous theoretical developments, initially without applications, led to tables that were actually used by the artillerymen. The fact that some functions determined by artillerymen from experimental measurements fell within the scope of integrable forms has reinforced the idea that it might be useful to continue the search for such forms.
It is within this context that Francesco Siacci resumes the theoretical search for integrable forms of the law of resistance. In two papers published in 1901, he discovers ten families of air resistance laws corresponding to new integrable equations. The question of integrability by quadratures of the ballistic equation is finally resolved in 1920 by Jules Drach [START_REF] Drach | L'équation différentielle de la balistique extérieure et son intégration par quadratures[END_REF], a brilliant mathematician who has contributed much in Galois theory of differential equations. Drach exhausts therefore the problem from a theoretical point of view, but his very complicated results are greeted without enthusiasm by the ballisticians, who do not see at all how to transform them into practical applications.
Another way was explored by theoreticians who accepted Newton's law of the square of the velocity, and tried to act on other terms of the ballistic equation to make it integrable. In 1769, Borda proposes to assume that the medium density is variable and to choose, for this density, a function that does not stray too far from a constant and makes the equation integrable. Legendre deepens Borda's ideas in his essay on the ballistic question [START_REF] Legendre | Dissertation sur la question de balistique[END_REF], with which he won in 1782 the prize of the Berlin Academy. After Legendre, many other people, for example Siacci at the end of the 19th century [START_REF] Siacci | [END_REF], have developed similar ideas to obtain very simple, general, and practical methods of integration.
The second strategy for integrating the ballistic differential equation, that is to say the numerical approach, contains three main procedures: 1) calculate the integral by successive small arcs; 2) develop the integral into an infinite series and keep the first terms; 3) construct graphically the integral curve.
Euler is truly at the starting point of the calculation of firing tables in the case of the square of the velocity [START_REF] Euler | Recherches sur la véritable courbe que décrivent les corps jettés dans l'air ou dans un autre fluide quelconque[END_REF]. In 1755, he resumes Bernoulli's solution and puts it in a form that will be convenient for numerical computation. The integration is then done by successive arcs: each small arc of the curve is replaced by a small straight line, whose inclination is the mean of the inclinations at the extremities of the arc. A little later, Grävenitz achieves the calculations of the program conceived by Euler and publishes firing tables in Rostock in 1764. In 1834, Otto improves Euler's method and calculates new range tables that will experience a great success, and will be in use until the early 20th century.
Another approach is that of series expansions. In the second half of the 18th century and early 19th, we are in the era of calculation of derivations and algebraical analysis. The expression of solutions by infinite series whose law of formation of terms is known, is considered to be an acceptable way to solve a problem exactly, despite the philosophical question of the infinite and the fact that the series obtained, sometimes divergent or slowly convergent, do not always allow an effective numerical computation. Lambert, in 1765, is one of the first to express as series the various quantities involved in the ballistic problem. On his side, Français applies the calculation of derivations for obtaining a number of explicit new formulas. However, he himself admits that these series are too complicated for applications.
Let us mention finally graphical approaches providing artillerymen with an easy and economic tool. Lambert in 1767 and Obenheim in 1818 have the same idea of replacing some previous ballistic tables by a set of curves carefully drawn by points. In 1848, Didion [START_REF] Didion | Traité de balistique[END_REF], following some of Poncelet's ideas, constructs some ballistic curves that are not a simple graphic representation of numerical tables, but are obtained directly from the differential equation by a true graphical calculation. Artillery was thus the first domain of engineering science in which graphical tables, called "abaques" in French, were commonly used.
In conclusion, throughout the 18th and 19th centuries, there has been an interesting interaction between analytic theory of differential equations, numerical and graphical integration, and empirical research through experiments and measurements. Mathematicians, ballisticians and artillerymen, although part of different worlds, collaborated and inspired each other regularly. All that led however to a relative failure, both experimentally to find a good law of air resistance, and mathematically to find a simple solution of the ballistic differential equation.
Mathematical research on the ballistic equation has nevertheless played the role of a laboratory where the modern numerical analysis was able to develop. Mathematicians have indeed been able to test on this recalcitrant equation all possible approaches to calculate the solution of a differential equation. There is no doubt that these tests, joined with the similar ones conceived for the differential equations of celestial mechanics, have helped to organize the domain into a separate discipline at the beginning the 20th century. | 9,888 | [
"12623"
] | [
"54305",
"1004988"
] |
01479387 | en | [
"info"
] | 2024/03/04 23:41:46 | 2016 | https://inria.hal.science/hal-01479387/file/trecvid-semantic-indexing-3.pdf | George Awad
Cees G M Snoek
Alan F Smeaton
Georges Quénot
TRECVid Semantic Indexing of Video: A 6-Year Retrospective
Keywords: TRECVid, video, semantic indexing, concept detection, benchmark
Semantic indexing, or assigning semantic tags to video samples, is a key component for content-based access to video documents and collections. The Semantic Indexing task has been run at TRECVid from 2010 to 2015 with the support of NIST and the Quaero project. As with the previous High-Level Feature detection task which ran from 2002 to 2009, the semantic indexing task aims at evaluating methods and systems for detecting visual, auditory or multi-modal concepts in video shots. In addition to the main semantic indexing task, four secondary tasks were proposed namely the "localization" task, the "concept pair" task, the "no annotation" task, and the "progress" task. It attracted over 40 research teams during its running period.
The task was conducted using a total of 1 400 hours of video data drawn from Internet Archive videos with Creative Commons licenses gathered by NIST. 200 hours of new test data was made available each year plus 200 more as development data in 2010. The number of target concepts to be detected started from 130 in 2010 and was extended to 346 in 2011.
Both the increase in the volume of video data and in the number of target concepts favored the development of generic and scalable methods. Over 8 million shots×concepts direct annotations plus over 20 million indirect ones were produced by the participants and the Quaero project on a total of 800 hours of development data.
Significant progress was accomplished during the period as this was accurately measured in the context of the progress task but also from some of the participants' contrast experiments. This paper describes the data, protocol and metrics used for the main and the secondary tasks, the results obtained and the main approaches used by participants.
Introduction
The TREC conference series has been sponsored by the National Institute of Standards and Technology (NIST) with additional support from other U.S. government agencies since 1991. The goal of the conference series is to encourage research in information retrieval by providing a large test collection, uniform scoring procedures, and a forum for organizations interested in comparing their results. In 2001 and 2002 the TREC series sponsored a video "track" devoted to research in automatic segmentation, indexing, and content-based retrieval of digital video. Beginning in 2003, this track became an independent annual evaluation (TRECVID) with a workshop taking place just before TREC 1) * . During the last 15 years of operation, TRECVid has addressed benchmarking of many component technologies used in video analysis, summarisation and retrieval, all with the common theme that they are based on video content. These include shot boundary detection, semantic indexing, interactive retrieval, instance retrieval, and ad hoc retrieval, rushes summarisation, and others.
From 2002 to 2009 inclusive, TRECVid included a task on detection of "High Level Features" (HLFs), also known as "semantic concepts" [START_REF] Smeaton | High-Level Feature Detection from Video in TRECVid: a 5-Year Retrospective of Achievements[END_REF] . In 2010, this task evolved as the "Semantic Indexing" (SIN) task. Its goal is similar; assigning semantic tags to video shots, but it is more focused toward generic methods and large scale and structured concept sets. A more general and varied type of data has been collected by NIST than had been used in previous years of TRECVid which was split into several slices constituting the training and/or testing sets for the 2010 to 2015 issues of the SIN task.
The SIN task has gradually evolved over the period of its running, both in the number of target concepts and the data set sizes. Also, besides the main (or primary) concept detection task, several variants of the task (or secondary tasks) have been run, including a "concept pair" task, a "localization" task, a "no annotation" task, and a "progress" task. As with the earlier HLF detection task, the indexed units in the SIN task are video shots, not full video documents.
The semantic indexing task is related to the Pascal Visual Object Classification (VOC) [START_REF] Everingham | The Pascal Visual Object Classes Challenge: A Retrospective[END_REF] , ILSVRC [START_REF] Russakovsky | ImageNet Large Scale Visual Recognition Challenge[END_REF] and other benchmarking tasks whose goal is to automatically assign semantic tags to still images. The purpose of this paper is to gather together the major contributions and to identify trends across the 6 years of the semantic indexing track and its variations. The paper is organized as follows: section 2 describes the data used for the semantic indexing task, its origin and organisation; section 3 describes the metrics used in TRECVid for evaluation; section 4 describes the main concept detection task and the results achieved across participating groups; sections 5, 6, 7, and 8 describe the concept pair, localization, no annotation and progress secondary tasks respectively. Each task description includes a short overview of the methods used by various participants.
This overview paper does not intend to be exhaustive or an in-depth summary of all the approaches taken by all the participants in all the 6 years of the running of the SIN task. Instead, it aims at illustrating the progress achieved over the period through a number of selected contributions. Full details of all the work done in the task, approaches taken and results achieved, can be found in the annual workshop proceedings, available on the TRECVid website * .
Data
1 IACC collections
In 2010, NIST collected a new set of internet videos (referred to in what follows as IACC, standing for Internet Archive Creative Commons) characterized by a high degree of diversity in creator, content, style, production qualities, original collection device/encoding, language, etc., as is commonly found in much "Web video". The collection also has associated keywords and descriptions provided by the video donor. The videos are available under Creative Commons (CC) licenses * * from the Internet Archive (IA) * * * . The only selection criteria imposed by TRECVid beyond the Creative Commons licensing is one of video duration where the videos were required to be less than 6.4 min in duration. Seven slices (or sub-collections) of about 200 hours of video each have been created. These are officially labeled: IACC.1.tv10.training, IACC.1.A-C, and IACC.2.A-C and described in Table 1.
As can be seen, not all the slices or video subcollections have been selected in the same way: IACC.1.A-C have been selected as the shortest videos up to a duration of about 3.5 minutes (211 seconds) and split into three slices (A, B and C) in a symmetric way by interlacing the list, sorted by video length. IACC.1.tv10.training has been selected as the subsequent 200 hours among the next shortest videos, up to about 4.1 minutes in duration.
IACC.2.A-C have been selected as the subsequent 600 hours of the next shortest videos up to about 6.4 minutes in duration, and then split into three slices (A, B and C) in a symmetric way by interlacing the list, sorted by video length. These include a few videos shorter than 4.1 minutes as these had been included into the global IACC collection subsequently. Table 1 also indicates which video collection slices were used for system training and which were used for system evaluation (testing) for each year of the SIN task. From years 2011 to 2013 included, a new slice was introduced each year as "fresh data" for year N while both the test and training data from year N -1 were merged to become training data for year N. From years 2013 to 2015 included, the training data (as well as the annotations) were frozen so that the "progress" task (described in section 8) could be conducted properly. While the IACC.2.A-C slices were used as test collections for years 2013-2015 respectively as "fresh data", they were made available after 2013 so that participants could provide anticipated and blind submissions for years 2014 and 2015 with their 2013 systems and anticipated and blind submissions for year 2015 with their 2014 systems.
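The interlaced split described above amounts to sorting by duration and dealing the videos out in turn; the following sketch is purely illustrative (the tuple layout is an assumption) but reproduces the idea.

```python
def interlaced_slices(videos, n_slices=3):
    """Split videos into slices A, B, C, ... by interlacing the duration-sorted list.

    videos: list of (video_id, duration_seconds) pairs -- illustrative structure.
    Returns n_slices lists of video ids with similar duration profiles.
    """
    slices = [[] for _ in range(n_slices)]
    for rank, (vid, dur) in enumerate(sorted(videos, key=lambda v: v[1])):
        slices[rank % n_slices].append(vid)
    return slices
```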
2 Master (reference) shot segmentation
As in the earlier HLF task, a common shot segmentation was provided to participants so that they could make submissions in the same way and so that evaluation could be made consistently across submissions using a standard information retrieval procedure. The shot segmentation was performed using an improved version of the Laboratoire d'Informatique de Grenoble (LIG) *4 tool evaluated in the TRECVid 2006 shot boundary detection task. This tool has a good detection rate, especially for gradual transitions [START_REF] Ayache | CLIPS-LSR Experiments at TRECVID 2006[END_REF] . Errors in shot boundary detection are not as critical for the concept detection evaluation as for the main search task, and in the concept pair variant participants were only asked to tell whether a target concept is visible, or not, at least at some point within a given video. A separate task has been defined for the evaluation of the temporal and spatial localization of target concepts described in section 6.
The reference segmentation of video into shots is given in several formats, including simple frame numbers in a text file and an MPEG-7 version * , the latter being the official reference. One MPEG-7 file is provided for each video file of each (sub-)collection (or slice). Additionally, for each issue (from 2010 to 2015), an XML file specifies the list of files that should be used for training and for testing.
3 Key frames
A reference key frame has also been selected for each video shot and the locations of these key frames are included in the segmentation files. In order to select the best key frame within each shot, three criteria were used: (i) closeness to the center of the shot, in order to avoid gradual transition regions if any, (ii) slow motion in the neighborhood of the frame, in order to avoid fuzzy contents, and (iii) high contrast, for having a clean content representation. All these criteria were computed on each video frame using a simple and ad hoc metric. The corresponding scores were then normalized and averaged. The frame within a shot having the highest score was selected. Archives with the extracted key frames were also made available to participants, though SIN detection methods which use the whole shot rather than just the key frames have become the norm in TRECVid and elsewhere, and were encouraged.
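The selection rule can be summarised by the sketch below. It is not the actual tool: the three per-frame scores stand in for whatever simple measures of shot-centre proximity, motion and contrast were used, and only the normalise, average and take-the-maximum logic is meant to be faithful.

```python
import numpy as np

def pick_keyframe(center_score, motion, contrast):
    """Choose a reference key frame for one shot (schematic).

    All inputs are 1-D arrays with one value per frame of the shot:
    center_score -- closeness to the shot centre (higher is better),
    motion       -- local motion magnitude (lower is better),
    contrast     -- frame contrast (higher is better).
    """
    def normalize(x):
        x = np.asarray(x, dtype=float)
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)

    score = (normalize(center_score) + (1.0 - normalize(motion)) + normalize(contrast)) / 3.0
    return int(np.argmax(score))   # index of the selected key frame within the shot
```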
4 Speech transcription
Speech transcription of the audio track was generously contributed by the Laboratoire d'Informatique pour la Mécanique et les Sciences de l'Ingénieur (LIMSI) * * laboratory using their large vocabulary continuous speech recognition system [START_REF] Gauvain | The LIMSI Broadcast News transcription system[END_REF] . In practice, the IACC collection is highly multi-lingual (tens of different spoken languages are mentioned in the meta-data) and many files also include speech in different languages. Many files did not include audio or included audio but no speech. The LIMSI transcription process was therefore conducted in two steps. In the first step (for the files in which audio and speech were present) they applied an automatic language detection system. Then, when they detected a language for which they had an automatic speech transcription system, they produced a transcription, otherwise they applied by default their English transcription system. This latter choice is sensible because even if the actual language spoken in the video is different, it may still include English words, especially for technical terms, and proper nouns may also be recognized if pronounced in a similar way.
5 Target concept set
A list of 500 target concepts was generated, 346 of which have been collaboratively annotated by TRECVid participants (see section 2. 6). The target concepts were selected as follows. First, they were chosen so that they include all the TRECVid HLFs from 2005 to 2009 in order to permit cross-collection experiments. Second, they also include the CU-VIREO374 concept set [START_REF] Jiang | CU-VIREO374: Fusing Columbia374 and VIREO374 for Large Scale Semantic Concept Detection[END_REF] which was also widely used in previous TRECVid experiments as a subset of the annotated part of the Large Scale Concept Ontology for Multimedia (LSCOM) [START_REF] Naphade | Large-scale concept ontology for multimedia[END_REF] . All of these concepts were already selected using a number of criteria among which: expected usefulness in a content-based video search system, coverage and diversity. This set was then completed by additional concepts selected among the 3000 available in the last version of LSCOM to which a few were specifically added. The added concepts were selected in order to improve the coverage and diversity of the set as well as for creating a number of genericspecific relations among the concepts. Considering diversity, we specifically managed to have a significant number of samples for the following concept types (not exhaustive): humans, animals, vehicles, scenes, objects, actions and multi-modal (involving audio). The structure of the concept set was enriched with two relations, namely implies and excludes. The goal here was to promote research on methods for indexing many concepts and subsequently using ontology relations between them to enhance the accuracy of concept detection.
The list of the 500 TRECVid SIN concepts is available on the TRECVid web site (http://www-nlpir.nist.gov/projects/tv2012/tv11.sin.500.concepts_ann_v2.xls/). Each concept comes with a TRECVid SIN identifier, the corresponding LSCOM identifier, a name and a definition. In addition, the correspondence with previous TRECVid HLF identifiers and with concept definitions in other benchmarks (e.g. Pascal VOC) is also given when available, in order to facilitate cross-collection experiments.
6 Collaborative annotation
As most concept detection methods rely on a supervised learning approach, it was necessary to create annotations for training the participants' systems. As no funding was initially available for this annotation process, and as for the 2003-2009 HLF tasks, the participants themselves were involved in the annotation process, each of them contributing at least 3% of the target volume while receiving, in return, the full set of annotations. Some funding from the Quaero project (http://www.quaero.org/) later helped to increase the volume of annotations.
The set of target concepts and the set of training video shots were both large and, as a consequence, only a fraction of the training set could be annotated, even using the "crowd" of TRECVid SIN participants and with Quaero support. Also, as most of the target concepts were sparse or very sparse in the training collection (less or much less than 1%), an active learning procedure was used in order to prioritize annotations of the most useful sample shots [START_REF] Ayache | Video Corpus Annotation using Active Learning[END_REF].
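As an illustration of the general idea (and not the exact procedure of the cited active learning work), one selection step could look like the following sketch, where unlabeled shots are prioritized either by their current classifier score or by their uncertainty:

```python
import numpy as np

def prioritize_shots(shot_ids, scores, already_annotated, batch_size=1000,
                     strategy="relevance"):
    """Return the next batch of shots to annotate for one concept.

    scores : current classifier predictions (probabilities) for each shot.
    'relevance' picks the highest-scored unlabeled shots, which works well
    for very sparse concepts; 'uncertainty' picks scores closest to 0.5.
    Illustrative sketch only, not the collaborative annotation system's code.
    """
    scores = np.asarray(scores, dtype=float)
    candidates = [i for i, s in enumerate(shot_ids) if s not in already_annotated]
    if strategy == "relevance":
        key = lambda i: -scores[i]
    else:  # uncertainty sampling
        key = lambda i: abs(scores[i] - 0.5)
    ranked = sorted(candidates, key=key)
    return [shot_ids[i] for i in ranked[:batch_size]]
```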
A system with a web interface was provided to participants for producing their annotations. They were required to annotate one concept at a time for a set of video shots represented by their reference key frames. If the key frame alone was not sufficient to enable making a good decision, they could play the full video shot. For each (concept, shot) combination, they had to choose a label as either positive (the concept is visible in the shot), negative (the concept is not visible in the shot), or skipped (ambiguous or bad example).
In addition to the active learning used to select shots for annotation, an active cleaning procedure was included in the annotation system. Its aim was to improve the annotation quality by asking for a "second opinion" when manual annotations strongly disagreed with a prediction made by cross-validation from the other available annotations. A second opinion was also systematically asked for all positive and skipped annotations, as these were quite rare and their correction was likely to have a significant impact [START_REF] Safadi | Active Cleaning for Video Corpus Annotation[END_REF]. In case of disagreement between the first and second opinions, a third opinion was asked for and a majority vote was applied. The system enforced that second and third opinions were asked of different annotators. The annotation system also made use of the provided set of relations in order to increase the number of annotations and to enforce consistency among them. In the last version of the collaborative annotation, 8,158,517 annotations were made directly by the participants or by the Quaero annotators, and a total of 28,864,844 annotations was obtained by propagating those initial annotations using the implies and excludes relations.
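The sketch below illustrates, under simplifying assumptions about how labels and relations are encoded, how direct annotations could be propagated along the implies and excludes relations; the real system may handle conflicts and transitive closure differently.

```python
def propagate(annotations, implies, excludes):
    """Expand direct annotations using ontology relations.

    annotations : dict mapping (shot, concept) -> 'pos' or 'neg'
    implies     : dict concept -> set of concepts it implies
                  (e.g. a positive 'Bus' implies a positive 'Vehicle')
    excludes    : dict concept -> set of mutually exclusive concepts
    Returns a new dict with the propagated labels added; a full propagation
    would iterate this to a fixed point (transitive closure).
    """
    out = dict(annotations)
    for (shot, concept), label in annotations.items():
        if label == 'pos':
            # pos(A) -> pos(B) for every B implied by A
            for c in implies.get(concept, ()):
                out.setdefault((shot, c), 'pos')
            # pos(A) -> neg(B) for every B excluded by A
            for c in excludes.get(concept, ()):
                out.setdefault((shot, c), 'neg')
        else:
            # neg(B) -> neg(A) for every A that implies B (contrapositive)
            for a, implied in implies.items():
                if concept in implied:
                    out.setdefault((shot, a), 'neg')
    return out
```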
In order to improve annotation efficiency, in the years from 2011 to 2013 the test set of year N-1 was included in the development set of year N, and the assessments on year N-1 as well as the participants' systems' outputs on year N-1 were all used to bootstrap the active learning for the additional annotations produced for year N. For each year from 2010 to 2013 a new set of annotations was performed and added to the global pool.
Metrics
For the semantic indexing or concept detection task, the progress task and the concept pair task, the official TRECVid metric is the Mean Average Precision (MAP), which is a classic metric in information retrieval. In practice, however, MAP is evaluated on a statistical basis using the Inferred [START_REF] Yilmaz | Estimating Average Precision with Incomplete and Imperfect Judgments[END_REF] and Extended Inferred [START_REF] Yilmaz | A Simple and Efficient Sampling Method for Estimating AP and NDCG[END_REF] Mean Average Precision methods, using the sample eval tool available from the TRECVid web site. Evaluation is based on an assessment of a subset of the test set built by pooling the top of the submissions from all participants. Additionally, in the Inferred Average Precision (InfAP) approach, the pools are split into sub-pools, some of which are only partially assessed, the first sub-pool being 100% assessed and the following sub-pools being more and more sub-sampled. The extended InfAP approach corresponds to a further improvement in the estimation method. The main goal of the inferred approach is to estimate the MAP value with good accuracy while using far fewer assessments.
In practice, we used it in order to evaluate more concepts (typically twice as many) for the amount of manpower that was allocated for assessments. While doing this, we remained conservative in the pool partitioning and in the selection of the corresponding sub-sampling rates. We also conducted experiments using the submissions of previous years, for which the whole pools were assessed at 100%, and checked that (i) the inferred MAP values were very close to the actual ones, and (ii) the ranking of the systems was not changed.
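For reference, the plain (fully assessed) average precision underlying these scores can be computed as in the following sketch; the inferred variants additionally correct for the sub-sampled assessment of the pools, which is what the sample eval tool implements.

```python
def average_precision(ranked_shots, relevant):
    """Plain (non-inferred) average precision for one concept.

    ranked_shots : list of shot IDs ordered by decreasing system score
                   (at most 2,000 per concept in a SIN submission).
    relevant     : set of shot IDs judged to contain the concept.
    """
    hits, precision_sum = 0, 0.0
    for rank, shot in enumerate(ranked_shots, start=1):
        if shot in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0

def mean_average_precision(run, qrels):
    """run: concept -> ranked shot list; qrels: concept -> set of relevant shots."""
    aps = [average_precision(run[c], qrels.get(c, set())) for c in run]
    return sum(aps) / len(aps) if aps else 0.0
```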
Concept detection task

1 Task definition
The task of automatic concept detection from video is defined as follows:
"Given the test collection, master shot reference, and concept definitions, return for each target concept a list of at most 2000 shot IDs from the test collection, ranked according to their likelihood of containing the target."
The training conditions, data and annotations are not part of the task definition. However, participant submission types are defined according to the training data used: type A corresponds to using the IACC training collections (Table 1) and the corresponding collaborative annotation (described earlier in section 2.6), while type D corresponds to using whatever training data is available. Types E and F have been added in order to encourage research on systems able to work without prior annotation, relying instead on an automatic crawling tool (described later in section 7).
Table 2 gives an overview of the specifics of the 2010 to 2015 issues of the SIN task. The first part of the table indicates what training (for type A submissions) and testing data were used in each year. The second part indicates the number of concepts for which annotations were provided, the number of concepts for which participants were required to submit results, and the number of concepts that were actually evaluated. From 2010 to 2012 inclusive, two versions of the task, "light" and "full", were proposed to participants; the numbers are displayed as light/full in these cases. The third part of the table indicates which of the secondary tasks were available in the different years and the corresponding number of targets for those secondary tasks, if relevant. These secondary tasks are described in sections 5, 6, 7 and 8.
From 2010 to 2012, we attempted to scale up the task in order to encourage the development of scalable methods and to follow the ImageNet and LSCOM trends of increasing the number of target concepts. Meanwhile, we also offered a light version of the task so that teams unable to follow the increase in the number of concepts could still participate, and so that advanced but not yet scalable methods could also be evaluated. Considering participants' feedback, we froze the concept set size at 346 concepts from 2011 onward. Also, since 2013, considering that the 2010 to 2012 results were consistent between the light and full submissions for the participants that made both, we removed the light/full distinction, replacing it by a single intermediate "main" task with 60 target concepts. These are a subset of the previous 346 "full" concept set and, even though submissions were required only for the 60 concepts in the "main" set, annotations were still made available for the full set; many participants actually computed results for the full set and submitted only for the main set.
2 Results
Figures 1 to 6 show the performance obtained by the SIN task participants for the 2010 to 2015 issues of the task respectively. Participants were allowed to make up to four A- to D-type submissions (not necessarily one from each type) plus two additional E- or F-type submissions when possible. To simplify the visualization, we display on the plots only the best submission from each participant for each submission type. Participants were required to define a priority among their submissions according to their prediction of which one would perform best, but we selected here only the actual best one for each submission type. As some participants made submissions of different types, those participants appear several times in the plots.
The total numbers of participants were respectively 39, 28, 25, 26, 15 and 15 for the 2010 to 2015 issues of the SIN task. From 2010 to 2012, these numbers include participants in both the light and full versions of the SIN task. However, as the light concept set was included in the full one, all submissions for the full task were also added to the submissions for the light task. Respectively, 28, 18 and 15 participants made submissions to the full task in the 2010, 2011 and 2012 issues.
As the test collections and the concepts selected for evaluation differed each year, it is not possible to compare directly the MAP performances across the different issues of the task. The increase of best and median MAP values from 2010 to 2013 is probably partly related to improvements in the methods but it is also likely related to differences in the intrinsic difficulty of the task because of the nature of the video used, and the concepts selected. The size of the training set and the number of available annotations also significantly increased during the years of the task which was the motivation for the introduction of the progress secondary task over the 2013-2015 period (section 8).
Similarly, for the 2010-2012 issues, even though the test collection used is the same, it is not possible to compare directly the MAP performances between the light and full tasks, as the concept sets are different. However, it is possible to compare the rankings among systems (or participating teams) that submitted results to the full task (which also appear in the light task), by restricting the submissions to the smaller concept set. We can observe that these system/participant rankings are quite consistent across the two versions of the task: even though there are some permutations, there are only a few of them and, when they happen, the performances of the systems involved are quite comparable. This good stability observed on the 2010-2012 issues validated the choice of keeping only a concept set of intermediate size (60).
For simplicity and for ease of comparison, we display all the submissions for the same year/task in a single graph. However, it should be noted that fair comparisons between approaches should in principle be made only among submissions of the same type (even within a same year/task). Differences in submission types correspond to different training conditions, the main difference being that some actually use more training data or different training data than others, possibly with similar methods. The difference is especially important between the A-D types that use explicitly and purposely annotated data and the E-F types that do not but use instead only data gathered via general search engines which return noisy results that are not manually checked or corrected.
Figure 7 shows the per-concept InfAP for the 2015 main task. Results are very similar for the other years. It can be observed that, while the MAP is close to 0.3, the per-concept Average Precision (AP) varies a lot: up to 0.8 or more for "Anchorperson", "Studio With Anchorperson" and "Instrumental Musician", close to 0.1 for many others, and close to 0.01 for "Car Racing". These differences are partly due to the high and low frequencies of the target concepts in the test set and to the intrinsic difficulty of detecting them.
In comparison, figure 8 shows the (inferred) concept frequencies in the test collection. These frequencies correspond to the AP of a system making random predictions. Most concept frequencies are below 1% and even below 0.5%. The average concept frequency is 0.62%, while the MAP of the best and median systems is respectively 36.2% and 24.0%. It can also be observed that concepts with similar frequencies may obtain quite different Average Precisions and vice versa. For instance, "Computers" and "Old People" have similar frequencies but the AP is much higher for "Computers", indicating that "Old People" is harder to detect. Similarly, "Instrumental Musician" and "Studio With Anchorperson" have similar Average Precisions but "Studio With Anchorperson" is much less frequent, indicating that "Instrumental Musician" is harder to detect. This can be understood by the fact that "Instrumental Musician" is a true multi-modal target where the musician must be simultaneously seen and heard. "Basketball" is quite well detected, with an AP of 15.4%, even though it is very infrequent, with a frequency of 0.013%. Figure 7 shows only the results for the top 10 submissions of all participants. Though these include several runs from the same participants, they gather results from several different participants and are often quite tightly grouped, indicating that the best participants or systems always obtain very similar performances for the same target concepts, even though the median run (at a depth of 29) is significantly lower. This is particularly true for "Airplane", "Boat Ship", "Demonstration Or Protest", "Office", "Hills" or "Quadruped". For some other concepts, the AP varies much more within the top 10. This is the case for "Cheering", "Government Leaders", "Motorcycle", "Telephones", "Throwing" or "Flags".
3 Approaches
Though, as previously mentioned, the performance of systems cannot be directly compared across years due to changes in test data, target concepts and the amount of annotation data available, significant progress has been achieved over the six years during which the SIN task was run. This is confirmed for the last three years in the context of the progress secondary task as can be seen in section 8 but it is likely that this was also the case for the previous years. The approaches of the participants significantly evolved over time leading to significant increases in systems' performance. All of them rely on supervised learning using the provided training data or other annotated data or both. Though there were lots of variations and particular approaches, three main phases could be observed.
In the first phase, many systems followed the "Bag of Visual Words" (BoVW) approach 13) 14), which consists of applying the "bag of words" approach popular in textual information retrieval. In this approach, local features (or descriptors) are extracted for a number of points or patches in images (key frames) or in video shots, and are aggregated into a single global representation. Local features are first quantized according to a "dictionary" built by clustering features extracted on training data. Images or video shots are then represented as histograms of quantized features. Among the most popular local features are the Scale Invariant Feature Transform (SIFT) [START_REF] David | Distinctive image features from scale-invariant keypoints[END_REF] and its color version [START_REF] Van De Sande | Evaluating color descriptors for object and scene recognition[END_REF] for still images, and the Spatio-Temporal Interest Points (STIP) [START_REF] Laptev | On space-time interest points[END_REF] for video shots. These representations can be obtained from sparse sets of points or regions selected using, for instance, a Harris-Laplace detector, or from dense sets following regular grids. Additionally, representations can be computed either on a whole image or separately on various image decompositions, including pyramidal ones [START_REF] Van De Sande | Evaluating color descriptors for object and scene recognition[END_REF]. Other approaches also involve bags of trajectories.
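A minimal BoVW pipeline, assuming pre-extracted local descriptors and using k-means from scikit-learn purely for illustration, could look as follows:

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def build_dictionary(train_descriptors, k=1000, seed=0):
    """Cluster a sample of local descriptors (e.g. SIFT) into k visual words."""
    km = MiniBatchKMeans(n_clusters=k, random_state=seed)
    km.fit(np.vstack(train_descriptors))
    return km

def bovw_histogram(descriptors, dictionary):
    """Represent one key frame as a normalized histogram of visual words."""
    words = dictionary.predict(descriptors)
    hist = np.bincount(words, minlength=dictionary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)
```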
As alternatives or complements to the BoVW approach, participants used simpler descriptors like color histograms, Gabor transforms or local extraction of semantic information (semantic categories on image patches). A few participants also used audio descriptors, most of which were derived from sequences of Mel Frequency Cepstral Coefficient (MFCC) vectors, either via global statistics (mean, standard deviation, . . . ) or again via the bag of words approach.
The shot or key frame representations are then used for supervised learning, mostly using Support Vector Machine (SVM) classifiers. Most participants used several different representations (e.g. color, texture, interest points, audio, motion) and/or several machine learning methods, and fused them to obtain better results. Fusion methods included early and late fusion [START_REF] Cees | Early versus late fusion in semantic video analysis[END_REF], and kernel fusion [START_REF] Ayache | Advances in Information Retrieval: 29th European Conference on IR Research[END_REF], either in flat or in hierarchical ways [START_REF] Tiberius Strat | Hierarchical late fusion for concept detection in videos[END_REF].
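A simple late fusion of per-representation classifier scores, with min-max normalization and weights that would typically be tuned by cross-validation, is sketched below (illustrative only):

```python
import numpy as np

def late_fusion(score_lists, weights=None):
    """Combine per-shot scores from several classifiers into a single ranking.

    score_lists : list of 1-D arrays, one per representation/classifier,
                  each scoring the same ordered set of shots.
    A weighted arithmetic mean; geometric mean or rank-based fusion are
    common alternatives.
    """
    scores = np.vstack([np.asarray(s, dtype=float) for s in score_lists])
    # Min-max normalize each classifier so the scales are comparable.
    mins = scores.min(axis=1, keepdims=True)
    maxs = scores.max(axis=1, keepdims=True)
    scores = (scores - mins) / np.maximum(maxs - mins, 1e-12)
    if weights is None:
        weights = np.ones(len(score_lists))
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    return weights @ scores
```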
In the second phase, following their introduction in still image representation, improved aggregation methods were introduced or designed for video shot representation. These include Fisher Vectors [START_REF] Sánchez | Image classification with the fisher vector: Theory and practice[END_REF] , Vectors of Locally Aggregated Descriptors [START_REF] Jégou | Aggregating local descriptors into a compact image representation[END_REF] , Vectors of Locally Aggregated Tensors [START_REF] Picard | Efficient image signatures and similarities using tensor products of local descriptors[END_REF] , and SuperVectors [START_REF] Inoue | A fast and accurate video semanticindexing system using fast map adaptation and gmm supervectors[END_REF] . These methods allowed a significant improvement over the basic BoVW approach, even when using the same local descriptors. These methods rely on the use of GMM representations of the training data which capture more information than the basic BoVW approach.
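The following sketch shows the VLAD aggregation step for one key frame, assuming a k-means dictionary; Fisher Vectors and SuperVectors accumulate soft statistics with respect to a GMM instead of hard assignments, but follow the same residual-accumulation spirit.

```python
import numpy as np

def vlad(descriptors, centroids):
    """Aggregate local descriptors into a VLAD vector.

    descriptors : (n, d) array of local features for one key frame or shot.
    centroids   : (k, d) array of visual words (e.g. k-means centers).
    """
    k, d = centroids.shape
    # Assign each descriptor to its nearest centroid.
    dists = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    assign = dists.argmin(axis=1)
    v = np.zeros((k, d))
    for i in range(k):
        members = descriptors[assign == i]
        if len(members):
            # Accumulate residuals to the assigned centroid.
            v[i] = (members - centroids[i]).sum(axis=0)
    v = v.ravel()
    # Power (signed square-root) and L2 normalization, as commonly used.
    v = np.sign(v) * np.sqrt(np.abs(v))
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v
```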
In the third phase, deep learning methods, which made a significant breakthrough in still image categorization at ILSVRC 2012 [START_REF] Krizhevsky | Imagenet classification with deep convolutional neural networks[END_REF], were introduced and led to another significant improvement over the classical feature extraction and learning approaches. In contrast to these classical approaches, Deep Convolutional Neural Networks (DCNNs) are end-to-end solutions in which both the feature extraction and the classifier training are performed at once. The first layers extract a type of information which is similar to the features/descriptors extracted in the classical approaches. These, called "deep features", turn out to be significantly more effective than the classical "engineered" ones, even when used with classical machine learning for classifier training. The last DCNN layers perform the final classification, with a single network for all the target concepts. The global training of DCNNs guarantees an optimal complementarity between the feature extraction part and the classification part.
The TRECVid training data from the collaborative annotation does not contain enough data for a complete training of a large scale Deep Convolutional Neural Network (DCNN). When tried, this approach performed significantly less well than the two main alternative approaches also used in other domains. The first one consists of partially retraining a DCNN already trained on ImageNet data in order to adapt it to TRECVid (IACC) data. In this approach, the first layers, corresponding to the feature extraction part, are frozen and only the last few layers are retrained. This is because the deep features trained on ImageNet are very general and do not depend much upon the training data or upon the target concepts, while the last layers are much more specific to the set of target concepts. It has been experimentally observed that retraining only very few of the last layers is the best choice, the optimal number being typically only two or even one, depending upon the DCNN architecture. The second main alternative to a full DCNN retraining consists of extracting the output of the last few layers and using it just as ordinary features in a classical machine learning (e.g. SVM) approach. Once again, it has been observed that the last two hidden layers and even the final output layer are the best candidates.
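The first alternative can be sketched as follows in PyTorch; the backbone, the number of retrained layers, the multi-label loss and the torchvision weight-loading API are illustrative assumptions rather than a description of any particular participant's system.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_finetuned_model(n_concepts=346):
    """Freeze an ImageNet-pretrained backbone and retrain only the classifier.

    Multi-label setup: one sigmoid output per target concept, trained with
    binary cross-entropy. The backbone and layer split are illustrative.
    """
    net = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    for p in net.parameters():          # freeze the generic feature layers
        p.requires_grad = False
    net.fc = nn.Linear(net.fc.in_features, n_concepts)   # retrained layer
    return net

model = build_finetuned_model()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# One illustrative training step on a batch of key frames with 0/1 concept labels.
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 346)).float()
loss = criterion(model(frames), labels)
loss.backward()
optimizer.step()
```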
Fusion proved to be very efficient when used in conjunction with the deep learning approach. Such fusion can be done in many different ways: late fusion of different network architectures, late fusion of the same architecture with different training conditions, late fusion of partially retrained DCNNs and classical classifiers using deep features, late or early fusion of deep features combined with classical classifiers, or late fusion of DCNN-based classifiers and fully classical systems using engineered features. Though all of these solutions may have different performances, their fusion almost always outperforms the best elementary component, with the general rule that the more elements are integrated in a system, the better the performance it reaches, possibly leading to very high system complexity, as was already the case with the classical approaches.
Other, largely independent methods have also been used to further improve system performance, some of them not really new. Among them are the use of multiple key frames, to increase the chance of identifying the target concept in a video shot, and the use of detections of a concept in adjacent shots, to exploit the local semantic coherency of video content. In the context of DCNNs, data augmentation has also been used, again leading to a significant performance improvement.
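A minimal version of the adjacent-shot smoothing idea, with an illustrative neighbor weight, is sketched below:

```python
import numpy as np

def smooth_over_adjacent_shots(scores, weight=0.25):
    """Re-score shots using their temporal neighbors within the same video.

    scores : 1-D array of detection scores for consecutive shots of one video.
    weight : contribution of each adjacent shot (illustrative value; it would
             normally be tuned by cross-validation).
    """
    scores = np.asarray(scores, dtype=float)
    if len(scores) < 2:
        return scores
    smoothed = scores.copy()
    smoothed[1:] += weight * scores[:-1]    # add the previous shot's score
    smoothed[:-1] += weight * scores[1:]    # add the next shot's score
    norm = np.full_like(scores, 1.0 + 2 * weight)
    norm[0] = norm[-1] = 1.0 + weight       # edge shots have one neighbor
    return smoothed / norm
```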
The use of audio and motion (STIP or trajectory-based) features does help in the classical approach, but generally with a modest contribution. Audio and motion were not yet used in the best-performing deep learning based approaches. Ontology relations (implies and excludes) were provided but they did not seem to be used directly by the participants, probably due to the difficulty of integrating hard rules with detection scores. However, these relations were used in the collaborative annotation for generating the 28,864,844 total annotations from the 8,158,517 direct ones, so they were used indirectly in the training. Implicit or statistical relations between concepts were also used by some participants.
Concept Pair Task
For the 2012 and 2013 editions of the TRECVid benchmark, a secondary concept pair task was offered to SIN participants. This section motivates the task, summarizes the results, and highlights the approaches.
1 Motivation
An important motivation for the regular SIN task is to provide semantic tags for video retrieval technologies like filtering, categorization, browsing, and search. While a single concept detection result has been proven by many to be a valuable resource in all these contexts, several video retrieval scenarios demand more complex queries that go beyond a single concept. Examples of concept pairs are Animal + Snow, Person + Underwater and Boat/Ship + Bridges. Rather than combining concept detectors at query time, the concept pair task strives to detect the simultaneous occurrence of a pair of unrelated concepts in a video, where both concepts have to be observable simultaneously in a shot. The overall goal of the concept pair task is to promote the development of methods for retrieving shots containing a combination of concepts that do better than just combining the output of individual concept detectors.
While it can be foreseen that existing single concept detectors can also be trained using concept pair annotations, the combination of potential concept pairs is massive. Hence, such a pair-annotation approach seems unfeasible in practice and is therefore discouraged. By design, the concept pair task did not provide any pair annotations to participants.
2 Results
The performance metric for this task is the (inferred) MAP, exactly as for the main task. The 2012 edition of the concept pair task received a total of twelve submissions from six different teams. The top run achieved a score of 0.076 while the median score was 0.041. In addition, the MediaMill team from the University of Amsterdam provided four baseline runs using their single-concept run as the basis. These runs simply relied on the first concept occurrence only, the second concept occurrence only, the sum of both concept detector scores, and the product of both concept detector scores. The baseline recognizing pairs by focusing on the first concept only proved to be a surprisingly valuable tactic, ranking third with a score of 0.056. For the pair Driver + Female Human Face this baseline even came out best. Motivated by the fact that systems for pair detection have difficulty in finding evidence for concept co-occurrence, it was decided to continue the secondary task in 2013.
In 2013 participation grew to ten teams, submitting a total of 20 runs. Each participant was requested to submit a baseline run which, for each pair, just combines the output of the group's two independent single-concept detectors. In addition, the option to indicate the temporal order in which the two concepts occurred in a video shot was offered, but no team made use of it. The top run in 2013 achieved a score of 0.162. While this seems much better than the score obtained in 2012, it should be noted that the pairs changed and some may have been easier, or less rare, than the ones in 2012. The best performer for the pair Government Leader + Flags, for example, scored 0.658. Among the teams who submitted baselines, we found that three of them had baselines that achieved better scores than their regular runs, while only two teams had all their regular runs improve over the baseline. The best run simply combined individual concept detector scores by their product. As there was no experimental evidence after two editions of the task that dedicated approaches could outperform the simple baselines, it was decided to stop the concept pair task after the 2013 edition, for the time being.
3 Approaches
The majority of runs in the concept pair task focused on combining multiple individual detectors by well known fusion schemes, including sum, product and geometric mean. Some compensated for the quality and imbalance of the training examples of the individual detectors by weighted fusion variants. Other approaches learned the pair directly from the intersection of annotations for the individual concepts or gathered examples containing the pair from the web. Among the more unusual approaches was the submission from CMU, which considered many concepts beyond just the pair, enhancing the prediction of concept pairs using several semantically related concepts. Also unique was the submission by the MediaMill team, which tried to reduce the influence of the global image appearance on individual concept detectors by considering spatiotemporal dependencies between localized objects. Unfortunately, none of these approaches were able to outperform the simple combination baselines. Time will tell whether a concept pair is more than the sum of its parts.
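For reference, the simple baselines mentioned above can be expressed as follows; this is an illustrative sketch, not the code used by any participant:

```python
import numpy as np

def pair_scores(score_a, score_b, method="product"):
    """Combine two single-concept detector score arrays into pair scores.

    These correspond to the simple baselines discussed above: use only the
    first concept, only the second, their sum, their product, or their
    geometric mean.
    """
    a = np.asarray(score_a, dtype=float)
    b = np.asarray(score_b, dtype=float)
    if method == "first":
        return a
    if method == "second":
        return b
    if method == "sum":
        return a + b
    if method == "product":
        return a * b
    if method == "geometric":
        return np.sqrt(np.clip(a, 0, None) * np.clip(b, 0, None))
    raise ValueError(f"unknown method: {method}")
```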
Concept Localization Task
In order to encourage more precise concept detectors, in 2013 a new secondary task was initiated for localizing the occurrence of visual concepts in both the temporal and spatial domains. The main goal of this secondary task is to test the precision of concept detectors at the frame (temporal) and bounding box (spatial) levels, instead of just the shot level as in the main SIN task. The more precise the detectors, the more reusable they become, as they are less dependent on the video context. During 2013 and 2014, this secondary task was run such that systems participating in the SIN task had the option to submit runs localizing the first 1,000 shots. In 2015 the organizers decided to run it as an independent secondary task in which systems were given a set of relevant shots and asked to return localization result sets. In total, 10 concepts were chosen for localization. In the following sections we discuss in more detail the task, data, evaluation framework, metrics and results of the participating teams from 2013 to 2015.
1 Task definition
This secondary task can be described as follows: for each visual concept from the list of 10 designated for localization, and for each I-frame within a shot that contains the target, return the x,y coordinates of the upper left and lower right vertices of a bounding rectangle which contains all of the target concept and as little else as possible. Systems could find more than one instance of a concept per I-frame and could then include more than one bounding box for that I-frame, but only one was used in the judging, since the ground truth contained only one box per judged I-frame: the one chosen by the NIST assessor as the most prominent (e.g., largest, clearest, most central). Assessors were asked to stick with this choice if a group of targets was repeated over multiple frames, unless the prominence changed and they had to change their choice.
2 Data
For this secondary task we used the same test data sets (IACC.2.A, IACC.2.B, IACC.2.C) as used for SIN from 2013-2015 as the basis for the localization task.

3 Evaluation framework

Figures 9 and 10 show the evaluation framework at NIST for the localization secondary task in 2013-2014 and in 2015 respectively. In 2013, for each shot found to contain a localization concept in the main SIN task, a sequential percentage (22%) subset of the I-frames, beginning at a randomly selected point within the shot, was selected and presented to an assessor. However, in 2014 and 2015, systematic sampling was employed to select I-frames at regular intervals from the shot (every 3rd I-frame in 2014 and every alternate I-frame in 2015).

For each image the assessor was asked to decide first if the frame contained the concept or not, and if so, to draw a rectangle on the image such that all of the visible concept was included and as little else as possible.
In accordance with the secondary task guidelines, if more than one instance of the concept appeared in the image, the assessor was told to pick just the most prominent one and to box it in. Assessors were told that in the case of occluded concepts, they should include invisible but implied parts only as a side effect of boxing all the visible parts.
Early in the assessment process it became clear that some additional guidelines were needed. For example, sometimes in a series of sequential images the assessor might know from context that a blurred area was in fact the concept. In this case we instructed the assessor to judge such an image as containing the concept and to box in the blurry area.
A minimum of 5 assessor half-days for each of the 10 concepts to be judged was planned (total of 200 labor hours). This was based on some preliminary tests at NIST where it was estimated that each assessor could judge roughly 6,000 images in the time allotted.
Table 3 describes, for each concept, the total number of shots judged to contain the concept and the number of I-Frames comprised by those shots from 2013-2015. Note that the two concepts "Chair" and "Hand" were replaced in 2015 by "Anchorperson" and "Computer" due to the very high frequency of occurrence of "Chair" in the test collection and the ambiguity of the definition of the concept "Hand" (A close-up view of one or more human hands, where the hand is the primary focus of the shot).
4 Measures Used
Temporal and spatial localization were evaluated using precision, recall and F-score based on the judged I-frames. An I-frame is judged as a true frame temporally if the assessor can see the concept in it. Spatial recall and precision are calculated using the overlap area between the submitted bounding box and the ground-truth box drawn by the assessor. NIST then calculated the average of each of these scores for each concept and for each run.
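Under the assumption that boxes are given as upper-left and lower-right corners, the per-frame spatial precision and recall could be computed as in the following sketch (illustrative, not the exact NIST scoring code):

```python
def box_area(box):
    """box = (x1, y1, x2, y2) with x1 <= x2 and y1 <= y2."""
    return max(0, box[2] - box[0]) * max(0, box[3] - box[1])

def spatial_precision_recall(submitted, truth):
    """Per-frame spatial precision/recall from the overlap of two boxes.

    precision = overlap / area(submitted box)
    recall    = overlap / area(ground-truth box)
    """
    ix1, iy1 = max(submitted[0], truth[0]), max(submitted[1], truth[1])
    ix2, iy2 = min(submitted[2], truth[2]), min(submitted[3], truth[3])
    overlap = box_area((ix1, iy1, ix2, iy2))   # zero if the boxes do not overlap
    prec = overlap / box_area(submitted) if box_area(submitted) else 0.0
    rec = overlap / box_area(truth) if box_area(truth) else 0.0
    return prec, rec
```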
5 Evaluations and Results
In this section we summarize the participants' results from 2013-2015 by the type of localization measured. In general, 4 teams finished the first year of the localization secondary task, submitting a total of 9 runs, while 1 team finished the second year with 4 submitted runs. In 2015, when the task became independent from the semantic indexing task, 6 teams finished with a total of 21 runs. As the results of 2014 may not support real conclusions about system performance, because only 1 team finished the task, we skip the results from that year and discuss only the 2013 and 2015 results.
(1) Temporal Localization Results
Figures 11 and 12 show the mean precision, recall and F-scores of the returned I-frames for all runs across all 10 concepts in 2013 and 2015 respectively. In 2013 all runs reported much higher recall (reaching a maximum above 50%) than precision or F-score, except one team (FTRDBJ) which had similar scores for the three measures. The low precision scores (maximum 20%) indicate that most runs returned many non-relevant I-frames that did not contain the concept. In 2015 systems reported much higher F-score values compared to the previous two years, as 9 out of 21 runs scored above 0.7 and 8 runs scored above 0.6. We believe these high scores are a side-effect of localizing only true positive shots (the output of the semantic indexing task) instead of raw shots (as in 2013-2014), which may or may not actually contain the concept.
To visualize the distribution of recall vs precision, we plotted recall and precision for each submitted concept and run in Figures 13 and 14 for 2013 and 2015 respectively. We can see in Figure 13 that the majority of systems submitted many non-target I-frames, achieving high recall and low precision, while very few found a balance. However, in 2015 most concepts achieved high values for both precision and recall (above 0.5). We deliberately picked difficult visual examples (small size, occluded, low illumination, etc.) to demonstrate how well a sophisticated localization system can perform, while we picked easy examples in Fig. 16 (centered, big, clear, etc.) where results are not good; this variation in performance shows the gap between a top system and a low-ranked system. Figures 17 and 18 show the performance by run for spatial localization (correctly returning a bounding box around the concept). In 2013 the scores were much lower than for the temporal measures, barely reaching above 10% precision. This indicates that finding the best bounding box was a much harder problem than just returning a correct I-frame. In 2015 the spatial F-score range was smaller than the temporal one but still higher than in the previous two years. Overall, 8 out of the 21 runs scored above 50% and another 8 runs exceeded 40%. The distribution of recall vs precision in Figures 19 and 20 shows that systems are good at submitting an approximate bounding box size that overlaps with the ground-truth bounding box coordinates, as indicated by the cloud of points in the direction of positive correlation between precision and recall. It can also be seen that in 2015 performance is much better, as the distribution of points moves away from the low precision and recall values (less than 0.2) that were common in 2013.
6 Approaches
Most approaches by participating teams started by applying selective search [START_REF] Uijlings | Selective search for object recognition[END_REF] or EdgeBox [START_REF] Zitnick | Edge boxes: Locating object proposals from edges[END_REF] algorithms to extract a set of candidate boxes independent of the concept category. Features are then extracted from the proposed boxes, either in a bag of words framework or, more recently, using deep learning models such as VGG-16, Fast R-CNN (Region-based Convolutional Neural Networks) or Inception deep neural networks [START_REF] Szegedy | Going deeper with convolutions[END_REF]. Support vector machines are usually applied as a final layer for classification. In addition, a few teams employed Deformable Part-based Models [START_REF] Felzenszwalb | Object detection with discriminatively trained part-based models[END_REF] with color or texture features. Deep learning-based approaches, especially the R-CNN-based ones, performed the best.

7 Summary and Observations on Localization Tasks

The localization secondary task was a successful addition to the semantic indexing main task in 2013 and 2014, and it was decided to run it independently in 2015. In general, detecting the correct I-frames (temporal localization) was easier than finding the correct bounding box around the concepts in those I-frames (spatial localization). Overall, systems can find a good approximate bounding box that overlaps with the ground-truth box, but still not with high precision.
In 2015 the scores were significantly higher, mainly because we aimed to make systems just focus on the localization task, bypassing any prediction steps to decide if a video shot included the concept or not as was done in the previous two years in the main semantic indexing task. This may have caused the task to be relatively easy compared to a real-world use case where a localization system would have no way to know beforehand if the video shot already included the concept or not. In future localization tasks we plan to give systems raw shots (which may include true positive or true negative concepts) simulating a semantic indexing predicted shot list for a given concept. We also plan to test systems on a new set of concepts which may include some actions which span much more frames temporally compared to only objects that may not include much motion.
No Annotation Task
For the 2012 to 2015 issues of TRECVid, a "no annotation" secondary task was offered to SIN participants. This section describes how that task worked and the outcomes.
1 Motivation
The motivation behind launching a "no annotation" secondary task is a reflection of the difficulty associated with finding good training data for the supervised learning tools which are used in automatic concept detection. As seen throughout this paper, and especially in subsection 2. 6, the overhead behind manual annotation of positive, and even negative, examples of concept occurrence is huge. The potential for automatically harvesting training data for supervised learning from web resources has been recognised by many, including the first such work by [START_REF] Ulges | Computer Vision Systems: 6th International Conference[END_REF] , [START_REF] Arjan | Can social tagged images aid concept-based video search? In Multimedia and Expo[END_REF] , [START_REF] Fan | Harvesting large-scale weakly-tagged image databases from the web[END_REF] and subsequently by others.
With this in mind, TRECVid offered a secondary SIN task in which no training data for the concepts was provided to participants. There were two variations of the task offered in each of the 4 years, described earlier in section 4 and repeated here: type E used only training data collected automatically using only the concepts' names and definitions; type F used only training data collected automatically using a query built manually from the concepts' names and definitions. The intent here was that participants be encouraged to automatically collect whatever training data they could, and most used web resources like word-based image search or word-based search of video resources such as YouTube. This proposition is attractive because it means that, in theory, there is effectively no restriction on the range or type of semantic concepts for which we can build video detectors, and this opens up huge possibilities for video search.
The potential downside to this idea is that the efficacy of these detectors depends on how accurately participants can locate a good quality set of training data. With manual annotation of training data we expect the annotations to be accurate and there will be few, if any, false positives, whereas with automatically collected training data we are at the mercy of the techniques that participants use to harvest such data. For abstract concepts this is even more difficult, and even for semantic concepts which refer to (physical) objects like "motor car", "tree" or "computer screen", it is a challenge to automatically locate many hundreds of positive examples with no false positives creeping into the training set. However, with the quality of image search on major search engines improving constantly, some of the earlier work in the area, like that reported in [START_REF] Ulges | Computer Vision Systems: 6th International Conference[END_REF], [START_REF] Arjan | Can social tagged images aid concept-based video search? In Multimedia and Expo[END_REF], [START_REF] Fan | Harvesting large-scale weakly-tagged image databases from the web[END_REF], is already quite dated in that it was then dealing with a level of image search quality which is now much improved. An additional problem to the level of noise in automatically crawled data is the possible domain mismatch between the general material that can be gathered from the web and the specific domain for which we may want to build a concept detector.
2 Results
Table 4 shows the number of runs submitted by participants for the E- and F-type SIN task conditions, for each of the 4 years this secondary task was offered. From this we can see that participation was very low, with only 18 runs from just a few participants over 3 of the 4 years this was offered. What was interesting about those results was the performance, as measured in terms of mean InfAP. In 2012 the best-performing category A result was 0.32 InfAP with a median across submissions of 0.202, while the best-performing category F result was 0.071, with a median of 0.054. The "no annotation" results fall far short of the full category A but, for a first running of the secondary task, this was encouraging. By 2014 (there were no results submitted in 2015), the best category E submission scored 0.078 against a best category A submission of 0.34 (mean 0.217). Once again these results are encouraging but, with low interest in the task and no participation in its last year, we may have already tapped into all the interest that there might be in this topic.
3 Approaches
For the (limited) number of participants who submitted runs in this task, some used the results of searches to YouTube as a source of training data, others used the results of searches to Google image search, and some used both.
One of the participating teams (the MediaMill group at the University of Amsterdam) investigated three interesting research questions, described in [START_REF] Kordumova | Best practices for learning video concept detectors from social media examples[END_REF]. They found that:
• Tagged images are a better source of training data than tagged videos for learning video concept detectors;
• Positive examples from automatically selected tagged images shows best performance;
• Negative training examples are best selected with a negative bootstrap of tagged images.

One of the things that this secondary task has raised is the question of whether a no annotation approach to determining concept presence or absence is better applied a priori at indexing time, as in this task, or dynamically at query time. One of the disadvantages of indexing video by semantic concepts in advance of searching is that we need to know, and define, those concepts, and that limits subsequent searching and video navigation to just those concepts that have been built and applied to the video collection. Building concept detectors at query time allows concepts to be dynamically constructed, if this can be achieved with reasonable response time.
Recent work such as that by [START_REF] Chatfield | On-the-fly learning for visual search of large-scale image and video datasets[END_REF] has shown that it is possible to take a text query, download several hundred top-ranked images from Google image search, compute visual features of those images on-the-fly and use these as positive examples for building a classifier, which is then applied to a video collection to detect the presence (and absence) of video shots containing the concept represented by the Google image search query, and to do all this within a couple of seconds while the searcher waits for query results. In the work reported to date this is shown to work well for visually distinct objects like "penguin", "guitar" or "London bus", where the quality of the training set, in terms of how many false positives creep into the top-ranked images when searching for penguins, guitars or London buses, is not so important. Further work to refine and improve the training set will mean that more challenging concepts should be detectable, and this would offer a real alternative to what was promoted in this secondary SIN task.
Progress Task

1 Motivation
Evaluation campaigns like TREC, TRECVid, ImageNet LSVRC and many others are very good for comparing automatic indexing methods at a given time point. The evaluation protocols are usually well designed so that comparisons between methods, systems and/or research teams are as fair as possible. The fairness of the comparison relies for a significant part on the fact that all systems are compared using the same training data (and annotations) and that test data are processed blindly, with results being submitted by the same deadline. It also relies on the trust granted to the participants that they respect the guidelines, especially concerning blind processing. While it is acceptable that they have a look at the results for checking that these make sense and for detecting or fixing major bugs, they should never do any system tuning by analyzing them.
This approach implies that when such campaigns are organized periodically, new fresh test data are made available for each issue because a lot of information can be obtained via the analysis of past results, taking them into account in a new version of the system. Applying the new system on past data will then result in biased and invalid results. This is the approach used for the SIN task (as can be seen in Table 2) and more generally at TRECVid.
While it is good to compare various systems or methods at a given time point, it is also interesting to monitor the overall evolution of the state of the art methods' performance over time. As previously mentioned, it is not possible to do this directly using the results obtained from consecutive issues of TRECVid because they differ on the test samples, on the evaluated categories and/or on the amount of training data. The ideal solution would be that regular participants keep a version of their system from each year and apply it, unchanged, for each of the subsequent years. Even in this case, the comparison would not be meaningful if new training data became available in the intervening period. For practical reasons, it is often complicated to maintain over years, a number of previous versions of the systems and, in the best cases, some participants are able to make one reference submission using their best system from the previous year. Some studies have shown that significant progress has been achieved over time in the past [START_REF] Cees | Visual-Concept Search Solved[END_REF] . However, these have been made a posteriori, and while their conclusions are valuable, they did not strictly follow the blind submission process. Also, they concerned only submissions from a single participant.
The "progress" secondary task was developed following the feedback from a number of participants from 2010 to 2012. Its goal was to obtain meaningful comparisons between successive versions of systems and to accurately measure the performance progress over time. It was conducted on the 2013 to 2015 issues by:
• releasing the test data for the three 2013 to 2015 issues at once;
• freezing the training data and annotation sets (no new annotations were made available in 2014 and 2015);
• freezing the concept set for which submissions were requested;
• requiring participants each year to directly submit runs for the current issue and for all the available subsequent issues (i.e. in 2013, participants submitted runs for the 2013, 2014 and 2015 test collections; in 2014, they submitted runs for the 2014 and 2015 test collections; in 2015, they submitted runs only for the 2015 test collection).
Apart from the fact that some submitted runs are anticipated submissions for future years, this secondary task is exactly the same as the main SIN task described in section 4. Submissions to the progress task corresponding to the current year are the same as those for the main task by the same participant. Submissions made by a participant for the future years are included in the pool of submissions for these future years. These anticipated submissions have been filtered out in the presentation of the results in section 4 but they were included in the same evaluation process, including their insertion in the pooling process for assessment.
Submitting to the progress secondary task required little extra effort from participants: they just ran their systems on one, two or three slices of the test data instead of only one, while the main work was in the design, training and tuning of their systems. The rule of not using new annotations for the 2014 and 2015 submissions was specific to the progress task. Some participants in the main task that did not submit to the progress task actually used the 2013 and/or 2014 assessments as additional annotations, especially for parameter tuning by cross-validation on them. This possibly induced a small disadvantage for the participants in the main task that strictly followed the progress task protocol.
2 Results
Six groups participated in the progress task by submitting anticipated runs in 2013 and 2014: Eurecom, IRIM, ITI CERTH, LIG/Quaero, UEC and insightdcu. Figure 21 shows the performance obtained on the 2015 test collection with their 2013, 2014 and 2015 systems. For most of them, a significant performance improvement is observed. Some of the points, e.g. the Eurecom and UEC 2013 submissions, are "outliers", their low performance being due to bugs in their submissions. In the case of Eurecom, IRIM and LIG-Quaero, most of the performance gains come from the use of more and more deep features. For IRIM and LIG-Quaero, they also come from the use of multiple key frames in 2015. The typical performance gain between 2013 and 2015 is about 30% in relative MAP value. It was mostly due to the use of deep learning, either directly via partial retraining, or indirectly via the use of deep features, or via combinations of both.
In addition to the official progress task, some participants, like the University of Amsterdam, often submitted one run for the current year using their previous year's best system as a baseline. Though this approach does not strictly follow the progress task protocol, it still produces meaningful results that also demonstrate significant progress over the years. Additionally, some participants, like the University of Helsinki, compared the year-on-year progression of their PICSOM system over 10 years [START_REF] Viitaniemi | Advances in visual concept detection: Ten years of trecvid[END_REF], covering most of the current semantic indexing task period but also the previous High-Level Feature detection task.
Post-campaign experiments
The TRECVid advisory committee has decided to stop (or suspend) the Semantic Indexing task in 2016. The main reason is that a lot has been learned on this problem for which many techniques are now mature and effective and it is time to move back to the previously suspended main video search task. In the context of the new Ad hoc Video Search (AVS) task, semantic indexing is likely to play a significant role and it will still be indirectly evaluated as a key component.
The data, annotations, metrics, assessment and protocol of the task will remain available for the past TRECVid participants or for new groups that would like to use them for post-campaign experiments. This is similar to what is proposed for the Pascal VOC Challenge 3) closed in 2012 and for which it is still possible to evaluate submissions for the past campaigns and for which an evaluation server is still running and a leaderboard is permanently maintained. This will be slightly different in the case of the TRECVid semantic task.
First, the "ground truth" on the test data has been released for the SIN task while it is maintained hidden for Pascal VOC. However, in both cases, the validity of the results rely on the trust granted to the participants that they will not tune their system on the test data; this is a bit harder in the case of VOC but still possible since a number of test submissions are possible. In the case of the TRECVid SIN task, participants to post-campaign experiments should not in any case tune their systems on the test data; for the results to be valid and fair, system tuning should be done only by cross-validation within the development data. A second difference is that there will be no evaluation server; the evaluation will have to be made directly by the participants using the provided ground truth and the sample eval tool available on the TRECVid server.
Conclusion
The Semantic INdexing (SIN) task ran at TRECVid from 2010 to 2015 inclusive with the support of NIST and the Quaero project. It followed the previously proposed High-Level Feature (HLF) detection task, which ran from 2002 to 2009 2). It attracted over 40 participants over the period. The number of participants gradually decreased over the period, whereas it had increased during the previous HLF task, but 15 groups still finished in each of the last two editions.
The task was conducted using a total of 1,400 hours of video data drawn from the IACC collection gathered by NIST. 200 hours of new test data was made available each year plus 200 more as development data in 2010. The number of target concepts started from 130 in 2010 and was extended to 346 in 2011. Both the increase in the volume of video data and in the number of target concepts favored the development of generic and scalable methods. A very large number of annotations was produced by the participants and by the Quaero project on a total of 800 hours of development data.
In addition to the main semantic indexing task, four secondary tasks were proposed: the "localization" task, the "concept pair" task, the "no annotation" task, and the "progress" task.
Significant progress was accomplished during the period, as was accurately measured in the context of the progress task and also observed in the participants' contrast experiments. Two major changes in the methods were observed: a first one with the move from the basic "bag of visual words" approach to more elaborate aggregation methods like Fisher Vectors or SuperVectors, and a second one with the massive introduction of deep learning, either via partially retrained networks or via the use of features extracted using previously trained deep networks. These methods were also combined with many others, such as fusion of features or of classifiers, the use of multiple frames per shot, the use of semantic temporal consistency, and the use of audio and motion features. Most of this progression was directly made possible by the development data, the annotations, and the evaluations proposed in the context of the TRECVid semantic indexing task.
Fig. 9 2013-2014 Evaluation Framework
Fig. 10 2015 Evaluation Framework
Fig. 11 2013: Temporal localization results by run
Fig. 12 2015: Temporal localization results by run
Fig. 14 2015: temporal precision and recall per concept for all teams
Fig. 16 Visual samples of less good results
Fig. 17 2013: Spatial localization results by run
Fig. 18 2015: Spatial localization results by run
Fig. 19 2013: spatial precision and recall per concept for all teams
Fig. 21 Progress task results: performance on 2015 test data from 2013, 2014 and 2015 systems.
Table 1 IACC collections statistics
Collection (slice) | Total duration (h) | Video files | Min/mean/max video duration (s) | Shots | Mean shot duration (s) | Used for training | Used for test
IACC.1.tv10.training | 198 | 3,127 | 211/228/248 | 118,205 | 6.04 | 2010-2015 | -
IACC.1.A | 220 | 8,358 | 11/95/211 | 144,757 | 5.48 | 2011-2015 | 2010
IACC.1.B | 218 | 8,216 | 11/96/211 | 137,327 | 5.72 | 2012-2015 | 2011
IACC.1.C | 221 | 8,263 | 11/96/211 | 145,634 | 5.46 | 2013-2015 | 2012
IACC.2.A | 199 | 2,407 | 10/297/387 | 110,947 | 6.46 | - | 2013
IACC.2.B | 197 | 2,368 | 10/299/387 | 106,611 | 6.65 | - | 2013-2014
IACC.2.C | 199 | 2,395 | 10/298/387 | 113,046 | 6.32 | - | 2013-2015
Total | 1452 | 35,134 | 10/149/387 | 876,527 | 5.97 | N.A. | N.A.
Table 2 2010-2015 SIN tasks summary
Year | Training data | Test data | Annotated concepts | Submitted concepts | Evaluated concepts | Concept pairs | Localization | No annotation | Progress
2010 | IACC.1.tv10.training | IACC.1.A | 130 | 10/130 | 10/30 | - | - | - | -
2011 | 2010 train + 2010 test | IACC.1.B | 346 | 50/346 | 23/50 | - | - | - | -
2012 | 2011 train + 2011 test | IACC.1.C | 346 | 50/346 | 15/46 | 10 | - | Yes | -
2013 | 2012 train + 2012 test | IACC.2.A | 346 | 60 | 38 | 10 | 10 | Yes | Yes
2014 | 2013 train | IACC.2.B | 346 | 60 | 30 | - | 10 | Yes | Yes
2015 | 2013 train | IACC.2.C | 346 | 60 | 30 | - | 10 | Yes | Yes
Fig. 7 Top 10 InfAP scores by concept for the 2015 main task. Starred concepts were common between the 2014 and 2015 main tasks.
Fig. 8 Inferred concept frequency for the 2015 main task
Table 3 Number of TP shots and I-frames per concept
Name | True shots | I-frames
Airplane | 594 | 10,229
Boat Ship | 1,296 | 2,917
Bridges | 662 | 884
Bus | 561 | 12,027
Chair | 2,375 | 93,206
Hand... | 1,718 | 20,266
Motorcycle | 584 | 12,086
Telephones | 508 | 19,163
Flags | 1,219 | 41,886
Quadruped | 1,233 | 50,448
Anchorperson | 300 | 14,119
Computers | 300 | 15,814
Table 4 Number of runs submitted in the "no annotation" secondary task
Year | Type E | Type F
2012 | 1 | 4
2013 | 6 | 3
2014 | 4 | 0
2015 | 0 | 0
Acknowledgments
This work was also partly realized as part of the Quaero Programme funded by OSEO, French State agency for innovation. The authors wish to thank Paul Over from NIST, now retired, for his work in setting up the TRECVid benchmark and for his help in managing the semantic indexing task.
Disclaimer: Certain commercial entities, equipment, or materials may be identified in this document in order to describe an experimental procedure or concept adequately. Such identification is not intended to imply recommendation or endorsement by the National Institute of Standards, nor is it intended to imply that the entities, materials, or equipment are necessarily the best available for the purpose.
Georges Quénot
Georges Quénot is a senior researcher at CNRS (French National Centre for Scientific Research). He has an engineer diploma of the French Polytechnic School (1983) and a PhD in computer science (1988) from the University of Orsay. He currently leads the Multimedia Information Indexing and Retrieval group (MRIM) of the Laboratoire d'informatique de Grenoble (LIG) where he is also responsible for their activities on video indexing and retrieval. His current research activity includes semantic indexing of image and video documents using supervised learning, networks of classifiers and multimodal fusion. | 77,344 | [
"3114"
] | [
"442150",
"122781",
"442774",
"1041964"
] |
01479456 | en | [
"shs",
"math"
] | 2024/03/04 23:41:46 | 2014 | https://hal.science/hal-01479456/file/tournes_2014a_icm_seoul.pdf | Dominique Tournès
Mathematics of Engineers: Elements for a New History of Numerical Analysis
Keywords: Mathematics of engineers, numerical analysis, nomography, civil engineering, topography, ballistics, hydraulics, linear systems, differential equations, dynamical systems
Mathematics Subject Classification (2010): Primary 65-03; Secondary 01A55, 01A60
The historiography of numerical analysis is still relatively poor. It does not take sufficient account of the numerical and graphical methods created, used and taught by military and civil engineers in response to their specific needs, which are not always the same as those of mathematicians, astronomers and physicists. This paper presents some recent historical research showing how worthwhile it would be to examine more closely the mathematical practices of engineers and their interactions with other professional communities, in order to better define the context in which numerical analysis emerged as an autonomous discipline in the late 19th century.
Introduction
Few recent books have been devoted to the history of numerical analysis. Goldstine [START_REF] Goldstine | A History of Numerical Analysis from the 16th through the 19th Century[END_REF] was a pioneer. His work focuses primarily on identifying numerical methods encountered in the works of some great mathematicians: Newton, Maclaurin, Euler, Lagrange, Laplace, Legendre, Gauss, Cauchy and Hermite. The main problems are the construction of logarithmic and trigonometric tables necessary for astronomical calculations, Kepler's equation, the Lunar theory and its connection with the calculation of longitudes, the three-body problem and, more generally, the study of perturbations of orbits of planets and comets. Through these problems we witness the birth of finite difference methods for interpolating functions and calculating quadratures, of expansions in series or continued fractions for solving algebraic and differential equations, and of the method of least squares for finding optimal solutions of linear systems with more or fewer equations than unknowns. At the end of the book, a few pages involve Runge, Heun, Kutta and Moulton, that is to say, figures who can be considered the first applied mathematicians identified as such in the late 19th century and the beginning of the 20th. In Goldstine's survey, numerical analysis is thus the fruit of a few great mathematicians who developed the foundations of today's numerical methods by solving some major problems of astronomy, celestial mechanics and rational mechanics. These numerical methods were then deepened by professional applied mathematicians appearing in the late 19th century, which was the time when numerical analysis, as we know it today, structured itself into an autonomous discipline. In this story, a few areas of inspiration and intervention other than astronomy are sometimes mentioned incidentally, but no engineer is explicitly quoted.
While Goldstine actually begins his history in the 16th century, Chabert [START_REF] Chabert | A History of Algorithms: From the Pebble to the Microchip[END_REF] gives more depth to the subject by examining numerical algorithms in a variety of texts from various civilizations since Antiquity. Besides the famous previously mentioned problems of astronomy such as Kepler's equation, the determination of orbits of comets, the brightness of stars, etc., there are some references to other domains, for example the theory of vibrating strings or the signal theory. Some engineers are mentioned, in general in connection with secondary points. Only one of them, Cholesky, is quoted for a significant contribution consisting in an original method for solving linear systems (see Section 3). Despite these few openings compared to previous work, most numerical analysis questions addressed in Chabert's book are presented as abstract mathematical problems, out of context.
In a more recent collective book edited by Bultheel and Cools [START_REF]The Birth of Numerical Analysis[END_REF], the birth of modern numerical analysis is located precisely in 1947, in a paper of John von Neumann (1903-1957) and Herman Goldstine (1913-2004) [START_REF] Neumann | Numerical inverting of matrices of high order[END_REF] which analyzes for the first time in detail the propagation of errors when solving a linear system, in conjunction with the first uses of digital computers. The authors naturally recognize that a lot of numerical calculations were made long before this date in various questions of physics and engineering, but for them the problem of the practical management of calculations made by computer actually founds the field of numerical analysis, and this apparently technical problem is at the origin of the considerable theoretical developments that this domain has generated since the mid-20th century. In this book, written not by historians but by specialists of numerical analysis, it is interesting to note that the accepted actors of the domain do not trace the history of their discipline beyond what characterizes their current personal practices.
In fact, the birth of numerical analysis, in the modern sense of the term, should not be connected to the advent of digital computers, but to the distinction between pure mathematics and applied mathematics (formerly "mixed mathematics"), which was clarified gradually throughout the 19th century with a more and more marked separation between the two domains in scientific journals, institutions and university positions 1 . The development of new calculating instruments -before computers, there were numerical and graphical tables, slide rules, mechanical instruments of integration, desktop calculators, etc. -has also contributed to setting up a new equilibrium between analytical, numerical and graphical methods. It is actually around 1900 that mathematicians began to formulate, in concrete terms, what is meant by "applied mathematics". Germany, and particularly Göttingen, played a leading role in this international process of institutionalization of applied mathematics as an autonomous domain [26, p. 60-63]. Encouraged by Felix Klein, Carl Runge (1856-1927) and Rudolf Mehmke (1857-1944) assumed in 1901 the editorship of the Zeitschrift für Mathematik und Physik and devoted this journal to applied mathematics. In 1904, Runge accepted the first full professorship of applied mathematics at the University of Göttingen. In 1907, German applied mathematicians adopted the following definition:
The essence of applied mathematics lies in the development of methods that will lead to the numerical and graphical solution of mathematical problems. 2

Recent research has shown that engineers have constituted a bridge between mathematics and their applications since the 18th century, and that problems encountered in ballistics, strength of materials, hydrodynamics, steam engines, electricity and telephone networks also played an important role in the creation of original numerical and graphical methods of computation. In fact, the mathematical needs of engineers seem very different from those of mathematicians. To illustrate this with a significant example, consider the problem of the numerical solution of equations, a pervasive problem in all areas where mathematics intervenes. Léon-Louis Lalanne (1811-1892), a French civil engineer who, throughout his career, sought to develop practical methods for solving equations, wrote what follows as a summary when he became director of the École des ponts et chaussées:
The applications have been, until now, the stumbling block of all the methods devised for solving numerical equations; not that either the rigor of these processes or the beauty of the considerations on which they are based could have been challenged, but finally it must be recognized that, while continuing to earn the admiration of geometers, the discoveries of Lagrange, Cauchy, Fourier, Sturm, Hermite, etc., did not always provide easily practicable means for the determination of the roots. 3

1 See [START_REF] Epple | From "mixed" to "applied" mathematics: Tracing an important dimension of mathematics and its history[END_REF].
2 "Das Wesen der angewandten Mathematik liegt in der Ausbildung und Ausübung von Methoden zur numerischen und graphischen Durchführung mathematischer Probleme" (quoted in [27, p. 724]).
3 "Les applications ont été, jusqu'à ce jour, la pierre d'achoppement de tous les procédés imaginés pour la résolution des équations numériques, non pas que, ni la rigueur de ces procédés, ni la beauté des considérations sur lesquelles ils se fondent, en aient reçu la moindre atteinte; mais enfin il bien reconnaître que, sans cesser de mériter l'admiration des géomètres, les découvertes de Lagrange, de Cauchy, de Fourier, de Sturm, d'Hermite, etc., n'ont pas fourni toujours des moyens facilement praticables pour la détermination des racines" [20, p. 1487].
Lalanne says that as politely as possible, but his conclusion is clear: the methods advocated by mathematicians are not satisfactory. These methods are complicated to understand, long to implement and sometimes totally impracticable for ground engineers, foremen and technicians, who, moreover, did not always receive a high-level mathematical training.
Given such a situation, 19th-century engineers were often forced to imagine by themselves the operational methods and the calculation tools that mathematicians could not provide them. The objectives of the engineer are not the same as those of the mathematician, the physicist or the astronomer: the engineer rarely needs high accuracy in his calculations, he is rather sensitive to the speed and simplicity of their implementation, especially since he often has to perform numerous and repetitive operations. He also needs methods adapted for use on the ground, and not just for use at the office. Finally, priority is given to methods that avoid performing calculations by oneself, methods that provide the desired result directly through a simple reading of a number on a numerical or graphical table, on a diagram, on a curve or on the dial of a mechanical instrument.
In this paper, I want to show, through some examples from recent historical research, that engineers, so little mentioned so far in the historiography of numerical analysis, have contributed significantly throughout the 19th century to the creation of those numerical and graphical methods that became an autonomous discipline around 1900. More than that, I shall underline that their practical methods have sometimes been at the origin of new theoretical problems that also inspired pure mathematicians.
From Civil Engineering to Nomography
The 19th century is the moment of the first industrial revolution, which spreads throughout the Western world at different rates in different countries. Industrialization causes profound transformations of society. In this process, the engineering world acquires a new identity, marked by its implications in the economic development of industrial states and the structuration of new professional relationships that transcend national boundaries. Linked to the Industrial Revolution, enormous computational requirements appeared during the 19th century in all areas of engineering sciences and caused an increasing mathematization of these sciences. This led naturally to the question of engineering education: how were engineers prepared to use high-level mathematics in their daily work and, if necessary, to create by themselves new mathematical tools?
The French model of engineering education in the early 19th century is that of the École polytechnique, founded in 1794 4 . Although it initially had the ambition to be comprehensive and practice-oriented, this school quickly promoted a high-level teaching dominated by mathematical analysis. This theoretical teaching was then completed, from the professional point of view, by two years in application schools with civil and military purposes. Such a training model, which subordinates practice to theory, produced a corporation of "scholarly engineers" capable of using the theoretical resources acquired during their studies to achieve an unprecedented mathematization of the engineering art. This model is considered to have influenced the creation of many polytechnic institutes throughout Europe and in the United States.
A paradigmatic example of a corpus of mathematical tools, constituting an autonomous body of knowledge created from scratch by engineers themselves to meet their needs, is that of nomography 5 . The main purpose of nomography is to construct graphical tables to represent any relationship between three variables, and, more generally, relationships between any number of variables. Among the "Founding Fathers" of nomography, four were students at the École polytechnique: Lalanne, Charles Lallemand (1857-1938), Maurice d'Ocagne and Rodolphe Soreau. The only exception in this list is the Belgian engineer Junius Massau (1852-1909), a former student and then professor at the school of civil engineering of the University of Ghent, but, in this school of civil engineering, the training was comparable to that of the École polytechnique, with high-level courses of mathematics and mechanics.
During the years 1830-1860, the sector of public works experiences a boom in France and more generally in Europe. The territories of the different countries are covered progressively by vast networks of roads, canals, and, after 1842, of railways. These achievements require many tedious calculations of surfaces of "cut and fill" on cross-sections of the ground. Cut and fill is the process of earthmoving needed to construct a road, a canal or a railway. You have to cut land where the ground level is too high and then transport this land to fill the places where the ground level is too low. And to calculate roughly the volume of land to be transported, you have to decompose this volume in thin vertical slices, evaluate the area of each slice and sum all these elementary areas.
Civil engineers tried different methods of calculation more or less expeditious. Some, like Gaspard-Gustave Coriolis (1792-1843), have calculated numerical tables giving the surfaces directly based on a number of features of the road and its environment. Other engineers, especially in Germany and Switzerland, designed and built several kinds of planimeters, that is mechanical instruments used to quickly calculate the area of any plane surface. These planimeters, which concretize the continuous summation of infinitesimal surfaces, had significant applications in many other scientific fields beyond cuts and fills. Still others, like Lalanne, have imagined replacing numerical tables by graphical tables, cheaper and easier to use. It is within this framework that nomography developed itself and was deepened throughout the second half of the 19th century.
First principles of nomography. The departure point of nomography lies in the fact that a relationship between three variables α, β and γ can be considered, under certain conditions, as the result of the elimination of two auxiliary variables x and y between three equations, each containing only one of the initial variables. One can then represent the equation by three sets of lines in the plane x-y, one of them parametrized by α, the second by β and the third by γ. On this kind of graphical table, called a "concurrent-line abaque", a solution of the equation corresponds to an intersection point of three lines.
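A minimal illustration may help here (this is a standard textbook-style example, not one of Lalanne's actual charts): for the relation γ = αβ, one possible choice of the three auxiliary one-variable equations, and of the corresponding families of marked lines, is the following.

```latex
% Hypothetical example: a concurrent-line abaque for the relation \gamma = \alpha\beta.
% Each auxiliary equation involves only one of the marked variables.
\begin{align*}
  x     &= \log\alpha   &&\text{(vertical lines, marked by } \alpha\text{)}\\
  y     &= \log\beta    &&\text{(horizontal lines, marked by } \beta\text{)}\\
  x + y &= \log\gamma   &&\text{(parallel oblique lines, marked by } \gamma\text{)}
\end{align*}
% Eliminating x and y gives \log\alpha + \log\beta = \log\gamma, i.e. \gamma = \alpha\beta:
% the lines marked \alpha, \beta and \gamma pass through one common point
% exactly when the relation holds.
```

Reading such a chart amounts to following the line marked α and the line marked β, and reading off the mark γ of the third line passing through their intersection point.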
Isolated examples of graphical translation of double-entry tables are found already in the first half of the 19th century, mainly in the scope of artillery, but it is especially Lalanne who gave a decisive impetus to the theory of graphical tables. In 1843, he provided consistent evidence that any law linking three variables can be graphed in the same manner as a topographic surface using its marked level lines. His ideas came at a favorable moment. Indeed, the Act of June 11, 1842 had decided to establish a network of major railway lines arranged in a star from Paris. To implement the decision quickly, one felt the need for new ways of evaluating the considerable earthworks to be carried out. In 1843, the French government sent to all engineers involved in this task a set of graphical tables for calculating the areas of cut and fill on the profile of railways and roads.
Curves other than straight lines are difficult to construct on paper. For this reason, Lalanne imagined the use of non-regular scales on the axes for transforming curves into straight lines. By analogy with the well-known optical phenomenon previously used by certain painters, he called "anamorphosis" this general transformation process. After Lalanne, the graphical tables resting on the principle of concurrent lines spread rapidly until becoming, in the third quarter of the 19th century, very common tools in the world of French engineers.
Massau succeeded Lalanne in enriching the method and extending its scope of applications. For that, he introduced a notion of generalized anamorphosis, seeking which functions can be represented using three pencils of straight lines. Massau showed that a given relationship between three variables can be represented by a concurrent-straight-line abaque if, and only if, it can be put into the form of a determinant of the type
\begin{vmatrix}
  f_1(\alpha) & f_2(\alpha) & f_3(\alpha)\\
  g_1(\beta)  & g_2(\beta)  & g_3(\beta)\\
  h_1(\gamma) & h_2(\gamma) & h_3(\gamma)
\end{vmatrix} = 0.
These determinants, called "Massau determinants", played an important role in the subsequent history of nomography; they are encountered in research until today. As an application of this new theory, Massau succeeded in simplifying Lalanne's abaques for cuts and fills. With Massau's publications, the theory of abaques was entering into a mature phase, but at the same time a new character intervened to orient this theory towards a new direction.
From concurrent-line abaques to alignment nomograms. In 1884, when he is only 22 years old, d'Ocagne observes that most of the equations encountered in practice can be represented by an abaque with three systems of straight lines and that three of these lines, each taken in one system, correspond when they meet into a point. His basic idea is then to construct by duality, by substituting the use of tangential coordinates to that of punctual coordinates, a figure in correlation with the previous one: each line of the initial chart is thus transformed into a point, and three concurrent lines are transformed into three aligned points. The three systems of marked straight lines become three marked curves. Through this transformation, a concurrent-straight-line abaque becomes an "alignment abaque", which is easier to use.
A given relationship between three variables is representable by an alignment abaque if, and only if, it can be put into the form of a Massau determinant, because it is clear that the problem of the concurrency of three straight lines and the problem of the alignment of three points, dual to each other, are mathematically equivalent. Like his predecessors, d'Ocagne immediately applied his new ideas to the problem of cuts and fills, actually one of the main problems of civil engineering.
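As a concrete illustration of such a determinant in the simplest possible case (again a standard textbook example, not one of d'Ocagne's own charts), the addition law γ = α + β corresponds to three parallel vertical scales and to the determinant below.

```latex
% Hypothetical example: alignment nomogram for \gamma = \alpha + \beta on three parallel scales.
% Each row depends on a single variable, as required for a Massau determinant.
\begin{vmatrix}
  0 & \alpha   & 1\\
  1 & \gamma/2 & 1\\
  2 & \beta    & 1
\end{vmatrix}
= \alpha + \beta - \gamma = 0 .
% The rows are the homogeneous coordinates of the marked points (0,\alpha), (1,\gamma/2)
% and (2,\beta); they are collinear exactly when \gamma = \alpha + \beta, so the \gamma-scale
% is the middle vertical line, graduated at half scale.
```

A straightedge laid across the marks α and β then cuts the middle scale at the mark γ.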
After this first achievement in 1891, d'Ocagne deepened the theory and applications of alignment abaques until the publication of a large treatise in 1899, the famous Traité de nomographie, which became for a long time the reference book of the new discipline. A little later, he introduced the generic term "nomogram" to replace "abaque", and the science of graphical tables became "nomography". From there, alignment nomograms were quickly adopted by many engineers for the benefit of the most diverse applications. At the turn of the 20th century, nomography was already an autonomous discipline well established in the landscape of applied sciences.
Mathematical implications of nomography. The mathematical practices of engineers are often identified only as "applications", which is equivalent to consider them as independent from the development of mathematical knowledge in itself. In this perspective, the engineer is not supposed to develop a truly mathematical activity. We want to show, through the example of nomography, that this representation is somewhat erroneous: it is easy to realize that the engineer is sometimes a creator of new mathematics, and, in addition, that some of the problems which he arises can in turn irrigate the theoretical research of mathematicians.
Firstly, the problem of general anamorphosis, that is to say, of characterizing the relationships between three variables that can be put in the form of a Massau determinant, has inspired much theoretical research among mathematicians and engineers: Cauchy, Saint-Robert, Massau, Lecornu, and Duporcq brought partial responses to this problem before the Swedish mathematician Thomas Hakon Gronwall (1877-1932) gave, in 1912, a complete solution, expressed as the existence of a common integral of two very complicated partial differential equations. But, as one can easily imagine, this solution was totally inefficient, except in very simple cases.
After Gronwall, other mathematicians considered the problem of anamorphosis in a different way, with a more algebraic approach that led to study the important theoretical problem of linear independence of functions of several variables. These mathematicians, like Kellogg in the US, wanted to find a more practical solution not involving partial differential equations. A complete and satisfactory solution was finally found by the Polish mathematician Mieczyslaw Warmus . In his Dissertation of 1958, Warmus defined precisely what is a nomographic function, that is a function of two variables that can be represented by an alignment nomogram, and classified nomographic functions through homography into 17 equivalence classes of Massau determinants. Moreover, he gave an effective algorithm for determining if a function is nomographic and, if true, for representing it explicitly as a Massau determinant.
Beyond the central problem of nomographic representation of relationships between three variables, which define implicit functions of two variables, there is the more general problem of the representation of functions of three or more variables. Engineers have explored various ways in this direction, the first consisting in decomposing a function of any number of variables into a finite sequence of functions of two variables, which results in the combined use of several nomograms with three variables, each connected to the next by means of a common variable. Such a practical concern was echoed unexpectedly in the formulation of the Hilbert's 13th problem, one of the famous 23 problems that were presented at the International Congress of Mathematicians in 1900 [START_REF] Hilbert | Mathematical problems[END_REF]. The issue, entitled "Impossibility of the solution of the general equation of the 7th degree by means of functions of only two arguments", is based on the initial observation that up to the sixth degree, algebraic equations are nomographiable. Indeed, up to the fourth degree, the solutions are expressed by a finite combination of additions, subtractions, multiplications, divisions, square root extractions and cube root extractions, that is to say, by functions of one or two variables. For the degrees 5 and 6, the classical Tschirnhaus transformations lead to reduced equations whose solutions depend again on one or two parameters only. The seventh degree is then the first actual problem, as Hilbert remarks:
Now it is probable that the root of the equation of the seventh degree is a function of its coefficients which does not belong to this class of functions capable of nomographic construction, i. e., that it cannot be constructed by a finite number of insertions of functions of two arguments. In order to prove this, the proof would be necessary that the equation of the seventh degree is not solvable with the help of any continuous functions of only two arguments [19, p. 462].
In 1901, d'Ocagne had found a way to represent the equation of the seventh degree by a nomogram involving an alignment of three points, two being carried by simple scales and the third by a double scale. Hilbert rejected this solution because it involved a mobile element. Without going into details, we will retain that there has been an interesting dialogue between an engineer and a mathematician reasoning in two different perspectives. In the terms formulated by Hilbert, it was only in 1957 that the 13th problem is solved negatively by Vladimir Arnold (1937-2010), who proved to everyone's surprise that every continuous function of three variables could be decomposed into continuous functions of two variables only.
From Topography to Linear Systems
The French military engineer André-Louis Cholesky (1875-1918) offers us the occasion of a perfect case study. Before 1995, not many details were known on his life. In 1995 (120 years after his birth), the documents about him kept in the archives of the army at the Fort de Vincennes (near Paris) were open to the public. In 2003, we had the chance that a grandson of Cholesky, Michel Gross, donated the personal archives of his grandfather to the École polytechnique 6 .
Cholesky was born on 15 October 1875, in Montguyon, a village near Bordeaux, in the south-west of France. In October 1895, he was admitted to the École polytechnique and, two years later, he was admitted as a sous-lieutenant at the École d'application de l'artillerie et du génie in Fontainebleau. He had to spend one year at the school and then to serve for one year in a regiment of the army. There he had courses on artillery, fortification, construction, mechanics, topography, etc.
Cholesky as a topographer. Between 1902 and 1906, he was sent to Tunisia and then to Algeria for several missions. In 1905, he was assigned to the Geographical Service of the Staff of the Army. In this service, there were a section of geodesy and a section of topography. Around 1900, following the revision of the meridian of Paris, the extent of the meridian of Lyon and a new cadastral triangulation of France had been decided. These missions were assigned to the section of geodesy together with the establishment of the map of Algeria, and a precise geometric leveling of this country. The problem of the adjustment (or compensation) of networks (corrections to be brought to the measured angles) concerned many officers of the Geographical Service, eager to find a simple, fast and accurate method. According to Commandant Benoît, one of his colleagues, it was at this occasion that Cholesky imagined his method for solving the equations of conditions by the method of least squares.
Cholesky is representative of these "scholarly engineers" of whom we spoke above. Due to his high-level mathematical training, he was able to work with efficiency and creativity in three domains: as a military engineer, specialized in artillery and in topography, able to improve and optimize the methods used on the ground at this time; as a mathematician able to create new algorithms when it is necessary; and as a teacher (because in parallel to his military activities, he participated during four years to the teaching by correspondence promoted by the École spéciale des travaux publics founded in Paris by Léon Eyrolles).
Concerning topography, Cholesky is well known among topographers for a leveling method of his own: the method of double-run leveling (double cheminement in French). Leveling consists in measuring the elevation of points with respect to a surface taken as reference. This surface is often the geoid in order to be able to draw level curves, also called "contour lines". Double-run leveling consists in conducting simultaneously two separate survey traverses, very close to each other, and comparing the results so as to limit the effects of some instrumental defects. This method is still taught and used today.
Cholesky's method for linear systems. As said before, Cholesky is a good example of an engineer creating a new mathematical method and a new algorithm of calculation for his own needs. Cholesky's method for linear systems is actually an important step in the history of numerical analysis. A system of linear equations has infinitely many solutions when the number of unknowns is greater than the number of equations. Among all possible solutions, one looks for the solution minimizing the sum of the squares of the unknowns. This is the case in the compensation of triangles in topography which interested Cholesky. The method of least squares is very useful and is much used in many branches of applied mathematics (geodesy, astronomy, statistics, etc.) for the treatment of experimental data and fitting a mathematical model to them. This method was published for the first time by Legendre in 1806. Its interpretation as a statistical procedure was given by Gauss in 1809.
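To make explicit how such a compensation problem leads to the symmetric positive definite system discussed next, here is the standard modern derivation for the underdetermined case described above (a reconstruction in today's notation, not Cholesky's own; B denotes the matrix of condition equations and d the corresponding right-hand side):

```latex
% Minimum-norm solution of an underdetermined system of condition equations Bx = d,
% where B has fewer rows than columns and full row rank.
\min_{x}\ \tfrac12\,x^{\mathsf T}x
\quad\text{subject to}\quad Bx = d .
% Introducing Lagrange multipliers \lambda and writing the stationarity conditions:
x = B^{\mathsf T}\lambda ,
\qquad
\bigl(BB^{\mathsf T}\bigr)\lambda = d .
% The matrix A = BB^{\mathsf T} is symmetric and, for full row rank, positive definite;
% solving A\lambda = d and setting x = B^{\mathsf T}\lambda gives the least-squares compensation.
```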
As it is known, the least square method leads to a system with a symmetric positive definite matrix. Let us describe Cholesky's method to solve such a system. Let A be a symmetric positive definite matrix. It can be decomposed as A = LL^T, where L is a lower triangular matrix with positive diagonal elements, which are computed by an explicit algorithm. Then the system Ax = b can be written as LL^T x = b. Setting y = L^T x, we have Ly = b. Solving this lower triangular system gives the vector y. Then x is obtained as the solution of the upper triangular system L^T x = y.
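In modern terms, the procedure just described can be sketched as follows (a routine reconstruction for illustration only; Cholesky's 1910 manuscript of course uses neither this matrix notation nor a programming language):

```python
import numpy as np

def cholesky_solve(A, b):
    """Solve Ax = b for a symmetric positive definite A via the factorization
    A = L L^T, in the spirit of Cholesky's method (modern illustrative sketch)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = A.shape[0]
    L = np.zeros_like(A)
    for j in range(n):                              # build L column by column
        s = A[j, j] - np.dot(L[j, :j], L[j, :j])
        L[j, j] = np.sqrt(s)                        # positive diagonal entry
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - np.dot(L[i, :j], L[j, :j])) / L[j, j]
    y = np.zeros(n)                                 # forward substitution: L y = b
    for i in range(n):
        y[i] = (b[i] - np.dot(L[i, :i], y[:i])) / L[i, i]
    x = np.zeros(n)                                 # back substitution: L^T x = y
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - np.dot(L[i + 1:, i], x[i + 1:])) / L[i, i]
    return x

if __name__ == "__main__":
    # Small example with a matrix of normal-equations type.
    A = np.array([[4.0, 2.0, 0.0],
                  [2.0, 5.0, 1.0],
                  [0.0, 1.0, 3.0]])
    b = np.array([2.0, 1.0, 4.0])
    x = cholesky_solve(A, b)
    print(x, np.allclose(A @ x, b))
```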
What was the situation before Cholesky? When the matrix A is symmetric, Gauss's method makes no use of this property and needs too many arithmetical operations. In 1907, Otto Toeplitz showed that a Hermitian matrix can be factorized into a product LL^* with L lower triangular, but he gave no rule for obtaining the matrix L. That is precisely what Cholesky did in 1910. Cholesky's method was presented for the first time in 1924 in a note published in the Bulletin géodésique by commandant Benoît, a French geodesist who knew Cholesky well, but the method remained unknown outside the circle of French military topographers. Cholesky's method was revived by John Todd, who taught it in his numerical analysis course at King's College in London in 1946 and thus made it known. When Claude Brezinski classified Cholesky's papers in 2003, he discovered the original unpublished manuscript where Cholesky explained his method 7 . The manuscript of 8 pages is dated 2 December 1910. That was an important discovery for the history of numerical analysis.
From Ballistics to Differential Equations
The main problem of exterior ballistics is to determine the trajectory of a projectile launched from a cannon with a given angle and a given velocity. The differential equation of motion involves the gravity g, the velocity v and the tangent inclination θ of the projectile, and the air resistance F(v), which is an unknown function of v: 8

g d(v cos θ) = v F(v) dθ.
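For the reader's convenience, this relation can be recovered from the usual intrinsic equations of motion (a standard modern reconstruction, not taken from the historical sources; F(v) here denotes the retardation, i.e. the drag per unit mass, and θ is measured from the horizontal):

```latex
% Intrinsic equations of motion of the projectile (per unit mass, drag F(v) along the tangent):
\frac{dv}{dt} = -F(v) - g\sin\theta ,
\qquad
v\,\frac{d\theta}{dt} = -g\cos\theta .
% Differentiating the horizontal velocity component v\cos\theta:
\frac{d(v\cos\theta)}{dt}
  = \frac{dv}{dt}\cos\theta - v\sin\theta\,\frac{d\theta}{dt}
  = -F(v)\cos\theta .
% Dividing by d\theta/dt = -g\cos\theta/v gives the equation quoted above:
\frac{d(v\cos\theta)}{d\theta} = \frac{v\,F(v)}{g},
\qquad\text{i.e.}\qquad
g\,d(v\cos\theta) = v\,F(v)\,d\theta .
```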
To calculate their firing tables and to adjust their cannons, the artillerymen had long used the assumption that the trajectory is parabolic, but this was not in agreement with the experiments. Newton was the first to research this topic taking the air resistance into account. In his Principia of 1687, he solved the problem under the hypothesis of resistance proportional to the velocity, and he got quite rough approximations when the resistance is proportional to the square of the velocity. After Newton, Jean Bernoulli discovered the general solution in the case of resistance proportional to any power of the velocity, but his solution, published in the Acta Eruditorum of 1719, was not convenient for numerical computation. This problem of determining the ballistic trajectory for a given law of air resistance is particularly interesting because it stands at the crossroads of two partly contradictory concerns: on the one hand, the integration of the differential equation of motion is a difficult problem which interests the mathematicians from the point of view of pure analysis; on the other hand, the artillerymen on the battlefield must quickly determine the firing angle and the initial velocity of their projectile in order to attain a given target, and for that practical purpose they need firing tables that are precise and easy to use. This tension between theoreticians, generally called ballisticians, and practitioners, usually described as artillerymen, is seen in all synthesis treatises of the late 19th and early 20th century. I shall content myself with one quotation to illustrate this tension. In 1892, in the French augmented edition of his main treatise, Francesco Siacci (1839-1907), a major figure in Italian ballistics, writes:
Our intention is not to present a treatise of pure science, but a book of immediate usefulness. A few years ago ballistics was still considered by the artillerymen, and not without reason, as a luxury science, reserved for the theoreticians. We tried to make it practical, adapted to solve the firing questions fast, as exactly as possible, with economy of time and money. 9

By these words, Siacci condemns a certain type of theoretical research as a luxury, but he also condemns a certain type of experimental research that accumulates numerous and expensive firings and measurements without obtaining convincing results.
Of course, the problem of integrating the ballistic equation is difficult. Very many attempts have been made to treat this equation mathematically with the final objective of constructing firing tables. We can organize these attempts along two main strategies, one analytical and one numerical.
Analytical approach of the ballistic differential equation. The analytical strategy consists in integrating the differential equation in finite terms or, alternatively, by quadratures. Reduction to an integrable equation can be achieved in two ways: 1) choose an air resistance law so that the equation can be solved in finite form (if the air resistance is not known with certainty, why not consider abstractly, formally, some potential laws of air resistance, leaving it to the artillerymen to choose afterwards among these laws according to their needs?); 2) if a law of air resistance is imposed by experience or by tradition, it is then possible to change the other coefficients of the equation to make it integrable, with, of course, the risk that modifying the equation could also modify the solution in a significant way. Fortunately, at the same time as this theoretical mathematical research, there have been many experimental studies to determine empirically the law of air resistance and the equation of the ballistic curve. Regular confrontations took place between the results of the theoreticians and those of the practitioners.
In 1744, D'Alembert restarts the problem of integrability of the equation, which had not advanced since Bernoulli's memoir of 1719. He finds four new cases of integrability:
F(v) = a + b v^n,   F(v) = a + b ln v,   F(v) = a v^n + R + b v^{-n},   F(v) = a (ln v)^2 + R ln v + b.

D'Alembert's work went relatively unnoticed at first. In 1782, Legendre found again the case F(v) = a + b v^2, without quoting D'Alembert. In 1842, Jacobi found the case F(v) = a + b v^n to generalize Legendre's results, quoting Legendre, but still ignoring D'Alembert. After studying this case in detail, Jacobi notes also that the problem is integrable for F(v) = a + b ln v, but he does not study this form further, because, he says, it would be abhorrent to nature (it is hard indeed to conceive an infinite resistance when the velocity equals zero). Jacobi puts the equations in a form suitable for the use of elliptic integrals. Several ballisticians like Greenhill, Zabudski and MacMahon found inspiration here to calculate ballistic tables in the case of air resistance proportional to the cube or to the fourth power of the velocity. These attempts contributed to popularizing elliptic functions among engineers and were quoted in a lot of treatises about elliptic functions.
During the 19th century, there is a parallelism between the increasing speeds of bullets and cannonballs, and the appearance of new instruments to measure these speeds. Ballisticians are then led to propose new air resistance laws for certain intervals of speeds. In 1921, Carl Julius Cranz (1858-1945) gives an impressive list of 37 empirical laws of air resistance actually used to calculate tables at the end of the 19th century. Thus, theoretical developments, initially free in D'Alembert's hands, led to tables that were actually used by the artillerymen. The fact that some functions determined by artillerymen from experimental measurements fell within the scope of integrable forms reinforced the idea that it might be useful to continue the search for such forms. It is within this context that Siacci resumed the theoretical search for integrable forms of the law of resistance. In two papers published in 1901, he places himself explicitly in D'Alembert's tradition. He multiplies the differential equation by various multipliers and seeks conditions under which these multipliers are integrating factors. He discovers several integrable equations, including one new integrable Riccati equation. This study leads to eight families of air resistance laws, some of which depend on four parameters. In his second article, he adds two more families to his list.
The question of the integrability by quadratures of the ballistic equation is finally resolved in 1920 by Jules Drach (1871-1949), a brilliant mathematician who contributed much to the Galois theory of differential equations in the tradition of Picard, Lie, and Vessiot. Drach puts the ballistic equation in a new form that allows him to apply a theory developed in 1914 for a certain class of differential equations, for which he had found all the cases of reduction. Drach therefore exhausts the problem from the theoretical point of view, by finding again all the integrability cases previously identified. As one might expect, the results of this long memoir of 94 pages are very complicated. They were greeted without enthusiasm by the ballisticians, who did not see at all how to transform them into practical applications.
Another way was explored by theoreticians who accepted Newton's law of the square of the velocity, and tried to act on other terms of the ballistic equation to make it integrable. In 1769, the military engineer Jean-Charles de Borda (1733-1799) proposes to assume that the medium density is variable and to choose, for this density, a function that does not stray too far from a constant and makes the equation integrable. Borda makes three assumptions about the density, the first adapted to small angles of fire, the second adapted to large angles of fire, and the third for the general case, by averaging between the previous ones and by distinguishing the ascending branch and the descending branch of the curve.
Legendre deepens Borda's ideas in his Dissertation sur la question de balistique, with which he won in 1782 the prize of the Berlin Academy. The question chosen for the competition was: "Determine the curve described by cannonballs and bombs, by taking the air resistance into account; give rules to calculate range that suit different initial speeds and different angles of projection." Legendre puts the ballistic equation in a form similar to that used by Euler, with the slope of the tangent as independent variable. After commenting on Euler's method by successive arcs (see below), considered too tiresome for numerical computation, Legendre suggests two ideas of the same type as those of Borda, with a result which is this time satisfactory for the entire curve, and not only at the beginning of the trajectory. With these methods, Legendre manages to calculate ten firing tables that will be considered of high quality and will permit him to win the prize of the Berlin Academy. After Legendre, many other people, for example Siacci at the end of the 19th century, developed similar ideas to obtain very simple, general, and practical methods of integration.

Direct numerical integration of the differential equation. The second strategy for integrating the ballistic differential equation belongs to numerical analysis. It contains three main procedures: 1) calculate the integral by successive small arcs; 2) develop the integral into an infinite series and keep the first terms; 3) construct graphically the integral curve.
Euler is truly at the starting point of the calculation of firing tables in the case of the square of the velocity. In 1753, Euler resumes Bernoulli's solution and puts it in a form that will be convenient for numerical computation. He takes the slope p of the tangent as principal variable. All the other quantities are expressed as functions of p by means of quadratures. The integration is done by successive arcs: each small arc of the curve is replaced by a small straight line, whose inclination is the mean of the inclinations at the extremities of the arc. To give an example, Euler calculates a single table, the one corresponding to a firing angle of 55°. With this numerical table, he constructs by points the corresponding trajectory. A little later, Henning Friedrich von Grävenitz (1744-1764), a Prussian officer, performs the calculations of the program conceived by Euler. He published firing tables in Rostock in 1764. In 1834, Jacob Christian Friedrich Otto, another military officer, publishes new tables in Berlin, because he finds that those of Grävenitz are insufficient. To better answer the problem encountered in practice by artillerymen, he reverses the table, taking the range as the given quantity and the initial velocity as the unknown quantity. Moreover, he calculates many more elements than Grävenitz to facilitate interpolation. Otto's tables enjoyed a great success and remained in use until the early 20th century.
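The "successive small arcs" idea can be conveyed by a short modern sketch (an illustration under simplifying assumptions only, not Euler's actual 1753 scheme, which works with the tangent slope as independent variable): the trajectory is integrated step by step with air resistance taken proportional to the square of the velocity, each step using the mean of the slopes at the two ends of the arc; the coefficient c below is a purely hypothetical drag parameter.

```python
import math

def trajectory(v0, theta0_deg, g=9.81, c=0.005, dt=0.01, steps=10000):
    """Integrate a point-mass trajectory with air resistance proportional to v^2,
    using a mean-slope (Heun) step reminiscent of the 'successive small arcs' idea.
    Modern illustrative sketch only; c is a hypothetical drag coefficient."""
    th = math.radians(theta0_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(th), v0 * math.sin(th)

    def accel(vx, vy):
        v = math.hypot(vx, vy)
        return -c * v * vx, -g - c * v * vy      # quadratic drag opposed to the motion

    pts = [(x, y)]
    for _ in range(steps):
        ax1, ay1 = accel(vx, vy)                 # slope at the start of the arc
        vx_p, vy_p = vx + dt * ax1, vy + dt * ay1
        ax2, ay2 = accel(vx_p, vy_p)             # slope at the end of the arc
        x += dt * (vx + vx_p) / 2                # replace the arc by a chord whose
        y += dt * (vy + vy_p) / 2                # inclination is the mean of the two
        vx += dt * (ax1 + ax2) / 2
        vy += dt * (ay1 + ay2) / 2
        pts.append((x, y))
        if y < 0:                                # the projectile has hit the ground
            break
    return pts

if __name__ == "__main__":
    path = trajectory(v0=300.0, theta0_deg=55.0)
    print("approximate range: %.1f m" % path[-1][0])
```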
Another approach is that of series expansions. In the second half of the 18th century and early 19th, we are in the era of calculation of derivations and algebraical analysis. The expression of solutions by infinite series whose law of the formation of terms is known, is considered to be an acceptable way to solve a problem exactly, despite the philosophical question of the infinite and the fact that the series obtained, sometimes divergent or slowly convergent, do not always allow an effective numerical computation. In 1765, Johann Heinrich Lambert (1728-1777) is one of the first to express as series the various quantities involved in the ballistic problem. On his side, the engineer Jacques-Frédéric Français (1775-1833) applies the calculation of derivations. He identifies a number of new formulas in the form of infinite series whose law of the formation of the successive terms is explicitly given. However, he himself admits that these formulas are too complicated for applications.
Let us mention finally graphical approaches providing to the artillerymen an easy and economic tool. In 1767, recognizing that the series calculated in his previous memoir are unusable, Lambert constructs a set of curves from Grävenitz's ballistic tables. In France, an original approach is due to Alexander-Magnus d'Obenheim (1752-1840), another military engineer. His idea was to replace the numerical tables by a set of curves carefully constructed by points calculated with great precision. These curves are drawn on a portable instrument called the "gunner board" ("planchette du canonnier" in French). The quadrature method used to construct these curves is highly developed. Obenheim employs a method of Newton-Cotes type with a division of each interval into 24 parts. In 1848, Isidore Didion (1798-1878), following Poncelet's ideas, constructs ballistic curves that are not a simple graphic representation of numerical tables, but are obtained directly from the differential equation by a true graphical calculation: he obtains the curve by successive arcs of circles, using at each step a geometric construction of the center of curvature. Artillery was thus the first domain of engineering science in which graphical tables, called "abaques" in French, were commonly used (see Section 2). One of the major advantages of graphical tables is their simplicity and rapidity of utilization, that is important on the battlefield when the enemy is firing against you!
In conclusion, throughout the 18th and 19th centuries, there has been an interesting interaction between analytic theory of differential equations, numerical and graphical integration, and empirical experimental research. Mathematicians, ballisticians and artillerymen, although part of different worlds, collaborated and inspired each other regularly. All this led, however, to a relative failure, both experimentally to find a good law of air resistance, and mathematically to find a simple solution of the ballistic differential equation.
Mathematical research on the ballistic equation has nevertheless played the role of a laboratory where the modern numerical analysis was able to develop. Mathematicians have indeed been able to test on this recalcitrant equation all possible approaches to calculate the solution of a differential equation. There is no doubt that these tests, joined with the similar ones conceived by astronomers for the differential equations of celestial mechanics, have helped to organize the domain into a separate discipline around 1900. In parallel with celestial mechanics, ballistics certainly played an important role in the construction of modern Runge-Kutta and Adams-Bashforth methods for numerically integrating ordinary differential equations.
From Hydraulics to Dynamical Systems
Concerning another aspect of the theory of differential equations, it should be noticed that the classification of singular points obtained by Poincaré had occurred earlier in the works of at least two engineers who dealt with hydraulic problems 10 . As early as 1924, Russian historians reported a similar classification in a memoir of Nikolai Egorovich Zhukovsky (1847-1921) dated 1876 on the kinematics of liquids. Dobrovolsky published a reproduction of Zhukovsky's diagrams in 1972 in the Revue d'histoire des sciences [START_REF] Dobrovolski | Sur l'histoire de la classification des points singuliers des équations différentielles[END_REF]. In what Zhukovsky called "critical points", we recognize the so-called saddles, nodes, focuses and centers.
The second engineer is the Belgian Junius Massau, already encountered above about nomography. Considered as the creator of graphical integration, he developed elaborate techniques to construct precisely the integral curves of differential equations [START_REF] Tournès | Junius Massau et l'intégration graphique[END_REF]. From 1878 to 1887, he published a large memoir on graphical integration [START_REF] Massau | Mémoire sur l'intégration graphique et ses applications[END_REF], with the following objectives:
The purpose of this memoir is to present a general method designed to replace all the calculations of the engineer by graphic operations. [...] In what follows, we will always represent functions by curves; when we say 'to give or to find a function', it will mean giving or finding graphically the curve that represents it. 11

Book VI, the last book of the memoir, is devoted to applications in hydraulics. Massau examines the motion of liquids in pipes and canals. Among these specialized developments, a general and theoretical statement on graphic integration of first order differential equations appears. The entire study of a differential equation rests on the preliminary construction of the loci of points where integral curves have the same slope. Massau calls such a locus an "isocline". The isoclines (under the Latin name of "directrices") had already been introduced by Jean Bernoulli in 1694 as a universal method of construction of differential equations, particularly useful in the numerous cases in which the equations cannot be integrated by quadratures. Once enough isoclines are carefully drawn, one takes an arbitrary point A on the first curve and one constructs a polygon of integration ABCD, the successive sides of which have the slopes associated with the isoclines and the successive vertices of which are taken in the middle of the intervals between isoclines. Massau explains that you can easily obtain, by properly combining the directions associated with successive isoclines, graphical constructions equivalent to Newton-Cotes quadrature formulas, whereas the same problem would be difficult to solve numerically because of the implicit equations that appear at each step of the calculation. In fact, numerical algorithms of order greater than 2 will be discovered only at the turn of the 20th century by the German applied mathematicians Runge, Heun and Kutta.
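A simplified sketch of the isocline idea in modern terms may help; it is not Massau's exact construction (his polygon vertices lie midway between isoclines), only a crude approximation in which each side of the polygon takes its slope from the nearest isocline level. The function name and the sample equation y' = x - y are illustrative choices, not taken from Massau's memoir.

```python
import numpy as np

def isocline_polygon(f, x0, y0, x_end, slope_levels, h=0.01):
    """Approximate an integral curve of y' = f(x, y) by a polygon whose sides
    take their slopes from a finite ladder of isocline levels (simplified
    sketch in the spirit of Massau's graphical integration)."""
    levels = np.sort(np.asarray(slope_levels, dtype=float))
    xs, ys = [x0], [y0]
    x, y = x0, y0
    while x < x_end:
        s = f(x, y)                                      # exact slope at the current point
        slope = levels[np.argmin(np.abs(levels - s))]    # snap it to the nearest isocline level
        x, y = x + h, y + h * slope                      # one small side of the polygon
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

if __name__ == "__main__":
    # Hypothetical example: y' = x - y with y(0) = 1; exact solution y = x - 1 + 2 exp(-x).
    f = lambda x, y: x - y
    xs, ys = isocline_polygon(f, 0.0, 1.0, 2.0, np.arange(-2.0, 2.01, 0.1))
    print(ys[-1], 2.0 - 1.0 + 2.0 * np.exp(-2.0))
```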
The construction of the integral curves from isoclines is another way of studying globally a differential equation. In contrast to Poincaré's abstract approach, Massau's diagram both gives a global description and a local description of the curves. This diagram is both an instrument of numerical calculation -the ordinates of a particular integral curve can be measured with accuracy sufficient for the engineer's needs -and a heuristic tool for discovering properties of the differential equation. For example, Massau applies this technique to hydraulics in studying the permanent motion of water flowing in a canal. He is interested in the variations of depth depending on the length of the canal, in the case of a rectangular section the width of which is growing uniformly. The differential equation to be solved is very complicated. With his elaborate graphical technique, Massau constructs isoclines and studies the behavior of the integral curves. He discovers that there is what he calls an"asymptotic point" : the integral curves approaching this point are turning indefinitely around it.
Massau then develops a theoretical study of singular points from isoclines. For a differential equation F(x, y, y') = 0, he considers the isoclines F(x, y, α) = 0 as the projections on the plane (x, y) of the contour lines of the surface of equation F(x, y, z) = 0, and the integral curves as the projections of certain curves drawn on this surface. By geometric reasoning in this three-dimensional framework, Massau finds the same results as Poincaré concerning the singular points, but in a very different manner. He starts with the case where isoclines are convergent straight lines. In the general case, when isoclines pass by the same point, Massau studies the integral curves around this point by replacing the isoclines by their tangents. A singular point is always called a "focus". The special case that we call "focus" today is the only one to receive a particular name, that of "asymptotic point". Massau determines very carefully the various possible positions around a focus by considering the number of straight-line solutions passing through this point. In Massau's reasoning, the isoclines play the same role as Poincaré's arcs without contact to guide the path of integral curves. By using a graphical technique developed at first as a simple technique of numerical calculation, Massau also succeeds in a qualitative study, the purpose of which is the global layout of the integral curves and the description of their properties.
Knowing that Massau published his Book VI in 1887, is it possible that he had previously read Poincaré's memoir and drawn inspiration from it? It is not very probable because, in fact, Massau had already presented a first version of his Book VI on December 3, 1877, at the Charleroi section of the Association of the engineers of Ghent University, as is shown by the monthly report of this association. Further, the vocabulary, the notations and the proofs used by Massau are clearly different from those of Poincaré. In particular, Massau constantly works with the isoclines, a notion that Poincaré never mentions. Finally, Massau, who cites many people whose work is related to his, never cites Poincaré.
Clearly, Massau and Zhukovsky are part of a geometric tradition that had survived since the beginnings of the Calculus within engineering and applied mathematics circles. In this tradition, one kept on constructing the solutions of equations with graphical computation and mechanical devices, while theoretical mathematicians came to prefer the analytical approach. In this story, it is interesting to notice the existence of these two currents, one among academic mathematicians and the other among engineers, without an apparent link between them, and with similar results rediscovered several times independently.
Conclusion
In the previous Sections, I presented some examples, taken mainly from the second half of the 19th century and the early 20th, that illustrate how strongly civil and military engineers were engaged in the mathematical activity of their time. The examples I have chosen are directly related to my own research, but several other recent works go in the same direction.
David Aubin [START_REF] Aubin | Why and how mathematicians collaborated with military ballisticians at Gâvre[END_REF] and Alan Gluchoff [START_REF] Gluchoff | Artillerymen and mathematicians: Forest Ray Moulton and changes in American exterior ballistics, 1885-1934[END_REF] have studied the scientific and social context of ballistics during and around the First World War, the one in France with the case of the Polygone de Gâvre, a famous ballistic research center situated in Brittany, and the other in the United States with the Aberdeen Proving Grounds, which was the prominent firing range in America. These papers extend what I have presented in Section 4 and bring to light similar collaborations and tensions between two major milieus: that of the artillerymen, that is, military engineers and officers in the military schools and on the battlefield, and that of the mathematicians who were called upon to solve difficult theoretical problems. The new firing situations encountered during the First World War (fire against planes, fire over long distances through air layers of widely varying densities, etc.) generated new theoretical problems impossible to solve analytically and thus favored the creation of new numerical algorithms such as the Adams-Moulton methods for ordinary differential equations. Kostas Chatzis ([2], [START_REF] Chatzis | La réception de la statique graphique en France durant le dernier tiers du xix e siècle[END_REF]) has studied the professional milieu of 19th century French engineers from the sociological and economic point of view. In particular, he has reviewed the conditions of diffusion of graphical statics, first in France, then in Germany and Italy, and again in France in the late 19th century. Graphical statics was an extensively used calculation tool, for example for the construction of metallic bridges and buildings such as the famous Eiffel Tower in Paris. Its development is closely linked to that of descriptive geometry and projective geometry. For her part, Marie-José Durand-Richard ( [START_REF] Durand-Richard | Planimeters and integraphs in the 19th century: Before the differential analyzer[END_REF], [START_REF] Durand-Richard | Mathematical machines 1876-1949[END_REF]) has examined the mathematical machines designed by engineers between Babbage's machine and the first digital computer. These machines, which include planimeters, integraphs and differential analyzers, have played a major role in solving differential equations encountered in many areas. Among the most important of them are the polar planimeter of Jakob Amsler (1823-1912), the integraph of Abdank-Abakanowicz (1852-1900), the harmonic analyzer of Lord Kelvin (1824-1907) and the large differential analyzers of Vannevar Bush (1890-1974) in the United States and Douglas Rayner Hartree in Great Britain. The technical and industrial design of these machines contributed to the development of new numerical and graphical methods, but also to some advances in logic and information theory, as seen in the work of Claude Elwood Shannon. During and after the Second World War, all this knowledge was transferred to the first computers like ENIAC.
More generally, Renate Tobies ( [START_REF] Tobies | A Life at the Crossroads of Mathematics[END_REF], [START_REF] Tobies | Mathematical modeling, mathematical consultants, and mathematical divisions in industrial laboratories[END_REF]) has explored the relationships between mathematics, science, industry, politics and society, taking as the basis of her work the paradigmatic case of Iris Runge (1888-1966), Carl Runge's daughter, a mathematician who worked for the Osram and Telefunken corporations.
In the early 20th century, the emerging applications of electricity became a new field of research for engineers, who were then faced with nonlinear differential equations with complex behavior. Jean-Marc Ginoux, Christophe Letellier and Loïc Petitgirard ( [START_REF] Letellier | Development of the nonlinear dynamical systems theory from radio engineering to electronics[END_REF], [START_REF] Ginoux | Analyse mathématique des phénomènes oscillatoires non linéaires: le carrefour français (1880-1940)[END_REF], [START_REF] Ginoux | Van der Pol and the history of relaxation oscillations: Toward the emergence of a concept[END_REF], [START_REF] Ginoux | Poincaré's forgotten conferences on wireless telegraphy[END_REF]) have studied the history of the oscillatory phenomena produced by various electrical devices. Balthazar Van der Pol (1889-1959) is one of the major figures in this field. Using Massau's techniques of graphical integration (see Section 5), in particular the method of isoclines, Van der Pol studied the oscillations in an electric circuit with a triode, and succeeded in describing the continuous passage from sinusoidal oscillations to quasi-aperiodic oscillations, which he called "relaxation oscillations". A little later, Aleksandr Andronov established a correspondence between the solution of the differential system given by Van der Pol to characterize the oscillations of the triode and the concept of limit cycle created by Poincaré, thus connecting the investigations of engineers with those of mathematicians. In his thesis, Jean-Marc Ginoux [START_REF] Ginoux | Analyse mathématique des phénomènes oscillatoires non linéaires: le carrefour français (1880-1940)[END_REF] carefully catalogues all the engineering works on this subject between 1880 and 1940.
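The equation Van der Pol studied, x'' - mu (1 - x^2) x' + x = 0, can be integrated numerically in a few lines. The sketch below uses a standard Runge-Kutta step rather than the isocline construction he actually employed, and the value mu = 5 is chosen only to make the relaxation character of the oscillations visible; it is an illustration, not a reproduction of his graphical work.

```python
def van_der_pol(mu):
    # x'' - mu*(1 - x**2)*x' + x = 0, written as the first-order system (x', v').
    return lambda x, v: (v, mu * (1 - x * x) * v - x)

def rk4_step(f, x, v, h):
    """One classical fourth-order Runge-Kutta step for the system f."""
    k1 = f(x, v)
    k2 = f(x + h / 2 * k1[0], v + h / 2 * k1[1])
    k3 = f(x + h / 2 * k2[0], v + h / 2 * k2[1])
    k4 = f(x + h * k3[0], v + h * k3[1])
    x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, v

if __name__ == "__main__":
    f, (x, v), h = van_der_pol(mu=5.0), (0.1, 0.0), 0.001
    for step in range(40000):            # integrate 40 time units
        x, v = rk4_step(f, x, v, h)
        if step % 5000 == 0:
            print(f"t={step * h:6.1f}  x={x:+.3f}  v={v:+.3f}")
```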
Loïc Petitgirard [START_REF] Petitgirard | Un "ingénieur-mathématicien" aux prises avec le non linéaire: Nicolas Minorsky[END_REF] is also interested in another engineer-mathematician struggling with nonlinear differential equations: Nicolas Minorsky (1885-1970), an engineer of the Russian Navy trained at the Naval Academy in St. Petersburg. Minorsky was a specialist in the design, stabilization and control of ships. In his naval research during the years 1920-1930, he was confronted with theoretical problems related to nonlinear differential equations, and established mathematical results adapted to maritime issues. He also conceived a system of analog computing in connection with the theory of nonlinear oscillations and stability theory, emphasizing that the theories produced by mathematicians like Poincaré remain incomplete without computational tools to implement them.
All these recent works demonstrate a deep entanglement among the milieus of civil engineers, military engineers, physicists, astronomers, applied mathematicians and pure mathematicians (of course, these categories were far from watertight). It seems necessary to take all of them into account if we want to rethink the construction of knowledge in the domain of numerical analysis and to avoid the historical bias of projecting contemporary conceptions of the discipline onto the past. A new history remains to be written, one that would focus not only on a few major authors and some high-level mathematical algorithms, but also on the actors of the domain in the broad sense of the term, and on the numerical and graphical methods actually carried out by users in the field or at the office. A good starting point could be, among others, to identify, classify and analyze the mathematical texts contained in the many engineering journals published in Europe and elsewhere since the early 19th century. This would make it possible to characterize more precisely the mathematical knowledge created and used by engineers, and to study the circulation of this knowledge between the professional circles of engineers and other groups of actors involved in the development of mathematical ideas and practices.
A very interesting workshop on this subject took place in March 2013 in Oberwolfach, organized by Moritz Epple, Tinne Hoff Kjeldsen and Reinhard Siegmund-Schultze, and entitled "From 'Mixed' to 'Applied' Mathematics: Tracing an important dimension of mathematics and its history".
On the professional milieu of French engineers during the 19th century and the École polytechnique, see the papers by Bruno Belhoste and Konstantinos Chatzis ([2],[START_REF] Chatzis | Theory and practice in the education of French engineers from the middle of the 18th century to the present[END_REF]).
This Section is an abridged and synthetic version of developments contained in my papers[START_REF] Tournès | Une discipline à la croisée de savoirs et d'intérêts multiples: la nomographie[END_REF],[START_REF] Tournès | Mathematics of the 19th century engineers: methods and instruments[END_REF] and[START_REF] Tournès | Mathematics of nomography[END_REF].
Claude Brezinski has classified these archives and published numerous papers about the life and work of Cholesky: see [START_REF] Brezinski | La méthode de Cholesky[END_REF], [START_REF] Brezinski | The life and work of André Cholesky[END_REF] and [START_REF] Brezinski | André-Louis Cholesky 1875-1918: Mathematician, Topographer and Army Officer[END_REF]. Much of the information in this Section comes from these papers.
This manuscript has been published in 2005 in the Revue d'histoire des mathématiques[START_REF] Brezinski | La méthode de Cholesky[END_REF].
In fact, the problem is more complex because we must take into account other factors such as the variations of atmospheric pressure and temperature, the rotation of the Earth, the wind, the geometric form of the projectile and its rotation around its axis, etc. However, these effects could often be neglected in the period considered here, because the velocities of projectiles remained small.
"Notre intention d'ailleurs n'est pas de présenter un traité de science pure, mais un ouvrage d'utilité immédiate. Il y a peu d'années que la balistique était encore considérée par les artilleurs et non sans raison comme une science de luxe, réservée aux théoriciens. Nous nous sommes efforcé de la rendre pratique, propre à résoudre les questions de tir rapidement, facilement, avec la plus grande exactitude possible, avec économie de temps et d'argent"[25, p. x].
A more developed version of this Section can be found in my paper[START_REF] Tournès | Diagrams in the theory of differential equations (eighteenth to nineteenth centuries)[END_REF]. On Junius Massau, see also[START_REF] Tournès | Junius Massau et l'intégration graphique[END_REF]. For a general survey on graphical integration of differential equations, see[START_REF] Tournès | L'intégration graphique des équations différentielles ordinaires[END_REF].
L'objet de ce mémoire est d'exposer une méthode générale ayant pour but de remplacer les calculs de l'ingénieur par des opérations graphiques. [...] Dans ce qui va suivre, nous représenterons toujours les fonctions par des courbes; quand nous dirons donner ou trouver une fonction, cela voudra dire donner ou trouver graphiquement la courbe qui la représente[22, p. 13-16].
* I am grateful to the French National Research Agency, which funded the four-year project "History of Numerical Tables" (2009-2013). A large part of the contents of this paper comes from this project. I also thank the laboratory SPHERE (UMR 7219, CNRS and University Paris-Diderot), which offered me a good research environment for many years.
| 64,646 | [ "12623" ] | [ "54305", "1004988" ] |
01479464 | en | [ "shs", "math" ] | 2024/03/04 23:41:46 | 2015 | https://hal.science/hal-01479464/file/tournes_2015b_owr_12.pdf | EA Dominique Tournès Lim
email: [email protected]
Models and visual thinking in physical applications of differential equation theory: three case studies during the period 1850-1950 (Bashforth, Størmer, Lemaître)
This paper is organized around three important works in applied mathematics that took place in the century 1850-1950: Francis Bashforth (1819-1912) on capillary action [START_REF] Bashforth | An attempt to test the theories of capillary action by comparing the theoretical and measured forms of drops of fluid[END_REF], Carl Størmer (1874-1957) on polar aurora [START_REF] Størmer | Sur les trajectoires des corpuscules électrisés dans l'espace sous l'action du magnétisme terrestre avec application aux aurores boréales[END_REF], Georges Lemaître (1894-1966) on cosmic rays [START_REF] Lemaître | On the allowed cone of cosmic radiation[END_REF]. I have chosen these three figures for several reasons: they were applied mathematicians with strong theoretical training; they studied complex physical problems for which they had to create new numerical methods at the limit of the human and technical possibilities of their time; there is a natural continuity in their works, each being partially inspired by the previous one; finally, these works present the same characteristics as what we call today mathematical modeling and computer simulation.
Francis Bashforth was a fellow of St. John's College, Cambridge, and later professor of mathematics at the Royal Military Academy of Woolwich. Between 1864 and 1880 he carried out important experimental and theoretical research on ballistics. Before and after his professional engagement in artillery, he was also interested in capillary action. In this domain, his major aim was to compare the measured forms of drops of fluid resting on a horizontal plane, obtained by experiment, with the theoretical forms of the same drops as determined by the Laplace differential equation of capillarity.
In his research, Bashforth used, on the one hand, a new measurement process involving a micrometer of his invention and, on the other hand, a new method of numerical integration of differential equations involving finite differences of the fourth order and efficient quadrature formulas, conceived with the help of the famous astronomer John Couch Adams [START_REF] Tournès | L'origine des méthodes multipas pour l'intégration numérique des équations différentielles ordinaires[END_REF]. Bashforth, with his assistants, computed 32 integral curves, each of them with 36 points. Knowing that five auxiliary values were necessary for each point of the curve, we arrive at a total of more than 5000 numbers to be calculated. The calculation time can be estimated at no less than 500 hours. The agreement between the curves obtained by the experimental method and those obtained by the numerical one was excellent and could be viewed as a mutual validation of the two approaches to the given capillary problem.
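The multistep scheme that grew out of this collaboration is known today as the Adams-Bashforth method. The sketch below gives the fourth-order rule in its usual modern form; it is only a present-day restatement (Bashforth and Adams worked with tables of finite differences, not with the formula as written here), and the test equation y' = -y is chosen purely for illustration.

```python
import math

def adams_bashforth4(f, x0, y0, h, n):
    """Fourth-order Adams-Bashforth multistep method for y' = f(x, y).
    The first three steps are bootstrapped with a classical RK4 step."""
    def rk4(x, y):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h / 2 * k1)
        k3 = f(x + h / 2, y + h / 2 * k2)
        k4 = f(x + h, y + h * k3)
        return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    xs, ys = [x0], [y0]
    for _ in range(3):                        # starting values
        ys.append(rk4(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    fs = [f(x, y) for x, y in zip(xs, ys)]
    for _ in range(3, n):
        y_next = ys[-1] + h / 24 * (55 * fs[-1] - 59 * fs[-2] + 37 * fs[-3] - 9 * fs[-4])
        xs.append(xs[-1] + h)
        ys.append(y_next)
        fs.append(f(xs[-1], y_next))
    return xs, ys

if __name__ == "__main__":
    # Test on y' = -y, y(0) = 1, whose exact solution is e^(-x).
    xs, ys = adams_bashforth4(lambda x, y: -y, 0.0, 1.0, 0.1, 50)
    print(f"y(5) approx {ys[-1]:.8f}, exact {math.exp(-5):.8f}")
```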
In Bashforth's work, we may distinguish different levels of representation of the physical phenomenon concerned. Experimentation and measurement lead to what I call an "experimental model" of the forms of drops. In parallel, the mathematization of the problem gives birth to what we would call today a "mathematical model". This model is not operational because we cannot integrate the differential equation analytically, so it is necessary to discretize this equation to obtain a "numerical model". This process of discretization is not a simple translation. It would be an error to consider the continuous mathematical model and the discrete numerical model as being obviously equivalent. In fact, a discretization process often introduces significant changes in the informational content of the original model, because a numerical algorithm may be divergent, may suffer from numerical instability, and may be ill-suited to the available instruments of calculation.
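The point that discretization can change the informational content of a model can be made with a deliberately crude example: the same continuous equation y' = lambda*y, discretized with the explicit Euler rule, either decays as it should or blows up, depending only on the step size. The equation, the value of lambda and the step sizes below are invented for the illustration.

```python
import math

def explicit_euler(lmbda, y0, h, n):
    """Integrate y' = lambda * y with the explicit Euler rule."""
    y = y0
    for _ in range(n):
        y = y + h * (lmbda * y)
    return y

if __name__ == "__main__":
    lmbda, y0, t_end = -50.0, 1.0, 1.0        # exact solution decays to about 2e-22
    for h in (0.05, 0.01):                    # stable only if |1 + h*lambda| < 1
        n = int(t_end / h)
        print(f"h = {h:<5} numerical y(1) = {explicit_euler(lmbda, y0, h, n):.3e}")
    print(f"exact       y(1) = {math.exp(lmbda * t_end):.3e}")
```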
Carl Størmer, the second character in my story, was a Norwegian mathematician trained in Kristiania, Paris and Göttingen. For many years, until his retirement, he was professor of mathematics at Kristiania University. Up to his death, the major part of his research was devoted to the study of the curious phenomenon of the polar aurora, also called "aurora borealis" or "northern lights", on which he published almost 150 papers.
Understanding that polar auroras are caused by electrically charged particles coming from outer space, Størmer decided to determine the trajectories of these particles under the action of terrestrial magnetism. In order to track these trajectories step by step from the Sun to the Earth, he had to develop new techniques of numerical integration of differential equations, inspired by those of Adams-Bashforth and the British astronomers, but better suited to his specific problem. With his students, he calculated a multitude of different trajectories over three years. He himself estimated that this huge task required more than 5000 hours of work.
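The simplest member of the family of rules that now carries Størmer's name advances a second-order equation x'' = a(x) directly, without introducing the velocity: x_{n+1} = 2 x_n - x_{n-1} + h^2 a(x_n). The sketch below applies it to a planar inverse-square central force as a stand-in problem; Størmer's actual equations involve the velocity-dependent Lorentz force in a magnetic dipole field and require a more elaborate variant, so this is only an illustration of the basic rule, with all numbers invented.

```python
import math

def stormer_step(x_prev, x_curr, h, accel):
    """One step of Stoermer's rule: x_{n+1} = 2 x_n - x_{n-1} + h^2 a(x_n)."""
    a = accel(x_curr)
    return tuple(2 * xc - xp + h * h * ai
                 for xc, xp, ai in zip(x_curr, x_prev, a))

def central_force(x):
    # Inverse-square attraction toward the origin (illustrative force law).
    r3 = math.hypot(x[0], x[1]) ** 3
    return (-x[0] / r3, -x[1] / r3)

if __name__ == "__main__":
    h = 0.01
    x0, v0 = (1.0, 0.0), (0.0, 1.0)              # circular orbit of period 2*pi
    # Start-up: Stoermer's rule needs two past points, so build x1 with a Taylor step.
    a0 = central_force(x0)
    x1 = tuple(x + h * v + 0.5 * h * h * a for x, v, a in zip(x0, v0, a0))
    x_prev, x_curr = x0, x1
    n = int(round(2 * math.pi / h))              # integrate one full period
    for _ in range(n - 1):
        x_prev, x_curr = x_curr, stormer_step(x_prev, x_curr, h, central_force)
    print(f"after one period: ({x_curr[0]:+.4f}, {x_curr[1]:+.4f}), expected close to (+1, 0)")
```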
After that, Størmer and his assistants constructed several wire models to visualize the numerical tables resulting from the calculations. These material models showed that the charged particles coming from the Sun concentrate around the polar circle, in accordance with observation. These models also explained in a convincing way why the northern lights can appear on the night side of the Earth, opposite the Sun.
A few years before, a colleague of Størmer's, Kristian Birkeland, professor of physics at Kristiania University, had realized a physical simulation of the polar aurora. For that, he sent cathode rays through an evacuated glass container against a small magnetic sphere representing the Earth, which he called a "terrella". Birkeland's simulations showed two illuminated bands encircling the poles, in agreement with the behavior of the northern lights and also with the computed trajectories obtained later by Størmer.
Finally, the physical phenomenon of the polar aurora has been studied in three ways: first, by direct observations and measurements; secondly, by Birkeland's simulation, which we can consider as an "analog model"; and thirdly, by Størmer's mathematization, with a continuous mathematical model consisting of a system of differential equations, a numerical model obtained by discretization, and a wire material model representing the trajectories concretely. The coherence of the results obtained by these three approaches strongly validates the initial hypothesis of charged particles deviated by terrestrial magnetism.
My third and last part is devoted to the astrophysicist Georges Lemaître and his research on cosmic rays. At this time, an important problem addressed by Millikan was to explain the origin and nature of the cosmic rays detected by balloons or mountain observatories. There were two rival conceptions of these cosmic rays, one principally advocated by Millikan and the other by Arthur Compton. While Millikan held the rays to consist of high-energy photons, Compton and his collaborators argued that they were charged particles of extragalactic origin. Lemaître was interested in these cosmic rays because he saw in them the fossil traces of his "Primeval Atom hypothesis", an ancestor of the Big Bang theory, so he wanted to prove the validity of Compton's conception. In collaboration with the Mexican physicist Manuel Sandoval Vallarta, Lemaître engaged in complicated calculations of the energies and trajectories of charged particles in the Earth's magnetic field.
At first, Lemaître and Vallarta tried to integrate numerically the differential equations of the trajectories with the Adams-Bashforth method, but this proved inconvenient. Later, they discovered the Størmer method in the literature and began to use it, but the calculations were very tedious to perform. Finally, they thought of the differential analyzer constructed by Vannevar Bush at MIT [START_REF] Bush | The differential analyzer. A new machine for solving differential equations[END_REF]. A differential analyzer is a mechanical analog machine conceived for the integration of differential equations. It consists of algebraic mechanisms that perform the algebraic operations and mechanical integrators that carry out the integrations. Once suitably prepared, the machine is in exact correspondence with the given differential equation, and when it moves from a given initial state, it traces exactly an integral curve of this equation.
For the use of the differential analyzer, Lemaître and Vallarta were helped by Samuel Hawks Caldwell, an assistant of Bush who operated the differential analyzer for the specific problem of cosmic rays. Thanks to this instrument, they could obtain hundreds of trajectories within a reasonable time. In this third situation, we find again the notions of experimental, mathematical and numerical models already analyzed in Bashforth's and Størmer's research, but the novelty lies in the role played by the differential analyzer: this instrument being a mechanical analog model of the differential equation, it also appears, indirectly, as an analog model of the physical phenomenon of cosmic rays.
In the three situations we have studied, we encountered several representations (experimental, analog, mathematical, numerical, graphical, material) of a physical phenomenon that validate each other through the consistency and coherence of their results. Each of them brings specific information about the real phenomenon. In fact, these representations make sense when they are considered together, so I am tempted to say that it is this system of representations, considered as a whole, which constitutes a "model" of the phenomenon. Concretely, we can only reason and calculate within this multifaceted model, whereas the reality of the phenomenon remains definitively hidden.
| 9,867 | [ "12623" ] | [ "54305", "1004988" ] |
01479948 | en | [ "phys" ] | 2024/03/04 23:41:46 | 2016 | https://theses.hal.science/tel-01479948/file/GallegoManzano_L_07_2016.pdf |
Thanks to Jose, Florian, Lucile, Martin, Fanny, Grégoire, Loïc, Dani, Thorben, Thiago, Alexandre P., Roland, Maryam, Guillaume, Audrey, Gabriel, Benjamin A., Florian, Laurent, Astrid and Guillaume for sharing many coffee breaks, evenings and very good moments.
And, as they should be, my final words are dedicated to all those who contributed to this moment. I cannot, even though I would very much like to, personalize every moment and every particular situation for which I owe so much to each of you, because it would be a little strange if the acknowledgements were longer than the manuscript itself. But I would like to tell you that without you I would not be here today and that this thesis would not have been possible. For all of that: ¡GRACIAS! First of all, I would like to thank Laurence Le Coq for doing me the honor of chairing my thesis committee. Many thanks to Fabrice Rètiere and Paul Lecoq for accepting the heavy task of being its reviewers. Thanks also to Fernando Arqueros and Jean-François Chatal for agreeing to be part of this committee. Thanks to Ginés Martínez for agreeing to be my thesis director and for waiting a little while before reading that precious book on "Learning to say no". Thank you, Ginés, for your support and your kindness since my arrival in Nantes. I would like to express my most sincere thanks to Dominique Thers and Jean-Pierre Cussonneau for their support and for giving me the opportunity to carry out a thesis in such an exciting field. I thank them for the "come and talk for five minutes", which were never five minutes, but which pushed me to take my knowledge of very different subjects ever further. Thank you also for the sometimes difficult but always productive discussions about this work, and for giving me the freedom and the strength to explore new ideas. Thank you for your availability, your patience, your trust and your advice. A big thank you in particular to Jean-Pierre for agreeing to supervise this thesis and for always taking the time to answer my many questions, or sometimes the same question several times, always with a smile. I would also like to express my deep gratitude to Eric Morteau for taking me under his wing, for trusting me with the experimental setup, and for guiding me and showing me the mysterious roads of electronics and detector development. Thank you for answering all my countless questions and for teaching me to look at the wind with new eyes. Thank you! I would also like to express all my gratitude to the other members of the Xenon group. Thanks to Nicolas Beaupere for your kindness and generosity. Thanks to my travel companion Sara Diglio for your simplicity, your generosity and all your precious advice and discussions over a coffee. How could I not express my gratitude to Jean-Sébastien Stutzmann and Patrick Leray for their support, their good humor even in the most desperate moments, and for their technical support throughout these years. I thank Loïck Virone and Kevin Micheneau for putting up with me all these years in the same office. Thanks also to Julien Masbou and Luca Scotto-Lavina. I learned a lot with you and from you, but above all, thanks to the whole group for making these years quite an adventure! I also thank the former members
And now we change language to give my deepest thanks to Fernando Arqueros, Jaime Rosado and Paco Ramos. Thank you very much for giving me the opportunity to start out in the world of research, for your confidence and support and, above all, for having devoted time to training me. Without a doubt this thesis would not have been possible without you; for all of that: Thank you! Thanks to my "Nantes family": Rocio, Tatán, Sara, Jean-Pascal and Jose, for your friendship, for reminding me that in this life there are more important things than work, and for all the moments (and those still to come) together. See you in Madrid! Thanks to my friends. To those who have been part of my life for as long as I can remember, from the tortilla moment at the Gorongoro or the Sunday afternoons in the Fuente del Berro park, and to the more recent ones, who are no less important for that. In any case, thank you Julito, Lafren, Santi, Ire, Lalo, Raquel... and my "pitutis" Ali and Ele, because in spite of time or distance, being with you always feels like being at home. Thanks also to Celes, Gloria, Paula, Leti, Rachel and Azahara. Thank you for being a guaranteed laugh and because, despite the years, when I am with you it is as if time had not passed.
Introduction
The beginning of the 21st century has been marked by the rapid evolution of the different medical imaging techniques, particularly in the areas of instrumentation and image processing. This is due in part to significant technological advances and to the continuous interplay between the worlds of scientific research and industry. The application of fundamental nuclear and particle physics to medical applications is essential in modern medicine and has deeply influenced its development. Many devices currently used in medical imaging trace their roots directly to experiments in nuclear and particle physics. However, despite the very good images currently obtained in clinical practice, increasing life expectancy and the desire to keep moving forward pose new challenges, especially for the functional imaging techniques used in nuclear medicine. Reducing the radiation dose administered to the patient, decreasing the time required to perform an exam and enabling the therapeutic follow-up of certain diseases are three clear directions for future improvements. It is in this context that Subatech proposed a new medical imaging technique, called 3γ imaging. The 3γ imaging technique is an innovative functional medical imaging modality based on two new concepts: the joint use of a new technology based on a liquid xenon Compton telescope, and a new radiopharmaceutical labeled with Sc-44, which aims to reduce the activity injected into the patient to unprecedented limits.
The principle of the 3γ imaging technique is based on the use of a specific radioisotope, 44Sc, which emits a positron and a γ-ray of 1.157 MeV in spatial and temporal coincidence. After the annihilation of the positron with an electron of the surrounding matter, two γ-rays are produced back-to-back with an energy of 511 keV. The detection in coincidence of these two photons forms a line of response (LOR) between the two interactions. In 3γ imaging, we use the intersection between the LOR and the Compton cone obtained after the interaction of the third γ-ray, coming from the decay of 44Sc, with a Compton telescope. With the additional information provided by the third photon, it is possible to constrain the position of the source along the LOR and hence directly obtain the distribution of the radioactive source in 3D. The benefit of this new technique is expressed directly in terms of reducing the number of disintegrations necessary to obtain the image: the activity injected into the patient and/or the exam time can therefore be significantly reduced.
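Geometrically, the reconstruction amounts to intersecting a straight line (the LOR) with a cone whose apex and half-angle are given by the Compton interaction of the third photon. The sketch below solves this intersection in its simplest form; it is only a geometric illustration with an invented event, not the reconstruction code used in XEMIS, and it ignores measurement uncertainties and the selection of the physically meaningful solution.

```python
import math

def line_cone_intersections(l0, d, apex, axis, half_angle):
    """Points where the line P(t) = l0 + t*d meets the (double) cone defined by
    its apex, unit axis direction and half-angle."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    sub = lambda u, v: tuple(a - b for a, b in zip(u, v))
    add = lambda u, v: tuple(a + b for a, b in zip(u, v))
    scale = lambda u, s: tuple(a * s for a in u)

    w = sub(l0, apex)
    c2 = math.cos(half_angle) ** 2
    # Quadratic A t^2 + B t + C = 0 from (v . axis)^2 = cos^2 * |v|^2, with v = w + t*d
    A = dot(d, axis) ** 2 - c2 * dot(d, d)
    B = 2 * (dot(d, axis) * dot(w, axis) - c2 * dot(w, d))
    C = dot(w, axis) ** 2 - c2 * dot(w, w)
    disc = B * B - 4 * A * C
    if disc < 0 or abs(A) < 1e-12:
        return []
    roots = [(-B - math.sqrt(disc)) / (2 * A), (-B + math.sqrt(disc)) / (2 * A)]
    return [add(l0, scale(d, t)) for t in roots]

if __name__ == "__main__":
    # LOR along the x axis; cone apex above the origin, axis pointing down,
    # half-angle 45 degrees: the candidate source positions are near x = +/- 1.
    pts = line_cone_intersections(
        l0=(0.0, 0.0, 0.0), d=(1.0, 0.0, 0.0),
        apex=(0.0, 0.0, 1.0), axis=(0.0, 0.0, -1.0),
        half_angle=math.radians(45))
    for p in pts:
        print(tuple(round(c, 3) for c in p))
```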
In order to consolidate and provide an experimental demonstration of the use of a liquid xenon Compton camera for 3γ imaging, a first phase of research and development (R&D) has been carried out. This initial step is the starting point of the XEMIS project (XEnon Medical Imaging System), which involves both fundamental research and the development and implementation of innovative technologies. A first prototype of a liquid xenon Compton telescope, called XEMIS1, was successfully developed and tested at Subatech laboratory. The choice of liquid xenon as detection medium is motivated by the fact that the detection techniques currently available in nuclear medicine, mostly based on the use of scintillation crystals for the detection of γ-rays, are not well suited for 3γ imaging. Moreover, the fundamental physical properties of liquid xenon, its high density and atomic number, give it a high stopping power for ionizing radiation, which makes liquid xenon a perfect candidate as a γ-ray detector in the energy range from several tens of keV to tens of MeV. Liquid xenon is both an excellent active medium for an ionizing radiation detector and an excellent scintillator, with the advantage of making possible the construction of large-scale, massive and homogeneous detectors. These are the main reasons why liquid xenon has gained relevance, not only in medical imaging, but also in fields as diverse as fundamental particle physics and astrophysics.
The present document details the characterization and optimization of a single-phase liquid xenon Compton camera for 3γ imaging applications. It provides the experimental evidence of the feasibility of the 3γ imaging technique through the small-scale prototype XEMIS1. This work has focused on the extraction of the ionization signal in liquid xenon and on the optimization of the device. The obtained results have contributed to substantial advances in our understanding of the detector performance and of the ionization signal extraction, which has led to the final design and construction of a second prototype dedicated to small-animal imaging. This larger-scale device, called XEMIS2, is a monolithic cylindrical camera filled with liquid xenon that surrounds the small animal. The geometry of XEMIS2 is optimized to measure the three γ-rays with very high sensitivity and a wide field of view.
The work presented in this document was performed at Subatech laboratory under the scientific advice of Dr. Jean-Pierre Cussonneau and the supervision of Dr. Ginés Martinez. This document is divided into seven chapters, each of them as self-contained as possible.
Chapter 1 is devoted to an introduction to the general properties of liquid xenon as a radiation detection material. We review the physics of particle interactions and the production of ionization and scintillation signals in liquid xenon. A general review of various liquid xenon-based detectors used in different experimental research fields is presented. Then, we give a brief introduction to nuclear medical imaging and in particular to the two most widely used nuclear functional medical imaging techniques, single photon emission computed tomography and positron emission tomography. We describe the basics of Compton imaging and give a detailed description of the principle of the 3γ imaging technique. Finally, we introduce the basic requirements of a liquid xenon Compton camera for medical imaging.
In Chapter 2, we present an overview of the basic principle of a liquid xenon time projection chamber. Liquid xenon time projection chambers are one of the most promising technologies for the study of rare phenomena, from dark matter searches to neutrino detection. XEMIS is a single-phase liquid xenon time projection chamber whose design has been optimized for medical applications. We lay out the basic principle and the advantages of this kind of detector for nuclear medical imaging. The different mechanisms that affect the production and detection of the ionization signal in liquid xenon, such as diffusion, recombination and attachment to impurities, are discussed. Finally, we present a brief summary of the formation process of the ionization signal in the segmented anode, from the interaction of an ionizing particle with the detector to the collection of the signal by the front-end electronics. The discussion is supported by some experimental results along with results reported by other authors.
Chapter 3 gives a detailed description of the XEMIS1 camera. This includes a description of the light detection and charge collection systems and of the cryogenic infrastructure developed to liquefy the xenon and maintain it in stable temperature and pressure conditions during long data-taking periods. The purity of the liquid xenon is a major concern in this kind of detector, where electrons must travel relatively long distances without being attached to impurities. We describe the purification and circulation system used in XEMIS. Then, we introduce the main characteristics of the new liquid xenon Compton camera, XEMIS2. Building and operating a large-scale detector for medical applications involves an entirely new set of challenges, from the stability of the experimental conditions and safety requirements to data acquisition and processing. An innovative cryogenic infrastructure, called ReStoX (Recovery and Storage of Xenon), has been especially developed to recover and store the xenon in the liquid state.
Chapter 4 is devoted to the data acquisition system used in XEMIS1. The system has been developed to record both the ionization and scintillation signals with a minimum dead time. The design and performance of the readout front-end electronics used in XEMIS1 are introduced. We present a detailed study of the electronics response. The ASIC shows excellent properties in terms of gain linearity (in the energy range up to 2 MeV), baseline stability and electronic noise. In particular, my work focused on the measurement of the ionization signal. Special attention is paid to the optimization of the time and amplitude measurements of the ionization signals. In this thesis, a Monte Carlo simulation of the output signal of the IDeF-X LXe has been implemented. The obtained results have contributed to the development of an advanced acquisition system for the measurement of the ionization signal in XEMIS2. Finally, a description of the main characteristics of this new analog ASIC, called XTRACT, is presented.
The use of a Frisch grid, located between the cathode and the anode, is essential to remove the position dependence of the collected signals. A complete study of the performance of a Frisch grid ionization chamber is presented in Chapter 5. During this work, three effects have been identified as possible factors with a direct impact on the extraction of the ionization signal: electron transparency, inefficiency of the Frisch grid and indirect charge induction in non-collecting pixels. These processes and the studies performed in this thesis are explained in detail. Moreover, experimental data have been used to set an upper bound on their impact on the quality of the collected signals. A Monte Carlo simulation has been developed in order to study the effect of charge induction on a segmented anode.
In Chapter 6, a detailed description of the experimental set-up and trigger system used in XEMIS1 for the detection of 511 keV γ-rays from a 22Na source is presented. We describe the data acquisition system and the data processing protocol used in XEMIS1. A detailed description of the analysis and calibration method developed in this thesis to determine the noise of each individual pixel is given. The results obtained from this study are used to correct the raw data and to set a threshold level for event selection. Finally, the off-line methods used for clustering and data analysis are presented.
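As an illustration of what such a per-pixel calibration involves, the sketch below estimates a baseline and a noise width for each pixel from signal-free samples and keeps only the hits that exceed a k-sigma threshold. The synthetic data, the pixel map and the choice k = 4 are all invented for the example; this is not the XEMIS1 analysis code.

```python
import random
import statistics

def calibrate_pixels(pedestal_runs, k=4.0):
    """Estimate a baseline and noise sigma per pixel from signal-free samples
    and derive a k-sigma threshold for event selection."""
    calib = {}
    for pixel, samples in pedestal_runs.items():
        mean = statistics.fmean(samples)
        sigma = statistics.pstdev(samples)
        calib[pixel] = {"baseline": mean, "sigma": sigma,
                        "threshold": mean + k * sigma}
    return calib

def select_hits(event, calib):
    """Keep pixels whose baseline-subtracted amplitude passes the threshold."""
    return {pix: amp - calib[pix]["baseline"]
            for pix, amp in event.items()
            if amp > calib[pix]["threshold"]}

if __name__ == "__main__":
    random.seed(1)
    # Synthetic pedestal data: three pixels with different noise levels.
    pedestals = {p: [random.gauss(100.0, 2.0 + p) for _ in range(5000)]
                 for p in range(3)}
    calib = calibrate_pixels(pedestals)
    event = {0: 103.0, 1: 140.0, 2: 101.0}      # only pixel 1 carries real charge
    print(select_hits(event, calib))
```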
In Chapter 7 we present and discuss the results obtained during this work with XEMIS1. The results presented in this chapter are intended to provide a complete understanding of the response of XEMIS1 to 511 keV γ-rays. We study the energy, timing, position and angular resolutions with a monochromatic beam of 511 keV γ-rays emitted from a low-activity 22Na source. The evolution of the energy resolution and of the ionization charge yield with the applied electric field and the drift length is analyzed. Then, a preliminary calibration of the response of the detector for different γ-ray energies is presented. Transport properties of electrons in LXe, such as the electron drift velocity and diffusion, are discussed. Finally, we present a Monte Carlo simulation as a useful tool to understand the impact of diffusion on the measured signal in a pixelated detector and to estimate the position resolution.
In conclusion, the work performed during this thesis has made it possible to reach a very low electronic noise (lower than 100 electrons), a time resolution of 44 ns for 511 keV photoelectric events, equivalent to a longitudinal spatial resolution of 100 µm, an energy resolution of 3.9 % (σ/E) at 511 keV and an electric field of 2.5 kV/cm, and a transverse spatial resolution smaller than 1 mm. All these results are compatible with the requirements for small-animal imaging with XEMIS2, and very promising for the future of the 3γ imaging technique.
Chapter 1
Liquid xenon as detection medium and its application to nuclear medical imaging

The study of the structure of matter, the origin of the universe or the fundamental laws of nature has captivated our species from the very beginning. The desire to understand and predict the behavior of nature has indeed driven many scientific and technological breakthroughs. In the field of experimental physics, new devices are continuously developed to confront new observations and to test the limits of our knowledge.
Liquid noble gases, and more particularly liquid xenon, have proven to be a perfect detection medium to answer some of these fascinating questions. That is why, for the past few decades, many leading experiments in the fields of particle physics, γ-ray astronomy and astrophysics have used liquid xenon as a radiation detection medium, demonstrating its superiority over other materials. Moreover, its excellent properties for γ-ray detection have extended its use to other fields such as medical imaging applications.
In this chapter, we will first introduce the general properties of liquid xenon as radiation detector material. The physics of particle interactions and the production of ionization and scintillation signals in liquid xenon are also discussed in detail. Then, applications of liquid xenon detectors in several past and present experiments will be reviewed. The last part of this introductory chapter is devoted to a more extensive description of the application of liquid xenon to nuclear medical imaging. To conclude we will present the 3γ imaging technique, an innovative medical imaging modality developed at Subatech laboratory, that requires the use of liquid xenon as detection medium.
Fundamental properties of liquid xenon as radiation detection medium
Liquid noble gases, especially liquid xenon (LXe) and liquid argon (LAr), have shown their potential for particle detection for several decades [START_REF] Doke | Fundamental Properties of Liquid Argon, Krypton and Xenon as Radiation Detector Media[END_REF]. Their high densities and short radiation lengths give them a high stopping power for penetrating radiation. In addition, they present a unique response to ionizing radiation through the simultaneous emission of a scintillation and an ionization signal. These properties make them a well-suited medium not only for the detection of γ-rays but also for the search for rare events, such as direct dark matter searches or neutrinoless double-beta decay.
In particular, LXe presents some very interesting properties, which have made it one of the most widely used target media for both position-sensitive detectors and calorimeters. In this section, we will first present the fundamental properties of LXe as a radiation detector. A brief reminder of the main interaction processes of γ-rays and charged particles with liquid xenon is also included. An introduction to both signal production channels, ionization and scintillation, is also presented in this section.
Main properties of liquid xenon
The relevant properties of the liquefied rare gases suitable for radiation detection are presented in Table 1.1. Among liquid noble gases, xenon (Xe) appears to be the most attractive candidate for particle detection in a wide range of applications [START_REF] Aprile | Liquid Xenon Detectors for Particle Physics and Astrophysics[END_REF]. Even though radon (Rn) has the highest atomic number [START_REF] Kharfi | Principles and Applications of Nuclear Medical Imaging: A Survey on Recent Developments[END_REF], which is an important requirement for high stopping power detectors, its very high intrinsic radioactivity has excluded it so far for radiation detection. Therefore, omitting radon, Xe has the highest atomic number and density, which implies the highest absorption coefficient for γ-rays in the energy range of hundreds of keV to tens of MeV.
A review of some of the fundamental properties of LXe as detection medium is given in Table 1.2. LXe has a small radiation length of 2.77 cm which, together with its high atomic number, provides good capabilities as an electromagnetic calorimeter [3]. Moreover, among all noble gases, LXe has the highest ionization yield. Indeed, the high electron mobility in LXe and the low energy required to produce an electron-ion pair also make it very attractive as an ionization detector medium. The high ionization density results in enough electron-ion pairs per unit length to generate a detectable ionization signal.
[Table 1.1 - Main properties of several liquid noble gases; of the original table, only the light-yield row is reproduced here: light yield (photons/MeV) 15000, 30000, 40000, 25000, 42000 [START_REF]Liquid rare gas detectors: recent developments and applications[END_REF][7].]
LXe also presents excellent scintillation properties with a high scintillation yield and fast decay times. The number of photons emitted per MeV with zero electric field is around 42000. This value is comparable to that of the most commonly used scintillator crystals such as NaI, and around two times bigger than the Lutetium Oxyorthosilicate (LSO), widely used in Positron Emission Tomography (PET) [START_REF] Kwong | Liquefied Noble Gas Detectors for Detection of Nuclear Materials[END_REF][START_REF] Korzhik | Development of scintillation materials for PET scanners[END_REF]. In addition, the fast scintillation response implies good time resolution, making it ideal for time-of-flight (TOF) applications [START_REF] Doke | Time-of-flight positron emission tomography using liquid xenon scintillation[END_REF].
From the radioactivity point of view, unlike krypton (Kr) and argon (Ar), which suffer from 85Kr and 39Ar at levels of 1 MBq/kg and 1 Bq/kg respectively [START_REF] Nikkel | Liquefied Noble Gas (LNG) detectors for detection of nuclear materials[END_REF], Xe is intrinsically clean since no long-lived natural radioisotopes are present. This is a crucial requirement for those experiments in which a low background is needed.
Another advantage of using liquid noble gases for radiation detection is the possibility of constructing large monolithic detection volumes at a reasonable cost and with high detection efficiency, which is not yet possible with other detector media such as semiconductor detectors. This fact, together with the previously mentioned characteristics, makes liquid noble gases suitable media for producing high-sensitivity detectors with a large Field-Of-View (FOV). However, one constraint of using liquefied noble gases as particle detectors is the need for good cryogenic and purification systems. LXe has a small temperature operating range of only 4 K. At a pressure of 1 bar, xenon becomes liquid at a temperature of 165 K and solid at 161 K. This small temperature interval requires constant pressure and temperature monitoring. Nevertheless, compared to the other liquid noble gases, LXe has a relatively high operating temperature. The fast development of new and effective cryogenic systems in recent years has made the technical aspects accessible.
Xenon property: Value [Ref.]
Atomic number Z: 54 [12]
Average atomic weight A: 131.3 [12]
Density (g.cm-3): 3.06 [13]
Radiation length X0 (cm): 2.77 [START_REF] Hagiwara | Review of Particle Physics[END_REF]
Ionization potential in liquid phase (eV): 9.28 [START_REF] Aprile | Noble Gas Detectors[END_REF]
Average ionization energy W-value (eV): 15.6 ± 0.3 [START_REF] Takahashi | Average energy expended per ion pair in liquid xenon[END_REF]
W_ph in liquid for relativistic e- (eV) (a): 21.6 [START_REF] Aprile | Liquid Xenon Detectors for Particle Physics and Astrophysics[END_REF]
W_ph in liquid for alpha particles (eV) (a): 17.9 [START_REF] Aprile | Liquid Xenon Detectors for Particle Physics and Astrophysics[END_REF]
Peak emission wavelength (nm): 178 [START_REF] Aprile | Liquid Xenon Detectors for Particle Physics and Astrophysics[END_REF][START_REF] Doke | Present status of liquid rare gas scintillation detectors and their new application to gamma-ray calorimeters[END_REF]
Refractive index (at 178 nm): 1.6-1.72 [START_REF] Barkov | Measurement of the refractive index of liquid xenon for intrinsic scintillation light[END_REF][START_REF] Solovov | Measurement of the refractive index of liquid xenon for intrinsic scintillation light[END_REF]
Fast decay time, singlet state τs (ns): 2.2 [START_REF] Aprile | Liquid Xenon Detectors for Particle Physics and Astrophysics[END_REF][START_REF] Doke | Present status of liquid rare gas scintillation detectors and their new application to gamma-ray calorimeters[END_REF]
Slow decay time, triplet state τt (ns): 27 [START_REF] Aprile | Liquid Xenon Detectors for Particle Physics and Astrophysics[END_REF][START_REF] Doke | Present status of liquid rare gas scintillation detectors and their new application to gamma-ray calorimeters[END_REF]
Recombination time τr (ns): 45 [START_REF] Doke | Present status of liquid rare gas scintillation detectors and their new application to gamma-ray calorimeters[END_REF]
Table 1.2 - Main properties of liquid xenon as a radiation detector medium. (a) In the absence of electric field.
Response of liquid xenon to ionizing radiation
When an ionizing particle passes through matter, it loses its energy by interaction with the atoms of the medium. The type of interaction and the amount of deposited energy depend on the kind of incident particle, its energy and also on the type of material. Charged particles and photons mainly interact with matter via electromagnetic processes, principally through inelastic collisions with the atomic electrons [START_REF] Evans | The Atomic Nucleus[END_REF]. Neutrons, on the other hand, interact with nuclei of the absorbing material via the strong interaction [START_REF] Knoll | Radiation Detection and Measurements[END_REF]. For the purpose of this thesis, only the principal electromagnetic mechanisms are considered.
Charged particle interactions in matter
Charged particles, such as α-particles, protons or electrons, lose their energy mainly by ionization and atomic excitation. When a charged particle penetrates LXe, it interacts with the electrons and nuclei present in the material through the Coulomb force. Depending on whether this interaction is an inelastic collision with the atomic electrons or an elastic scattering from a nucleus, the incoming particle either loses its energy or just suffers a deflection from its incident direction. Inelastic collisions with the bound electrons of the atoms are in general the predominant processes by which heavy charged particles, such as α-particles and protons, interact with matter. These types of interactions cause either ionization or excitation of the atoms of the medium, depending on the energy transferred in each collision.
Since charged particles can transfer only a small fraction of their total energy in a single electronic collision, they interact with many electrons of the medium, continuously losing their energy before being stopped [START_REF] Evans | The Atomic Nucleus[END_REF]. Moreover, in the case of heavy charged particles, collisions with the atomic electrons are not enough to cause a significant deflection from the incoming direction. This implies that the trajectory of a heavy charged particle can be approximated by a straight line, whose length is essentially the range, defined as the average distance traveled by the particle before coming to rest. On the other hand, electrons and positrons are less ionizing than heavy charged particles and thus they can travel longer distances before slowing down. For example, a 5.5 MeV α-particle will lose all its energy in 42.5 µm of LXe, while an electron of the same energy will travel around 1.3 cm of LXe before being stopped [13]. In addition, electrons and positrons are more susceptible to multiple scattering by nuclei than heavy charged particles, and a larger fraction of their energy can be lost in a single interaction, producing larger scattering angles. As a result, electrons and positrons suffer from larger deviations from their initial path than heavy charged particles, resulting in erratic trajectories. Figure 2.17 shows the trajectory of a 511 keV electron in LXe obtained by simulation. Collisional energy loss is not the only mechanism by which charged particles lose their energy when interacting with matter. Electrons, owing to their small mass, also lose their energy by inelastic collisions with nuclei, resulting in the emission of electromagnetic radiation (bremsstrahlung). However, at the energies we are interested in, of the order of 1 MeV, the radiative stopping power, mostly due to bremsstrahlung (8.217 × 10-2 MeV cm²/g), is about one order of magnitude smaller than that from collision interactions (1.122 MeV cm²/g) [13]. The bremsstrahlung process is therefore negligible and can be ignored in this discussion. The total mass stopping power and CSDA range for electrons in xenon are shown in Figures 1.2 and 1.3 respectively. Thus, the predominant interaction processes of electrons and positrons with matter at these energies are ionization and atomic excitation through inelastic collisions with the atomic electrons. Other mechanisms such as the emission of Cherenkov radiation and nuclear reactions are also irrelevant for this thesis. For a pedagogical review of these processes please refer to [START_REF] Evans | The Atomic Nucleus[END_REF][START_REF] Leo | Techniques for Nuclear and Particle Physics Experiments[END_REF].
Photon interactions in matter
The lack of electric charge of photons, in our case X-rays and γ-rays, makes their interaction processes with matter completely different from those of charged particles. Unlike electrons, photons do not lose their energy continuously via Coulomb interactions with atomic electrons; instead, they travel relatively long distances before a partial or total transfer of their energy to the medium in a single interaction. Photons interact with LXe mainly via the photoelectric effect, Compton scattering, Rayleigh scattering and pair production. The relative contribution of these processes in xenon as a function of the photon energy is illustrated in Figure 1.4.
Photoelectric effect
In the photoelectric effect, the total energy of the incoming photon is transferred to a bound electron of the atom, producing the complete absorption of the photon and the consequent ejection of the electron from the atom. The energy of the emitted electron is then equal to the energy of the incoming photon minus the binding energy of the electron in its atomic shell. Since the electrons are bound, momentum is conserved by the recoil of the entire atom.
This phenomenon was first observed and reported by Heinrich Hertz in 1887, after he observed electric sparks when a metal was illuminated with light of a specific wavelength. However, it was not until 1905 that a plausible explanation of the photoelectric effect was given by Albert Einstein, who was awarded the Nobel Prize in Physics in 1921 for his contribution to explaining this effect.
The photoelectric effect is graphically depicted in Figure 1.5. After the ejection of the electron, the atom remains in an excited state with an inner-shell electron vacancy. The atom returns to the ground state by filling the vacancy with a less tightly bound electron from an upper shell with lower binding energy. In such a transition, a characteristic X-ray or an Auger electron is emitted. The direction in which the electron is ejected from the atom depends on the energy of the photon. For low energies the electron is emitted perpendicular to the direction of incidence, whereas for high energies the electron is ejected forward in the direction of the incident photon [START_REF] Evans | The Atomic Nucleus[END_REF][START_REF] Sauter | Über den atomaren Photoeffekt in der K-Schale nach der relativistischen Wellenmechanik Diracs[END_REF]. At the energies we are interested in, above 100 keV, the recoil electron is emitted almost in the same direction as the incident photon. Fluorescence radiation, on the other hand, is emitted isotropically.
For photoelectric absorption to occur, the energy of the incident photon must be greater than or equal to the binding energy of the involved electron. The probability of photoelectric absorption therefore depends on the electron binding energy, being greater for more tightly bound electrons. The photoelectric effect takes place predominantly in the K-shell. In fact, if the energy of the photon is higher than the K-electron binding energy, 85% of the photoelectric interactions result in the ejection of an electron from the K-shell. Figure 1.4 shows the photoelectric cross section in LXe as a function of the photon energy. As we can see, the probability of interaction increases as the energy of the photon decreases. In the figure, the absorption edges corresponding to the K, L and M atomic shells are visible at energies of around 34 keV, 5 keV and 2 keV respectively. At these energies the photoelectric cross section rapidly increases, which means that "the maximum absorption takes place for photons with just enough energy to eject the electron" [START_REF] Evans | The Atomic Nucleus[END_REF].
Figure 1.5 - Schematic representation of the photoelectric effect: an incident photon hν0 is absorbed by the atom and an electron e⁻ is ejected.
As presented in Figure 1.4, the photoelectric effect dominates over the other interaction processes at low photon energies or high atomic numbers Z. This is because the photoelectric cross section shows a strong dependence on the energy of the incident photon and on the atomic number of the material. Despite the complexity of the theoretical estimation of the photoelectric cross section, it can be roughly approximated as:

τ ≃ const · Z^n / (hν)³     (1.1)

where τ is the photoelectric mass attenuation coefficient. In the energy range of interest, the dependence on Z varies between Z⁴ and Z⁵ depending on the energy of the photon. As a result, materials with high atomic number are better candidates for γ-ray absorbers. For a complete discussion of the theoretical estimation of the photoelectric cross section please refer to [START_REF] Davisson | Gamma-Ray Absorption Coefficients[END_REF].
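To make the scaling of Equation 1.1 concrete, the short Python sketch below compares the relative photoelectric attenuation of xenon (Z = 54) and argon (Z = 18) at a fixed photon energy. The exponent n = 4.5 and the omitted overall constant are illustrative assumptions within the Z⁴ to Z⁵ range quoted above, not fitted values.

```python
# Minimal numerical sketch of the approximate scaling of Eq. (1.1):
# tau ~ const * Z^n / (h*nu)^3.  The exponent n = 4.5 is an assumed,
# representative value inside the Z^4 - Z^5 range; the overall constant
# cancels in the ratio computed below.

def photoelectric_scaling(Z, energy_keV, n=4.5):
    """Relative (unnormalized) photoelectric attenuation of Eq. (1.1)."""
    return Z**n / energy_keV**3

# Compare xenon (Z = 54) with argon (Z = 18) at 100 keV.
ratio = photoelectric_scaling(54, 100.0) / photoelectric_scaling(18, 100.0)
print(f"tau(Xe)/tau(Ar) at 100 keV ~ {ratio:.0f}")  # about 140 with n = 4.5
```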
Compton scattering
Compton scattering was first reported by Arthur H. Compton in 1923 [START_REF] Compton | A Quantum Theory of the Scattering of X-rays by Light Elements[END_REF], after a series of experiments concerning the scattering of X-rays from electrons in a carbon target. Its discovery made Compton the recipient of the Nobel Prize in physics in 1927. Unlike the photoelectric effect, in Compton scattering the incident photon with energy hν does not transfer all of its energy to a bound electron; instead, only a portion of its energy is transferred to a recoil electron, which is considered to be at rest and unbound. The free-electron approximation is only valid if the energy of the photon is much higher than the binding energy of the electron.
After the collision, the photon is deflected with respect to its original direction and emitted with a lower energy hν′ that depends on the scattering angle θ. Figure 1.6 illustrates the Compton scattering process. Applying momentum and energy conservation, the energy of the scattered photon hν′ is given by Equation 1.2:
hν′ = hν / [1 + α(1 − cos θ)]     (1.2)
where α = hν/(m_e c²). The electron recoils at an angle φ with a kinetic energy T_e given by:
T_e = hν − hν′ = hν · α(1 − cos θ) / [1 + α(1 − cos θ)]     (1.3)
The directions of the scattered photon and the electron depend on the amount of energy transferred to the electron during the collision. The scattering angles of both the incoming photon and the released electron can be expressed as:
cos θ = 1 − 2 / [(1 + α)² tan²φ + 1]     (1.4)

cot φ = (1 + α) tan(θ/2)     (1.5)
The energy transferred to the electron can vary from zero, when the electron is scattered at right angles (φ ≃ 90°), to a maximum value obtained for a scattering angle θ = 180° (back scattering). In such a collision, the electron moves forward in the direction of the incident photon, while the backscattered photon retains the minimum possible energy. The maximum energy that can be transferred to the scattered electron is given by Equation 1.6:

T_e,max = hν · 2α / (1 + 2α)     (1.6)

Figure 1.4 shows the total Compton scattering cross section as a function of the energy of the incident photon in xenon. The Compton scattering probability becomes important at energies of the order of 100 keV, with a maximum at around 1.5 MeV, and then rapidly decreases as the energy of the incident photon increases. Because Compton scattering involves quasi-free electrons, the probability of Compton absorption is nearly independent of the atomic number Z, although it is directly proportional to the number of electrons per gram Z/A, which is nearly constant for all materials. Regardless of the probability of the interaction, the amount of energy transferred during the collision depends on the energy of the incident photon. Low-energy photons are scattered at small angles θ ≃ 0°, going almost forward with respect to the direction of the incident photon with a small energy transfer to the electron. On the other hand, if the energy of the incident photon is large, 10 to 100 MeV, most of the energy is transferred to the recoil electron.
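As a worked example of the Compton kinematics of Equations 1.2, 1.3 and 1.6, the short sketch below computes the scattered photon energy, the recoil electron energy and the Compton edge; the 511 keV input photon is just an illustrative choice.

```python
import math

M_E_C2 = 511.0  # electron rest energy in keV

def scattered_photon_energy(e_gamma_keV, theta_rad):
    """h*nu' of Eq. (1.2)."""
    alpha = e_gamma_keV / M_E_C2
    return e_gamma_keV / (1.0 + alpha * (1.0 - math.cos(theta_rad)))

def recoil_electron_energy(e_gamma_keV, theta_rad):
    """T_e of Eq. (1.3), from energy conservation."""
    return e_gamma_keV - scattered_photon_energy(e_gamma_keV, theta_rad)

def compton_edge(e_gamma_keV):
    """Maximum energy transfer of Eq. (1.6), reached for theta = 180 deg."""
    alpha = e_gamma_keV / M_E_C2
    return e_gamma_keV * 2.0 * alpha / (1.0 + 2.0 * alpha)

e_gamma = 511.0  # keV, e.g. an annihilation photon
print(scattered_photon_energy(e_gamma, math.pi))  # ~170 keV backscattered photon
print(recoil_electron_energy(e_gamma, math.pi))   # ~341 keV recoil electron
print(compton_edge(e_gamma))                      # same ~341 keV Compton edge
```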
Rayleigh scattering
The phenomenon of Rayleigh scattering, also known as coherent scattering, was named in honour of J. W. S. Rayleigh, who in 1871 published a paper describing this, at that time unknown, effect [START_REF] Rayleigh | On the light from the sky, its polarization and color[END_REF][START_REF] Rayleigh | On the scattering of light by small particles[END_REF]. A full understanding of the process is, however, the result of collective efforts carried out by many authors during the first half of the 20th century [START_REF] Hayes | Scattering of light by crystals[END_REF]. Nevertheless, a rigorous mathematical formulation of Rayleigh scattering had to wait until the work of Kleinman [START_REF] Kleinman | Rayleigh Scattering[END_REF].
Rayleigh scattering takes place between an incident photon and a tightly bound atomic electron, in which both particles interact coherently. After the collision, the atom is neither ionized nor excited; the photon is deflected without energy transfer. Rayleigh scattering occurs mostly at low energies and for high atomic number materials. The average scattering angle decreases with increasing energy. In xenon the probability of coherent scattering is very small even for low photon energies, so it can be ignored.
Pair production
Figure 1.7 illustrates the pair production mechanism. This effect, which is based on the conversion of a photon into an electron-positron pair, was first confirmed by Patrick M.S. Blackett and Giuseppe P.S. Occhialini after the discovery of the positron by Carl D. Anderson in 1932 [START_REF] Anderson | Energies of Cosmic-Ray Particles[END_REF]. Blackett was awarded the Nobel Prize in physics in 1948 for this work [START_REF] Hubbell | Electron-positron pair production by photons: A historical overview[END_REF].
For pair production to occur, the incident photon must have an energy above 2m_e c² (1.022 MeV). The photon, passing near the nucleus of an atom, can interact with the Coulomb field of the atomic nucleus and be transformed into an electron-positron pair. The electron and the positron are not scattered particles, but are created as the photon disappears, with total energy conserved. The excess energy after the creation of the pair, which is equal to the difference between the energy of the incident photon and the minimum energy required to create the pair (2m_e c²), is shared by the electron and the positron. After their emission, both particles rapidly slow down inside the material. The positron then annihilates with an electron, releasing two back-to-back γ-rays with energies of 511 keV.
The pair production mechanism can also occur in the field of an atomic electron if the energy of the incident photon is greater than 4m_e c² (2.044 MeV). In such an interaction, called triplet production, the recoil electron is also ejected from the atom, resulting in the emission of two electrons and a positron.
The probability for a photon to undergo pair production increases rapidly with the photon energy and varies approximately as the square of the atomic number Z. In the case of xenon, pair production starts to dominate at energies above 10 MeV.
Ionization properties
As we have seen in Section 1.1.2, when an ionizing particle interacts with LXe it loses part of its energy mainly by ionization and atomic excitation. This implies that, regardless of the type of incoming particle (charged particles, or photons with the consequent emission of a recoil electron), a certain number of electron-ion pairs (N_i), excited atoms (N_ex) and free electrons (sub-excitation electrons) are created after the interaction with the LXe. The energy E_0 deposited by the scattering particle can be expressed in terms of the number of excited and ionized xenon atoms through the Platzman equation [START_REF] Platzman | Total ionization in gases by high energy particles: an appraisal of our understanding[END_REF]:
E_0 = N_ex E_ex + N_i E_i + N_i ε     (1.7)
where the energies E_ex and E_i correspond to the average energy needed to produce an atomic excitation or an electron-ion pair respectively, and the remaining term ε corresponds to the kinetic energy of the sub-excitation electrons.
The average energy W needed to create an electron-ion pair in LXe can be expressed as follows:
W = E_0 / N_i     (1.8)
Substituting Equation 1.7 into Equation 1.8 yields:
W = E_i + E_ex (N_ex / N_i) + ε     (1.9)
The value of W in LXe is 15.6 ± 0.3 eV [START_REF] Takahashi | Average energy expended per ion pair in liquid xenon[END_REF]. Compared to that of LAr, which is 23.6 ± 0.3 eV [START_REF] Miyajima | Average energy expended per ion pair in liquid argon[END_REF], the significantly smaller W-value for LXe implies a much larger ionization yield. In fact, LXe has the highest ionization yield of ∼64000 pairs/MeV (for an infinite electric field) among all liquefied noble gases.
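As a simple illustration of Equation 1.8, the sketch below estimates the number of electron-ion pairs produced by a given energy deposit using the W-values quoted above for LXe and LAr; it assumes full charge collection and ignores recombination.

```python
W_LXE_EV = 15.6  # average energy per electron-ion pair in LXe (eV)
W_LAR_EV = 23.6  # same quantity in LAr (eV), for comparison

def n_ion_pairs(deposited_energy_keV, w_value_eV):
    """Number of electron-ion pairs N_i = E_0 / W, Eq. (1.8)."""
    return deposited_energy_keV * 1.0e3 / w_value_eV

# A 511 keV energy deposit yields roughly 33,000 pairs in LXe,
# against about 22,000 in LAr for the same deposit.
print(f"LXe: {n_ion_pairs(511.0, W_LXE_EV):.0f} pairs")
print(f"LAr: {n_ion_pairs(511.0, W_LAR_EV):.0f} pairs")
```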
Scintillation mechanism
Scintillation light emission in LXe has been investigated in detail by many authors since the pioneering studies carried out by Doke [START_REF] Doke | Fundamental Properties of Liquid Argon, Krypton and Xenon as Radiation Detector Media[END_REF]. Light is of major importance because its fast production mechanism, of the order of a few ns, makes photons useful for providing the event trigger information, which corresponds to the interaction time. Moreover, the number of produced photons is proportional to the energy deposited by the interacting particle and hence can provide calorimetric information.
Scintillation photons are produced either by atomic ionization or by atomic excitation. Both paths lead to the formation of a Xe*₂ excimer, in a molecular excited state, either singlet ¹Σ⁺_u or triplet ³Σ⁺_u, which eventually de-excites producing a VUV photon. The two scintillation production processes can be summarized as follows:
1. Direct excitation of xenon atoms followed by excited molecule formation and de-excitation:
Xe + e⁻ → Xe* + e⁻
Xe* + Xe + Xe → Xe*₂ + Xe
Xe*₂ → 2Xe + hν     (1.10)
The recoil electron emitted after the interaction of a γ-ray with the LXe, via the photoelectric effect or Compton scattering, may excite the xenon atoms it encounters.
The excited atoms Xe* combine with another xenon atom, creating an excited di-xenon molecule, or dimer, Xe*₂. After some time, depending on whether the molecule is in a singlet or a triplet excited state, the dimer returns to the ground state through dissociation into two neutral atoms and the emission of a VUV photon. The typical time needed for the formation of a Xe*₂ excimer is of the order of a few picoseconds.
2. Molecular state formation through recombination between electrons and Xe⁺ ions:
Xe + e⁻ → Xe⁺ + 2e⁻
Xe⁺ + Xe → Xe⁺₂
Xe⁺₂ + e⁻ → Xe*₂
Xe*₂ → 2Xe + hν     (1.11)

The ejected electrons can otherwise transfer enough energy to a neutral atom to create an electron-ion pair. The Xe⁺ may combine with another atom, creating an ionized dimer Xe⁺₂. The molecular ion Xe⁺₂ then captures a free electron, leading to the formation of a Xe*₂ excimer and eventually the emission of a VUV scintillation photon in the same way as in the direct excitation process. This process, however, takes more time to form the Xe*₂ excimer.
Regardless of the scintillation emission process, the scintillation light spectrum lies in the vacuum ultraviolet (VUV) region, with a peak around λ = 178 ± 1 nm (FWHM = 14 nm) [START_REF] Incicchitti | Liquid Xenon as a Detector Medium[END_REF]. Moreover, a strong peculiarity of liquid noble gases is that they are transparent to their own scintillation light, since the emitted VUV photons have an energy appreciably lower than the minimum energy required for atomic excitation. The scintillation mechanism in LXe is schematically illustrated in Figure 1.8.
Scintillation decay components
Both ions Xe⁺ and excited atoms Xe* lead to the formation of excited dimers Xe*₂. The excimers may exist in two different energetic states, the singlet state ¹Σ⁺_u and the triplet state ³Σ⁺_u, which eventually decay to the ground state ¹Σ⁺_g. The associated relaxation times are therefore different, depending on which excited state the excimer is in. For relativistic electrons, the lifetimes of the singlet and triplet excited states are 2.2 ns and 27 ns respectively [START_REF] Kubota | Dynamic behavior of free electrons in the recombination process in liquid argon, krypton and xenon[END_REF]. These values, referred to as the fast and slow decay components, depend slightly on the nature of the recoiling particle. Figure 1.9 illustrates the decay curves of the scintillation light in LXe for electrons, alpha particles and fission fragments without an applied electric field. For electrons and without an electric field, the scintillation light shows only a decay component of 45 ns [START_REF] Kubota | Dynamic behavior of free electrons in the recombination process in liquid argon, krypton and xenon[END_REF][START_REF] Hitachi | Effect of ionization density on the time dependence of luminescence from liquid argon and xenon[END_REF]. Since this component disappears in the presence of an electric field, as reported by Kubota et al. [START_REF] Kubota | Dynamic behavior of free electrons in the recombination process in liquid argon, krypton and xenon[END_REF], its origin is most likely the recombination between electrons and ions [START_REF] Kubota | Recombination luminiscence in liquid argon and in liquid xenon[END_REF]. The comparison of the scintillation decay curves with and without an applied electric field is shown in Figure 1.10. With an electric field of 4 kV/cm, the decay curve exhibits roughly a double exponential form, characterized by the fast and slow lifetimes of the singlet and triplet excited states. Scintillation light in LXe is therefore characterized by three different decay components: a fast component with a time constant of τ_s = 2.2 ± 0.3 ns, a slow component with a lifetime of τ_t = 27 ± 1 ns and a third component due to electron-ion recombination with a time constant of τ_r = 45 ns (at 173 K). Table 1.3 lists the lifetime differences for electrons, alpha particles and fission fragments.
Incident particle     τ_s (ns)     τ_t (ns)     τ_r (ns)   I_s/I_t       Ref
Electrons             2.2 ± 0.3    27.0 ± 1.0   ∼45        0.05          [START_REF] Kubota | Dynamic behavior of free electrons in the recombination process in liquid argon, krypton and xenon[END_REF][START_REF] Hitachi | Effect of ionization density on the time dependence of luminescence from liquid argon and xenon[END_REF]
Alpha particles       4.3 ± 0.6    22.0 ± 1.5              0.45 ± 0.07   [START_REF] Hitachi | Effect of ionization density on the time dependence of luminescence from liquid argon and xenon[END_REF]
Fission fragments     4.3 ± 0.5    21.0 ± 2.0              1.6 ± 0.2     [START_REF] Hitachi | Effect of ionization density on the time dependence of luminescence from liquid argon and xenon[END_REF]

Table 1.3 -Decay times of the fast, slow and recombination components, and singlet-to-triplet intensity ratios I_s/I_t, for electrons, α-particles and fission fragments.
Figure 1.9 -Decay curves of the scintillation light for electrons, α-particles and fission fragments in LXe, without applied electric field [START_REF] Kubota | Dynamic behavior of free electrons in the recombination process in liquid argon, krypton and xenon[END_REF][START_REF] Hitachi | Effect of ionization density on the time dependence of luminescence from liquid argon and xenon[END_REF].
For electrons, 71 % of the VUV scintillation photons are produced by ionization of the xenon atoms, whereas the remaining 29 % is due to pure atomic excitation [START_REF] Kubota | Dynamic behavior of free electrons in the recombination process in liquid argon, krypton and xenon[END_REF]. The latter does not depend on the applied electric field, and the ratio I_s/I_t between the number of photons produced from the two excited states, singlet (I_s) and triplet (I_t), was found to be 0.05 [START_REF] Hitachi | Effect of ionization density on the time dependence of luminescence from liquid argon and xenon[END_REF]. Under the influence of an electric field, on the other hand, the proportion of scintillation light due to recombination decreases. For an electric field of 2 kV/cm, for example, only ∼46 % of the maximum scintillation light is produced (see Figure 1.11). Since the fraction of light due to direct excitation remains constant regardless of the electric field, 63 % of the scintillation yield is then emitted with time constants of 2.2 ns and 27 ns, while the remaining 37 % of the emitted photons have a decay time of 45 ns.
Although the decay times do not depend much on the ionization density, the proportion between the two excited states, singlet and triplet, depends on the recoiling particle. The energy loss per unit length for heavy particles such as α-particles is much higher than that for electrons, leading to a much higher ionization density. The recombination yield is higher and the recombination process occurs faster for heavy recoiling particles than for recoiling electrons. The recombination decay component thus becomes negligible and the scintillation decay times are dominated by the de-excitation of the singlet and triplet states. The ratios I_s/I_t are found to be 0.45 ± 0.07 and 1.6 ± 0.2 under α-particle and fission-fragment excitation, respectively, showing an enhancement of ¹Σ⁺_u formation with higher ionization density [START_REF] Aprile | Simultaneous Measurement of Ionization and Scintillation from Nuclear Recoils in Liquid Xenon as Target for a Darl Matter Experiment[END_REF].

Figure 1.10 -Decay curves of the scintillation light for electrons in LXe with and without an applied electric field [START_REF] Kubota | Recombination luminiscence in liquid argon and in liquid xenon[END_REF].
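To make the pulse-shape picture explicit, the sketch below models the electron-recoil scintillation time profile as a sum of exponential components with the lifetimes of Table 1.3; the relative weights of the three components are illustrative assumptions only, since they depend on the applied field and on the recoiling particle.

```python
import math

# Lifetimes (ns) for electron recoils from Table 1.3; the relative weights
# are illustrative placeholders, not measured intensity fractions.
TAUS_NS = (2.2, 27.0, 45.0)   # fast, slow, recombination components
WEIGHTS = (0.1, 0.5, 0.4)

def scintillation_intensity(t_ns):
    """Normalized light intensity I(t) as a sum of exponential decays."""
    return sum(w / tau * math.exp(-t_ns / tau)
               for w, tau in zip(WEIGHTS, TAUS_NS))

# Fraction of the total light emitted within the first 100 ns
# (simple rectangle-rule integration with 0.1 ns steps).
dt = 0.1
fraction = sum(scintillation_intensity(i * dt) * dt for i in range(1000))
print(f"fraction emitted within 100 ns ~ {fraction:.2f}")
```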
Scintillation yield
The number of VUV scintillation photons, N_ph, emitted after the interaction of an ionizing particle with the LXe is characterized by the minimum average energy W_ph required to emit a scintillation photon. Liquid noble gases are in general very good scintillators. The scintillation yield for electrons with an energy of 1 MeV has been estimated at between 40000 and 50000 photons [START_REF] Chepel | Liquid noble gas detectors for low energy particle physics[END_REF]. The LXe light yield is therefore comparable to that of the NaI scintillation crystal (43000 photons/MeV). For low energy electrons (20 - 100 keV), the light yield in LXe increases significantly, producing nearly 70000 photons/MeV. Nevertheless, to date, there is no accurate measurement of the value of W_ph due to the difficulty of the measurement.
The average energy for the emission of a scintillation photon depends on the number of electron-ion pairs produced by ionization, N_i, and on the number of excited atoms N_ex. Assuming that all electron-ion pairs recombine (E = 0 kV/cm), the average energy W_ph can be expressed as follows:
W_ph = E_0 / N_ph = E_0 / (N_i + N_ex) = W / (1 + N_ex/N_i)     (1.12)
where E_0 is the energy of the incident particle and W is the average energy required to generate an electron-ion pair. The maximum value of W_ph published by Doke et al. [START_REF] Doke | Absolute Scintillation Yields in Liquid Argon and Xenon for Various Particles[END_REF] in 2002 was estimated to be 13.8 ± 0.9 eV, corresponding to a photon yield of ∼72000 photons/MeV. More recent studies agree with the result of Doke et al. [START_REF] Doke | Absolute Scintillation Yields in Liquid Argon and Xenon for Various Particles[END_REF], showing results of 13.45 ± 0.29 eV [START_REF] Shutt | Performance and fundamental processes at low energy in a two-phase liquid xenon dark matter detector[END_REF] and 13.7 ± 0.2 eV [START_REF] Dahl | The physics of background discrimination in liquid xenon, and first results from XENON10 in the hunt for WIMP Dark Matter[END_REF]. This value corresponds to the minimum possible energy needed to produce a scintillation photon without escape electrons or scintillation quenching. For relativistic electrons (1 MeV) a value of 21.6 eV has been estimated [START_REF] Doke | Absolute Scintillation Yields in Liquid Argon and Xenon for Various Particles[END_REF]. A compilation of the existing experimental and theoretically estimated W_ph values for LXe can be found in [START_REF] Chepel | Liquid noble gas detectors for low energy particle physics[END_REF].
The scintillation yield depends on the recoiling particle since the recombination process between electrons and positive ions depends on the density of electron-ion pairs. The estimated value of W ph for α-particles in LXe is 21.6 eV with a production ratio β/α of 0.81 [START_REF] Aprile | Development of liquid xenon detectors for gamma ray astronomy[END_REF].
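As a consistency check of Equation 1.12, the sketch below converts the quoted W_ph values into photon yields per MeV; the energies used are taken from the text and the calculation assumes no escape electrons or quenching.

```python
W_PH_MAX_EV = 13.8   # minimum energy per scintillation photon (Doke et al.)
W_PH_1MEV_EV = 21.6  # value quoted for 1 MeV relativistic electrons

def photons_per_mev(w_ph_eV):
    """Scintillation photon yield N_ph = E_0 / W_ph for E_0 = 1 MeV."""
    return 1.0e6 / w_ph_eV

print(f"{photons_per_mev(W_PH_MAX_EV):.0f} photons/MeV")   # ~72,000, as quoted above
print(f"{photons_per_mev(W_PH_1MEV_EV):.0f} photons/MeV")  # ~46,000 for 1 MeV electrons
```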
Photon attenuation in LXe
Besides recombination, there are two main effects that can reduce the collection of the scintillation light produced in a LXe detector: the absorption of photons by impurities (λ_abs) and photon elastic scattering (λ_rayleigh). The total attenuation coefficient of photons can be expressed according to Equation 1.13:
1/λ_att = 1/λ_abs + 1/λ_rayleigh     (1.13)
Thus, the variation of the number of produced photons N_ph with distance can be expressed as an exponential function that depends on the total attenuation coefficient λ_att, also referred to as the mean free path of the scintillation photons:

N(z) = N_ph e^(−z/λ_att)     (1.14)

LXe is transparent to its own scintillation light, since the absorption band of the free exciton lies at an energy higher than the scintillation emission band by ∼0.6 eV [START_REF] Schwentner | Electronic Excitations in Condensed Rare Gases[END_REF]. However, impurities present in the liquid can absorb a large fraction of the scintillation photons. Water is the most serious contaminant due to its high absorption cross section: a concentration of the order of 10 ppm (parts per million) is enough to attenuate 90 % of the scintillation light in two cm [START_REF] Ozone | Liquid xenon scintillation detector for the µ → γ search experiment[END_REF]. O₂ can also contribute to the scintillation photon loss, although its influence is more important for the ionization signal. Photon absorption by impurities can be successfully reduced by installing a purification system (see Chapter 3, Section 3.1.4).
The Rayleigh scattering length in LXe also affects the collection of the scintillation light. Its influence is especially important for large-volume detectors. The distance travelled by a photon through the LXe before undergoing a Rayleigh scattering is strongly dependent on the wavelength of the light as well as on the optical properties of the medium. For LXe, the Rayleigh scattering length has been experimentally determined as λ_rayleigh = 29 cm [START_REF] Ishida | Attenuation length measurements of scintillation light in liquid rare gases and their mixtures using an improved reflection suppresser[END_REF], which is in good agreement with theoretical calculations. For a detector with dimensions such as the one reported in this thesis, the Rayleigh scattering contribution can be considered negligible.
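The sketch below combines Equations 1.13 and 1.14 to estimate the fraction of scintillation photons surviving a given light path; the absorption length used is an arbitrary, purity-dependent assumption, while the Rayleigh length is the 29 cm value quoted above.

```python
import math

LAMBDA_RAYLEIGH_CM = 29.0  # measured Rayleigh scattering length in LXe

def attenuation_length(lambda_abs_cm, lambda_rayleigh_cm=LAMBDA_RAYLEIGH_CM):
    """Total attenuation length of Eq. (1.13)."""
    return 1.0 / (1.0 / lambda_abs_cm + 1.0 / lambda_rayleigh_cm)

def surviving_fraction(z_cm, lambda_att_cm):
    """Fraction of photons left after a path z, Eq. (1.14)."""
    return math.exp(-z_cm / lambda_att_cm)

# Illustrative case: an assumed 100 cm absorption length over a 10 cm path.
lam = attenuation_length(lambda_abs_cm=100.0)
print(f"lambda_att = {lam:.1f} cm")
print(f"survival over 10 cm = {surviving_fraction(10.0, lam):.2f}")
```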
Next generation of LXe detectors
Liquefied rare gases are an attractive option for radiation and particle detection thanks to the very suitable properties discussed earlier in this chapter. Among liquid noble gases, LXe and LAr are widely used in modern physics experiments despite the difficulties associated with handling low-temperature detectors. In particular, LXe has gained importance in different research fields, from medical imaging applications to physics beyond the standard model. Following the success of current LXe-based detectors such as XENON100, EXO, LUX, XMASS, PandaX..., LXe technology has attracted a lot of interest in the scientific community, which has led to the development of a new generation of large-scale LXe detectors. In fact, the possibility of building large monolithic detectors makes liquid noble gases very interesting for the study of rare phenomena such as the measurement of low-energy solar neutrinos and the direct search for dark matter.
The development of large-scale LXe detectors aims to push the performance and sensitivity of the current technology beyond the existing limits. However, ultra-sensitive detectors bring along new technological challenges. These include fast and efficient purification systems, high electric fields applied along long drift distances of the order of 1 m, and effective rejection of backgrounds both environmental and intrinsic to the detectors.
In this section we give a brief overview of some past, present and future LXe detectors dedicated to the dark matter search, the detection of neutrinoless double β decay, γ-ray astrophysics and nuclear medicine imaging.
Dark Matter Direct Detection
LXe is one of the most widely used target materials for the direct detection of dark matter. Weakly Interacting Massive Particles (WIMPs) are a particularly interesting candidate for the dark matter "problem", which is one of the remaining unknowns in the universe. The WIMP particle is predicted by many supersymmetric extensions of the standard model [START_REF] Griest | Supersymmetric dark matter above the w mass[END_REF]. WIMPs interact only through the weak nuclear force and gravity, and can have masses from 40 GeV to 3.2 TeV [START_REF] Griest | Supersymmetric dark matter above the w mass[END_REF]. A WIMP interacts with matter through elastic collisions producing a nuclear recoil. LXe has shown its potential to discriminate nuclear recoils generated by elastic scattering of a WIMP or a neutron from electronic recoils produced by, for example, γ-rays, which are the major source of background. The predicted interaction cross sections are in the range of 10⁻⁴² - 10⁻⁴⁶ cm², which translates into an interaction rate of the order of 10⁻⁴ events per kg of LXe. The extremely low interaction cross sections require large target volumes, with the main challenges being the detection of low-energy signals in the keV range and unprecedented levels of sensitivity with very tight control of the background.
Among the ongoing direct dark matter experiments based on LXe detectors, we can mention the XENON, ZEPLIN, LUX and XMASS projects. All of them, with the exception of XMASS, are based on a dual-phase LXe/GXe detector. The main advantage of a two-phase detector compared to a single-phase one is its background rejection capability; single-phase detectors are, however, technologically simpler. The scaling of dual-phase detectors to larger devices might become relatively complicated, and the presence of a double liquid-gaseous phase limits the possible geometrical designs of the detector. The performance of a single-phase xenon detector depends on whether it relies on measuring exclusively the scintillation light, as in the XMASS experiment, or both the scintillation and ionization signals, as in the dual-phase technology. The combination of the scintillation-light and ionization-electron signals provides better energy resolution and position sensitivity. In the XENON Dark Matter Search collaboration, two detectors, XENON10 and XENON100, have already been successfully tested, and a larger-scale detector, XENON1T, is currently in the qualification stage. All of them operate underground at the Gran Sasso laboratory (LNGS) in Italy. The XENON100 detector [START_REF] Aprile | The XENON100 dark matter experiment[END_REF] consists of a dual-phase time-projection chamber (TPC) that contains about 70 kg of LXe. Figure 1.12 shows a sketch of the basic principle of a dual-phase xenon TPC (left) and the XENON100 TPC (right). The LXe TPC exploits both the scintillation and ionization signals produced after the interaction of an ionizing particle with the LXe. The bulk volume of the detector is liquid, with a drift length of 30 cm, whereas the top part contains a thin layer of gaseous xenon. Both the top and bottom parts of the detector are covered by two arrays of photomultiplier tubes (PMT) to detect both the primary scintillation light (S1) and the secondary scintillation photons (S2). The proportional scintillation light S2 is locally produced in the gas phase located above the liquid level. An electric field applied between the cathode and the anode forces the ionization electrons to drift towards the gas phase. A second electric field, called the extraction field, which is higher than the drift field, is applied to convert the electrons into a proportional amount of electroluminescence photons. The time difference between the prompt signal S1 and the S2 signal from the ionization electrons is used to reconstruct the Z position of the interaction point inside the detector. The XY position is determined from the distribution of the S2 signal over the PMTs. Finally, the ratio between the S1 and S2 signals is used to discriminate between electronic and nuclear recoils. A more detailed description of the principle of a TPC is given in Chapter 2.
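The depth reconstruction described above can be summarized in a few lines: the interaction depth is simply the drift time between S1 and S2 multiplied by the electron drift velocity. The drift velocity used below is only an order-of-magnitude assumption for typical drift fields, not a XENON100 parameter.

```python
# Minimal sketch of z reconstruction in a dual-phase LXe TPC from the
# S1-S2 time difference.  The drift velocity (~2 mm/us) is an assumed,
# order-of-magnitude value; the real one depends on the drift field.

DRIFT_VELOCITY_MM_PER_US = 2.0

def interaction_depth_mm(t_s1_us, t_s2_us, v_drift=DRIFT_VELOCITY_MM_PER_US):
    """Depth below the liquid surface from the S2-S1 drift time."""
    return (t_s2_us - t_s1_us) * v_drift

# An S2 arriving 75 us after S1 would correspond to ~150 mm below the
# surface, i.e. half of a 30 cm drift length under this assumption.
print(interaction_depth_mm(t_s1_us=0.0, t_s2_us=75.0))
```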
Similarly, the LUX (Large Underground Xenon) detector [START_REF] Akerib | The large underground xenon (LUX) experiment[END_REF] is a two-phase TPC filled with 370 kg of LXe (100 kg of target mass) contained in a titanium vessel. The detector is based on the same principle as the XENON detector. LUX is located underground at the Sanford Underground Laboratory in the Homestake Mine, South Dakota, and has been operational since 2009. The next step of the LUX experiment is a new detector called LZ, from LUX-ZEPLIN, which merges the two dark matter experiments LUX and ZEPLIN (ZonEd Proportional scintillation In Liquid Noble gases). The LZ detector will hold a target mass twenty times larger than that of LUX (7 tons), promising a much greater sensitivity [START_REF] Akerib | LUX-ZEPLIN (LZ) Conceptual Design Report[END_REF]. An artistic design of the future LZ detector is depicted in Figure 1.13.
The XMASS experiment [START_REF] Liu | The XMASS 800 kg detector[END_REF], on the other hand, is based on a single-phase TPC with a fiducial volume of ∼100 kg of LXe, used to measure exclusively the scintillation signal. To this end, the entire LXe volume is surrounded by PMTs to efficiently collect the scintillation light. The XMASS detector is depicted in Figure 1.14. The XMASS project was developed for the study of double beta decay and of pp and ⁷Be solar neutrinos, in addition to the direct detection of dark matter. The future of the XMASS experiment is the XMASS1.5 detector with 5 tons of LXe (3 tons fiducial).
Gamma-Ray Astrophysics
The application of LXe to γ-ray astronomy was first investigated in the late 1980s, following encouraging results in terms of sensitivity, energy resolution and signal collection of LXe for γ-ray observation from 200 keV up to the MeV region [START_REF] Aprile | Development of liquid xenon detectors for gamma ray astronomy[END_REF]. The first attempts at astrophysical observations with LXe were carried out with the Liquid Xenon Gamma-Ray Imaging Telescope (LXeGRIT) [55]. LXeGRIT is a Compton telescope based on a LXe time projection chamber (TPC) used as a balloon-borne instrument. The development of LXeGRIT was the start of a novel concept of Compton telescope, different from standard Compton imaging systems.
The device was based on a single monolithic detector that exploited both the scintillation and ionization signals. A schematic drawing of the LXeGRIT detector is depicted in Figure 1.15. The detector consists of a gridded ionization chamber with an active area of 400 cm² and a drift length of 7 cm, filled with ultra-pure LXe. The scintillation light generated after the interaction of an ionizing particle with the LXe is detected by four VUV-sensitive photomultiplier tubes (PMT) and used as the event trigger for the data acquisition. A uniform electric field of 1 kV/cm is applied between the cathode and the anode to force the electrons to drift towards the anode. The ionization signal is collected by four independent electrodes placed 9 mm from a shielding grid used as a Frisch grid. A set of two parallel wire planes is installed to provide information on the X and Y position of the interaction point inside the fiducial volume of the detector. LXeGRIT was tested in three different balloon flights. After a first short-duration flight in 1997 for engineering tests and calibration purposes, the feasibility of LXeGRIT was successfully demonstrated in two longer-duration flights in 1999 and 2000 [START_REF] Aprile | Calibration and In-Flight Performance of the Compton Telescope prototype LXeGRIT[END_REF][START_REF] Curioni | Laboratory and Balloon Flight Performance of the Liquid Xenon Gamma Ray Imaging Telescope (LXeGRIT)[END_REF]. LXeGRIT showed good performance as a γ-ray detector, with a spatial resolution of 1 mm, an energy resolution of 8.8 % FWHM at 1 MeV and an angular resolution of 4° at 1.8 MeV [START_REF] Aprile | Compton Imaging of MeV Gamma-Rays with Liquid Xenon Gamma-Ray Imaging Telescope (LXeGRIT)[END_REF].
At the moment there are no ongoing experiments for γ-ray astronomy based on LXe detectors. However, the important developments around LXe technologies in the past years have renewed the interest in using LXe as a detection medium for astronomical observations.
Particle Physics
LXe is also a very suitable candidate for particle physics experiments such as the detection of low-energy solar neutrinos, neutrinoless double beta decay search and the measurement of neutrino-nucleus scattering.
The EXO-200 (Enriched Xenon Observatory) detector is a good example of a LXe detector dedicated to the study of neutrinoless double beta decay [START_REF] Gornea | Double beta decay in liquid xenon[END_REF]. It is based on a 200 kg TPC filled with LXe enriched to 80 % in the ¹³⁶Xe isotope, developed for the detection of the double beta decay transition via the 0νββ channel [START_REF] Albert | Search for Majorana neutrinos with the first two years of EXO-200 data[END_REF]. A schematic drawing of the EXO-200 TPC is presented in Figure 1.16. The TPC is divided into two symmetric regions separated by an optically transparent shared cathode. Both regions are equipped with induction and collection wire planes. The crossed wires are placed at the endcaps of the TPC. An electric field of 376 V/cm is applied between the cathode and the anodes to drift the ionization charges towards the wires. The scintillation light is detected by two sets of Large Area Avalanche Photodiodes (LAAPD), also located at the ends of the TPC and parallel to the cathode. The double beta decay of ¹³⁶Xe is therefore detected through the collection of both the ionization signal and the scintillation light. The detector is located at the Waste Isolation Pilot Plant (WIPP) in New Mexico and has been operational since 2011. The experiment reported the first observation of the two-neutrino double beta decay 2νββ of ¹³⁶Xe and pursued the search for the neutrinoless double beta decay [START_REF] Albert | Status and Results from the EXO Collaboration[END_REF]. nEXO, from next EXO, is a large-scale 5-ton LXe detector designed with the aim of increasing the sensitivity achieved by its predecessor EXO-200 by at least an order of magnitude.
Medical Imaging
The potential of LXe for medical imaging was already considered in the early 1970s. Most of the LXe detectors developed for medical imaging have focused on Positron Emission Tomography (PET) scanners, where the coincidence of the 511 keV γ-rays emitted in opposite directions after the annihilation of a positron with an electron is used to reconstruct the emission point of a radioactive source (see Section 1.3.3). Some attempts have also been made at gamma-cameras for Single Photon Emission Computed Tomography (SPECT) (see Section 1.3.2). A successful LXe gamma-camera was developed in 1983 by Egorov et al. [63]. The detector showed an energy resolution of the order of 15 % at 122 keV and a spatial resolution of about 2.5 mm, comparable to the values obtained with a standard gamma-camera based on scintillation crystals [START_REF] Chepel | Liquid Xenon Detectors for Medical Imaging[END_REF].
The first experiments related to the use of liquid xenon detectors for PET were made almost 40 years ago [65]. Later, in the 90s, the group of A.J.P.L. Policarpo and V. Chepel developed the first prototype of a LXe TPC for PET [START_REF] Chepel | New liquid xenon scintillation detector for positron emission tomography[END_REF][START_REF] Chepel | Performance study of liquid xenon detector for PET[END_REF]. The detector was based on a multiwire chamber design to detect both the ionization and scintillation signals. Other groups have also worked in this direction, based on the detection of both production channels [START_REF] Giboni | Compton Positron Emission Tomography with a Liquid Xenon Time Projection Chamber[END_REF][START_REF] Thers | A Positron Emission Tomography (PET) based on a liquid Xenon Time Projection Chamber and Microestructure Devices for Compton tracking[END_REF][START_REF] Miceli | Liquid Xenon Detectors for Positron Emission Tomography[END_REF] or of the scintillation light only [START_REF] Gallin-Martel | A liquid xenon positron emission tomograph for small animal imaging: First experimental results of a prototype cell[END_REF]. Thanks to the excellent scintillation properties of LXe, its application to Time of Flight PET (TOF-PET) has also been considered by several authors [START_REF] Doke | Time-of-flight positron emission tomography using liquid xenon scintillation[END_REF]65]. The first prototype of a LXe TPC for TOF-PET was developed in 1997 by the Waseda group [START_REF] Doke | Time-of-flight positron emission tomography using liquid xenon scintillation[END_REF]. In 2004, our group proposed the idea of using a LXe TPC for the detection in coincidence of three γ-rays emitted quasi-simultaneously by specific 3γ-emitting radionuclides [START_REF] Grignon | Nuclear medical imaging using β+ γ coincidences from Sc-44 radio-nuclide with liquid xenon as detector medium[END_REF]. This nuclear imaging technique is presented in the next section. The proposed technology is a single-phase LXe TPC that exploits both the charge and light yields of electronic recoils. The detector has been developed to fulfill the requirements of a device dedicated to medical applications: high energy resolution, high detector granularity to increase the spatial resolution, the ability to discriminate single from multiple scatter interactions, and very low electronic noise. A detailed description of the detector is presented in Chapter 3.
3γ imaging: a new medical imaging technique
A brief introduction to nuclear medical imaging
Nuclear medicine is a medical specialty that uses radioactive substances for the diagnosis and treatment of diseases. A radionuclide (label) is chemically bound to a biologically active molecule (tracer), forming a radiopharmaceutical or radiotracer, which is administered to the patient. Once the radiotracer is administered, the molecule concentrates in specific organs or cellular receptors with a certain biological function. The choice of a specific radioisotope depends on the biological function or organ of interest. The concentration of the radioactive substance in the region of interest allows nuclear medicine to image the location and extent of a disease in the body, and to evaluate its functional activity in addition to its morphology. The ability to evaluate physiological functions is the main difference between nuclear medicine imaging and traditional anatomical imaging techniques, such as Computed Tomography (CT) or conventional Magnetic Resonance Imaging (MRI). To provide the best available information on tumor staging and the assessment of many common cancers, functional imaging is usually combined with anatomical imaging techniques [START_REF] Macmanus | Use of PET and PET/CT for radiation therapy planning: IAEA expert report 2006-2007[END_REF].
The origin of nuclear medicine is the result of many scientific discoveries dating back to the end of the 19th century. The discovery of X-rays by Wilhelm Röntgen in 1895 represented one of the greatest revolutions in medicine, through the incorporation of ionizing radiation into the diagnosis and treatment of many different diseases [START_REF] Mould | A Century of X-Rays and Radioactivity in medicine[END_REF]. This discovery triggered the development of medical imaging techniques, which provided an actual image of the inside of the human body. The beginning of image-based diagnosis brought a great increase in life expectancy, health and welfare. A few years after the breakthrough of the X-rays, many fundamental findings followed that opened the path to the understanding of the structure of matter and its fundamental interactions. The discovery of natural radioactivity by Henri Becquerel in 1896 and the finding, two years later, of two new radioactive elements, polonium and radium, by Marie and Pierre Curie laid the foundations of the new medical specialty known as Nuclear Medicine [START_REF] Rootwelt | Henri Beckquerel's discovery of radioactivity, and history of nuclear medicine. 100 years in the shadow or on the shoulder of Röntgen[END_REF]. The use of radioactive substances in medicine was driven, in the following years, by a series of scientific discoveries such as the finding of artificial radioactivity in 1934 by Irène Joliot-Curie and Frédéric Joliot-Curie, and the production of radionuclides for medical use in 1946 by the Oak Ridge National Laboratory. Nuclear Medicine won official recognition as a potential medical specialty in 1946 after the successful treatment of a patient with thyroid cancer with radioactive iodine (¹³¹I) [START_REF] Means | Historical Background of the Use of Radioactive Iodine in Medicine[END_REF]. The use of radioactive iodine was later extended to the diagnosis and treatment of thyroid diseases, including the production of images of the thyroid gland, which provided visual information on thyroid disorders.
Nuclear medicine imaging was shaped by the development of scintillation detectors and scintillation cameras, which facilitated external imaging and measurement of radionuclides distributed within the body. The invention of the rectilinear scanner in 1950 by Benedict Cassen, followed by the development of the first scintillation camera, also known as the Anger camera (Hal Anger, [START_REF] Anger | Scintillation camera[END_REF]), represented a revolutionary breakthrough in medical diagnosis [START_REF] Van Iseelt | The philosophy of science: a history of radioiodine and nuclear medicine[END_REF]. The significant technological advances of the following years opened a new window onto the study of human physiology, which gave medical imaging, until then exclusively anatomical, a functional character. By the 1950s, nuclear medicine was in widespread clinical use. In particular, nuclear imaging techniques experienced a rapid development during the mid 1960s. The initial scintillation camera evolved into modern imaging systems such as PET and SPECT. These two functional imaging techniques are widely used in clinical practice. Currently, nuclear medicine imaging has extended from organ imaging for tumor localization to the diagnosis of a great variety of diseases, including cardiovascular diseases and neurological disorders [78].
During the 60s, a new detector for imaging solar neutrinos was proposed independently by Pinkau (1966) and White (1968). This detector, called the Compton camera, exploited the physics of Compton scattering for imaging purposes. The possible application of a Compton telescope in nuclear medicine was first proposed by Todd, Nightingale and Everett in 1974 [START_REF] Todd | A Proposed Gamma Camera[END_REF], although the potential of Compton-scattered imaging in medicine had already been noticed back in 1959 by Lale [START_REF] Lale | The examination of internal organs using gamma ray scatter with extension to megavoltage radiotherapy[END_REF]. The detector proposed by Todd et al. was based on the combination of a standard Anger camera and a pixelated germanium detector. Compton imaging provides high sensitivity, high spatial resolution and 3D information.
One of the main limitations of nuclear imaging techniques is the risk of radiation exposure. The biological effects of ionizing radiation were first observed in 1897 after a prolonged exposure to a source of X-rays. In 1927, H. J. Muller reported the resulting genetic effects and the increase of cancer risk [START_REF] Muller | Artificial Transmutation of the Gene[END_REF]. In the past years, additional efforts have been focused on reducing the radiation dose through improvements of the existing instrumentation, the implementation of low-dose protocols and the development of new imaging techniques. The goal is to obtain images of the same quality with a significant reduction of the dose administered to the patient. An innovative nuclear medicine imaging technique was proposed in 2004 by Thers et al. [START_REF] Grignon | Nuclear medical imaging using β+ γ coincidences from Sc-44 radio-nuclide with liquid xenon as detector medium[END_REF]. This technique, called 3γ imaging, consists in measuring the position of a radioactive source in 3D using the simultaneous detection of three γ-rays by means of a LXe Compton camera. The 3γ imaging technique exploits the benefits of Compton imaging, providing high sensitivity and very good spatial resolution with an important reduction of the administered dose [START_REF] Manzano | XEMIS: A liquid xenon detector for medical imaging[END_REF].
In this section, a brief description of the two main functional imaging techniques, PET and SPECT, is given. The explanation includes some notions of the physics background, the detection systems and the main radiopharmaceuticals used in clinical practice. The principle of a Compton telescope and the basics of Compton imaging are also discussed. Finally, we introduce the concept of the 3γ imaging technique and the possible 3γ-emitting radionuclides that can be used.
Single Photon Emission Tomography
Single Photon Emission Computed Tomography (SPECT) is a non-invasive functional medical imaging technique, which provides 3D information of the distribution of a radiotracer inside the body [START_REF] Saha | Physics and Radiobiology of Nuclear Medecine[END_REF]. The SPECT technique requires a radiopharmaceutical labeled with a γ-emitter radionuclide. An external detector is used to measure the radiation emitted from the radiotracer and generate an image.
SPECT imaging is based on the principle of the Anger camera [START_REF] Anger | Scintillation camera[END_REF]. A standard scintillation camera consists of a large scintillation crystal, usually NaI(Tl), coupled to an array of PMTs. Figure 1.17 illustrates the principle of operation of a scintillation camera. Since γ-rays are emitted isotropically, a collimator is placed between the patient and the radiation detector to relate each detected γ-ray to its emission point. The collimator only allows photons from certain directions to reach the detector and thus limits the acceptance angle and defines the spatial distribution of the photons detected by the scintillation crystal. Collimators are made of high atomic number materials, usually lead or tungsten, to efficiently absorb those γ-rays emitted at an angle with respect to the collimator holes. A widely used collimator is the parallel-hole collimator, which provides a two-dimensional parallel projection of the source distribution with a constant FOV. Such a collimator only allows photons emitted in the direction normal to the crystal surface to reach the detector, whereas the rest of the photons are absorbed. Other types of collimators used in clinical practice are illustrated in Figure 1.18. Collimators are therefore necessary to obtain information on the incoming direction of the emitted γ-rays, at the expense of being the major limitation to the spatial resolution and the sensitivity of SPECT detectors. The adequate choice of a collimator depends on the imaging purposes in terms of energy resolution, sensitivity and spatial resolution. The advantage of SPECT imaging compared to planar scintigraphy is its tomographic character. In SPECT, the gamma camera moves around the patient following a circular, elliptical or contoured orbit. By rotating the detector, two-dimensional projections are acquired from many different angles. These multiple projections are then reconstructed using either analytic or iterative reconstruction algorithms that generate a 3D image slice by slice.
Scintillators
The basic radiation detectors used in SPECT imaging are scintillation detectors, which transform the incident γ-rays into optical photons. Historically, the Anger camera was based on a NaI scintillation crystal, commonly doped with thallium to improve the scintillation performance. γ-rays interact with the molecules and atoms of the detector either by photoelectric effect or Compton scattering, leaving the crystal in an excited state. The excited atoms return to the ground state with the consequent emission of a fluorescence photon. The fluorescence emission process depends on whether the detector is based on an organic or an inorganic scintillation crystal [START_REF] Knoll | Radiation Detection and Measurements[END_REF]. In particular, NaI(Tl) is an inorganic scintillation detector characterized by the emission of visible photons with a primary decay constant of 230 ns and a maximum emission wavelength at 410 nm [START_REF] Melcher | Scintillation Crystals for PET[END_REF]. The main characteristics of NaI(Tl) as a scintillation detector are listed in Table 1.4.
An ideal scintillation crystal should have high detection efficiency, a large light output and good energy and time resolutions. In particular, NaI(Tl) has a high scintillation yield, with on average 40 photons emitted per keV, and a high detection efficiency for the energies typically used in SPECT (below 200 keV). For example, 85 % of the incoming 140 keV γ-rays from the decay of 99mTc deposit all their energy in the detector. The energy resolution of NaI(Tl) is of the order of 7-8 % at 1 MeV [START_REF] Kharfi | Principles and Applications of Nuclear Medical Imaging: A Survey on Recent Developments[END_REF]. Associated with the light yield requirement, a large fraction of the emitted scintillation photons should be detected, which implies that the optical self-absorption of the scintillation crystal should be minimal. NaI(Tl) is relatively transparent to its own light, with about 30 % of the emitted photons detected by the PMT [START_REF] Kharfi | Principles and Applications of Nuclear Medical Imaging: A Survey on Recent Developments[END_REF]. Due to its good scintillation properties, NaI(Tl) remains one of the most used scintillation detectors in SPECT, even 60 years after the development of the first Anger camera.
Photodetectors
The photons emitted by the scintillation crystal are detected by a photosensor, typically a PMT, and converted into an electrical signal. The principle of a PMT is schematically represented in Figure 1.19. Incident photons strike the photocathode located at the entry window of the PMT, which emits a number of photoelectrons as a result of photoelectric absorption. These electrons are directed by a focusing electrode towards a series of electrodes called dynodes, where they are multiplied by means of secondary emission. The geometry of the dynode chain is such that an increasing number of electrons is produced after each collision. The electrons are accelerated towards the next dynode under the influence of an intense electric field established along the dynode chain, each dynode being held at a more positive voltage than the previous one. The electrons reach the first dynode with a kinetic energy equal to the potential difference between the photocathode and the dynode. This energy is sufficient to extract more secondary electrons from this dynode and create an avalanche of electrons. At the end of the multiplication chain, a large number of electrons is produced and collected by the last electrode, called the anode. The total number of electrons that arrive at the anode for each photoelectron ejected from the photocathode is defined as the gain of the PMT. Gains of the order of 10⁵ to 10⁸ are usually reached with these devices. The total charge collected at the anode results in a sharp current pulse that contains the timing information of the arrival of a photon at the photocathode. The amplitude of the pulse delivered by the PMT is proportional to the number of scintillation photons that reach the PMT surface, which is in turn proportional to the deposited energy [START_REF] Knoll | Radiation Detection and Measurements[END_REF].
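The multiplicative character of the dynode chain is easy to quantify: if each dynode releases on average δ secondary electrons per incident electron, the overall gain is δ^N for N dynodes. The sketch below uses illustrative numbers only.

```python
E_CHARGE_C = 1.602e-19  # elementary charge in coulombs

def pmt_gain(n_dynodes, delta):
    """Overall PMT gain assuming the same secondary-emission factor delta
    at every dynode: G = delta ** n_dynodes."""
    return delta ** n_dynodes

# Illustrative numbers: 10 dynodes releasing ~4 secondary electrons each
# give a gain of about 1e6, inside the 1e5-1e8 range quoted above.
gain = pmt_gain(10, 4.0)
print(f"gain ~ {gain:.1e}")

# Anode charge for a pulse of 100 photoelectrons.
print(f"anode charge ~ {100 * gain * E_CHARGE_C:.2e} C")
```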
Other photodetectors such as Silicon Photomultipliers (SiPM) are also widely used in many different applications [START_REF] Renker | Advances in solid state photon detectors[END_REF]. The insensitivity of SiPMs to magnetic fields, in contrast to standard PMTs, is one of the most interesting advantages of this kind of photosensor, and has given rise to the development of hybrid PET-MRI systems. This new medical imaging technique combines the functional imaging of PET with the high soft-tissue contrast of MRI [START_REF] Tsoumpas | Innovations in Small-Animal PET/MR Imaging Instrumentation[END_REF].
Positron Emission Tomography
Positron Emission Tomography (PET) [START_REF] Cherry | Treatment of axial data in three-dimensional PET[END_REF] is a nuclear medicine imaging technique widely used in the diagnosis and staging of many sorts of cancers [START_REF] Papathanassiou | Positron Emission Tomography in oncology: present and future of PET and PET/CT[END_REF]. PET is based on the decay of β⁺-emitting radionuclides. The emitted positron annihilates with an electron of the surrounding matter, resulting in the emission of two γ-rays in opposite directions, each of them with an energy equal to the electron rest mass energy (m_e c² = 511 keV). The principle of PET imaging is therefore based on the detection in coincidence of these two 511 keV γ-rays. The devices used in PET are commonly based on a circular array of detectors arranged in a ring configuration around the patient (see Figure 1.20). The registration in coincidence of these two γ-rays defines a line of response (LOR), which contains information about the annihilation position. With a full angular coverage of the patient, the information collected in every LOR is processed and used to produce an image of the functionality of the organism. A PET scanner consists of a set of scintillation crystals positioned in a cylindrical gantry. Each scintillation crystal is coupled to a PMT, usually attached via a light guide. Unlike in SPECT, the radiation detector is not a single monolithic crystal but is composed of an array of small rectangular crystals separated from each other by a reflective material such as Teflon. Crystal segmentation increases the spatial resolution of the detector. The choice of the crystal dimensions requires, however, a compromise between sensitivity and spatial resolution [START_REF] Peng | Recent Developments in PET instrumentation[END_REF].
To register an event, the system requires interactions in two independent detectors within a short time window. This information is extracted thanks to a coincidence circuit established between the different detectors that compose the tomograph. Besides the coincidence time window, a narrow energy window is normally set around the 511 keV photoelectric peak to determine a true coincidence. The energy and time conditions depend on the characteristics of the scintillation crystal used in the system. After a coincidence, an annihilation is assumed to have taken place somewhere along the LOR between the two crystals. The information measured by each coincidence pair at different angles is reconstructed, providing 3D information on the distribution of the radiotracer inside the body. Moreover, if the study is performed over successive time intervals, the temporal distribution of a metabolic function is obtained, allowing dynamic studies.
As in SPECT, the scintillation crystals used in PET should have a high detection efficiency for 511 keV γ-rays, i.e. high density, a large photon emission rate and good energy and time resolutions. The NaI(Tl) used in SPECT is not a good candidate for PET imaging, due to its poor detection efficiency at energies above 200 keV. At 511 keV, for example, only 12% of the detected γ-rays deposit all their energy in the detector. PET requires higher-density crystals in order to increase the photoelectric absorption. Some of the scintillation crystals commonly used in PET are Bismuth Germanate (BGO), Lutetium Oxyorthosilicate (LSO) and Lutetium Yttrium Orthosilicate (LYSO). Their basic properties are summarized in Table 1.4. Compared to NaI(Tl), these scintillation crystals have a lower light output, whereas they show high detection efficiency for 511 keV photons and faster scintillation emission, which results in an important improvement in the counting-rate capability of the detector.
Table 1.4 -Properties of some scintillators used in functional medicine imaging. Table taken from [START_REF] Lewellen | Recent developments in PET detector technology[END_REF].
PET performance
Due to the limited energy and time resolutions of the system, there are also some limitations in the coincidence detection technique that cause image degradation in PET. Figure 1.21 illustrates the main coincidence event types. A true event is recorded when the two detected photons come from the same annihilation point (Figure 1.21 a). However, if one or both of the detected photons interact with the surrounding matter before being detected, they may have changed their direction, producing an error in the reconstructed position. These events are known as scattered events (Figure 1.21 e). Nevertheless, since scattering produces energy loss, scattered events can be substantially reduced by the energy window condition. Random coincidences between two γ-rays from two different annihilation processes within the coincidence window are also possible, generating a LOR that does not contain useful information about the radiotracer distribution (Figure 1.21 d). Finally, multiple coincidences may occur when three or more photons are detected within the coincidence window (Figure 1.21 b). These pile-up events cause an ambiguity between the possible valid photon pairs, and are therefore usually discarded from the reconstructed data. Equally, single events occur when only one of the emitted photons is detected; these are directly rejected from the reconstruction (Figure 1.21 c).

Since the detection of the annihilation is made via the coincidence between the two back-to-back 511 keV γ-rays, there is no need for a collimator. Therefore, the sensitivity of a PET scanner is several times higher than that of SPECT. However, there are other factors that ultimately affect the spatial resolution in PET. The emitted positron travels some distance before it annihilates with an electron. The distance from the emission point to the annihilation point is known as the positron range and it depends both on the energy of the emitted positron and on the surrounding materials. Since PET imaging reconstructs the annihilation sites and not the positron emission points, the non-zero positron range is one of the main limiting factors of the spatial resolution [START_REF] Levin | Calculation of positron range and its effect on the fundamental limit of positron emission tomography system spatial resolution[END_REF]. Another factor that affects the performance of the PET technique is related to the fact that not all positrons are at rest when the annihilation occurs. Since the majority of annihilations occur with both positron and electron at thermal energies, by energy and momentum conservation the two 511 keV γ-rays are emitted at an angle of approximately 180 • [START_REF] Debenedetti | On the angular distribution of two-photon annihilation radiation[END_REF]. However, there is a non-negligible probability that the annihilation occurs when the positron is not at rest. This causes a non-collinearity between the emitted γ-rays, of the order of 0.5 •, which adds some uncertainty to the identification of the annihilation position [START_REF] Debenedetti | On the angular distribution of two-photon annihilation radiation[END_REF]. The combination of both effects limits the intrinsic spatial resolution of the PET imaging technique. For example, for 18 F, which is one of the most widely used positron-emitting radionuclides in PET, this limitation is less than 1 mm.
Nevertheless, in a PET system the major constraint on the spatial resolution is still the intrinsic spatial resolution of the PET detector.
Time-of-Flight PET
Time-of-Flight PET (TOF-PET) exploits the time difference ∆t between the two detected photons to constrain the position of the annihilation point along the LOR. In conventional PET, there is a uniform probability that the emission point lies along the entire length of the LOR. The accurate measurement of the arrival times of the two back-to-back 511 keV γ-rays constrains the position of the annihilation point along the LOR between the two coincident detectors [START_REF] Allemand | Potential advantages of a Cesium Floride scintillator for time-of-flight positron camera[END_REF][START_REF] Mullani | System design of a fast PET scanner utilizing time-of-flight[END_REF][START_REF] Moszynski | New Prospects for Time-of-Flight PET with LSO Scintillators[END_REF]. Figure 1.22 illustrates the basic principle of the TOF-PET technique. The time information is related to the position of the annihilation point with respect to the center of the FOV according to the formula:
\Delta x = \frac{c}{2}\,\Delta t \qquad (1.15)
where c is the speed of light. The position constraint achieved by adding the time-of-flight information is not sufficient to substantially improve the spatial resolution or to avoid the image reconstruction process. Existing commercially available TOF-PET systems achieve timing resolutions of the order of 450 ps to 900 ps, corresponding to a position resolution of 6.75 to 13.5 cm (FWHM) along the LOR [START_REF] Surti | Update on time-of-flight PET imaging[END_REF][START_REF] Lois | An assessment of the impact of incorporating time-of-flight information into clinical PET/CT imaging[END_REF][START_REF] Jakoby | Physical and clinical performance of the mCT time-of-flight PET/CT scanner[END_REF]. In the case of small animal imaging, the annihilation position can be estimated with a resolution of the order of 2 -3 cm. The exploitation of timing information to calculate the emission point along the LOR was introduced in the 1960s, along with the development of PET [START_REF] Brownell | New developments in positron scintigraphy and the application of cyclotron produced positron emitters[END_REF]. However, its application for clinical use had to wait until the 1980s and the development of the first TOF-PET systems [START_REF] Ter-Pogossian | Positron emission tomography (PET)[END_REF]. The TOF-PET modality followed a very tortuous path towards its full acceptance in clinical practice. The early TOF-PET systems were based on cesium fluoride (CsF) and barium fluoride (BaF 2 ) scintillation crystals, which provided good timing resolutions compared to the standard NaI(Tl) or BGO crystals. However, due to their low detection efficiency and low scintillation yield, research on TOF-PET slowed down and was almost abandoned in the early 1990s. Later, in the 2000s, the development of new and faster scintillation crystals such as LSO reawakened the interest of both industry and research in this imaging modality. The incorporation of new photodetectors, electronics and more precise image reconstruction techniques has also contributed to the improvement of TOF-PET imaging. LYSO crystals combined with SiPMs have shown great potential, with achievable timing resolutions between 200 and 300 ps [START_REF] Spanoudaki | Photo-Detectors for Time of Flight Positron Emission Tomography (ToF-PET)[END_REF][START_REF] Levin | Potrotype time-of-flight PET ring integrated with a 3T MRI system for simultaneous, whole-body PET/MR imaging[END_REF]. However, even today, some technological limitations still prevent reaching the desired timing requirements of TOF-PET.
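As a quick numerical check of Equation 1.15, the following sketch converts a coincidence timing resolution into a position uncertainty along the LOR; the timing values are the ones quoted above.

```python
# Position uncertainty along the LOR from the timing resolution (Eq. 1.15).
C_CM_PER_PS = 0.0299792458  # speed of light in cm per picosecond

def lor_position_resolution_cm(delta_t_ps: float) -> float:
    """Delta_x = (c/2) * Delta_t, with Delta_t the coincidence time resolution."""
    return 0.5 * C_CM_PER_PS * delta_t_ps

for dt in (450.0, 500.0, 900.0):
    print(f"{dt:5.0f} ps  ->  {lor_position_resolution_cm(dt):5.2f} cm along the LOR")
# 450 ps -> ~6.7 cm and 900 ps -> ~13.5 cm, as quoted in the text.
```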
To obtain sub-centimeter spatial resolution, a coincidence timing precision of less than 50 ps is necessary. However, this is hard to accomplish with existing technology. On the other hand, the additional timing information has been demonstrated to improve the Signal-to-Noise Ratio (SNR) in the reconstructed images, which is one of the limitations of PET imaging. The statistical noise reduction factor, or gain of TOF-PET with respect to the non-TOF mode, can be approximated as:
\frac{SNR_{TOF}}{SNR_{Non\text{-}TOF}} = \sqrt{\frac{D}{\Delta x}} \qquad (1.16)
where D is the size of the emission source. The SNR improvement of TOF-PET for a time resolution of 500 ps and a patient of 40 cm in diameter is about ∼2.3. Since this gain is related to the object size, the largest improvement is expected for heavy patients. No improvement in terms of SNR is, however, obtained for small-animal imaging systems.
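Following Equations 1.15 and 1.16, the gain quoted above can be reproduced directly; the sketch below uses the same illustrative values (500 ps timing resolution, 40 cm patient diameter).

```python
# TOF SNR gain estimate (Eq. 1.16), using the values quoted in the text.
import math

def tof_snr_gain(object_size_cm: float, delta_t_ps: float) -> float:
    delta_x_cm = 0.5 * 0.0299792458 * delta_t_ps  # Eq. 1.15
    return math.sqrt(object_size_cm / delta_x_cm)

print(f"gain ~ {tof_snr_gain(40.0, 500.0):.1f}")  # ~2.3 for a 40 cm patient at 500 ps
```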
Over the recent years, the results show that the advantages of TOF-PET should be focused on the improvement in terms of lesion detection, shortening of the total imaging time and reduction of the injected dose [START_REF] Surti | Update on time-of-flight PET imaging[END_REF].
PET and SPECT Radionuclides
Radioactive tracers are based on a specific biologically active molecule coupled to a radioactive isotope, which allows the assessment of physiological and metabolic functions in the organism and the diagnosis of diseases. The type of carrier molecule depends greatly on the purpose of the scan, which can vary from bone imaging (Na 18 F) to myocardial perfusion ( 82 RbCl) or glucose metabolism ( 18 F-FDG). Properties such as the decay scheme, effective half-life, energy and chemical behaviour make radionuclides appropriate for certain applications.
Radioactive tracers employed in SPECT are all γ-emitting radionuclides. The most common radioactive marker is 99m Tc, which is characterized by a half-life of 6 hours and the emission of a main γ-ray with an energy of 140 keV. 99m Tc is produced by the decay of 99 Mo, which has a much longer half-life of 66 hours. This allows the use of 99 Mo-99m Tc generators, which make 99m Tc available daily in hospitals. Other frequently used radioisotopes in SPECT are listed in Table 1.5.
Radionuclide    Half-life     Energy (keV)
99m Tc          6.02 hours    140.5

PET requires β + -emitting radionuclides. The most frequently used are listed in Table 1.6 together with some of their physical characteristics. Among them, 18 F, characterized by a half-life of 110 minutes and a positron range of less than 0.7 mm in water, is the most commonly used radionuclide in neurology, cardiology and in the diagnosis and staging of many sorts of cancer. The most frequently used 18 F-labeled radiopharmaceutical is 18-fluoro-deoxy-glucose (FDG), which is an analogue of glucose. The other relevant PET isotopes are characterized by much shorter half-lives than 18 F, so they have to be produced at the same center where the medical exam is performed. The higher the positron energy, and hence its range in water, the poorer the attainable PET image resolution.
Future trends in functional imaging
The perspectives of nuclear medical imaging are in constant growth thanks to technological advances and the development of new radiopharmaceuticals. In the past few years, progress in functional imaging has focused not so much on the constant improvement of image resolution, which is probably nearing its limit, but on its consolidation as a routine clinical technique. The reduction of the injected dose and the shortening of the imaging time have also become of great importance to reduce the radiation exposure of patients and working personnel during a medical exam. To overcome some of the limitations of PET and SPECT, which show relatively poor anatomical detail, the development of multimodal imaging systems offers more reliable images. Multimodality is based on the combination of two or more different imaging techniques to take advantage of the complementary information provided by each independent modality. Anatomical imaging ensures very good spatial resolution of less than 1 mm. However, for the diagnosis and staging of certain diseases, morphological information is not always enough, since functional changes may occur in the absence of any associated structural change. Accordingly, multimodal imaging is currently a preferred approach. Nowadays, there are commercially available PET-CT, SPECT-CT and PET-MR multimodal systems [START_REF] Beyer | A combined PET/CT scanner for clinical oncology[END_REF][START_REF] Mariani | A review on the clinical uses of SPECT/CT[END_REF][START_REF] Jadvar | Competitive advantage of PET/MRI[END_REF]. In particular, the coupling of PET and CT has become a standard in clinical practice due to the improved image quality and robustness of the diagnosis [START_REF] Beyer | Putting 'clear' into nuclear medicine: a decade of PET/CT development[END_REF].
Medical imaging with a Compton Camera
Compton imaging is a well-known technique that exploits the kinematics of Compton scattering to reconstruct the origin of incoming γ-rays [START_REF] Everett | Gamma-radiation imaging system based on the Compton effect[END_REF]. The original direction of the incident particle can be determined from the coordinates (x i , y i , z i ) and the energy deposited (E i ) at each individual interaction i. Figure 1.23 depicts the principle of Compton imaging for a single γ-ray. The basic useful event for a Compton scattering system consists of two interactions: a Compton scattering followed by a photoelectric absorption. In the first interaction, the incoming γ-ray with energy E γ is deflected from its initial trajectory after losing some of its energy to an atomic electron. The energy of the scattered photon, E γ ′ , is given by the Compton scattering formula:
E_{\gamma'} = \frac{E_\gamma}{1 + E_\gamma(1 - \cos\theta)/m_e c^2} \qquad (1.17)
From energy conservation, the energy deposited in the first interaction is E 1 = E γ -E γ ′ . After the second interaction, the scattered γ-ray with energy E γ ′ is completely absorbed. Assuming that both interactions are fully contained inside the detector, the incident energy E γ is equal to the sum of the energies deposited in the two interactions:
E_\gamma = E_1 + E_2.
Applying energy and momentum conservation, the deflection angle for the first interaction can be computed using the Compton scatter formula:
\cos\theta = 1 + m_e c^2\left(\frac{1}{E_1 + E_2} - \frac{1}{E_2}\right) \qquad (1.18)
The incident direction of the incoming γ-ray cannot be directly determined, but it lies on the surface of a cone with opening angle θ and an axis defined by the spatial coordinates (x 1 , y 1 , z 1 ) and (x 2 , y 2 , z 2 ) of the first and second interactions, denoted by the subscripts 1 and 2 respectively. Therefore, the spatial locations of the interaction points, combined with the deflection angle, constrain the possible location of the emission point. Each detected γ-ray yields a different reconstructed Compton cone, with its own scatter angle and axis. The intersection of the cones resulting from multiple incident γ-rays reveals the probable location of the radiation source. For an accurate reconstruction, the correct sequence of the two interactions must be known.

Compton imaging systems, also known as Compton cameras, are widely used to image sources of gamma radiation in a variety of applications such as nuclear medicine imaging, hadrontherapy, γ-ray astronomy and homeland security. Traditional Compton cameras consist of at least two independent position-sensitive detectors working in coincidence. The γ-ray undergoes a Compton scattering in the first detector, known as the scatterer, and is completely absorbed in the second module, or absorber, as illustrated in Figure 1.23.

In nuclear medicine, Compton cameras have the potential to overcome some of the inherent physical limitations of standard SPECT and PET systems in terms of system sensitivity and image resolution. The direct triangulation of the position of the radioactive source makes it possible to image the source distribution without the use of mechanical collimators or reconstruction algorithms. Compton imaging also provides a large field of view, good background rejection, direct 3D imaging and an increase of the detection efficiency over a large energy range, from hundreds of keV to tens of MeV. However, an accurate reconstruction of the Compton cones requires high spatial and energy resolutions. Some technological limitations have prevented Compton imaging from becoming a suitable medical imaging alternative in clinical practice. In recent years, the development of new detectors and optimized technologies has renewed the interest in Compton imaging for medical applications. The use of solid-state detectors seems very promising to fulfill the requirements of Compton imaging. The choice of LXe as detection medium has also shown its potential for γ-ray astronomy with the LXeGRIT experiment [START_REF] Curioni | A Study of the LXeGRIT Detection Efficiency for MeV Gamma-Rays during the 2000 Balloon Flight Campaign[END_REF] and in nuclear medical imaging [START_REF] Oger | Dévelopment expérimental d'un télescope Compton au xénon liquide pour l'imagerie médicale fonctionnelle[END_REF], which is the purpose of this dissertation.
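The cone reconstruction described above can be summarized in a few lines of code. The sketch below, a minimal illustration rather than the analysis code used in this work, computes the cone opening angle from the two deposited energies (Equation 1.18) and the cone axis from the two interaction positions.

```python
# Minimal Compton-cone reconstruction sketch (illustrative, not the XEMIS analysis code).
import math

M_E_C2_KEV = 511.0  # electron rest mass energy in keV

def compton_cone(pos1, pos2, e1_kev, e2_kev):
    """Return (apex, unit_axis, opening_angle_rad) of the Compton cone.

    pos1, e1: position and deposited energy of the first (scattering) interaction.
    pos2, e2: position and deposited energy of the second (absorption) interaction.
    """
    # Opening angle from Eq. 1.18: cos(theta) = 1 + mec2 * (1/(E1+E2) - 1/E2)
    cos_theta = 1.0 + M_E_C2_KEV * (1.0 / (e1_kev + e2_kev) - 1.0 / e2_kev)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden event (wrong ordering or escaped energy)")
    theta = math.acos(cos_theta)

    # Axis: unit vector from the second interaction towards the first, i.e.
    # pointing back along the scattered photon, towards the source side.
    axis = [a - b for a, b in zip(pos1, pos2)]
    norm = math.sqrt(sum(c * c for c in axis))
    return pos1, [c / norm for c in axis], theta

# Toy event: 300 keV deposited in the scatter, 857 keV absorbed (1157 keV total).
apex, axis, theta = compton_cone((0.0, 0.0, 5.0), (1.0, 0.0, 9.0), 300.0, 857.0)
print(f"opening angle = {math.degrees(theta):.1f} deg, axis = {axis}")
```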
The 3γ imaging technique
As mentioned in the introduction, one of the increasing concerns in nuclear medicine is the radiation exposure of patients during a medical exam. Typical 18 F-FDG injected activities in human PET studies are in the range of ∼370 to 740 MBq [START_REF] Shankar | Consensus Recommendations for the Use of 18F-FDG PET as an Indicator of Therapeutic Response in Patients in National Cancer Institute Trials[END_REF], while in small-animal PET, activities of the order of 4 -40 MBq are typically applied [START_REF] Simon | Essentials of In Vivo Biomedical Imaging[END_REF]. These injected activities lead to effective doses of the order of ∼10 mSv. The administered doses and the examination times, which are of the order of 10 -45 min depending on the scanner, disease, patient and image reconstruction method, are prohibitive for monitoring disease progression and assessing the response of a disease to treatment. A new medical imaging technique called 3γ imaging was proposed to improve not only the spatial resolution of standard nuclear imaging techniques, but also to reduce the administered dose while maintaining image quality with reasonable scanning times. This technique opens a new window on the future of medical imaging, in which low injected doses are required.
The 3γ imaging technique is based on the triple coincidence between the two 511 keV γ-rays generated after the annihilation of a positron with an electron and an additional γ-ray emitted by a specific 3γ-emitting radioisotope. The notion of triple-coincidence Compton imaging was introduced for the first time in 1987 by Liang et al. [START_REF] Liang | Triple coincidence tomographic imaging without image processing[END_REF]. However, due to the limited technology available at that time, no further research was carried out on this technique. Later, in 2001, Kurfess and Phlips [START_REF] Kurfess | Coincident Compton nuclear medical imager[END_REF] resumed the research work based on solid-state detectors. Independently, in 2004, Thers et al. [START_REF] Grignon | Nuclear medical imaging using β+ γ coincidences from Sc-44 radio-nuclide with liquid xenon as detector medium[END_REF] proposed the idea of 3γ imaging and the potential of using a large monolithic LXe Compton camera as γ-detector. To consolidate and demonstrate the advantages of the 3γ imaging technique, a first phase of research and development (R&D) has been carried out within a research project called XEMIS (XEnon Medical Imaging System). This initial phase involves both fundamental research and the implementation of novel technologies. Currently, 3γ imaging with LXe is in the early stages of the pre-clinical phase with the development and characterization of a small-animal LXe Compton imaging system called XEMIS2. Furthermore, the 3γ imaging modality has recently been adopted by other authors such as [START_REF] Lang | Submillimeter nuclear medical imaging with a Compton Camera using triple coincidences of collinear β + annihilation photons and γ-rays[END_REF], who propose the use of a standard Compton camera based on solid-state detectors and LaBr 3 scintillator crystals as detection system.
In this section, we describe the basic principle of the 3γ imaging technique. The physical requirements of a LXe Compton camera to achieve an accurate reconstruction of the position of the radioactive source are also discussed. Finally, we present the features required by a 3γ-emitting radionuclide as a possible candidate for this medical imaging modality.
Principle of the 3γ imaging technique
3γ imaging is a new functional medical imaging technique based on the detection in coincidence of three γ-rays. This imaging modality aims to obtain a precise 3D location of a radioactive source with both good energy and spatial resolutions, together with a significant reduction of the dose administered to the patient. The principle of the 3γ imaging technique consists in measuring the position of a radioactive source in 3D using the simultaneous detection of three γ-rays. Consequently, this technique requires the use of a specific radionuclide that emits a γ-ray and a positron in quasi-coincidence [START_REF] Grignon | Étude et développement d'un télescope Compton au xénon liquide dédié à l'imagerie médicale fonctionnelle[END_REF]. Assuming that the positron and the γ-ray are emitted simultaneously, the position of the radionuclide is obtained from the intersection between the LOR, given by the two back-to-back 511 keV γ-rays resulting from the positron annihilation, and the Compton cone defined by the interaction of the third γ-ray with a Compton telescope. The Compton cone surface contains the incident direction of the third γ-ray and can be directly inferred from the Compton kinematics, as explained in the previous section. The aperture angle of the cone, θ, is given by the Compton scattering formula (see Equation 1.18), whereas the axis of the cone is determined by the first two interaction points of the incoming photon inside the detector. Figure 1.24 illustrates the principle of the 3γ imaging technique. Thanks to the additional information brought by the third photon, and in contrast to the previously described imaging techniques SPECT and PET, 3γ imaging allows real-time detection and the direct 3D reconstruction of the emitter position at a low counting rate. The potential of this new imaging modality lies directly in the reduction of the number of radioactive decays needed to obtain an image. The time required to perform an exam and the radioactive dose received by the patient can therefore be significantly reduced.
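The geometric core of the technique, intersecting the LOR with the Compton cone, reduces to solving a quadratic equation for the position along the LOR. The following sketch is a simplified illustration of that geometry under idealized assumptions (perfect energies and positions, no positron range); it is not the reconstruction used in XEMIS.

```python
# Idealized LOR/Compton-cone intersection (illustrative sketch only).
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lor_cone_intersection(lor_a, lor_b, apex, axis, theta):
    """Return the points of the segment lor_a-lor_b lying on the Compton cone.

    The cone has its apex at 'apex', unit axis 'axis' and half-angle 'theta'.
    A point P is on the cone if (P - apex).axis = |P - apex| * cos(theta).
    Parametrize P(t) = lor_a + t * (lor_b - lor_a) and solve the quadratic
    obtained by squaring that condition, keeping only t in [0, 1].
    """
    d = [b - a for a, b in zip(lor_a, lor_b)]
    w = [a - c for a, c in zip(lor_a, apex)]
    cos2 = math.cos(theta) ** 2
    # ((w + t d).axis)^2 = cos^2(theta) * |w + t d|^2  ->  A t^2 + B t + C = 0
    A = _dot(d, axis) ** 2 - cos2 * _dot(d, d)
    B = 2.0 * (_dot(d, axis) * _dot(w, axis) - cos2 * _dot(w, d))
    C = _dot(w, axis) ** 2 - cos2 * _dot(w, w)
    disc = B * B - 4.0 * A * C
    if disc < 0.0 or abs(A) < 1e-12:
        return []
    points = []
    for s in (+1.0, -1.0):
        t = (-B + s * math.sqrt(disc)) / (2.0 * A)
        if 0.0 <= t <= 1.0:
            p = [a + t * di for a, di in zip(lor_a, d)]
            # keep only the cone nappe on the source side (positive projection on the axis)
            if _dot([x - c for x, c in zip(p, apex)], axis) >= 0.0:
                points.append(p)
    return points  # two points may remain; the true emission point is one of them

# Toy example: LOR along the x axis, cone built by hand (apex above the LOR).
print(lor_cone_intersection((-10, 0, 0), (10, 0, 0),
                            apex=(0, 5, 0), axis=(0, -1, 0), theta=math.pi / 4))
```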
Modern small-animal PET imaging systems can reach a spatial resolution of the order of 1 -2 mm (FWHM), with sensitivities of 1% to 15% for a point source at the center of the FOV [START_REF] Hutchins | Small Animal PET Imaging[END_REF]. The combination of the positron annihilation with a prompt γ-ray makes it possible to reach submillimeter spatial resolution in 3D with high sensitivity. Previous results showed that a uniform sensitivity of around 5% to 7% over the full FOV is expected with a monolithic LXe Compton camera designed for small-animal imaging [START_REF] Hadi | Simulation de l'imagerie à 3γ avec un télescope Compton au xénon liquide[END_REF].
Moreover, 3γ imaging shows the potential to overcome certain limitations of the standard functional imaging techniques. The additional information supplied by the third photon allows true events to be efficiently separated from background. Therefore, the spatial uncertainty introduced by the positron range and by Compton scattering of the γ-rays within the object before reaching the detector can be strongly reduced. A detailed simulation of the detector has shown the possibility of obtaining good-quality images solely by triangulating the position of the source from the interactions of the three γ-rays in a simulated rat phantom, without any additional reconstruction algorithm. First raw images show very good results with a sensitivity of 7%. Furthermore, to increase the quality of the reconstructed image, a deconvolution technique using the ML-EM iterative algorithm has also been implemented, showing that good-quality images of the whole animal can be obtained with only 20 kBq of injected activity, 100 times less than in a conventional small-animal functional imaging exam.
Requirements of the 3γ imaging technique
To exploit the advantages of the 3γ imaging technique, the Compton telescope requires a high Compton scattering efficiency and high spatial and energy resolutions for each γ-ray interaction. As discussed in the previous section, a Compton cone is reconstructed for each coincidence event. The precision of the reconstruction of the intersection point between the cone and the LOR is what determines the image spatial resolution of the detector. The errors in the reconstruction of the cone involve the uncertainty on the cone apex and axis (σ 2 θp ), due to the position resolution, and on the opening angle (σ 2 θ E ), due to the energy resolution, which can be quantified as an angular resolution on the emission point along the LOR (σ θ ). In fact, the angular resolution can be considered one of the most important parameters characterizing the imaging performance of a Compton camera. Assuming that each contribution is Gaussian-distributed and uncorrelated, the total angular resolution can be expressed as:
\sigma_\theta^2 = \sigma_{\theta_p}^2 + \sigma_{\theta_E}^2 + \sigma_{\theta_{DB}}^2 \qquad (1.19)
where σ 2 θ DB represents the intrinsic limitation to the angular resolution due to the Doppler broadening effect (see Chapter 2, Section 2.3.2). There are therefore three sources of error that limit the angular resolution of a Compton camera: the position resolution, which limits the precision in measuring the direction of the scattered γ-ray; the energy resolution, which affects the determination of the scatter angle; and the inherent limit set by Doppler broadening.
Energy resolution
The aperture angle of the reconstructed cone is computed using Equation 1.18, which can be expressed in terms of the energy transferred to the ejected electron, E e , as:
\cos\theta = 1 + m_e c^2\left(\frac{1}{E_\gamma} - \frac{1}{E_{\gamma'}}\right) = 1 + m_e c^2\left(\frac{1}{E_\gamma} - \frac{1}{E_\gamma - E_e}\right) \qquad (1.20)
where E γ is the energy of the incoming photon. Therefore, any error in the determination of the deposited energy affects the value of the Compton angle. The contribution of the energy resolution can be estimated by applying error propagation to Equation 1.20 and assuming that E γ is known:
\sigma_{\theta_E}^2 = \left(\frac{\partial\theta}{\partial E_e}\,\sigma_{E_e}\right)^2 \qquad (1.21)
where σ Ee is the energy resolution of the detector. The energy contribution to the angle θ is therefore given by:
\frac{\partial\theta}{\partial E_e} = \frac{1}{\sin\theta}\,\frac{m_e c^2}{(E_\gamma - E_e)^2} \qquad (1.22)
The energy resolution in a LXe Compton telescope depends on three main factors:
\sigma_{E_e}^2 = \sigma_{LXe}^2 + \sigma_{el}^2 + \sigma_{others}^2 \qquad (1.23)
The first contribution, σ LXe , is the intrinsic energy resolution of LXe and comes from statistical fluctuations in the formation of charge carriers (see Chapter 2, Section 2.2). The number of electron-ion pairs produced by an ionizing particle in LXe is proportional to the energy E 0 deposited in the interaction: N i = E 0 /W. However, the number N i of generated charge carriers fluctuates statistically, which sets a physical limit to the energy resolution of the detector. To a first approximation, this number could be expected to fluctuate according to Poisson statistics. However, due to the correlation between the ionization events along the track of the recoiling electron, they cannot be treated independently. The deviation from the Poisson limit is quantified by the Fano factor F, which depends on the target material [START_REF] Fano | Ionization Yield of Radiations. 2. The Fluctuations of the Number of Ions[END_REF]. The energy resolution of a detector, expressed as a full width at half maximum (FWHM), can be inferred from the Fano factor through the following formula:
\sigma_{E_e} = 2.35\,\frac{\sqrt{F N_i}}{N_i} = 2.35\,\sqrt{\frac{F W}{E_0}} \qquad (1.24)
LXe has a low estimated Fano factor of 0.041 [START_REF] Doke | Fundamental Properties of Liquid Argon, Krypton and Xenon as Radiation Detector Media[END_REF], which predicts a theoretical energy resolution of about 2 keV (FWHM) at 1 MeV, of the order of that of germanium detectors. However, it has been demonstrated experimentally that this value underestimates the achievable resolution, since several contributions are missing. In LXe, the number of electrons collected in the detector is reduced by the recombination between electrons and positive ions. The recombination rate depends on the electric field applied along the drift length of the detector, as discussed in Section 2.2, and it introduces an additional source of uncertainty in the energy measurement. The best published result in LXe, combining scintillation light and ionization electrons, is 1 % (σ/E) for 1.33 MeV γ-rays, measured by Stephenson et al. [START_REF] Stephenson | MiX: A Position Sensitive Dual-Phase Liquid Xenon Detector[END_REF] at an electric field of 0.2 kV/cm. With our LXe TPC, an energy resolution of 5 % (σ/E) has been obtained for 511 keV 22 Na γ-rays at an electric field of 1 kV/cm [START_REF] Manzano | XEMIS: A liquid xenon detector for medical imaging[END_REF][START_REF] Oger | Dévelopment expérimental d'un télescope Compton au xénon liquide pour l'imagerie médicale fonctionnelle[END_REF].
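Equation 1.24 can be checked numerically. Taking the commonly quoted W-value of 15.6 eV per electron-ion pair in LXe (an assumed value, not given in this section) together with F = 0.041, the Fano-limited FWHM at 1 MeV indeed comes out close to 2 keV.

```python
# Fano-limited energy resolution in LXe (Eq. 1.24).
# W = 15.6 eV is the commonly quoted W-value for LXe (assumed here, not given in this section).
import math

W_EV = 15.6      # average energy per electron-ion pair in LXe (assumption)
FANO = 0.041     # Fano factor of LXe quoted in the text

def fwhm_kev(e0_kev: float) -> float:
    """Fano-limited FWHM (in keV) for a deposit of e0_kev."""
    n_i = e0_kev * 1e3 / W_EV            # number of electron-ion pairs
    return 2.35 * math.sqrt(FANO / n_i) * e0_kev

print(f"FWHM at 1 MeV: {fwhm_kev(1000.0):.2f} keV")  # ~1.9 keV, consistent with the ~2 keV quoted
```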
The second contribution to the energy resolution, σ el , comes from the electronic noise introduced by the readout electronics (see Chapter 4) and its value does not depend on the energy. The electronic noise must be minimized to increase the SNR and to allow the detection of small energy deposits. An electronic noise of the order of 100 e - has already been achieved with our experimental set-up [START_REF] Oger | Dévelopment expérimental d'un télescope Compton au xénon liquide pour l'imagerie médicale fonctionnelle[END_REF]. Such a low electronic noise is necessary to recover all of the energy deposited in an interaction, which may be shared among several neighboring pixels. An electronic noise of less than 100 e - is negligible with respect to the contribution of the intrinsic energy resolution, which results in a measured energy resolution of 5 % at 511 keV [START_REF] Manzano | XEMIS: A liquid xenon detector for medical imaging[END_REF]. Finally, the last term, σ others , accounts for other contributions to the energy resolution, such as the inefficiency of the Frisch grid or pulse rise-time variations (see Chapter 5).
Position resolution
The axis of the Compton cone is determined by the line joining the first and second interaction points of the incoming γ-ray inside the detector. The position of the source, r, is related to the two interaction positions via the following geometric expression:
\cos\theta = \frac{(\vec{r}_2 - \vec{r}_1)\cdot(\vec{r}_1 - \vec{r})}{\|\vec{r}_2 - \vec{r}_1\|\,\|\vec{r}_1 - \vec{r}\|} \qquad (1.25)
The contribution of the position resolution to the angular uncertainty is thus inversely proportional to the separation distance between the two interactions. The impact of the position measurements on the opening angle can be approximated by:
\sigma_p^2 = \sigma_{(x_1,y_1)}^2 + \sigma_{(x_2,y_2)}^2 \qquad (1.26)
where σ (x 1 ,y 1 ) and σ (x 2 ,y 2 ) represent the spatial resolutions of the first and second interaction points, respectively. The uncertainty on the position of each interaction depends on the geometry of the collecting electrode. In the case of a pixelated anode, the lateral spatial resolution depends on the pixel size as well as on the size of the electron cloud. If the charge cloud is completely detected by a single pixel, the position of the interaction is assigned to the center of the pixel. On the other hand, if the charge cloud is spread over several pixels due to the charge sharing effect, the interaction point is calculated as the centroid of the charge distribution. Either way, the pixel size introduces an error in the determination of the interaction point. Considering a spatial resolution that is uniform with energy and drift distance, of about σ s ≃ 500 µm (σ x ≈ σ y ≈ σ z ), the contribution of the position resolution can be approximated as:
\sigma_p = \frac{\sqrt{2}\,\sigma_s}{d(E_{\gamma'})} \qquad (1.27)
where d(E γ ′ ) is the 3D separation between the two interaction vertices, which depends on the energy deposited at the first interaction point and, in turn, on the scatter angle θ. The distance d increases with the energy of the scattered photon. To estimate the contribution of the position resolution, the value of d for an incident γ-ray of 1.157 MeV should take into account the minimum distance that can be resolved, set by both the pixel size and the electronic shaping time needed to avoid pile-up. An energy of 1.157 MeV corresponds to the third γ-ray emitted by a 44 Sc source (see next section). The anode is divided into 64 pixels of 3.125×3.125 mm 2 each. Therefore, two interactions can only be separated if the distance between them is at least 3.125 mm along the X and Y dimensions. Moreover, for the separation along the drift direction, the collected signals from each interaction point must be separated by a minimum time that covers the rise and fall times of the integrated pulses, which is of the order of 7 to 8 µs. A minimum distance of 1 cm was therefore considered. The separation between two consecutive interactions also depends on the energy deposited in the first interaction: the higher the deposited energy, the smaller the remaining energy of the scattered γ-ray, and thus the smaller the distance between interactions. This average distance is determined by the mean free path of the scattered γ-ray.
Doppler broadening
Doppler broadening constitutes an irreducible limitation to the angular resolution of a Compton telescope. The effect of Doppler broadening is larger for target materials of higher atomic number, such as Ge or Xe, than for Si or liquid scintillators [START_REF] Zoglauer | Doppler Broadening as a Lower Limit to the Angular Resolution of Next Generation Compton Telescopes[END_REF]. In the case of LXe and for γ-rays with incident energies of 1.157 MeV, its contribution is very small and can be neglected [START_REF] Curioni | Laboratory and Balloon Flight Performance of the Liquid Xenon Gamma Ray Imaging Telescope (LXeGRIT)[END_REF]. For a more detailed description of the Doppler broadening effect see Chapter 2, Section 2.3.2.
Angular resolution
Taking into account the contributions of the energy and spatial resolutions, we can estimate the angular resolution of a LXe Compton camera. Figure 1.25 shows the angular resolution as a function of the scatter angle, determined from Equation 1.19 with the Doppler broadening term set to zero. The energy (red line) and position (green line) contributions as a function of the scatter angle were obtained using Equations 1.21 and 1.27, respectively, for an incoming energy of 1.157 MeV. As can be seen, the angular resolution is dominated by the contribution of the energy resolution. For scatter angles up to ∼ 60 •, the angular resolution is less than 3 • and improves for more forward scatterings. The energy contribution has a minimum at around ∼ 7 •. For smaller scatter angles, the energy contribution worsens and diverges as θ approaches zero. Equally, for scatter angles larger than 60 •, the energy contribution increases. This can be better understood from Figure 1.26, which shows the energy of the recoil electron as a function of the scatter angle, obtained from Equation 1.3.
The energy transferred to the electron varies from zero, for forward scattering (where the electron recoils at right angles to the incident photon), to a maximum value obtained for a photon scattering angle of 180 • (see Figure 1.26). In a backscattering collision, the electron moves forward in the direction of the incident photon. At high electron energies the term (E γ -E e ) 2 in Equation 1.22 dominates and thus the angular resolution degrades at large scatter angles. This can also be deduced from the slope of the curve in Figure 1.26: a small error in the determination of the electron energy implies a significant error in the scatter angle. Similar behaviour is observed at small Compton scattering angles. For small electron energies, i.e. small scatter angles, the 1/sin θ term of Equation 1.22 dominates, and it tends to infinity as the angle decreases.
The dependence of the angular resolution on the interaction point separation is shown as the green line in Figure 1.25. As can be seen, the position contribution is almost constant with the scatter angle. This implies that, for the typical separation between interactions of a few cm used to estimate this contribution, a position resolution at the millimeter level is required for good imaging performance.
From Figure 1.25 we can deduce that not all scatter angles are suitable for the Compton cone reconstruction. Fair values of the angular resolution are obtained for scattering angles between ∼ 10 • and 60 •, which translate into a deposited energy between 40 keV and 610 keV (see Figure 1.26). This implies that an additional cut on the cluster energy should be included in the analysis.
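The two dominant contributions can be reproduced with a short calculation based on Equations 1.21, 1.22 and 1.27. The sketch below is only a simplified illustration: it assumes a constant energy resolution per cluster and a fixed interaction separation, whereas the curve in Figure 1.25 uses the full parametrization measured for XEMIS.

```python
# Simplified angular-resolution budget (Eqs. 1.21, 1.22 and 1.27); illustrative only.
# The constant 10 keV energy resolution per cluster and the fixed 3 cm separation
# between interactions are assumptions, not the parametrization used for Figure 1.25.
import math

MEC2 = 511.0          # electron rest mass energy, keV
E_GAMMA = 1157.0      # keV, energy of the third gamma-ray of 44Sc
SIGMA_S_CM = 0.05     # 500 um spatial resolution
D_CM = 3.0            # assumed separation between the two interactions
SIGMA_E_KEV = 10.0    # assumed constant energy resolution (sigma) per cluster

def electron_energy(theta_deg):
    """Recoil electron energy for a photon scattered by theta."""
    a = E_GAMMA / MEC2
    x = a * (1.0 - math.cos(math.radians(theta_deg)))
    return E_GAMMA * x / (1.0 + x)

def sigma_theta_energy(theta_deg):
    """Energy contribution to the angular resolution, in degrees (Eqs. 1.21-1.22)."""
    t = math.radians(theta_deg)
    e_e = electron_energy(theta_deg)
    dtheta_dEe = (1.0 / math.sin(t)) * MEC2 / (E_GAMMA - e_e) ** 2  # rad per keV
    return math.degrees(dtheta_dEe * SIGMA_E_KEV)

def sigma_theta_position():
    """Position contribution to the angular resolution, in degrees (Eq. 1.27)."""
    return math.degrees(math.sqrt(2.0) * SIGMA_S_CM / D_CM)

for theta in (5, 10, 30, 60, 90):
    s_e, s_p = sigma_theta_energy(theta), sigma_theta_position()
    print(f"theta={theta:3d} deg: sigma_E={s_e:4.2f} deg, sigma_p={s_p:4.2f} deg, "
          f"total={math.hypot(s_e, s_p):4.2f} deg")
```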
Figure 1.25 - Expected angular resolution of XEMIS as a function of the scatter angle for an electric field of 2 kV/cm (black line). The electronic noise was fixed to 150 e-/cluster and the intrinsic energy resolution σ LXe was parametrized using the results of [START_REF] Oger | Dévelopment expérimental d'un télescope Compton au xénon liquide pour l'imagerie médicale fonctionnelle[END_REF].

The 3γ imaging technique requires a specific radioisotope that emits a positron and a γ-ray in quasi-coincidence. Among the possible candidates, the radionuclide must present a series of specific characteristics. After the emission of the positron, the daughter nuclide should remain in an excited state and then decay to its ground state with the emission of a γ-ray. The decay time to the ground state should be fast enough to consider both emissions, positron and γ-ray, as practically simultaneous. This feature is important to exploit the temporal coincidence of the three γ-rays. The branching ratios of the positron and gamma emissions should be close to 100% to increase the detection efficiency. Moreover, it is preferable that the de-excitation of the daughter nuclide is followed by the emission of a single γ-ray, to reduce the dose administered to the patient, the background noise and the counting rate of the detector. The energy of the emitted photon should also be higher than that of the two 511 keV γ-rays produced after the annihilation of the positron, to favor Compton scattering and avoid mispositioning the emission source. The limitation due to attenuation within the patient before reaching the detector is also reduced as the energy of the emitted photon increases. Finally, although the information of the third γ-ray, which comes directly from the source, helps to reduce the spatial uncertainty produced by the positron range, the energy of the emitted positron is also relevant for 3γ imaging: higher positron energies imply larger average positron ranges.
There is a large number of radioisotopes, such as 94m Tc, 76 Br, 124 I, 86 Y, 152 Tb, 52 Mn, 82 Rb, 22 Na and 44g Sc, that are possible candidates for the 3γ imaging technique. Table 1.7 lists the properties of some of these radionuclides. Among them, 44g Sc appears to be a particularly attractive choice. 44g Sc has a half-life of ∼4 hours, which is well suited to clinical practice. In contrast, 14 O and 82 Rb have half-lives that are too short, whereas 22 Na has an extremely long half-life of about 2.6 years. 94m Tc also has a relatively adequate half-life of ∼1 hour. However, its probability of positron decay is only 67.6 %, with the emission of one or more γ-rays with energies between 871.05 keV and 2740.1 keV. 44g Sc, on the other hand, decays by positron emission 94.27 % of the time (5.73 % by electron capture), with a γ-ray emission probability of 99.9 % [START_REF] Center | [END_REF]. Figure 1.27 shows the decay scheme of 44g Sc. 44g Sc decays to excited 44 Ca by β + emission. The positron is emitted with a maximum energy of 1.474 MeV. The excited 44 Ca then decays in about 2.61 ps to its ground state by emitting a single γ-ray of 1.157 MeV. This path is followed 99.9% of the time. Due to the fast emission of the third photon, we can assume that the three photons are emitted simultaneously.
44g Sc can be directly produced from a 44 Ti/ 44 Sc generator (T 1/2 = 60.4 years) [START_REF] Filosofov | A 44 T i/ 44 Sc radionuclide generator for potential application of 44 Sc-based PET-radiopharmaceuticals[END_REF]. However, the production of 44 Ti is nowadays too limited for clinical use. An alternative production route is based on the irradiation of natural calcium or enriched calcium-44 targets [START_REF] Severin | Cyclotron Produced 44g Sc from Natural Calcium[END_REF][START_REF] Duchemin | Production of scandium-44m and scandium-44g with deuterons on calcium-44: cross section measurements and production yield calculations[END_REF]. Currently, 44g Sc is produced at the ARRONAX cyclotron (Nantes, France) via the irradiation of a 44 CaCO 3 target with 16 MeV deuterons, through the production of the 44 Sc/ 44g Sc in-vivo generator [START_REF] Duchemin | Etude de voies alternatives pour la production de radionucléides innovants pour les applications médicales[END_REF]. This projectile and energy make it possible to avoid the production of 43 Sc, a radioisotope with a half-life very similar to that of 44g Sc.
The disadvantage of this method is, however, the co-production of longer-lived radioactive impurities such as 44m Sc (T 1/2 = 58.6 hours). 44m Sc decays mainly by internal transition to its ground state (98.80%). Due to its long half-life, the use of a 44m Sc/ 44 Sc generator can be an interesting option for tracking long-lived radiopharmaceuticals for targeted therapy with monoclonal antibodies (mAbs). However, for the 3γ imaging technique, only 44g Sc is of interest. It therefore requires a production route that limits the production of both 44m Sc and 43 Sc. Indeed, these isotopes would result in an undesirable source of background in the detector and an unnecessary increase of the radiation exposure of the patient. The concentration of impurities depends on the energy of the particle beam. Duchemin et al. [START_REF] Duchemin | Production of scandium-44m and scandium-44g with deuterons on calcium-44: cross section measurements and production yield calculations[END_REF] showed that the production of 43 Sc and 44m Sc can be reduced by irradiating a 44 Ca target with 15 MeV protons. The use of 44g Sc labeled DOTA-conjugated peptides has already been tested in pre-clinical [START_REF] Miederer | Small animal PET-imaging with Scandium-44-DOTATOC[END_REF] and clinical trials [START_REF] Rösch | Generator-based PET radiopharmaceuticals for molecular imaging of tumors: on the way to THERANOSTICS[END_REF], showing promising results. Scandium is a particularly promising element for nuclear medicine, since several of its radioisotopes are suited for therapy and/or diagnosis. The physical properties of the scandium isotopes suitable for nuclear medicine are listed in Table 1.8.
Conclusions Chapter 1
In this chapter, the main characteristics of liquefied noble gases, and in particular liquid xenon, as detection media have been discussed. One of the most basic advantages of liquid xenon is its high density and atomic number, which result in a high stopping power for ionizing radiation. Moreover, the simultaneous emission of a scintillation and an ionization signal after the interaction of an ionizing particle is a particularly interesting property of this kind of medium. Among the liquid noble gases, liquid xenon has the smallest W-values and thus the highest ionization and scintillation yields. In addition to the large light yield, the fast response of liquid xenon to radiation, of the order of a few ns, makes it suitable for timing applications. Liquid xenon has proven to be an excellent candidate as a γ-ray detector in the energy range from several tens of keV to tens of MeV due to these properties. For this reason, its use as a radiation detection medium has increased in recent years in numerous applications in particle physics, astrophysics and medical imaging. A short overview of some of the current liquid xenon-based detectors used for the detection of rare events, such as neutrinoless double beta decay or the direct detection of dark matter, for γ-ray astronomy and for functional medical imaging has also been presented in this chapter.
In fact, liquid xenon has been considered a promising detection medium for medical applications since the 1970s. Early approaches focused on the use of liquid xenon as a detection medium in SPECT and PET. These two imaging modalities, based on standard scintillation crystals, are well established in clinical practice. A brief description of these two functional imaging techniques is presented in Section 1.3.1.
The potential of LXe has led to the development of a new concept of nuclear medical imaging based on the precise 3D location of a radioactive source by the simultaneous detection of three γ-rays. To take advantage of this imaging technique, called 3γ imaging, a detection device based on a single-phase liquid xenon Compton telescope and a specific (β + , γ) emitting radionuclide, 44 Sc, are required.
The XEMIS project (XEnon Medical Imaging System), developed at the Subatech laboratory, has started a research program on the feasibility of the 3γ imaging concept for future pre-clinical and clinical applications, and on the development of new technologies for liquid xenon detection and cryogenic systems. The biggest advantages of this technique are the improvement in detection sensitivity and position resolution and the reduction of the activity injected to the patient. The basic principle and the main requirements of the 3γ imaging technique are discussed in Section 1.3.6.
A first prototype of a liquid xenon time projection chamber, called XEMIS1, has been successfully developed. A second phase dedicated to small-animal imaging has already begun with the construction and calibration of a larger-scale liquid xenon Compton camera called XEMIS2. The characteristics of both XEMIS1 and XEMIS2 are presented in Chapter 3. In the next chapter, we introduce the basic principle of a liquid xenon time projection chamber and the main transport properties of charge carriers in liquid xenon.

Due to the signal generation properties of LXe, which can produce both ionization and scintillation signals simultaneously after the passage of an ionizing particle through the medium, LXe detectors can be divided into three categories according to the kind of signal they detect: the ionization signal, the scintillation signal, or both [START_REF] Aprile | Liquid Xenon Detectors for Particle Physics and Astrophysics[END_REF]. Ionization detectors have been commonly used since the first half of the 20th century to detect and measure only the ionization signal [START_REF] Sauli | Gaseous Radiation Detectors. Fundamentals and Applications[END_REF]. However, the difficulties associated with the detection of the scintillation light, mainly due to its emission wavelength (178 nm) and the very low working temperature of xenon-based detectors, made the measurement of the light signal much more complicated. During the second half of the 20th century, the development of new photodetectors with UV sensitivity and capable of withstanding the temperature of LXe launched the development of a new generation of LXe detectors. LXe Time Projection Chambers (TPC) belong to the category in which both ionization and scintillation signals are detected simultaneously. A TPC is a sophisticated ionization detector that provides accurate information on individual events. TPCs are extensively used in modern nuclear and particle physics experiments due to their ability to provide spatial information on the particle track with high resolution, energy deposit information, as well as particle identification.
This chapter summarizes the fundamentals of the LXe TPC detection technology. This includes a discussion of the different mechanisms that affect the production of the ionization signal. The diffusion of the electron cloud during the drift process and the electron-ion recombination are two factors that have a direct impact on the performance of a detector with the characteristics of XEMIS. Moreover, the range of the primary electrons, X-ray emission and the Doppler broadening effect are other processes that affect the measurement of the ionization signal in a LXe TPC. The theoretical explanation of some of these processes is supported by experimental results reported by other authors, as well as by some of our own work. The last part of this introductory chapter is devoted to technical considerations, such as the presence of impurities in the LXe, that can degrade the measurement of the ionization signal.
Liquid xenon Time Projection Chamber
The 3γ imaging technique requires a detector that provides high sensitivity to the emitted radiation, precise interaction localization and high energy resolution in order to obtain an image of the source with an acceptable SNR. As discussed in Section 1, a LXe Compton telescope is a perfect candidate to exploit the benefits of both LXe and the 3γ imaging technique. However, Compton sequence reconstruction requires precise information on the 3D position and deposited energy of each individual interaction in the detector. To provide this information, the XEMIS detector is based on a Time Projection Chamber (TPC).
The concept of the TPC was first introduced by D. Nygren at Berkeley in 1974 for the study of electron-positron collisions at the PEP colliding-beam ring at the SLAC National Accelerator Laboratory [START_REF] Nygren | The time-projection chamber: A new 4π detector for charged particles[END_REF]. The original design of Nygren's TPC is presented in Figure 2.1. The detector was based on a cylindrical chamber, 2 m long and 2 m in diameter, filled with methane gas. In 1977, the concept of a TPC using a liquid instead of a gas was first proposed by C. Rubbia for the ICARUS project [START_REF] Rubbia | The Liquid-Argon Time Projection Chamber: a new concept for neutrino detectors[END_REF]. The detector was a large-volume TPC of 600 tons filled with LAr, dedicated to the study of the properties of neutrino interactions. It was not until 1989 that E. Aprile proposed for the first time the use of LXe as detection medium in a TPC for γ-ray spectroscopy and Compton imaging (see Section 1.2) [START_REF] Aprile | Development of liquid xenon detectors for gamma ray astronomy[END_REF].
In this section we introduce the basic principle of a LXe TPC. A more detailed description of the characteristics of the TPC used in XEMIS is presented in Chapter 3.
Basic principle of a TPC
A TPC is a homogeneous large-volume tracking detector that provides three-dimensional event reconstruction and energy information. Figure 2.2 illustrates the principle of a LXe TPC. The basic design of a TPC consists of two parallel plane electrodes (anode and cathode) separated by a certain distance d, with the volume in between filled with a gas or a liquid.
When an ionizing particle such as a γ-ray crosses the sensitive volume of the TPC, it ionizes the medium, producing a track of electron-ion pairs. A uniform electric field of the order of several kV/cm applied between the two opposite electrodes limits the recombination between electrons and ions. The electric field forces the electrons to drift towards the anode, where they are collected by a pixelated array. The segmented anode provides two-dimensional information on the interaction inside the fiducial volume (X-Y coordinates). The third coordinate Z, defined along the TPC axis, represents the distance from the point of interaction to the collecting electrode and is given by the drift time of the ionization electrons, as expressed in Equation 2.1:
Z = \int_{t_0}^{t} v_{drift}(t', E)\, dt' \qquad (2.1)
where v drift is the electron drift velocity, which depends on the applied electric field E, t 0 is the time at which the interaction occurs (the trigger time), and t is the electron collection time.
For a uniform electric field, Equation 2.1 simplifies to Z = v drift (t - t 0 ). An advantage of using LXe as detection medium is the simultaneous production of a scintillation signal that gives the detector self-triggering capabilities. Due to the fast scintillation light emission in LXe, of the order of a few ns, t 0 can be obtained from the detection of the scintillation signal by, for example, a VUV-sensitive PMT. Signal amplification is commonly used in TPCs due to the reduced amount of charge carriers produced per interaction. Charge amplification was traditionally accomplished by multi-wire proportional chambers (MWPCs). However, in the last decades, the development of micropattern detectors such as the Gas Electron Multiplier (GEM) [START_REF] Sauli | GEM: A new concept for electron amplification in gas detectors[END_REF] and the MICROMEsh GAseous Structure (MICROMEGAS) [START_REF] Giomataris | MICROMEGAS : A high-granularity position-sensitive gaseous detector for high particle-flux environment[END_REF] has made them a successful alternative in TPCs. These detectors include a micro-mesh located several hundreds of µm from the collecting electrode. The small gap between the mesh and the collecting electrode, also called the amplification zone, is intended to amplify the ionization signal by applying a large electric field.
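The reconstruction of the Z coordinate from Equation 2.1 is straightforward when the field, and hence the drift velocity, is uniform. The short sketch below illustrates it; the drift velocity is an assumed, typical order of magnitude for LXe at the fields mentioned here, not a measured XEMIS parameter.

```python
# Z coordinate from the electron drift time (uniform-field case of Eq. 2.1).
# The drift velocity below (~2 mm/us) is an assumed typical value for LXe
# at a few kV/cm, not a calibration constant of XEMIS.

V_DRIFT_MM_PER_US = 2.0

def z_from_drift_time(t_trigger_us: float, t_collection_us: float) -> float:
    """Distance (mm) from the interaction point to the anode: Z = v_drift * (t - t0)."""
    return V_DRIFT_MM_PER_US * (t_collection_us - t_trigger_us)

# A charge signal collected 30 us after the scintillation trigger
print(f"Z = {z_from_drift_time(0.0, 30.0):.1f} mm from the anode")  # -> 60 mm
```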
In LXe, no charge multiplication is necessary, so the collected charge corresponds directly to the amount of charge produced by ionization, which in turn is proportional to the energy lost by the interacting particle in the medium. Therefore, the information on the deposited energy can be directly extracted from the amplitude of the ionization signal. Instead of using one of these amplification systems, we use a Frisch grid. The grid is placed between the cathode and the anode to shield the anode from the motion of positive ions inside the sensitive volume (see Figure 2.2). Electrons start inducing a signal on the anode only after passing through the grid, so the signals are independent of the position of the interaction with respect to the anode. To measure the amplitude and drift time of the collected signals, even in high-multiplicity events, fast read-out electronics are required.
Waveform formation
The information on the 3D position and deposited energy of each individual interaction is directly obtained from the waveforms generated by the read-out electronics. A brief summary of the signal formation process, from the interaction of radiation in the detector to the charge collection by the electronics, is presented in this section.
As discussed in Chapter 1, after the passage of an ionizing particle through the detector, the deposited energy E 0 is converted into a number of electron-ion pairs due to ionization.
The number of produced charge carriers is given by the expression N 0 = E 0 /W, where W is the average energy needed to create an electron-ion pair. The small W-value of LXe implies a high ionization yield and thus the deposited charge can be directly measured from the amplitude of the signals.
Several effects, such as electron-ion recombination and the purity of the medium, also play an important role in the electron collection. Due to electron-ion recombination, a fraction of the produced electrons recombine before arriving at the anode. The recombination rate depends on the ionization density and on the applied electric field (see Section 2.2.1). On the other hand, electron attachment to electronegative impurities dissolved in the LXe may also reduce the number of ionization electrons that reach the anode. Similarly, the scintillation light may be absorbed by impurities dissolved in the LXe (Section 2.2.3). Both effects should therefore be minimized, by means of an adequate purification system and a relatively high electric field, in order to preserve the energy resolution of the detector.
The remaining charges drift towards the anode under the influence of the electric field. A pixelated anode directly gives the X and Y position of each interaction point. The granularity of the anode limits the spatial resolution of the detector. The number of fired pixels per interaction affects the transverse spatial resolution and depends on several factors, such as the electron diffusion and the size of the electron cloud. A criterion based on the location of the fired pixels and the arrival time of the electrons at the anode must be established to identify those pixels that were triggered by the same interaction. The regrouping of pixels belonging to the same interaction vertex is called clustering.
Charges start inducing a current on the anode from the moment they start drifting. A Frisch grid placed between the cathode and the anode shields the anode from the movement of ions and partially removes the dependence of the induced signal on the position of the interaction with respect to the collecting electrode. Ideally, electrons induce a signal on the anode only from the moment they pass through the grid. The induced current is then processed by a charge-sensitive preamplifier and filtered by a second-order low-pass filter. A more detailed description of the read-out electronics is presented in Chapter 4. After processing, the amplitude of the output signal is proportional to the number of collected electrons, which in turn is proportional to the energy deposited in the detector. The drift time, and thus the Z coordinate along the drift length, is also deduced from the waveform of the output signal.
Besides the intrinsic energy resolution of LXe, the electronic noise and the energy threshold used in the data analysis also play an important role in the final energy resolution of the detector. Both contributions should be as low as possible, although the threshold must remain high enough to keep the volume of data due to noise events low before registration.
Ionization Signal Production in Liquid Xenon
When charged particles travel through LXe, they produce a track of electron-ion pairs by ionization. In a TPC, a uniform electric field is applied to force electrons and ions to drift in opposite directions towards the TPC electrodes. This electric field helps to reduce the recombination between electrons and ions during migration. The recombination fraction depends on both the number of electron-ion pairs created and the applied electric field. The recombination effect and the influence of the electric field on the charge and scintillation light production are described in this section. In LXe, besides recombination, other factors such as electron diffusion may also influence the final charge collection. The Doppler broadening effect, X-ray emission and the spread of the electron cloud are discussed in the next section. Some of these properties have been studied experimentally during this work; a more detailed description can be found in the following chapters.
Electron-ion recombination
The number of electron-ion pairs generated by radiation in LXe, or ionization yield, depends on many different factors such as the energy of the incident particle (see Section 1.1.3), the applied electric field, which governs electron-ion recombination, and the purity of the medium. Charge loss due to either recombination of the ionization carriers or electron capture by electronegative impurities must be minimized in order to improve the energy resolution of the detector.
Besides the dependence of electron-ion recombination on the applied electric field, the fraction of recombined electrons also depends on the local ionization density, being greater at low fields and in dense ionization tracks. Understanding the mechanism of electron-ion recombination is of primary importance in order to predict the response of liquefied noble gas detectors. To date there are three main models that attempt to explain the phenomenon of recombination in liquid noble gases: the geminate recombination model proposed by Onsager, which is based on the Coulomb interaction between the charge carriers [START_REF] Onsanger | Initial Recombination of Ions[END_REF], the columnar model of Jaffé [START_REF] Jaffé | Zur Theorie der Ionisation in Kolonnen[END_REF] and the Thomas and Imel box model [START_REF] Thomas | Recombination of electron-ion pairs in liquid argon and liquid xenon[END_REF][START_REF] Thomas | Statistics of charge collection in liquid argon and liquid xenon[END_REF]. Among them, the Thomas and Imel approach uses the most realistic assumptions for liquid noble gases, and it is typically used to describe the energy resolution limitation observed in LXe beyond the Fano limit [START_REF] Aprile | Liquid Xenon Detectors for Particle Physics and Astrophysics[END_REF] (see next section). In fact, this model was originally developed in 1987 to explain the mechanism of recombination in liquid xenon and liquid argon.
The fraction of collected electrons, i.e. those escaping recombination, predicted by the Thomas-Imel box model is given by Equation 2.2:
$$\frac{Q(E)}{Q_0} = \frac{1}{\xi}\,\ln(1+\xi) \qquad (2.2)$$
in which the free parameter ξ is defined as:
$$\xi = \frac{N_0\,\alpha}{4\,a^{2}\,\mu_{-}\,E} \qquad (2.3)$$
where N_0 is the initial number of electron-ion pairs, μ_- is the electron mobility, α is the recombination coefficient and E is the applied electric field. In this model, electron diffusion is ignored and the positive ions are treated as stationary due to their slow motion compared to electrons. In addition, Thomas and Imel assumed that the initial number of produced electron-ion pairs N_0 is distributed in a three-dimensional box of size a, in contrast to the Jaffé columnar model, in which electrons are assumed to be uniformly distributed in a column of charge [START_REF] Thomas | Recombination of electron-ion pairs in liquid argon and liquid xenon[END_REF]. Equation 2.3 states that the probability of electron recombination decreases with increasing applied electric field. At an infinite electric field the parameter ξ → 0, which means that all electrons are collected, whereas at zero applied electric field ξ → ∞.
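The behaviour of Equation 2.2 is easy to tabulate numerically. The short sketch below is only illustrative: the values of ξ are arbitrary and are not fitted to any data.

```python
import numpy as np

def collected_fraction(xi):
    """Thomas-Imel box model, Eq. 2.2: Q(E)/Q0 = ln(1 + xi) / xi, for xi > 0."""
    xi = np.asarray(xi, dtype=float)
    return np.log1p(xi) / xi

# Illustrative values only: xi is inversely proportional to the drift field (Eq. 2.3),
# so a small xi (strong field) means that little recombination takes place.
for xi in (0.1, 1.0, 10.0, 100.0):
    print(f"xi = {xi:6.1f}  ->  Q/Q0 = {collected_fraction(xi):.3f}")
```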
The ionization density dependence of the recombination rate implies a non-linear ionization yield. Along a particle track, the ionization density is determined by the electronic stopping power of the primary electrons, which refers to the amount of energy lost per distance traveled [START_REF] Leo | Techniques for Nuclear and Particle Physics Experiments[END_REF]. Figure 1.2 shows the stopping power as a function of energy for electrons in LXe. Since the stopping power depends on the particle's energy, the ionization density distribution varies as the recoiling electron loses its energy. The specific energy loss of the primary electron along its track increases as its velocity decreases. The "single density model of charge collection" proposed by Thomas and Imel, presented in Equation 2.2, does not take into account these ionization density fluctuations along the trajectory of the recoiling electrons, nor the random emission of δ-rays along the track of the ionizing particle. These δ-rays are secondary electrons with enough energy to ionize the medium and produce further, lower-energy δ electrons. δ-rays lose most of their energy at the end of their trajectory, creating a minimum-ionizing region followed by a region of high local charge density, or blob, at the end of the trajectory. These non-uniformities along the electron traces caused by the presence of δ-rays affect the recombination rate and thus limit the intrinsic energy resolution of LXe.
A better description of the experimental data that takes into account ionization density fluctuations and δ-ray production was later proposed by Thomas et al. [START_REF] Thomas | Statistics of charge collection in liquid argon and liquid xenon[END_REF] by adding a second term to Equation 2.2, which depends on an additional recombination parameter:
$$\frac{Q(E)}{Q_0} = \frac{E_c}{E_p} = a\,\frac{\ln(1+\xi_0)}{\xi_0} + (1-a)\,\frac{\ln(1+\xi_1)}{\xi_1} \qquad (2.4)$$
$$a \equiv \frac{\ln\dfrac{E_2}{E_1} - \dfrac{E_2}{E_p} + 1}{\ln\dfrac{E_p}{E_0}} \qquad (2.5)$$
As described in detail in T. Oger [START_REF] Oger | Dévelopment expérimental d'un télescope Compton au xénon liquide pour l'imagerie médicale fonctionnelle[END_REF], the Thomas and Imel model accounts for two different rates of recombination. The minimum-ionizing region is described by the parameter ξ_1, whereas the high-charge-density blob at the endpoint of the δ electron tracks is described by ξ_0. In Equation 2.4, E_p is the energy of the primary particle, E_c is the energy actually collected by the anode and E_0 is the minimum kinetic energy of a δ electron before thermalization. The parameter a, given by Equation 2.5, represents the fraction of the total charge that is distributed along a δ-ray track, which depends on the initial energy of the secondary electron. Three δ electron energy intervals are considered in the Thomas and Imel model, defined by the parameters E_0, E_1 and E_2.
Influence of electron-ion recombination on the energy resolution
The energy resolution of a detector depends on the number of collected electrons. As discussed in Chapter 1, the statistical fluctuations in the number of generated electron-ion pairs do not follow a Poisson distribution, but they can be described as a function of the Fano factor. The energy resolution can, therefore, be expressed as a function of the Fano factor, F, according to the following expression [START_REF] Leo | Techniques for Nuclear and Particle Physics Experiments[END_REF]:
$$\frac{\sigma_E}{E_p} = \sqrt{\frac{F\,W}{E_p}} \qquad (2.6)$$
where W = 15.6 eV is the average energy required to produce an electron-ion pair [START_REF] Takahashi | Average energy expended per ion pair in liquid xenon[END_REF], and E_p is the energy of the ionizing particle. The Fano factor in the case of LXe is predicted to be 0.041 [START_REF] Doke | Fundamental Properties of Liquid Argon, Krypton and Xenon as Radiation Detector Media[END_REF]. Without δ-electron production, the theoretical energy resolution based on the Fano factor is of the order of 2 keV (FWHM) at 1 MeV [START_REF] Doke | Estimation of Fano factors in liquid argon, krypton, xenon and xenon-doped liquid argon[END_REF]. However, charge loss due to electron-ion recombination limits the intrinsic energy resolution of LXe. Based on the Thomas and Imel model, a more realistic description of the energy resolution is given by the expression:
$$\sigma_E(\%) = \frac{b}{E_p}\left[\frac{\ln(1+\xi_1)}{\xi_1} - \frac{\ln(1+\xi_0)}{\xi_0}\right]\frac{E_p}{E_c} \qquad (2.7)$$
where
$$b \equiv E_p F(E_p)\,\frac{2E_2 - E_1}{E_2^{2}}\,\frac{E_p F(E_p)}{\ln\!\left(\dfrac{E_p}{E_0}\right)} \qquad (2.8)$$
In Equation 2.8, the factor F(E_p) is a function of the energy of the ionizing particle, which for the particular case of LXe can be approximated by the formula [START_REF] Thomas | Statistics of charge collection in liquid argon and liquid xenon[END_REF]:
F(E_p) = -5.

Other factors such as the attachment of free electrons to electronegative impurities and the electronic noise also contribute to the energy resolution of the detector. These two contributions are considered in Sections 2.2.3 and 4.2 respectively.
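As a quick cross-check of the Fano-limited figure quoted above, the following sketch evaluates Equation 2.6 with the values given in the text (F = 0.041, W = 15.6 eV); it is a back-of-the-envelope calculation, not part of the thesis analysis.

```python
import math

# Back-of-the-envelope check of the Fano-limited resolution (Eq. 2.6),
# with the values quoted in the text: F = 0.041 and W = 15.6 eV.
F = 0.041
W_EV = 15.6
E_EV = 1.0e6           # 1 MeV deposit

sigma_ev = math.sqrt(F * W_EV * E_EV)   # sigma_E = E_p * sqrt(F W / E_p) = sqrt(F W E_p)
print(f"Fano-limited sigma at 1 MeV: {sigma_ev:.0f} eV  ->  FWHM ~ {2.355 * sigma_ev / 1e3:.1f} keV")
# ~1.9 keV FWHM, i.e. the 'order of 2 keV' quoted above; real spectra are dominated
# by recombination fluctuations, purity and electronic noise, far above this limit.
```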
The free parameters of the Thomas and Imel model have been determined experimentally and reported by different authors. The authors themselves tested their model with 976 keV conversion electrons from a 207Bi source, showing good agreement with the experimental data [START_REF] Thomas | Statistics of charge collection in liquid argon and liquid xenon[END_REF]. The values of ξ_0 E, ξ_1 E, a and b were obtained by a simultaneous fit of the collected charge (Equation 2.2) and the energy resolution (Equation 2.7). Figures 2.3 and 2.4 show the relative ionization yield and the energy resolution of LXe versus the electric field strength for 570 keV γ-rays from a 207Bi source and 662 keV γ-rays from a 137Cs source, as reported by [START_REF] Aprile | Performance of a liquid xenon ionization chamber irradiated with electrons and gamma-rays[END_REF] and [START_REF] Aprile | Detection of γ-rays with a 3.5 l liquid xenon ionization chamber triggered by the primary scintillation light[END_REF] respectively (figure from [START_REF] Aprile | Detection of γ-rays with a 3.5 l liquid xenon ionization chamber triggered by the primary scintillation light[END_REF]). Both curves have been fitted by the Thomas and Imel model. In both cases the collected charge is well described by Equation 2.2, whereas the electric-field dependence of the energy resolution seems to saturate at high field strengths. The Thomas and Imel model cannot completely explain the electron-ion recombination observed in LXe. The discrepancies between the model and the experimental data may be related to the assumption that the electron drift velocity is directly proportional to the electric field strength. In LXe the electron drift velocity varies very slowly with the applied electric field and tends to saturate at fields higher than 3 kV/cm. Nevertheless, the hypothesis of a non-uniform distribution of the secondary electrons along the track of the primary electrons seems necessary in order to explain the experimental results. The analysis of our experimental data using the Thomas and Imel model is presented in Chapter 7.
Influence of the electric field on charge and light yields
As discussed previously, the presence of an electric field of the order of several kV/cm reduces the recombination rate between electrons and positive ions, which implies an increase in the collected charge (see Figures 2.3 and 2.4). However, this effect is always accompanied by a reduction of the scintillation signal, since the recombination component of the scintillation light is suppressed by the electric field. Figure 2.5 shows the field dependence of both light and charge yields for 662 keV γ-rays from a 137Cs source in LXe [START_REF] Conti | Correlated Fluctuations between Luminiscence and Ionization in Liquid Xenon[END_REF]. As we can observe, the scintillation light yield, in contrast to the ionization yield, decreases as the electric field increases. The electric field dependence of the ionization and scintillation yields in LXe and LAr was first observed by S. Kubota and A. Nakamoto [START_REF] Kubota | Recombination luminiscence in liquid argon and in liquid xenon[END_REF]. The results presented by the authors showed a reduction of the scintillation light by 74% at an electric field of 12.7 kV/cm, accompanied by a saturation of the collected charge. This provided strong evidence of recombination luminescence in LXe. The simultaneous increase of the collected charge and decrease of the scintillation signal in liquid noble gases imply a correlation between the ionization and scintillation yields. The strong light-charge anti-correlation was first measured for relativistic electrons in LXe by E. Conti et al. [START_REF] Conti | Correlated Fluctuations between Luminiscence and Ionization in Liquid Xenon[END_REF] (see Figure 2.6). This anti-correlation is explained by the fact that high charge density regions imply a higher recombination rate between electrons and ions, which produces a large number of scintillation photons accompanied by a reduction in the number of free electrons. The correlation between charge and light in LXe has been used by many authors to improve the energy resolution of the detector, since the individual fluctuations of the ionization and scintillation signals compensate each other when both signals are combined [START_REF] Aprile | Observation of Anti-correlation between Scintillation and Ionization for MeV Gamma-Rays in Liquid Xenon[END_REF]. E. Aprile et al. reported, for example, an energy resolution of 1.7% from the combination of the two signals for 662 keV γ-rays from a 137Cs source, whereas energy resolutions of 4.8% and 10.3% were obtained from the charge and light spectra respectively when the signals were treated separately.
Transport properties of charge carriers in LXe
Electrons and positive ions that escape recombination drift in opposite directions under the influence of the applied electric field. The drift velocity of the charge carriers thus depends on the applied electric field, but also on the temperature and density of the medium. Furthermore, electrons diffuse while they drift towards the anode, causing a certain spread of the electron cloud. In this section a brief summary of the electron and ion transport properties is presented. Other aspects that may affect the time development of the ionization signal are discussed in the following sections.
Electron drift velocity in LXe
The transport characteristics of charge carriers as a function of the applied electric field in LXe and LAr have received considerable experimental and theoretical attention over the last decades [149,[START_REF] Miller | Charge Transport in Solid and Liquid Ar, Kr, and Xe[END_REF]151,152,153,154,[START_REF] Atrazhev | Electron Transport Coefficients in Liquid Xenon[END_REF]. In a LXe TPC, since the Z coordinate along the drift length is directly determined by the product of the electron drift time and the electron drift velocity v_drift, an accurate knowledge of v_drift is important. The electron drift velocity in a LXe TPC depends on the electric field applied between the cathode and the anode. The variation of the electron drift velocity in LXe and gaseous xenon (GXe) as a function of the density-normalized electric field is presented in Figure 2.7. Similarly, the electron drift velocity as a function of the electric field for solid xenon (SXe) compared to LXe is presented in Figure 2.8. In all three cases, the electron drift velocity has a linear field dependence at low field strengths, while at high applied fields, above a few kV/cm, the electron drift velocity is no longer linear with the field and tends to saturate. In the low-field region the electron mobility becomes constant and the electron drift velocity can be approximated by:
$$v_{drift} = \mu_{-}\,E \qquad (2.10)$$
where μ_- is the electron mobility in LXe and E is the electric field. For example, at 1 kV/cm, the drift velocity of electrons in LXe is approximately 2 mm/µs. Equation 2.10 is a good approximation since at low electric fields the electrons are assumed to be in equilibrium with the medium. However, as the strength of the electric field increases, the electrons are no longer in thermal equilibrium and the electron mobility becomes dependent on the applied electric field. A theoretical description of the dependence of the electron drift velocity on the applied electric field has been addressed by several authors [START_REF] Atrazhev | Electron Transport Coefficients in Liquid Xenon[END_REF][START_REF] Shockley | Currents to Conductors Induced by a Moving Point Charge[END_REF]. A correct description of the experimental data in LXe and LAr has been provided by the Cohen-Lekner theory [151,152], which takes into account the properties of the medium. However, the behavior of the electron drift velocity at high electric fields is still not fully understood. Figures 2.7 and 2.8 also show that the electron drift velocity in LXe is higher than that in the gaseous state over the whole field range presented in Figure 2.7, whereas higher values of v_drift are obtained for SXe. This effect can be explained by the dependence of the electron mobility on density. The drift velocity of a charge carrier depends on the number of collisions per unit length. The probability that an electron undergoes a collision along its path is given by the scattering cross section, which is inversely proportional to the drift velocity. The scattering cross section depends on both the energy of the electron and the number of atoms per unit volume N, i.e. the density of the medium. For xenon, the scattering cross section in the liquid state is smaller than in the gas, which implies a higher drift velocity. This makes LXe a more suitable detection medium for an ionization detector [START_REF] Atrazhev | Electron Transport Coefficients in Liquid Xenon[END_REF].
Besides the dependence on the electric field, the mobility of electrons in LXe also depends on the density of the medium, which implies a dependence on the pressure and temperature of the system and thus on the experimental conditions of the detector. Figure 2.9 shows the electron mobility as a function of temperature for liquid and solid xenon and argon. At a temperature of 165 K, the mobility of electrons in LXe is around 2200 cm² V⁻¹ s⁻¹, whereas at 195 K the mobility rapidly increases to ∼4500 cm² V⁻¹ s⁻¹ [START_REF] Miller | Charge Transport in Solid and Liquid Ar, Kr, and Xe[END_REF]. As a consequence, the electron drift velocity depends slightly on the temperature of the medium, as presented in Figure 2.10. Moreover, it has been observed that contamination with certain molecules such as hydrocarbons can enhance the electron drift velocity [START_REF] Aprile | Liquid Xenon Detectors for Particle Physics and Astrophysics[END_REF]. (Curves in the figures are from [START_REF] Kimura | Electron Transport in Liquids: Conduction Bands and Localized States[END_REF] for Xe and [START_REF] Schnyders | Electron drift velocities in liquefied argon and kypton at low electric field strengths[END_REF] for Ar; the points represent the experimental values reported by [158]; figure taken from [START_REF] Aprile | Noble Gas Detectors[END_REF], original from [158].)
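For orientation, the sketch below converts a measured drift time into a Z coordinate and estimates the full-drift timescale, taking the ~2 mm/µs drift velocity quoted above for 1 kV/cm and assuming the 12 cm drift length of the TPC described in Chapter 3; both numbers are indicative only.

```python
# Sketch: Z reconstruction and full-drift timescale.
# Assumptions: drift velocity ~2 mm/us at 1 kV/cm (value quoted above) and a 12 cm
# drift length (the TPC described in Chapter 3); both are indicative numbers only.
V_DRIFT_MM_PER_US = 2.0
DRIFT_LENGTH_MM = 120.0

def z_from_drift_time(t_us, t0_us=0.0):
    """Distance of the interaction from the anode, from the measured drift time."""
    return V_DRIFT_MM_PER_US * (t_us - t0_us)

t_max = DRIFT_LENGTH_MM / V_DRIFT_MM_PER_US
print(f"Maximum drift time over {DRIFT_LENGTH_MM/10:.0f} cm: ~{t_max:.0f} us")
print(f"Measured drift time of 25 us -> Z ~ {z_from_drift_time(25.0):.0f} mm")
```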
Holes and positive ions mobility in LXe
In the case of positive carriers, which in LXe are Xe₂⁺ ions and holes, the mobility is significantly lower than for electrons. For positive ions the mobility is of the order of 3×10⁻⁴ cm² V⁻¹ s⁻¹, about five orders of magnitude smaller than the electron mobility [START_REF] Suzuki | Technique and Application of Xenon Detectors[END_REF], whereas the hole mobility is about 40×10⁻⁴ cm² V⁻¹ s⁻¹. The dependence of the hole mobility on temperature is depicted in Figure 2.11. As a result of the low mobility of positive charge carriers in LXe, the charge induced by their motion has a very slow rise time and can easily be rejected with appropriate front-end electronics.
Electron diffusion in LXe
Positive ions and free electrons produced after the interaction of an ionizing particle with LXe diffuse as they drift through the volume of the detector. This diffusion of the charge cloud results from collisions of the charges with the atoms and molecules of the liquid, which lead to a random motion of the charge carriers during migration. The diffusion process is more important for electrons than for ions because of the lower mobility of ions in the liquid. According to kinetic theory, the spread of an initially point-like electron cloud can be described by a Gaussian distribution with a standard deviation equivalent to the diffusion of charge carriers along a given axis x. The diffusion can therefore be described according to:
$$\sigma_x = \sqrt{2\,D_x\,t_{drift}} \qquad (2.11)$$
where D_x is the diffusion coefficient along the axis x, expressed in cm² s⁻¹, and t_drift is the electron drift time. The spread of the electron cloud depends on the electron collection time, which is related to the distance travelled by the electron through the LXe (d_drift) and the electron drift velocity (v_drift). Therefore, Equation 2.11 can be expressed in terms of distance according to:
$$\sigma = \sqrt{\frac{2\,D\,d_{drift}}{v_{drift}}} \qquad (2.12)$$
Since the electron drift velocity depends on the applied electric field, a smaller spread of the electron cloud is expected at higher field strengths. Moreover, the electron diffusion coefficient D_x also depends on the magnitude and direction of the applied electric field. Diffusion takes place along the three spatial dimensions. In LXe, the transverse diffusion coefficient D_T (perpendicular to the drift direction) is much larger than the longitudinal diffusion coefficient D_L defined along the drift direction [START_REF] Aprile | Liquid Xenon Detectors for Particle Physics and Astrophysics[END_REF]. As a consequence, the electron cloud will not maintain its original shape as it drifts through the LXe, but will become an ellipsoid with its minor axis along the drift direction.
The values of the longitudinal and transverse diffusion coefficients obtained experimentally as a function of the electric field in LXe are presented in Figure 2.12. According to these results, the longitudinal diffusion coefficient is around ten times smaller than the transverse diffusion coefficient, so its contribution to the spread of the electron cloud can be assumed to be negligible [START_REF] Aprile | Liquid Xenon Detectors for Particle Physics and Astrophysics[END_REF]. Unfortunately, the results available so far are not conclusive due to the difficulty of the measurement. Doke et al. [3] measured a value of D_T that varies between 44 cm² s⁻¹ and 80 cm² s⁻¹ with the electric field, resulting in a transverse diffusion of the order of 210 µm/√cm to 270 µm/√cm [START_REF] Oger | Dévelopment expérimental d'un télescope Compton au xénon liquide pour l'imagerie médicale fonctionnelle[END_REF]. These experimental results, as well as the values obtained for LAr as a function of the electric field, are depicted in Figure 2.13. In a study carried out by our group [START_REF] Chen | Measurement of the Transverse Diffusion Coefficient of Charge in Liquid Xenon[END_REF] with a small-dimension LXe TPC (see Chapter 3), a transverse diffusion coefficient of 37 cm² s⁻¹ was measured for an electric field of 0.5 kV/cm (E/N = 5.5×10⁻¹⁸ V cm²), which is in good agreement with the value reported by Doke et al. [3], as can be deduced from Figure 2.13. Our results for the variation of the spread of the electron cloud with the applied electric field are shown in Figure 2.14. For an electric field of 1 kV/cm, we estimated a lateral diffusion of the order of 170 µm/√cm. This means that in our LXe Compton telescope, for an interaction that takes place close to the cathode, i.e. 12 cm away from the segmented anode, diffusion will typically expand the electron cloud by about 600 µm in the x-y plane, with an almost negligible spread along the drift direction. Electron diffusion may affect the collection of the ionization signal. The transverse diffusion of the electrons may, on the one hand, limit the intrinsic spatial resolution of the detector and, on the other hand, increase the number of multiple-pixel events, i.e. events with more than one fired pixel per interaction. Whether or not this is a problem depends on the size of the electron cloud, its position with respect to the anode and the energy deposited in the interaction. If a small amount of charge is shared with a neighboring pixel, the reconstruction may be difficult and part of the charge may be lost due to the pulse selection threshold. Longitudinal diffusion along the drift length of the detector, on the other hand, may produce a smearing of the collected signal due to the spread in the arrival times of the electrons. In the case of LXe, the latter can be neglected due to the small longitudinal diffusion coefficient.
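A minimal numerical sketch of Equation 2.12 is given below; the diffusion coefficient and drift velocity are assumed order-of-magnitude values chosen to be close to those discussed above, not measured inputs.

```python
import math

def transverse_spread_um(d_cm, D_cm2_s, v_cm_s):
    """Eq. 2.12: sigma = sqrt(2 D d / v), converted from cm to um."""
    return math.sqrt(2.0 * D_cm2_s * d_cm / v_cm_s) * 1.0e4

# Assumed, order-of-magnitude inputs at ~1 kV/cm: D_T of a few tens of cm^2/s and
# v_drift ~ 2 mm/us = 2e5 cm/s (not measured values).
D_T, V = 30.0, 2.0e5
for d in (1.0, 6.0, 12.0):
    print(f"drift distance {d:4.1f} cm  ->  sigma_T ~ {transverse_spread_um(d, D_T, V):.0f} um")
# roughly 170 um at 1 cm and 600 um over the full 12 cm drift, consistent with the
# orders of magnitude discussed above
```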
Electron attachment by impurities
Another mechanism that significantly affects the charge collection is the presence of impurities in the liquid, particularly electronegative impurities such as O₂, N₂, CO₂ and H₂O. Electron attachment to impurities causes a reduction of the collected charge that increases as electrons drift towards the anode. This phenomenon translates into a decrease of the electron lifetime and a degradation of the energy resolution of the detector.
The collision between a free electron and an electronegative impurity may lead to the capture of the electron through three different processes, described in the following. Three-body attachment:
$$e^- + AB \longleftrightarrow (AB^-)^*, \qquad (AB^-)^* + X \rightarrow AB^- + X \qquad (2.13)$$
Dissociative attachment:
$$e^- + AB \rightarrow AB^* + e^- \rightarrow A^+ + B^- + e^-, \qquad e^- + AB \rightarrow AB^- \rightarrow A + B^- \qquad (2.14)$$
Radiative attachment:
$$e^- + AB \rightarrow AB^- + h\nu \qquad (2.15)$$
where e⁻ denotes a free electron, AB is an impurity (atom or molecule), X stands for a Xe atom or molecule and * represents an excited state. For a more detailed description please refer to [START_REF] Aprile | Noble Gas Detectors[END_REF]. In all three cases, the presence of impurities causes a reduction in the number of free electrons, which increases with the electron drift time.
The effect of electronegative impurities on the number of collected electrons N e (t) after a drift time t can be described by the following equation [START_REF] Aprile | Liquid Xenon Detectors for Particle Physics and Astrophysics[END_REF]:
$$N_e(t) = N_0\,e^{-t\,\sum_i k_i [N_i]} \qquad (2.16)$$
where N_0 is the initial number of ionization electrons, k_i is the attachment rate constant of a given contaminant i and [N_i] is the molar concentration of that impurity. Detailed measurements of the attachment rate constants for different electric fields and molecular contaminants are described in [START_REF] Bakale | Rate constant for Electron Attachment to O2, N2O and SF6 in LXe at 165K[END_REF]. In order to estimate the amount of impurities, the term Σ_i k_i[N_i] is commonly expressed as an oxygen-equivalent impurity concentration k·[O₂]_eq measured in ppb (parts per billion).
The impurity concentration is related to the electron lifetime τ e , i.e. the average time elapsed before a free electron is captured by an impurity, as follows:
$$\tau_e = \frac{1}{\sum_i k_i [N_i]} \qquad (2.17)$$
According to this, the number of collected electrons N e (t) can be expressed as a function of the electron lifetime as:
$$N_e(t) = N_0\,e^{-t/\tau_e} \qquad (2.18)$$
Using Equation 2.18, the free electron lifetime can be obtained directly, without prior knowledge of the concentration of contaminants present inside the detector. Moreover, the electron lifetime is related to the electron attenuation length through the expression:
$$\lambda = v_{drift}\,\tau_e \qquad (2.19)$$
Using these equations, the rate of electron attachment to O₂ was calculated in [START_REF] Bakale | Rate constant for Electron Attachment to O2, N2O and SF6 in LXe at 165K[END_REF]. The results are shown in Figure 2.15. As we can see, the rate of electron attachment to O₂ is reduced by almost a factor of 5 as the applied electric field increases from 1 kV/cm to 10 kV/cm. However, in most cases this reduction is not enough to avoid charge loss. A much greater reduction of electron attachment is achieved by lowering the impurity concentration by means of a purification system. In our LXe TPC, a closed re-circulation system was installed to continuously purify the xenon through a SAES MonoTorr Phase II (model PS4-MT3-R) getter. The purification sub-system is described in more detail in Section 3.1.4. The measurement of the attenuation length is further described in Chapter 7. The value of the electron attenuation length is necessary to correct the collected charge for charge losses as a function of the distance to the anode.
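The following sketch illustrates how Equations 2.18 and 2.19 would be used to correct a measured charge for attachment losses; the electron lifetime used here is an arbitrary placeholder, since the actual value for our setup is measured in Chapter 7.

```python
import math

# Sketch of the attachment correction (Eqs. 2.18-2.19). The lifetime below is an
# arbitrary placeholder; the value for our setup is measured in Chapter 7.
TAU_E_US = 100.0
V_DRIFT_MM_PER_US = 2.0   # drift velocity at ~1 kV/cm quoted earlier in this chapter

def survival_fraction(t_drift_us, tau_us=TAU_E_US):
    """Fraction of ionization electrons that survive attachment after t_drift."""
    return math.exp(-t_drift_us / tau_us)

def corrected_charge(q_measured, t_drift_us, tau_us=TAU_E_US):
    """Correct a measured charge for attachment losses along the drift."""
    return q_measured / survival_fraction(t_drift_us, tau_us)

print(f"Attenuation length lambda = v * tau ~ {V_DRIFT_MM_PER_US * TAU_E_US / 10:.0f} cm")
print(f"Survival after a 60 us drift: {survival_fraction(60.0):.2f}")
```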
Other mechanisms that affect electron signal detection
Besides the transport properties of charge carriers in LXe, other physical aspects, such as the mean free path of the primary electrons, may also contribute to the performance of the detector. In this section, three aspects related to the production and detection of the ionization signal in LXe are considered: the displacement of the electron cloud from the interaction point due to the mean free path of the recoiling electrons, the emission of fluorescence X-rays after the interaction of a γ-ray with the LXe, and the limitation of the angular resolution due to the non-zero momentum of the bound electrons in the atom. The discussion is supported by experimental results reported by other authors and by a simulation study performed by our group.
Primary Electron Cloud
In general, the electron cloud produced by ionization is considered as a point-like distribution in which all electrons start drifting towards the anode from the same starting point. However, as discussed in Section 2.2, the ionization density is not uniform along the recoiling electron track, which causes a variation in the amount of charge produced along the trajectory of the primary electrons before they completely lose their energy. Moreover, electrons do not follow a straight line but describe erratic trajectories before coming to rest. The spatial extent of the track of the recoiling electrons may affect both the final spatial resolution of the detector and the shape of the collected signal. In this section we perform a detailed study of the primary electron cloud in LXe in order to better understand the physics and limitations of a LXe TPC, and to provide a better description of the charge cloud in LXe.
To estimate the size and shape of the recoil electron cloud in LXe, we performed a Monte Carlo simulation using CASINO V3.2 (monte CArlo SImulation of electroN trajectory in sOlids) [START_REF] Casino | monte CArlo SImulation of electroN trajectory in sOlids[END_REF]. CASINO is a 3D simulation software package developed to simulate electron interactions with matter, which provides step-by-step information on the trajectory of the primary electrons through the medium until their energy drops below 50 eV. The program simulates both elastic and inelastic collisions of electrons with matter. We used the Mott model for elastic scattering and the Joy-Luo model to describe the energy loss rate between two consecutive collisions.
In the simulation, we consider a source of primary electrons with energies between 30 keV and 511 keV and an incident direction normal to the LXe surface (p_z = 1, p_x = p_y = 0). At each step, the simulation provides the energy loss (E_i) and the electron location (x_i, y_i, z_i), where i = 1, ..., n labels the interaction point and n is the total number of collisions experienced by the primary electron. Figure 2.16 shows a typical ionization track for a 511 keV primary electron in LXe. High-energy electrons travel a certain distance producing a minimum-ionizing region, whereas most of their energy is deposited at the end of the trajectory, creating a non-uniform distribution of secondary electrons along the track. Moreover, the electron recoil track is not at all a straight line: the primary electron can undergo hard scatters that give several keV to a secondary electron. Even secondary electrons with a few keV of energy may produce a dense ionization blob along the track [START_REF] Dahl | The physics of background discrimination in liquid xenon, and first results from XENON10 in the hunt for WIMP Dark Matter[END_REF]. This important energy loss in a single interaction, shared among secondary recoils, gives rise to recombination fluctuations along the track of the primary electron.
The penetration depth, or Z_mean, depends on the energy of the recoil electron. The path length is measured as the average distance between two successive collisions over the total number of collisions. The minimum mean free path of electrons between scattering events is around 1 nm. Figure 2.17 shows the spatial extent of the primary electron cloud for 511 keV primary electrons. The recoil electron cloud has an almost spherical shape with an average 3D radius of the order of ∼190 µm. The barycenter of the charge cloud is displaced by about 100 µm along the incoming direction from the initial interaction point, which represents the γ-ray absorption point, whereas no displacement is observed in the X and Y directions. This implies that the photoelectrons created by 511 keV γ-rays are most probably emitted in the forward direction with respect to the incident photon, with a small emission angle. The 3D radius of the primary electron cloud with respect to the barycenter is estimated according to the following expression:
$$R = \sqrt{(\bar{x} - \bar{x}')^2 + (\bar{y} - \bar{y}')^2 + (\bar{z} - \bar{z}')^2} \qquad (2.20)$$
where $\bar{x} = \dfrac{\sum_{i=1}^{n} x_i E_i}{\sum_{i=1}^{n} E_i}$, $\bar{y} = \dfrac{\sum_{i=1}^{n} y_i E_i}{\sum_{i=1}^{n} E_i}$ and $\bar{z} = \dfrac{\sum_{i=1}^{n} z_i E_i}{\sum_{i=1}^{n} E_i}$
represent the energy-weighted mean values over the successive interactions along the primary track, and x̄', ȳ' and z̄' indicate the mean values of the x̄, ȳ and z̄ distributions obtained over the total number of simulated events N. The average value of the 3D radius, which represents the isotropic radial extension of the electron cloud, is then estimated from the mean value of the R distribution obtained for all the simulated events. The shift of the barycenter in the forward direction and the isotropic radial extension as a function of the electron recoil energy are presented in Figure 2.18.
The error bars represent the standard deviation of the R distribution. For each energy, 2000 electrons were simulated. We can see that the cloud size increases as the energy of the recoil electron increases, and that the maximum of ionization, i.e. the center of the cloud, shifts to larger distances as the energy increases. Moreover, for energies near the shell edges (K-shell and L-shell), the size of the electron cloud is completely negligible and most of the energy is deposited near the X-ray absorption point. The Monte Carlo simulation demonstrates that the assumption of a uniform distribution of the electrons liberated by the primary electron along its track is not realistic. Moreover, since electrons do not follow straight-line trajectories, estimating the length of the electron track from the average path length traveled by a charged particle before coming to rest, the so-called CSDA range, is not appropriate [START_REF] Zhu | Electric Field Calculation and Ionization Signal Simulation in Liquid Xenon Detectors for PET[END_REF]. For 511 keV electrons, the CSDA range is of the order of 1 mm (0.3012 g/cm²) [169]. This value is consistent with the total length of the electron track obtained with CASINO. Our results show, however, that a maximal 3D radial extension of less than 200 µm is expected for 511 keV recoil electrons, although no information about the charge distribution inside the electron cloud is reported. The effect of the size of the electron cloud on the measured signals is further discussed in Chapter 7.
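For completeness, the sketch below shows one possible implementation of the energy-weighted barycenter and of the event-wise radius of Equation 2.20; the collision list in the usage example is made up and would in practice come from the CASINO output.

```python
import numpy as np

def energy_weighted_barycenter(points, energies):
    """Energy-weighted barycenter of one track: x_bar = sum(x_i E_i) / sum(E_i)."""
    points = np.asarray(points, dtype=float)      # shape (n, 3): (x_i, y_i, z_i)
    energies = np.asarray(energies, dtype=float)  # shape (n,):   energy loss E_i per step
    return (points * energies[:, None]).sum(axis=0) / energies.sum()

def radius_3d(barycenters):
    """Eq. 2.20, per event: distance of each track barycenter to the mean barycenter."""
    barycenters = np.asarray(barycenters, dtype=float)  # shape (N_events, 3)
    return np.linalg.norm(barycenters - barycenters.mean(axis=0), axis=1)

# Toy usage with a made-up collision list (in practice this comes from the CASINO output):
track_um = [(0.0, 0.0, 10.0), (5.0, -3.0, 60.0), (12.0, 4.0, 120.0)]
eloss_kev = [50.0, 200.0, 260.0]
print("barycenter [um]:", energy_weighted_barycenter(track_um, eloss_kev))
```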
Doppler Broadening Effect
The Compton equation 1.4 is only valid under the assumption that the interaction between the γ-ray and an atomic electron occurs with an electron that is unbound and at rest. However, in a real medium, Compton scattering takes place with bound, moving electrons. The motion of the atomic electrons around the nucleus produces a broadening of the scattered photon energy E_γ', which results in a corresponding broadening of the scattering angle. This effect is known as Doppler broadening [START_REF] Ordonez | Doppler Broadening of Energy Spectra in Compton Cameras[END_REF]. The Doppler broadening effect implies an intrinsic limitation on the angular resolution of any Compton telescope.
After a Compton scattering process, energy conservation leads to E_γ = E_b + E_γ', where E_γ and E_γ' are the energies of the incident and scattered photons respectively, and E_b is the binding energy of the electron in the atom for its corresponding subshell. However, the momentum of the electrons inside the atom introduces an uncertainty in the energies of the deflected photon and the recoil electron. For a given incident photon energy E_γ and a scattering angle θ, a broadening of the possible E_γ' is produced. The broadened energy distribution is usually called the Compton profile. Figure 2.19 shows the energy distribution of the deflected photons for 140 keV incident γ-rays and a scattering angle of 45° for three different materials [START_REF] Ordonez | Doppler Broadening of Energy Spectra in Compton Cameras[END_REF]. The uncertainty in the scattered photon energy, given by the standard deviation of the Doppler broadening distribution, increases with the atomic number [START_REF] Ordonez | Doppler Broadening of Energy Spectra in Compton Cameras[END_REF]. The momentum of the electron p_z before the interaction, projected onto the γ momentum transfer vector, is given by Equation 2.21 [START_REF] Zoglauer | Doppler Broadening as a Lower Limit to the Angular Resolution of Next Generation Compton Telescopes[END_REF]:
$$|\,p_z\,| = \frac{\vec{p}_\gamma \cdot \vec{p}_{e^-}}{|\vec{p}_\gamma|} \qquad (2.21)$$
where p_γ and p_e⁻ are the momentum vectors of the incident photon and the ejected electron, respectively. Applying energy and momentum conservation:
$$p_z = -m_e c\,\frac{E_\gamma - E_{\gamma'} - \dfrac{E_\gamma E_{\gamma'}}{m_e c^2}\,(1 - \cos\theta)}{\sqrt{E_\gamma^2 + E_{\gamma'}^2 - 2\,E_\gamma E_{\gamma'}\cos\theta}} \qquad (2.22)$$
If the electron is at rest, p_z = 0, and Equation 2.22 reduces to the Compton scattering formula 1.4.
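A small numerical sketch of Equation 2.22 is given below; it also verifies that, when E_γ' is taken from the free-electron Compton formula, the projected momentum p_z vanishes as expected.

```python
import math

M_E_C2_KEV = 511.0  # electron rest energy

def p_z_over_mec(e_in_kev, e_out_kev, theta_rad):
    """Projected electron momentum p_z in units of m_e*c, from Eq. 2.22."""
    num = e_in_kev - e_out_kev - e_in_kev * e_out_kev * (1.0 - math.cos(theta_rad)) / M_E_C2_KEV
    den = math.sqrt(e_in_kev**2 + e_out_kev**2 - 2.0 * e_in_kev * e_out_kev * math.cos(theta_rad))
    return -num / den

# Consistency check: if E_out follows the free-electron Compton formula, p_z must vanish.
e_in, theta = 511.0, math.radians(45.0)
e_out = e_in / (1.0 + (e_in / M_E_C2_KEV) * (1.0 - math.cos(theta)))
print(f"p_z / (m_e c) = {p_z_over_mec(e_in, e_out, theta):+.2e}  (expected ~ 0)")
```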
The Compton scattering interaction probability is characterised by the cross section. The differential cross section for the scattering of photons on unbound electrons at rest was first derived by O. Klein and Y. Nishina [START_REF] Klein | Über die streuung von strahlung durch freie elektronen nach der neuen relativistischen quantendynamik von dirac[END_REF], and the angular distribution of the scattered photons is given by the Klein-Nishina differential cross section presented in Equation 2.23 [START_REF] Knoll | Radiation Detection and Measurements[END_REF]:
$$\left(\frac{d\sigma}{d\Omega}\right)_{KN} = \frac{r_e^2}{2}\left(\frac{E_{\gamma}'}{E_\gamma}\right)^{2}\left(\frac{E_{\gamma}'}{E_\gamma} + \frac{E_\gamma}{E_{\gamma}'} - \sin^2\theta\right)\ \ [\mathrm{cm}^2\,\mathrm{sr}^{-1}\,\mathrm{electron}^{-1}] \qquad (2.23)$$
where r_e is the classical electron radius and Ω is the solid angle. The Klein-Nishina equation for Compton scattering is a good approximation for high-energy photons (> 1 MeV), especially for low atomic number materials. Figure 2.20 shows the angular distribution of the differential cross section for Compton scattering. For example, 511 keV γ-rays have a high probability to undergo forward scattering with a small deflection angle. However, due to the non-zero momentum of the recoil electrons before the interaction, this is not completely accurate.
To account for the momentum distribution of the bound electrons, the atomic shell effects should be included in the Klein-Nishina formula, which leads to the following expression [START_REF] Zoglauer | Doppler Broadening as a Lower Limit to the Angular Resolution of Next Generation Compton Telescopes[END_REF]:
$$\left(\frac{d\sigma}{d\Omega}\right)_{DB} = \left(\frac{d\sigma}{d\Omega}\right)_{KN} S_i(E_\gamma, Z, \theta) \qquad (2.24)$$
where S_i(E_γ, Z, θ) is the incoherent scattering function of the i-th shell electrons of an atom with atomic number Z, which depends on the momentum transfer of the incident photon. The total scattering cross section for a specific orbital is obtained by integrating Equation 2.24 over all angles. The results show that the Doppler broadening effect is more important for low γ-ray energies, large scattering angles and materials with high atomic number. The uncertainty in the angular resolution introduced by Doppler broadening can be estimated from the Angular Resolution Measure (ARM) profile, which is obtained from the difference between the scattering angle θ, derived from the Compton equation, and the geometric scattering angle θ_geo, calculated from the real position of the photon emission and the positions of the interactions in the material. The Doppler broadening effect for a Xe Compton telescope was determined by Zoglauer and Kanbach [START_REF] Zoglauer | Doppler Broadening as a Lower Limit to the Angular Resolution of Next Generation Compton Telescopes[END_REF]. The angular resolution was estimated from the FWHM of the ARM distribution. Figure 2.21 shows the dependence of the angular resolution on the energy of the incident photon for Xe, Si and Ge. For a Xe Compton camera and 511 keV γ-rays, a minimum angular resolution due to Doppler broadening of ∼1.45° was estimated, whereas a resolution of ∼0.8° was obtained for 1 MeV γ-rays [START_REF] Zoglauer | Doppler Broadening as a Lower Limit to the Angular Resolution of Next Generation Compton Telescopes[END_REF]. This limitation of the angular resolution is present regardless of the energy resolution of the detector.
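The ARM defined above is straightforward to compute once the incident energy, the deposited energy and the geometric angle are known. The sketch below is a schematic illustration with invented numbers, not a reproduction of the published analysis.

```python
import math

M_E_C2_KEV = 511.0

def compton_angle_deg(e_in_kev, e_dep_kev):
    """Kinematic scatter angle from the Compton equation, using the incident energy
    and the energy deposited on the recoil electron (E_out = E_in - E_dep)."""
    e_out = e_in_kev - e_dep_kev
    cos_theta = 1.0 - M_E_C2_KEV * (1.0 / e_out - 1.0 / e_in_kev)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))

def arm_deg(e_in_kev, e_dep_kev, theta_geo_deg):
    """Angular Resolution Measure: kinematic minus geometric scatter angle."""
    return compton_angle_deg(e_in_kev, e_dep_kev) - theta_geo_deg

# Invented numbers, for illustration only:
print(f"theta_Compton = {compton_angle_deg(1157.0, 300.0):.1f} deg")
print(f"ARM           = {arm_deg(1157.0, 300.0, 32.0):+.1f} deg")
```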
Isotropic X-ray emission
As discussed in the previous chapter, when a γ-ray interacts with LXe there is a non-negligible probability that it will ionize an inner shell of an atom, with the consequent emission of an X-ray or an Auger electron. After a photoelectric absorption, for example, the photon is completely absorbed and an inner-shell electron, generally from the K or L shell, is ejected from the atom. Part of the energy of the γ-ray is used to overcome the binding energy of the electron and the rest is transferred to the ejected electron. In xenon, the absorption edges of the K, L and M atomic shells correspond to energies of around 34.5 keV, 5 keV and 2 keV respectively. The ejected electron leaves a vacancy, putting the atom in a highly excited state. The atom rapidly returns to its ground state by filling the vacancy with an outer-shell electron. This de-excitation process is usually accompanied by the emission of a fluorescence X-ray or an Auger electron, with an energy equal to the difference in binding energy between the two atomic levels involved in the transition.
Inner-shell excitation and X-ray emission may also be produced after a Compton scattering interaction. However, this process is less likely than X-ray emission after a photoelectric effect, since the γ-ray must collide with an electron from the inner K or L shells of the atom. Assuming a constant probability of Compton scattering per electron, the probability of interaction with an electron of the K-shell can be roughly estimated as ∼4%.
For γ-rays with incident energies above 34.5 keV, the probability of photoabsorption occurring in the K-shell is around 86%. After the ionization of the K-shell, the probability of atomic relaxation via a K_α (29.7 keV) or a K_β (33.8 keV) fluorescence photon is of the order of 87%, whereas the emission of an Auger electron is almost negligible. On the other hand, if the incident photon has an energy lower than the binding energy of the K-shell, the interaction will more probably occur with an electron from the L-shell or from a higher level. In this case, de-excitation via the emission of an Auger electron is more likely than the emission of an L fluorescence photon. After the emission of an Auger electron, the atom returns to the ground state through a series of cascade relaxation processes.
In most cases, the emitted X-ray is reabsorbed by the LXe, producing a second electron cloud displaced from its production point. Photoelectrons produced with energies of the order of ∼30 keV will travel a maximum distance of ∼370 µm in LXe (CSDA) [13]. Whether or not the two electron clouds are spatially resolved depends on the emission direction of the X-ray photoelectron with respect to the primary electron cloud. While the X-ray emission is isotropic, the primary photoelectron has a certain directionality, which depends on the energy of the incident γ-ray: the electron is emitted forward in the direction of the incoming photon at high incident energies, while it is emitted perpendicular to the γ-ray at low energies [START_REF] Gavrila | [END_REF][174].
If both electrons are emitted in the same direction, a single electron cloud will be formed and all electrons will drift to the anode at the same time. In this case, the total collected energy will be equal to the energy of the incident γ-ray. On the other hand, if both interactions take place far from each other, two different electron clouds will be formed and the electrons will drift towards the anode with a certain time difference. In LXe, since the mean free path of 30 keV γ-rays is small, the production of multiple-scattering events due to the emission of an X-ray from the K-shell should not be very significant. An Auger electron can also produce a considerable number of secondary electron-ion pairs by ionization. However, due to its small range (see Figure 2.18) only a single electron cloud is expected.
To better understand the implications of X-ray emission for the data analysis, a precise Monte Carlo simulation of the interaction of the 3 γ-rays emitted by a 44Sc source with our LXe TPC has been carried out using Geant4 [START_REF] Agostinelli | Geant4 -a simulation toolkit[END_REF][START_REF] Allison | Geant4 Developments and Applications[END_REF]. The physics of photoionization was simulated using the low-energy Penelope model [START_REF] Salvat | Penelope -A code system for Monte Carlo simulation of electron and photon transport[END_REF]. X-ray fluorescence and Auger electron emission were also considered in the simulation. Figure 2.22(a) shows the energy spectra for photoelectric and Compton scattering processes for all the interactions inside the detector, while the energy spectrum for only those interactions with an emitted X-ray is depicted in Figure 2.22(b). The simulation shows that 84% of the time a photoelectric absorption in the LXe is followed by the emission of an X-ray from the K-shell, whereas only 4% of the X-rays are emitted after a Compton scattering. These results are in good agreement with the theoretical expectations. For small deposited energies the emission of an X-ray is more likely due to a Compton interaction than to a photoelectric absorption. However, this probability remains small compared to the probability of the emission of a high-energy photoelectron. Moreover, a widening of the Compton edge due to Doppler broadening is also appreciable in Figure 2.22(b) compared to the total Compton spectrum. This is because X-ray emission after a Compton scattering is produced by the interaction with an inner electron, most likely from the K-shell. Inner electrons are, in turn, more affected by the Doppler broadening effect due to their higher momentum compared to outer-shell electrons.
Figure 2.22 – Geant4 simulation of the interaction of the 3 γ-rays emitted by a 44Sc source with a LXe TPC: (a) all hits and (b) only those hits with the emission of an X-ray from the K-shell.
Conclusions of Chapter 2
A LXe TPC combines the detection of the ionization and scintillation light signals to provide the 3D position and the deposited energy of each individual interaction inside the fiducial volume of the detector. In this chapter we have described the basic principle of a LXe TPC based on a monolithic detector filled with LXe. The detector provides the position of each interaction point along the transverse (X, Y) and longitudinal (Z) coordinates, as well as the energy deposited along the particle track. The measurement of the Z coordinate requires a precise knowledge of the electron drift velocity, which depends on the applied electric field. The time of interaction t_0 is obtained from the emission of scintillation light, which provides the detector with self-triggering capabilities. The LXe TPC technology therefore allows three-dimensional event reconstruction and precise energy measurements, providing good angular and calorimetric resolutions. These characteristics are essential to obtain a good-quality image of the source distribution in a medical imaging Compton telescope.
We have seen that the final charge collection depends on the electron-ion recombination rate and on the purity of the xenon. A description of the recombination process in LXe was proposed by Thomas and Imel [START_REF] Thomas | Statistics of charge collection in liquid argon and liquid xenon[END_REF]. In Chapter 7 we analyze our experimental results with the Thomas and Imel model. The obtained results are consistent with the values reported by other authors, as shown in this chapter.
The time evolution of the electron cloud has also been discussed in this chapter. When an electron cloud moves through a liquid under the influence of an electric field, three processes are involved in its time development: electron drift, electron diffusion and electron attachment to electronegative impurities. All three aspects have been discussed in detail in this chapter. Other effects such as the initial size of the electron cloud, the emission of fluorescence X-rays and the Doppler broadening effect may also impact the performance of the detector. A simulation using CASINO shows that the barycenter of the electron cloud is shifted by around 100 µm from the γ-ray interaction point, which introduces a small systematic error in the position determination. We have observed that the average size of the electron cloud for 511 keV γ-rays is of the order of 200 µm. The impact of the size of the electron cloud is further discussed in Chapter 7.
The interaction of a γ-ray with the LXe is followed almost 85% of the time by the emission of an X-ray from the K-shell after a photoelectric absorption, and 4% of the time after a Compton scattering. This means that there is a high probability that two ionization clouds are generated in the chamber after the interaction of a γ-ray. Due to the small mean free path of X-rays with energies of the order of 30 keV, in most cases both interactions will be merged, leading to a total collected charge equal to the energy of the incident γ-ray. Finally, we have discussed the effect of Doppler broadening on the final angular resolution of a Xe Compton telescope. In the case of Xe, a minimum angular resolution of ∼0.8° (FWHM) was estimated for 1 MeV γ-rays [START_REF] Zoglauer | Doppler Broadening as a Lower Limit to the Angular Resolution of Next Generation Compton Telescopes[END_REF].
In the next chapter, a detailed description of the two prototypes of LXe Compton telescopes developed within the XEMIS project is presented.

In order to prove the feasibility of the 3γ imaging technique and consolidate this new imaging modality in the scientific community, a first phase of research and development (R&D) has been performed at the Subatech laboratory. This first stage represents the beginning of the XEMIS (XEnon Medical Imaging System) project, which started at Subatech in 2006 with the development of a first prototype of a LXe Compton telescope called XEMIS1. The promising results obtained to date with this small-dimension device have led to the development of a second prototype dedicated to small-animal imaging, called XEMIS2. In this chapter, a detailed description of the two prototypes XEMIS1 and XEMIS2 is presented. The main characteristics of the experimental set-up of XEMIS1 are described in Section 3.1. The light detection and charge collection systems are described in Sections 3.1.2 and 3.1.3 respectively, while the cryogenic infrastructure used to liquefy and purify the xenon in XEMIS1 is introduced in Section 3.1.4. Section 3.2 is devoted to a detailed description of the new prototype XEMIS2, covering the experimental set-up and the signal detection systems. The innovative cryogenic infrastructure used to recover and store the xenon, especially developed for XEMIS2, is introduced in Section 3.2.4.
XEMIS1: First prototype of a liquid xenon TPC for 3γ imaging
Detector description
In order to test the possibility of using a LXe Compton telescope for the 3γ imaging technique, a small-dimension single-phase LXe TPC, XEMIS1, has been developed and tested. A general view of the experimental set-up of XEMIS1 is presented in Figure 3.1; it includes the XEMIS1 TPC, the data acquisition system, the cryogenic system required to liquefy the xenon, the purification and recirculation systems, and the rescue tank used to recover the xenon if necessary. The XEMIS1 prototype consists of a cylindrical TPC filled with liquid xenon, with a drift volume of 2.8 x 2.8 x 12 cm³. Figure 3.2 shows the main parts of the XEMIS1 TPC. A photomultiplier tube (PMT) is placed at the top of the TPC with the aim of detecting the VUV scintillation photons (178 nm) generated after the interaction of a γ-ray with the LXe. The PMT is especially designed to work at LXe temperature and, as previously mentioned in Chapter 2, it is used as a trigger for the signal acquisition. The charge carriers produced in the ionization process are collected by a segmented anode of 2.5 x 2.5 cm². The anode represents the entrance window for the incoming radiation and is located facing the PMT. The drift length of the TPC is delimited by a micro-mesh, placed 1.6 mm below the PMT, and by the anode. The micro-mesh is used as the cathode for the drift electric field. A set of 24 copper field rings is located around the TPC to provide a homogeneous electric field of up to 2.5 kV/cm between the cathode and the anode. A mesh used as a Frisch grid is placed above the anode. To ensure a uniform electric field inside the drift volume, the potential is distributed from the cathode, which is directly connected to a high-voltage supply, to the first field-shaping ring by means of a resistive divider chain of 24 x 500 MΩ resistors immersed in the LXe (see Figure 3.2). The first resistor of the divider chain is connected to the rim of the high-voltage electrode, while the last one is connected to ground through a 500 MΩ resistor. Two independent power supplies are used to provide a constant voltage to the first copper field ring and to the Frisch grid separately. The pixels are connected to ground via the front-end electronics. To ensure 100% electron transparency of the Frisch grid, an electric field in the gap at least 5 to 10 times higher than the drift field is necessary. The ratio between the two electric fields depends on the kind of Frisch grid. A study of the transparency of the grid as a function of the electric field is presented in Section 5.5.1.
The sensitive volume of XEMIS1 is then defined by the drift length and by the intersection area of the segmented anode, which is 2.5 x 2.5 x 12 cm 3 . A drift length of 12 cm was chosen to ensure a high γ-ray detection efficiency with a maximum trigger efficiency. The probability that a 1.157 MeV γ-ray interacts at least once in 12 cm of LXe is 88%. However, due to the trigger system configuration, the PMT, which is responsible of the data acquisition triggering, is located 13.6 cm far from the anode. This implies that a reasonable small solid angle is covered by the light detection system, mostly for those interactions that take place close to the anode. A higher drift length will result in a smaller solid angle, and thus in an important lose of trigger efficiency.
The anode is connected to an ultra-low noise front-end electronics consisting of two standard IDeF-X HD-LXe 32 channels chips that generates 64 independent analog signals [START_REF]Liquid rare gas detectors: recent developments and applicstions[END_REF]. The electronics are placed inside a vacuum container at a temperature lower than -60 • , allowing a reduction of the electronic noise ENC (Equivalent Noise Charge) down to 80 electrons.
As we can see in Figure 3.2, the different components of the TPC are mounted on a column structure made of four Macor ceramics rods. The base of the rods are fixed on a stainless steel flange. Furthermore, to compensate for temperature-induced changes in some of the materials, a set of four stainless steel springs is used to support the whole assembly. The ensemble is built inside a stainless steel cylindrical cryostat filled with about 30 kg of high-purity LXe, which is located in a vacuum chamber to ensure good thermal insulation. All materials used to construct the TPC, including the cryostat and the assembly structures, were chosen due to their low outgassing. Moreover, to minimize the effect of thermal contraction of the TPC during cooling, all materials were selected due to their similar coefficients of thermal expansion, excluding the Macor ceramic which undergoes a very small thermal contraction. Table 3 a See section 3.1.3.
Light detection system
As we have seen in Section 1.1.4, LXe is an excellent scintillator with high photon yield and short decay times consisting of a fast (2 ns) and a slow (27 ns) decay components [START_REF] Kubota | Dynamic behavior of free electrons in the recombination process in liquid argon, krypton and xenon[END_REF].
In order to detect the scintillation light in XEMIS1, we use a VUV-sensitive Hamamatsu R7600-06MOD-ASSY PMT especially developed to work at LXe temperature. A picture of the PMT is presented in Figure 3.3. The PMT is directly immersed in the LXe, showing a good UV photon sensitivity with a quantum efficiency of 35% at 175 nm. The PMT has a quartz window of 28 x 28 mm 2 and a bi-alkaline photocathode of 18 x 18 mm 2 that constitutes the active area of the detector. At the LXe temperature the total gain is of the order of 10 6 for a voltage of 750 V. Besides the high scintillation yield of LXe and the relatively large active area of the light detection system with respect to the geometry of XEMIS1, the design of the TPC is not optimal for light detection. In absence of an electric field, a large photon yield of around 42000 UV photons are generated after the interaction of a γ-ray of 1 MeV of energy. However, the necessity of an electric field to collect the charge carries reduces the amount of photons produced after the interaction. At a drift field of 1 kV/cm, which is a standard value used to characterize XEMIS1, the light reduction is about 50% leaving to around 21000 photons/MeV. Even though this value is comparable to those of the best scintillation crystals used in PET [START_REF] Melcher | Scintillation Crystals for PET[END_REF], only a reduced fraction of the emitted photons reaches the PMT due to the small solid angle coverage, especially for those interactions that take place far from the PMT surface. For example, the interaction of a 1.157 MeV γ-ray at 12 cm from the PMT with an electric field of 1 kV/cm will result in the detection of around 25 photoelectrons. Other effects such as the reflection of the UV photons in the cathode or in the copper field rings, the optical transparency of a screening mesh placed below the PMT surface or the reflection at the PMT window could also affect the fraction of photons that reaches the PMT. As a result, the number of photoelectrons is too small for energy resolution measurements, which is not our goal, but enough to give a suitable signal for triggering of the data acquisition system and for measuring the interaction time t 0 .
Signals from the PMT are sent both to an electronic logic chain and to a Flash Analog-to-Digital Converter (FADC). In the FADC the signals are digitized with a sampling rate of 250 MHz and a resolution of 16 bits.
Charge collection system
To collect the electrons produced by the ionization of LXe after the passage of an ionizing particle, XEMIS1 includes: a mesh used as a Frisch grid, a segmented anode divided in 64 pixels and the data acquisition electronics.
Frisch grid
After the interaction of an ionizing particle with the LXe, and in order to collect an ionization signal with an amplitude proportional to the deposited energy and independent of the distance of the interaction with respect to the collecting electrode, a grid is placed between the anode and the cathode. This grid, known as Frisch grid, is then introduced with the aim of removing the position dependency of the collected signals. The grid shields the anode of the motion of the drifting electrons between the cathode and the grid [START_REF] Frisch | Isotope analysis of uranium samples by means of their α-ray groups[END_REF]. Therefore, with an ideal Frisch grid no signal is collected in the anode until the electrons pass through the grid. The shielding of the anode from electrons motion is, however, not perfect, which leaves to signal induction on the anode while electrons drift towards the Frisch grid. This effect known as Frisch grid inefficiency affects the time shape of the anode signal and hence, it has a direct impact on the energy resolution [START_REF] Bunemann | Design of Grid Ionization Chambers[END_REF].
In addition to the charge induction opacity of the Frisch grid, the mesh should be transparent to the drifting electrons. This implies that all electrons created in the ionization process should be able to pass through the grid to be collected by the anode. However, if the transparency is not good enough, and the electrons are collected by the grid, the full ionization charge is not detected, worsening the SNR and hence, negatively affecting the energy resolution. The electron transparency of a grid depends on the geometry of the grid. For this reason, the choice of the Frisch grid in a LXe TPC is of crucial importance for the collection of the ionization signals. During the course of this thesis four different grids have been tested. A more detailed description of the properties of a Frisch grid is presented in Chapter 5.
The Frisch grid is basically a metallic mesh usually made of nickel, aluminium, copper or stainless steel. There are many different types of meshes depending on the manufacture technique that limit the size and shape of the mesh as well as the distance between the anode and the grid. Electroformed and chemical-etched micro-meshes are both commonly used in Micro-Pattern Gas Detectors (MPGD) such as Micro-Mesh Gaseous Structure (Micromegas) [START_REF] Oger | Dévelopment expérimental d'un télescope Compton au xénon liquide pour l'imagerie médicale fonctionnelle[END_REF][START_REF] Giomataris | MICROMEGAS : A high-granularity position-sensitive gaseous detector for high particle-flux environment[END_REF]. Electroformed micro-meshes manufactured by Precision Eforming LLC are usually very thin meshes with a thickness of 5 µm and a wide variety of geometries. Three different types of electroformed micro-meshes have been tested in XEMIS1. Their main properties are summarized in Table 3.2. The chemical-etched micro-meshes also known as CERN meshes, are also commonly used in a gridded ionization chamber. However, they are not a reasonable option for the XEMIS TPC due to the gap region is generally limited to 25 to 50 µm. Such a small gap is crucial in Micromegas detectors since the amplification gain depends directly on the distance between the micro-mesh and anode. However, in our case no amplification is needed and the mesh is only used as Frisch grid. A very narrow gap will increase the electronic noise reducing the SNR and hence, affecting the energy resolution, in addition to make harder the manage of the mesh. Another good option are the metallic woven wire meshes. This kind of grid has the advantage of allowing larger detectors with a more robust structure, which ease the handling of the mesh to achieve good flatness and parallelism with respect to the anode plane. The main characteristics of two different types of woven wire meshes tested during this work are listed in Table 3 Besides the geometry of the grid, another important parameter to take into account when selecting a Frisch grid for a LXe detector is the material. Due to the TPC is immersed in LXe, all the structures that compose the detector are subjected to extreme temperature changes. This implies that all materials should have similar thermal expansion coefficients, to minimize the expansion and contraction effects. Any deformation of the grid issues from thermal expansion would increase the electronic noise or even cause catastrophic damage to the electronics. Taking this into consideration, stainless steel and copper meshes are good choices for XEMIS1. To preserve the distance between the anode and the Frisch grid, the mesh is stretched and glued on a spacer made of copper to ensure an uniform electric field. Two different gaps of 500 µm and 1 mm have been tested during this thesis.
Mesh
The cathode
The cathode is based on an electroformed 5 µm thick micro-mesh manufactured by Precision Eforming LLC, that presents an optical transparency of 90 % for the passage of the UV photons to the PMT. It consists of a set of intertwined metallic bars made of copper to form square holes of 344 µm in steps of 365 µm (70 LPI, line per inch). The cathode is placed 1.6 cm below the PMT. Furthermore, to protect the PMT from the high voltages applied to the cathode that can reach 24 kV for a drift field of 2 kV/cm, an additional screening mesh was installed 1 mm below the PMT surface and 1.5 cm above the cathode. The screening mesh is also a 70 LPI electroformed copper micro-mesh. The total transparency for the passage of the UV photons to the PMT of the set of meshes is 81 % [START_REF] Oger | Dévelopment expérimental d'un télescope Compton au xénon liquide pour l'imagerie médicale fonctionnelle[END_REF].
The segmented anode
The pixelized anode gives information of the two-dimensional localization of an interaction inside the active area of the TPC, in addition to the deposited energy in such of interaction. For this reason, its design must be optimized to achieve a compromise between a good energy resolution and a good transverse spatial resolution. Both properties of the detector performance depend, among other things, on the pixel size inside the sensitive area of the anode. A reduction on the pixel size is good to reach a high spatial resolution but at the expense of worsening the energy resolution, since charge sharing effects due to diffusion become more important. Moreover, some technical constraints should be taken into account when selecting the pixels size. The total number of pixels is limited by the number of tracks that can be connected. The granularity is therefore limited by the minimum distance between PCB (Printed Circuit Board) tracks. In the case of XEMIS1, the anode is segmented in 64 pixels or pads of 3.1 x 3.1 mm 2 that gives an active area of 2.5 x 2.5 cm 2 . A frontal view of of the anode where the 64 pixels can be identified is presented in Figure 3.5.
Besides the relevance of the anode in terms of energy and spatial resolutions, its mechanical design and the different materials that compose it are also of primary importance. The anode in XEMIS1 is not only used to collected the ionization signal but it also works as the entrance window for the incoming radiation, being responsible for the insulation between LXe and vacuum. This implies that the mechanical structure of the anode has to support both temperatures close to the LXe temperature and pressure differences of the order of 1 to 2 bar without any important deformation or delamination. The anode has therefore represented an important challenge in the development of XEMIS1. The anode consists of a multilayer structure composed of seven main layers with a total thickness of 2.6 mm. Figure 3.6 shows a schematic design of a transversal cut of the anode. It has four copper layers called Top, Layer2, Layer3 and Bottom. The Top layer, which is directly in contact with the LXe, has a thickness of 40 µm and is responsible for charge collection. To isolate the Top layer from the next copper surface, Layer2, three sheets of prepreg with a total thickness of 147 µm are used. Layer2 has a total thickness of 46 µm. The prepreg, abbreviation of preimpregnated, refers to a fiberglass reinforced substrate which has been preimpregnated with a resin system such as epoxy. The prepeg coat serves also as bonding between the two copper electrodes to form an integrated structure. Equally, between Layer2 and Layer3, there is a 2 mm thick dielectric coating based on alternate layers of Rogers RO4350B ceramic laminates and prepreg. Both ceramics and prepreg present good mechanical resistance and they are good insulators. Moreover, both materials have a thermal expansion coefficient similar to that of the stainless steel and copper, which is important to provide good dimensional stability when the system cools down to the LXe temperature. Layer3 is finally separated from the last copper foil, Bottom, through a prepeg coating of 147 µm in thickness. Layer3 and Bottom are, respectively, 46 µm and 40 µm thick. Figure 3.7 shows the four copper layers that make up the anode.
In this multilayer structure the copper layers serve as the structural units while the prepreg and ceramics provide dielectric insulation between adjacent layers of copper, in addition to minimize electrical signal loss, reduce the crosstalk between conductive layers and reduce the electronic noise. The electrical connection between the copper layers through the dielectrics is made via conductive plated-through holes created by either laser or mechanical drilled. In Figure 3.6 the pathway via are also illustrated. The signal induced in a pixel is transmitted from the Top layer to the Bottom one through this plated-through via. Due to the anode is used as the entrance window for the incoming radiation in the TPC and thus, it is in direct contact with the LXe, to prevent a LXe leak from the TPC to the vacuum container, the different layers are not communicated with one another through the same plated hole. Instead, the layers are communicated through a series of L-shaped drilled holes as depicted in Figure 3.6. Each pixel of the anode has its own pattern of plated holes between the different layers to collect the ionization signal. The design is made to minimize and equalize the length of the conductive tracks between the copper layer for all the pixels, with the aim of minimizing the electronic noise. In the Bottom layer, pixels are wire bonded directly to the ASIC electronics through two standard 32 channels vertical mini edge card connectors.
Cryogenics Infrastructure
During operation, XEMIS1 runs at a stable temperature of 171 K and a pressure of 1.25 bar. Pressure and temperature stability are crucial to maintain the LXe at a steady state during long data-taking periods, since both ionization and scintillation yields vary with temperature. For this reason, an advanced and reliable cryogenic system is required. A schematic diagram of the XEMIS1 LXe cryogenic installation is shown in Figure 3.8. The cryogenic system consists of: a double-walled stainless steel vessels that host the TPC, an external double-walled vacuum-insulated stainless steel vessels used for thermal insulation, and a cooling tower. A Pulse Tube Refrigerator (PTR) especially developed for LXe applications is used to liquefy the xenon and maintain the temperature of the system constant. A gas-phase purification system is used to achieve high purity levels of the LXe, which is essential to measure the ionization signal. This purification system requires the continuous circulation of the xenon to evaporate it and re-condense it after purification. An effective heat transfer between the boiling and condensing xenon during circulation is achieved by means of a coaxial heat exchanger. Because of the narrow temperature margin between xenon normal boiling point (165 K) and the triple point (161 K) (see Figure 6.26), a set of security systems and a rescue tank have been introduced in order to recuperate the xenon in case of necessity. In addition, a Slow Control System allows a continuous control and monitoring of the cryogenic infrastructure.
Xenon Cooling System
The main part of the cooling system of XEMIS1, responsible of liquefying and keeping the xenon at constant temperature, is based on a Iwatani PC150 PTR specially developed and optimized for LXe cryogenics. The PTR was originally designed at KEK (The High Energy Accelerator Research Organization in Japan) and it provides stable cooling power up to 200 W at 165 K [START_REF] Haruyama | High-power pulse tube cryocooler for liquid xenon particle detectors[END_REF][START_REF] Haruyama | Development of a high-power coaxial pulse tube refrigerator for a liquid xenon calorimeter[END_REF]. The maximum cooling power is achieved when the PTR is connected to a water-cooled helium compressor (CryoMini Compressor OW404 commercialized by Iwatani / CW701) with nominal input power of 6.5 kW. The PTR provides the necessary cooling power to compensate the expected heat load of 80 W leaked into the detector due to thermal losses. In Figure 3.9, a picture of the Iwatani PC150 PTR and the available cooling power as a function of the cold head temperature are shown. As we can deduce from the plot, a cooling power of 200 W is required to maintain the temperature of the cold head at 165 K. In XEMIS1, the liquefaction process does not take place in the cryostat itself but in a separate vessel called the cooling tower. The cooling tower is located about 50 cm above the detector volume and it consists of a double-walled vacuum-insulated container made of stainless steel. The PTR is placed on top of the cooling tower and outside the vacuum vessel. The cooling power is then transferred to the system via the cold head of the PTR which is integrated inside the vacuum enclosure. The distance between the cryostat and the PTR is extended to ∼ 2 m to reduce mechanical noise [START_REF] Chen | Improvement of xenon purification system using a combination of a pulse tube refrigerator and a coaxial heat exchanger[END_REF]. A cross-section of the cooling tower is shown in Figure 3.10. The vacuum insulation of the cold part of the cryogenic system is important to reduce the heat transfer between the outside and the LXe and hence, to maintain the required temperature constant with a minimum cooling power.
The PTR and the cold head are not in direct contact with the xenon but they are coupled to a copper plate known as cold finger, which is the actual responsible of the gas xenon liquefaction (see Figure 3.10). The coupling is made by a set of indium foils for optimal heat transfer. The PTR is then connected to the cryostat via a double-walled tube. This tube allows the gas xenon from the upper part of the cryostat to pass through the cooling tower for liquefaction. If the temperature is low enough, xenon condenses on the cold finger. Afterwards, the drops of liquefied xenon are recuperated in a stainless steel funnel and guided into the tube to return into the cryostat. The cold head of the PTR is also equipped with an ohmic resistor acting as a heater to compensate any excess of cooling power of the PTR. The cold head of the PTR, heater and all temperature sensors are within vacuum insulation and cover by a Multilayer insulation (MLI) in order to avoid radiative heat transfer. The separation of the PTR from the xenon via the cold finger was first implemented for the XENON10 experiment [START_REF] Haruyama | Design and Performance of the XENON10 Dark Matter Experiment[END_REF]. This design reduces the number of impurities released into to the LXe and facilitates the replacement of the PTR without exposing the inner vessel into the air since the cold tower is hermetically closed by the cold finger. In order to control and monitor the overall temperature of the system, a set of 7 pt100 sensors are placed to control the temperature during all cryogenic process. These sensors measure for example, the temperature of the cold head and the cold finger. The cooling power of the PTR can be adjusted by fixing the temperature of the cold head. The desired temperature of the LXe can be set and maintain constant by controlling the power supplied to the heater placed on the cold head using a Proportional-Integral-Derivative (PID). This PID regulator is necessary due to the only way to control the temperature of the cold head is in fact by means of the heater. This means that if the cooling power provided by the PTR to liquefy the xenon is higher than the actual power required to maintain the xenon in liquid state, the temperature of the cold head would reach the freezing point and the xenon would start to freeze. Therefore, the PID can regulate the heater power to maintain the temperature of the cold head on the requested value. The heater has a maximum capacity of 100 W, which is enough to counteract the cooling power at all times.
Internal Cryostat and Vacuum Enclosure
Another important part of the cryogenic system is the cryostat itself. The TPC is placed inside a double-walled stainless steal vessel for thermal insulation. The inner vessel is 20 cm in diameter and 35.4 cm high. It contains around 36.5 kg of LXe and its design was made in order to host the TPC and necessary cables with the minimum extra amount of liquid. Due to thermal exchange by convection, the inner container is maintained at LXe temperature. Additionally, in order to maintain the xenon in liquid state and to avoid the possible evaporation of part of the LXe, the thermal exchanges between the cryostat and the outside are minimized. Both design and materials of the cryostat were optimized to reduce the thermal losses by conduction. The number of tubes that connect the cryostat with the rest of the system are limited. Moreover, all tubes are made of stainless steel which is a bad heat conductor. The cryostat is mounted on a support made of epoxy with a reduced surface to minimize the contact with the outer wall.
To ensure good thermal insulation the inner vessel is placed inside a vacuum container that consist of a double-walled stainless steal vessel. This outer vessel reduces the heat flow into the detector and thus, reduces the required cooling power. Due to the large volume of the whole system, the containers are first evacuated using a set of two primary vacuum pumps to reduce the pressure down to 10 -2 bar. Then and in parallel to the primary pumps, two turbo-pumps are used to achieve a vacuum level of the order of 10 -6 to 10 -8 bar. During normal operation, only a primary vacuum pump and a turbo-pump are continuously running to maintain the pressure level. In addition to the vacuum insulation, multiple layers of aluminized Mylar foil (MLI) are applied between the inner vessel and the vacuum container and also around all tubes and connexion wires to avoid radiative heat transfer. Figure 3.11 shows an internal view of the vacuum vessel that enclosures the cryostat. The system can be optimized by covering the outer flange with MLI. In this way, we can reduce the radiative heat transfer from the outer vessel to the LXe.
Precooling and Liquefaction Procedures
In addition to the xenon liquefaction, the PTR has the task of lowering the temperature inside the cryostat chamber. The goal of this initial phase, known as precooling, is to reduce the temperature of the cold head and the entire system down to around 165 K in order to start the liquefaction process.
Before starting the precooling the required vacuum level is achieved. Afterwards, 2 bar of xenon are injected into the cryostat. This pressure is enough to reduce the temperature of the whole system without damaging any part of the detector, such as the PMT or the cryostat, that can withstand pressure up to ∼2 bar. Once the gaseous xenon is inside the inner vessel, the PTR is turned on and the temperature of the cold head is set to 165 K. Figure 3.12 shows the temperature variation of the cold finger during the precooling. As we can see, the temperature decreases continuously from room temperature to around 165 K in about 1 hour. At this point the temperature of the cold finger becomes stable, which means that the temperature of the cold head of the PTR is low enough so the xenon starts to condensate in the surface of the cold finger. The drops of liquefied xenon are recuperated by the stainless steel funnel placed below the cold finger which is still warm. This implies that when the liquefied xenon touches the funnel, it will evaporate cooling the funnel progressively. Thanks to these condensation-evaporation cycles of the 2 bar of injected xenon, the temperature of the funnel and the rest of component of the detector including the tubes that connect the cold finger with the cryostat and the cryostat itself, falls to 165 K in less than 24 hours. The time required to cool the whole system depends on the quality of the insulation. Figures 3.13 and 3.14 shows both temperature and pressure profiles during the precooling until achieve stability. The pressure inside the chamber decreases gradually to 0.9 bar. The pressure fluctuations observed at the beginning of the precooling are the result of liquid drops that fall and touch the bottom of the cryostat which is still warm. When this happens, the pressure rises rapidly. However, these pressure increases are far from the pressure security value of 1.8 bar, and thus they are harmless to the detector. The precooling stage can be accelerated by injecting more xenon in gas state once the heater starts to compensate the extra cooling power of the PTR. The cooling power required to liquefy the xenon will be reduced if the temperature of the system is low enough before starting the xenon injection. Once the cryogenic system is cooled down sufficiently, we can start filling the inner vessel with GXe. The gas injection should be made progressively to avoid an excess of pressure inside the chamber that could damage the detector. The gas flow depends on the pressure difference between the storage bottle and the system. When the pressure inside the bottle decreases, the flow rate into the chamber rises, increasing consequently the pressure inside the cryostat. For this reason, to compensate the pressure variations during gas injection a Mass Flow Controller (MFC) is placed between the injection valve, that regulates the gas flow that gets out the bottle, and the cryostat. The gas flow is set to 5 l/min during all liquefaction process. Since the temperature inside the detector is low enough, the gas is immediately condensed at the cold finger and the detector is completely filled with liquid in about 24 hours. The amount of LXe can be controlled by monitoring the mass of the storage bottle or by measuring the level of LXe inside the cryostat. The level of LXe is measured thanks to a liquid level meter placed at the bottom of the chamber (see Section 3.24). 
The xenon mass variation inside the storage bottle and the level of LXe inside the TPC are presented in Figure 3.15(a) and 3.15(b) respectively. In both cases, the amount of xenon varies progressively during the liquefaction process. The drastic variation on the level of LXe inside the chamber from zero to 2.8 cm is due to the position of the liquid level meter with respect to the bottom of the cryostat. In less than 1 day, the detector is full with around 20 cm of LXe. A reduction of about 2 cm in the level of LXe is produced when the circulation process starts, since a constant amount of LXe remains inside the connecting tubes during circulation. The cryogenic system is ready to start the LXe circulation and purification after 36 hours of cooling.
Xenon Purification and Recirculation Systems
The detector performances are limited by the level of purity of the LXe. If the purity is not good enough, the drifting electrons may be absorbed by electronegative impurities such as O 2 or N 2 before being collected by the anode. Moreover, the presence of water inside the detector, as main contaminant, may also degrade the scintillation signal. For this reason, in order to achieve and maintain an adequate level of purity, constant circulation of the xenon through a purification system is required. In XEMIS1, the purification system is based on two rare-gas purifiers connected in parallel, which consist of a high-temperature SAES MonoTorr Phase II getter, model PS4-MT3-R/N, based on zirconium (see Figure 3.16). This kind of purifier provides output impurity levels below 1 part per billion (ppb
) O 2 equivalent [185].
With such a purification system, the xenon must be in a gaseous state, so it has to be continuously evaporated and condensed by a re-circulation system. The circulation is driven by an oil-free membrane pump that creates the required pressure drop to pump up the xenon from the cryostat and send it to the getter (see Figure 3.17 The continuous phase changes from gas to liquid and vice versa, require a considerable amount of energy. For a flow rate of 3 liters per minute, the power required to decrease the temperature from 165 K to room temperature is about 13 W (specific heat of xenon is 0.34 kJ/kg • C at 1 bar). Additionally, another 28 W are needed for the phase change from GXe to LXe (latent heat of xenon 96.26 kJ/kg). This means that for a circulation rate of 3 liters per minute at least 41 W are required to re-liquefy the purified xenon. Without a heat exchanger incorporated to the set-up all this cooling power is provided by the PTR, which implies that the flow rate is limited by the available cooling power. For circulation rates higher that 15 liters per minute, even the maximum cooling power delivered by the PTR would not be enough to liquefy the xenon. In fact, if we also take into account the thermal load leaked into the detector, a maximum circulation rate of the order of 3.8 liters per minute is achieved without heat exchanger. With a gas flow of 3.8 liters per minute only ∼38 % of the total volume of LXe is purified per cycle [START_REF] Oger | Dévelopment expérimental d'un télescope Compton au xénon liquide pour l'imagerie médicale fonctionnelle[END_REF].
In order to mitigate the heat load required by these phase changes and to achieve an effective thermal transfer between the boiling and condensing xenon, a coaxial heat exchanger has been installed in XEMIS1. It consists of a stainless steel 22.5 cm long hollow tube with a diameter of 48.26 mm, which has been bent to form a ∼1 m height cylindrical shape. The XEMIS1 heat exchanger is presented in Figure 3.18(a). The concentric tube is placed inside a vacuum enclosure for thermal insulation as shown in Figure 3.18(b). To optimize the heat transfer between the evaporating liquid stream and condensing gas stream, the gaseous xenon that returns into the detector enters the heat exchanger through a thinner tube of 17.08 mm in diameter. This tube is contained inside the main tube on the side of the exchanger that collects the GXe after purification, known as tube side (see Figure 3.8). Outside the thin tube, the xenon remains in gas state, so the heat transfer to re-condense the xenon is made around this thiner tube. The heat exchanger is placed ∼50 cm above the cryostat and coupled to the PTR to tackle space constraints and also to insulate the PTR through the heat exchanger. During normal operation, the xenon is pumped out the cryostat thanks to the circulation pump and passes through the heat exchanger where it is evaporated. Afterwards, the gaseous xenon flows through the getter for purification. The gas flow can be measured by a MFC and controlled manually by a valve placed between the outlet of the heat exchanger and the circulation pump. After purification, the xenon returns into the detector volume through the tube side of the heat exchanger where it is re-condensed. The maximum circulation flow rate achieved with XEMIS1 is 32 liters per minute, which implies that almost the total volume of xenon can be purified continuously. An schematic diagram of the re-circulation process is also illustrated in Figure 3.8. The integration of a coaxial heat exchanger in the circulation system with respect to a previous prototype based on a parallel-plate heat exchanger [START_REF] Oger | Dévelopment expérimental d'un télescope Compton au xénon liquide pour l'imagerie médicale fonctionnelle[END_REF], has led to a great improvement not only on the performances of XEMIS1 but also on the development of a bigger scale LXe Compton telescope prototype such as XEMIS2 [START_REF] Oger | Dévelopment expérimental d'un télescope Compton au xénon liquide pour l'imagerie médicale fonctionnelle[END_REF].
In terms of operational effectiveness, the heat exchanger was designed in order to achieve an efficiency of 95% for a re-circulation rate up to 50 NL/min (4.92 g/s). This efficiency was estimated for a pressure difference of 0.5 bar between both sides of the heat exchanger and a temperature difference of 6 K at the cold end. At detailed study of the performances of the heat exchanger used in XEMIS1 as a function of the gas flow has been performed by Chen et al. [START_REF] Chen | Improvement of xenon purification system using a combination of a pulse tube refrigerator and a coaxial heat exchanger[END_REF]. As we can see in Figure 3.19(a), an efficiency of the order of 99%, better than the expected value, is achieved even at high gas flows. This means that during circulation less than half of the available cooling power is used to compensate heat leaking into the detector (about 80 W of thermal losses has been estimated for XEMIS1), since heat exchange with an efficiency of the order of 99% up to a flow rate of 32 liters per minute is possible. With the XEMIS1 re-circulation and purification systems, we achieve electron drift lengths greater than 1 m after 1 week of circulation. Figure 3.19(b) shows the estimated cooling power required to keep stable the pressure inside the cryostat as a function of the gas flow when using the coaxial heat exchanger. As we can observe thanks to heat exchanger the required cooling power is almost the same regardless the circulation rate, except for very low deliveries. We achieve good stability levels in both pressure and temperature during long data-taking periods.
Xenon Storage and Recuperation
Whenever the detector is not running, the xenon is stored in gaseous state in a stainless steel bottle where the xenon is kept at a pressure of 60 bar. Figure 3.22 shows the high pressure storage bottle and the connexion to the injection valve. During operation the bottle is isolated from the rest of the set-up through this valve. A pressure regulator is used to monitor the pressure inside the bottle. A second empty bottle is also installed in case the xenon has to be immediately recuperated during operation. When the cryostat needs to be emptied, the xenon is recuperated during a process known as cryopumping. This process is based on cooling down the temperature of the bottle that contains the xenon when the set-up is not operational, so the pressure difference between the detector and the storage bottle will force the gaseous xenon to flow from the detector to the bottle. For recuperation, a dewar that surrounds the storage bottle is filled with liquid nitrogen, so the container is cooled down to a temperature of around 77 K. The temperature of the bottle is therefore lower than the xenon freezing point (161 K). The valve that separates the bottle from the system is opened, so GXe from the detector volume flows into the cool bottle and it immediately sublimates on its walls. Since all the gas freezes, the pressure in the bottle remains low and the xenon will keep flowing to the storage vessel. During the cryopumping the PTR is turned off. A second heater also based on an ohmic resistance is located at the bottom of the cryostat to help the xenon evaporation and hence to increase the recuperation rate. The gas flow is limited by the evaporation of the xenon inside the detector, which avoids the xenon to freeze inside the detector. In addition, the primary vacuum pump and the turbo-pump are switched off preserving a static vacuum inside the chamber. After all xenon is recuperated, the injection valve is closed and the bottle starts to heat up. Once the temperature of the bottle drops to room temperature, the xenon will remain inside the storage container in gaseous form. The complete cryopumping procedure takes about 6 hours (see Figure 3.23). The temperature of the outer and inner vessels rises above 0 • C in around 12 hours. At this moment, the chamber can be opened ensuring that no ice is formed on the interior walls.
Slow Control System
A Slow Control System (SCS) was developed using LabView to constantly monitor the essential values of the experimental set-up such as temperature, pressure, LXe level, gas flow rate and heater power. Figure 3.24 shows a screen shot of the slow control system of XEMIS1. A total of 26 parameters are monitored and continuously saved to a file every 1 to 5 seconds. The continuous data acquisition allows the user to access the stored information at any particular moment. In addition, the SCS allows access to all the variables monitored over the last 24 hours and provides real-time plots of the different parameters. The SCS also includes an alarm system that sends a text message when the temperature or pressure inside the cryostat approach a critical value. The alarm is also sent whether the vacuum inside the chamber is altered. Pressure monitoring inside the system is of crucial importance, not only for operation but also in terms of safety. The PMT and the anode support pressures up to 2 bar. Above this value, permanent damages can be inflected to the detectors. Equally, pressures higher than 3 bar can damage the membrane of the circulation pump. The pressure is controlled inside the chamber, in the purification and circulation systems and also in the storage bottle that collects the GXe when the detectors is not in use. These are measured by an absolute pressure sensor with a pressure range of 0 -3 bar. The pressure measurement is converted to an electric signal with typical output of 4 -20 mA. The signal is transmitted to a programmable logic controller and then sent to the computer for monitoring and registering.
Temperature at the main parts of the experimental set-up is controlled by 10 pt100 temperature sensors. We can highlight cold head and cold finger temperature measurements, indispensable during the precooling and liquefaction process, as well as a set of pt100 sensors to control the temperature at heat exchanger inlet and outlet, and at the bottom of the cryostat. Temperature control is important due to the narrow margin between the xenon boiling point and triple point. The freezing of xenon inside the chamber can reduce the performances of the detector and even cause irreversible damages. The level of LXe inside the cryostat is measured by a capacitive sensor manufactured by American Magnetics Inc. The sensor is place around 2 cm above the bottom of the cryostat, which means that no information of the level of LXe inside the chamber are available below this value. Alternatively, the amount of xenon inside the chamber can be controlled by a balance placed under the storage bottle. The difference between the weight of the bottle before and after the injection of GXe into the chamber provides information of the volume of xenon inside the cryostat. The gas flow is monitored during both the injection phase and re-circulation process by a MFC from Bronkhorst High-Tech. Finally, the vacuum insulation is controlled by means of a standard vacuum gauge.
Safety Systems
During a data-taking period XEMIS1 runs continuously for several weeks or even months, which means that the detector should be able to work in the absence of lab personnel. In case of cooling power failure the system should be ready to recuperate immediately the xenon to protect the workers and the detectors, and also to avoid a potential loss of xenon. If the PTR or any component of the cryogenic system stop running due to, for example, a power outage, the xenon will start to warm up and the pressure inside the detector will start to increase. To limit the pressure inside the chamber a pneumatic valve shown in Figure 3.25(a) is included in the set-up as a safety device. The valve opens at a pressure above 1.8 bar, so the detector is never exposed to pressure levels that could damage the different components of the TPC. When the pneumatic valve opens, the xenon is sent to a 4m 3 rescue tank (see Figure 3.26). Once the pressure inside the chamber falls below 1.8 bar, the valve closes off. Since this pneumatic valve requires power supply, after an electrical failure the valve will not work. For this reason, a second safety device based on a burst disk is also included (see Figure 3.25(b)). The goal of this disk is exactly the same as the valve. It opens if the pressure inside the cryostat exceeds 2 bar. However, once the burst disk is opened all xenon is recuperated in the rescue tank. The rescue tank can host 35 kg of xenon is gaseous state at a pressure of 1.6 bar. The tank is maintained at a vacuum level of the order of 10 -2 bar thanks to its own primary vacuum pump. To prevent freezing due to an important pressure drop, a one-way valve is located before the tank inlet. Moreover, to overcome an electricity cut, two backup power supply systems are incorporated to the set-up: an Uninterruptible Power Supply (UPS) and a power generator. The UPS provides emergency power supply with a typical autonomy of 15 minutes after an electrical failure. In addition, the UPS provides near-instantaneous protection from input power interruptions. Immediately after the UPS starts, the generator starts up providing power supply to the detector during a maximum of 12 hours. After a power failure the system is allowed to warm up for at least 2 hours before the pressure achieves a critical value. The laboratory personnel is notified by the slow control alarm system, which means that 12 hours are enough to restore power or to perform the cryopumping procedure to recuperate the xenon.
XEMIS2: A small animal imaging LXe detector
Detector description
A small animal imaging LXe Compton telescope called XEMIS2 is currently under development as a prototype for quantifying the potential benefit of the 3γ imaging technique in terms of injected dose. The design of the system is based on a cylindrical camera filled with LXe that completely covers the whole animal. This particular geometry maximizes the FOV and thus, increases the sensitivity. A general view of the experimental set-up of XEMIS2 is presented in XEMIS2 consists of a cylindrical TPC full of LXe with a total drift volume of 26.7 x 26.7 x 24 mm 3 . Figure 3.29 illustrates a design of the active zone of XEMIS2. At detection level, the camera is composed by two identical TPC arrays placed back to back and separated by a shared cathode. The active zone of the detector is a cylinder of 7 cm of inner radius, 19 cm of outer radius and 24 cm of total depth (12 cm for each TPC array). The active volume is completely covered by 380 Hamamatsu PMTs to detect the VUV scintillation photons (178 nm) generated after the interaction of a γ-ray with the LXe. Two circular segmented anodes are located at the edges of the active zone. A total amount of 24000 pixels of 3.125 x 3.125 mm 2 size are used to collect the ionization signal. A mesh used as a Frisch grid is placed above each anode. To provide a homogeneous electric field between the cathode and the anode, a set of 104 field rings is located around both sides of each TPC. A schematic diagram of the active zone of XEMIS2 is illustrated in Figure 3.30. For simplicity, the design only shows the upper half of one of the TPCs placed at the right side on the cathode. The other half of the TPC is completely symmetrical. All the dimension are expressed in mm. Each TPC has a drift length of 12 cm between the cathode and the grid. This length is the same as the one used in XEMIS1, and its value was chosen according to the requirements exposed in Section 3.1.1. A gap between the grid and the anode of the order of 100 µm has been chosen in order to reduced the effects of charge sharing. The TPC is divided in two by a hollow 2.5 mm thick tube made of Aluminium. The tube is place outside the chamber and it is in direct contact with the air. In Figure 3.30 only half of the tube is represented. The dimensions of the tube are 100 mm of diameter and a total length of 875 mm. The tube crosses the chamber side to side. The purpose of this tube is to hold the small animal during the medical exam. For this reason, to insulate the animal from the internal temperature of the cryostat and maintain it at room temperature, the tube is separated from the stainless steel inner vessel by a 7.5 cm thick vacuum insulation. The inner container has a thickness of 1.5 mm that separates the LXe from the vacuum. A 7 cm thick Teflon support is added around the inner radius of the stainless steel vessel, that serves both as insulation and to increase the light collection by the PMTs. The tracks are directly printed in a kapton support which serves as insulation. These two sets of electrodes are necessary to ensure a homogeneous electric field along the chamber. The sensitive volume of XEMIS2 is then defined by the drift length between the cathode and the grid, and by the intersection area of the segmented anode, which is in total of de order of 24 x 24 x 24 cm 3 .
In the same way as in XEMIS1, to ensure a uniform electric field inside the drift volume, two resistive divider chains (one for each set of electric field rings) immersed in the LXe are used to drive the potential from the cathode to the first field ring. Since in XEMIS2 there are two TPC, each of them can be treated independently from one another. Each resistive divider chain is based on 500 MΩ resistors. The last resistor of each chain is connected to ground, whereas the first one is connected to the rim of the high-voltage electrode, which is common for the two detectors. The first copper field ring and the Frisch grid of each TPC are biased through two independent power supplies. Each anode is connected to ground via the front-end electronics. As in XEMIS1, to read-out the collected charge by the pixels of the anode, we use an ultra-low noise front-end electronics based on an IDeF-X HD-LXe 32 channels chips [START_REF] Lemaire | Developement of a Readout Electronic for the Measurement of Ionization in Liquid Xenon Compton Telescope Containing Micro-patterns[END_REF]. Each ASIC is connected to 32 pixels, which generates 32 analog signals. To reduce the volume of data produced by 24000 pixels, each IDeF-X is coupled to a new front-end electronics called XTRACT that extracts from each detected signal that crosses a given threshold, the amplitude, time and pixel address. The performances of this ASIC are presented in Section 4.6. The electronics are placed inside a vacuum container allowing a reduction of the electronic noise.
The assemblage and the materials of the different components of the TPCs follows the same principle as the one presented for XEMIS. The device is designed in such a way that the total TPC constituted by the common cathode and the field rings can be mounted independently of the PMTs, which allows a correct adjustment of all the components. The total structure is then assemble together inside a stainless steel cylindrical cryostat. This inner cryostat is located in a vacuum chamber to reduce the convection heat transfer.
Light detection system
In order to detect the scintillation light in XEMIS2, we use a set of 380 VUV-sensitive Hamamatsu R7600-06MOD-ASSY PMT especially developed to work at LXe temperature. The main characteristics of this kind of PMT are reported in Section 3.1.2. The PMTs are located all around the active zone of XEMIS2 and mounted on a stainless steal structure. A picture of the mounting bracket is shown in Figure 3.32. The structure is directly immersed in the LXe. Each PMT has a quartz surface of 26.8 x 26.8 mm 2 with an active area of 18 x 18 mm 2 . A distance of 3 mm between PMTs is set by the separation structures, that constitutes a dead zone between PMTs of the order of 20.6 mm. The PMTs surface is place at around 31 mm from the LXe and 22 mm from the external part of the ring shaping rings. To protect the PMTs from the hight voltage applied to the cathode and the copper electrodes, a screening mesh is installed 7 mm below the PMTs surface and 1.5 cm above the electric field rings. The screening grids is based on a copper mesh with a 6.3 mm pitch and an optical transparency of 89 %.
The design of the XEMIS2 has been optimized to increase the light detection with respect to XEMIS1. Every interaction inside the active zone of the detector will produce a certain number of VUV scintillation photons depending on the energy of the γ-ray and the applied electric field. Due to the full coverage of the chamber by photodetectors, we ensure that the produced light will be collected. The scintillation signal will be exclusively used to measure the time of interaction t 0 of the 3γ-ray interaction, which allows to determine the z-position, and not for spectroscopy purposes. In addition, the light detection can be useful to reduce the fraction of pile-up events.
Charge collection system
To collect the ionization signal produced after the passage of an ionizing particle through the LXe, XEMIS2 includes: two meshes used as a Frisch grid, two segmented anodes divided in 12000 pixels each, a shared cathode and the data acquisition electronics.
The cathode is based on a 2 mm thick stainless steel plane electrode (see Figure 3.33). The cathode is completely opaque for the passage of the UV photons, so for a given interaction the produced light is not divided between the tow TPCs. The cathode has a circular shape with an inner radius of 6.3 cm and an outer radius of 20.3 cm, allowing the aluminium tube that holds the small animal to pass through. The cathode is placed in the middle of the two TPC a 12 cm from each anode. Under standard experimental conditions, the cathode will be biased at a high voltages of 25 kV for a drift field of 2 kV/cm. The generated charges inside the detector are collected by two segmented anodes place diametrically opposed. Based on the results obtained with XEMIS1, the pixel size has been optimized to achieve good energy and spatial resolutions, withstanding the space limitations imposed by the electronic connectors. Each anode is divided in 12000 pixels of 3.1 x 3.1 mm 2 that gives an active area of 24 x 24 cm 2 . A frontal view of of the design of one of the anodes is shown in Figure 3.34. The internal structure of the anode is the same as the one reported in Section 3.1.3. The anode has a circular shape with an inner diameter of 135 mm and an outer diameter of 424 mm. The internal and external parts of the anode are based on a copper circular-shaping ring with a thickness of 5.3 mm and 10 mm respectively. Between these copper rims and the first pixels there is a 1 mm thick insulating layers. Due to the circular shape of the anode, a set of smaller pixels is placed around the inner and outer parts of the anode to fill up with electrodes the entire structure. Both internal and external copper rings are connected to a high voltage supply. obtain the charge and timing information of each signals that crosses a certain threshold level. Each set of eight XTRACTs is read-out by a common PU card. This new connexion system significantly reduces the amount of cables that go out from the chamber to the outside, reducing the risk of leak between the vacuum and the air.
Electronics cooling system
On low temperature detectors, a complete knowledge of the heat transfer in the detector is necessary in order to reduce both the power consumption and the possible operational problems caused by liquid variations inside the active volume (see Section 6.4.1). The conduction of heat from the outside surfaces to the LXe imply a major source of heat load. In XEMIS2 the large number of read-out electronics will cause an important heat flow from the electronics towards the inner vessel.
The electronic system used to collect the ionization signal require ∼100 mW per channel, which leads to a heat dissipation problem. In order to minimize the heat load from the front-end electronics to the LXe, two options have been proposed: a cooling system and the installation of the electronics directly inside the LXe. Figure 3. 35(a) shows the mechanical design of the cooling system. It is based on an coiled stainless steel pipe design to circulate LXe at a temperature of 168 K and a pressure of 1.2 bar. The LXe is directly extracted from the detector cryostat and evacuated to the surface of the chamber. The tube is placed on a copper structure. Since the cooling is conductive, the heat sink is placed in direct contact with the electronics to remove the dissipated heat. On the other hand, the mechanical design of the chamber with the electronics inside the LXe is depicted in Figure 3.35(b). Only the IDeF-X LXe front-end electronics would be placed inside the liquid, whereas the XTRAC and Pu boards would remain in the vacuum. This option provides better temperature stability, but on the other hand, implies more technological difficulties, especially in the interface vacuum-liquid. Please note that the goal is not to cool down the electronic components themselves, but to minimize the heat flow towards the liquid.
Cryogenics Infrastructure. ReStoX: Recovery and Storage of Xenon
As discussed in Section 3.1.4, during operation the LXe inside the chamber should be kept under stable temperature and pressure conditions during long data-taking periods. However, temperature and pressure stability becomes more and more difficult as the dimensions of the detector increases. Same cooling and storage systems as the one used in XEMIS1 are no longer a feasible option for XEMIS2, where the amount of LXe increases from 8 l to 70 l.
In this section, we introduce the new cryogenics infrastructure developed for XEMIS2. Figure 3.39 shows an schematic diagram of the XEMIS2 LXe cryogenic installation, that can be divided in three different sub-systems: the cryostat that encloses the TPC, the purification and recirculation systems and a new recovery and storage system called ReStoX (Reservoir Storage Xenon), which is also engaged in the pre-cooling and liquefaction processes.
Internal Cryostat and Vacuum Enclosure
The detector cryostat consists of a double-walled stainless steel vessels that host the TPC. This inner vessel is 60 cm in diameter by 35.4 cm long and it provides thermal insulation to the detection volume. The inside of the cryogenic chamber is illustrated in Figure 3.36. The camera holds 200 kg of LXe and its is placed inside a double-walled vacuum-insulated stainless steel vessels used to improve the thermal insulation.
The design of the chamber has been optimized in order to reduce the amount of liquid xenon, the power consumption and heat transfer into the detector. The vacuum enclosure limits the convective heat transport. To increase the shielding, the vacuum envelope is based on a cylindrical shell 5 mm thick with a diameter of 80 cm and 87.5 cm of length. Under normal operating condition a vacuum level of the order of 10 -6 bar is continuously maintained. This outer vessel reduces the heat load into the detector and thus, reduces the required cooling power. The conduction of heat from the outside surfaces to the LXe can also occur through the components that supports the inner vessel, cables, detectors, etc. On the worst possible scenario, a thermal loss by conduction of ∼28 W has been estimated through the mechanical components of the chamber including, for example, the three bracket supports, the high voltage lines, the PMT cables and the two upper exit pipes. A small heat transfer of ∼10 W has been estimated from the 380 PMTs to the LXe.
To reduce the thermal radiation, the outside of the inner chamber is wrapped with MLI. Without any kind of insulation, a heat load of 156 W due to radiative losses has been calculated. Most of the thermal losses come from the stainless steel enclosure, whereas a negligible loss has been determined through the central tube. With an insulation consisting of 20 layers of MLI, a reduction of the radiative heat transfer by a factor of 11 has been estimated, which leads to a total radiative heat load of ∼7.5 W. This improvement against thermal radiation reduces the cooling power consumption and increases the circulation rate.
The cabling inside XEMIS2 is also a challenge compared to a small-dimension prototype. The 380 cables from the PMTs, each 1.5 m long, must be routed from the external vessel to the outside while avoiding any possible leak, in addition to the 98 cables from the front-end electronics, the high voltage lines and the calibration and monitoring wires from the slow control sensors. XEMIS2 has four side openings around the outer vessel, as shown in Figure 3.27. The two upper tubes have a diameter of 270 mm and are designed to route, on one side, the 380 PMT cables, and on the other side, the high voltage line. All these cables are extended towards the top of the cryostat through the exit pipes and pass through the cap of the outer vessel to the outside of the chamber. The lateral tubes are 200 mm extraction pipes. One of them is used to host the front-end electronics and to route the read-out signal cables from the two segmented anodes towards the outside, where the data acquisition systems and computers are located. Once all the cables are within their respective exit tubes, the tubes are pumped to remove the impurities due to cable outgassing. The other lateral exit tube is connected to the vacuum pump.
ReStoX: Recovery and Storage of Xenon
The cryogenic infrastructure reported in Section 3.1.4 for the small-dimension prototype XEMIS1 is no longer viable when a detector containing 200 kg of LXe is developed. The operating principle and purposes are the same, but dealing with a large amount of LXe requires a new technological concept to liquefy, store and transfer the xenon in an efficient and low-consumption way. Subatech, in collaboration with Air Liquide Advanced Technologies, has developed a sophisticated cryogenic storage and recovery system called ReStoX (Reservoir Storage Xenon) that brings together these three requirements. A storage system with similar characteristics has already been installed and tested at the Gran Sasso Underground Laboratory (LNGS) for the XENON dark matter experiment [188].
The design of this new cryogenic infrastructure has to comply with the criteria of a medical imaging facility. The detector should only be operational during medical exams or calibration tests. Therefore, when the detector is not working, the chamber is empty of LXe, which implies that a reliable and relatively fast recovery and transfer system is necessary. Unlike XEMIS1, where the xenon is kept in a gas bottle when the detector is not in use, storing 200 kg of xenon in the gas state would require at least 6 standard bottles (60 bar) at ambient temperature, in addition to the considerable time required to liquefy this large amount of xenon (see Section 3.1.4). For this reason, one of the main goals of ReStoX is to store the xenon already in the liquid state.
ReStoX is based on a double-walled insulated stainless steel tank with a total capacity of 280 l of xenon. For safety reasons, it has been designed in such a way that only 25 % of the total volume is in the liquid state, i.e. 70 l (203 kg) of LXe inside ReStoX, while the rest of the volume is occupied by xenon in the gas state. The inner vessel that holds the xenon is 2 cm thick and has a diameter of 20 cm and a length of 1 m. To protect and insulate the inner vessel from convective heat transfer, it is enclosed in a vacuum insulation container. The vacuum enclosure is a 300 l stainless steel cylinder with a wall thickness of 2 cm. The vacuum jacket is 25 cm thick and is filled with perlite insulation. Both stainless steel containers have a total weight of 440 kg.
Precooling and Liquefaction Procedures
To initially liquefy the xenon, ReStoX has a cooling system based on liquid nitrogen (LN2). Instead of using a PTR, the cooling system of XEMIS2 is based on a massive aluminium heat exchanger, which behaves as a thermal buffer. The heat exchanger is a cylinder with a cross section of 600 mm by 205 mm and contains 260 kg of aluminium. The condenser is located at the top of the reservoir and placed inside the inner vessel. The LN2 circulates in an open loop through the aluminium heat exchanger via a double stainless steel tube of 2 x 12 m length. This continuous circulation of LN2 progressively reduces the temperature of the aluminium block until it reaches a temperature slightly below the xenon liquefaction temperature. Since the thickness of the tube that holds the liquid nitrogen is small, after some time both surfaces reach the same temperature. Due to the high thermal inertia of aluminium, the condenser is capable of keeping the temperature of the system stable over long periods without any refrigeration. Moreover, since the inner vessel is also in contact with the aluminium condenser, its temperature is automatically reduced. Around 250-300 kg of liquid nitrogen are necessary to reduce the temperature of the heat exchanger, the stainless steel inner vessel and 2 bar of xenon from 300 K to 170 K.
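As a rough cross-check of this LN2 consumption, the short Python estimate below computes the heat that must be extracted from the 260 kg aluminium block alone when cooling it from 300 K to 170 K, and the corresponding LN2 mass if only the latent heat of vaporization is used. The material properties are standard literature values rather than numbers from the XEMIS design, the temperature dependence of the aluminium heat capacity is neglected, and the stainless steel vessel and the xenon gas are ignored, so this is only an order-of-magnitude sketch.

```python
# Order-of-magnitude estimate of the LN2 needed to cool the aluminium condenser.
# Material properties are standard literature values (assumptions), not XEMIS numbers.
m_al = 260.0          # kg of aluminium in the heat exchanger (from the text)
c_al = 0.90e3         # J/(kg K), specific heat of aluminium near room temperature
dT = 300.0 - 170.0    # K, temperature drop quoted in the text

q_al = m_al * c_al * dT      # heat to extract from the aluminium block [J]
L_n2 = 199e3                 # J/kg, latent heat of vaporization of LN2 (literature value)
m_n2 = q_al / L_n2           # LN2 mass if only the latent heat is used

print("Heat to remove from the Al block: %.0f MJ" % (q_al / 1e6))
print("LN2 needed (latent heat only):    %.0f kg" % m_n2)
```

Adding the contribution of the stainless steel inner vessel, the xenon gas and the sensible heat of the cold nitrogen vapour would bring this ∼150 kg estimate closer to the 250-300 kg quoted above.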
Figure 3.37 shows the liquid nitrogen container, which holds around 3500 kg of LN2. The liquid nitrogen is transferred to ReStoX through a 50 cm thick stainless steel tube, and the injection rate is regulated by a control valve. Once the liquid nitrogen has passed through the cooling loop, it is evacuated to the open air. Liquid nitrogen is not only used to initially liquefy the xenon, but also to keep the temperature and pressure of the system constant by continuously compensating the heat losses of the whole system. An increase of the LN2 flow reduces the system pressure, whereas no LN2 circulation implies a progressive increase of the pressure inside the system. Unlike XEMIS1, no heater is used to regulate the temperature inside the chamber. Under equilibrium conditions, the xenon is kept at a temperature of 172 K and a pressure of 1.5 bar inside ReStoX. As in XEMIS1, before sending the xenon into the cryostat, the temperature of the whole system must be reduced in order to prevent the pressure from rising above the safety value of 2 bar.

Figure 3.38 illustrates the GXe and LXe injection procedures from ReStoX to the detector cryostat. The GXe present at the surface of ReStoX is used to inject 2 bar of xenon inside the cryostat through a valve. When the chamber is cold enough, the filling of the detector vessel with LXe can start. A UHV manual valve directly connects the inner vessel with the chamber. Since the pressure inside ReStoX is 1.5 bar and the pressure of the connection tube is approximately ambient pressure (∼1 bar), this pressure difference makes the LXe flow naturally towards the cryostat when the valve is opened. The moment the valve is opened, the flow of liquid nitrogen is stopped; however, thanks to the high thermal inertia of the aluminium condenser, the inner pressure of ReStoX remains stable. During the injection, the level of xenon and the pressure inside the chamber start to increase. The steady state is reached when the pressure inside the cryostat is 1.2 bar, whereas the pressure inside ReStoX remains at 1.5 bar. The temperature inside the chamber is therefore ∼168 K. When the injection process is finished, a pressure difference of 300 mbar between the lowest point of ReStoX and the LXe surface inside XEMIS2 is reached. The same stainless steel tube is used to recover the xenon. This reversible transfer system allows the LXe to be driven back from the cryostat into ReStoX in about eight minutes in case of necessity. Moreover, the valve allows ReStoX to be completely separated from the rest of the system and used as a safe storage tank. To recover the LXe inside ReStoX, the temperature and pressure of the reservoir are reduced by increasing the flow of LN2. Due to the pressure difference between ReStoX and the cryostat, the LXe naturally returns to the storage tank. The circulation of LN2 continues until no liquid remains inside the chamber.
Xenon Purification and Recirculation Systems
To achieve the high LXe purity levels required for signal detection, the LXe has to be continuously purified during its storage, so that the detector cryostat can be filled with ultra-pure xenon at any moment. The xenon is continuously liquefied and purified thanks to a closed loop that includes the detector cryostat and the storage tank. The purification process is illustrated in Figure 3.39. The purification and re-circulation systems used in XEMIS2 are the same as those used in XEMIS1 (see Section 3.1.4). The LXe is extracted from the chamber and sent to the purification loop by a double oil-free pump [186]. The LXe enters a 30 m long coaxial heat exchanger placed at the top of the aluminium condenser. No thermal contact between the two heat exchangers is established. The coaxial heat exchanger is placed inside the vacuum enclosure so as not to disturb the heat transfer. Once in the coaxial heat exchanger, the LXe is evaporated and pumped through a SAES MonoTorr Phase II hot getter, model PS4-MT3-R/N [185]. The pure GXe returns to ReStoX through the coaxial heat exchanger, where it is cooled and re-condensed with a ∼99 % efficiency. A maximum re-circulation rate of 2.9 g/s is expected, limited by the pumps.
Under equilibrium conditions, the xenon inside the TPC is at a temperature of 168 K and a pressure of 1.2 bar. However, the thermal losses due to radiation and conduction increase the temperature of the liquid above the stationary regime. An estimation of the possible heat transfer in the three different sub-systems reveals that most of the heat load is due to the cryostat (44 W) and ReStoX (33 W), while only 15 W are lost through the purification system. The heat transfer inside the TPC causes the evaporation of part of the LXe stored inside the chamber. Part of the heat load is directly evacuated through ReStoX due to the temperature difference between the two systems. However, if the heat load is higher than the heat flow between the cryostat and ReStoX, the GXe accumulated at the surface of the chamber causes an increase of the pressure. To maintain the pressure at 1.2 bar, the GXe must be evacuated from the chamber. A pump is used to extract the GXe from the cryostat surface towards the purification loop. This evacuation process is illustrated in Figure 3.38. Furthermore, in the process, part of the LXe is also removed from the chamber. As the xenon evaporates due to the heat transfer, the liquid level decreases below the equilibrium level required to perform the medical exam. The amount of LXe lost has to be replaced by injecting more LXe from ReStoX into the detector.
Safety System and Slow Control
The narrow operational margin of 4 K (at 1 bar) requires continuous monitoring of the temperature and pressure inside both ReStoX and the detector. In addition, as discussed in Section 3.1.4, a set of safety systems has to be included in order to recover the xenon into the storage tank in case of emergency.
ReStoX is able to work even in case of a power failure. The continuous liquefaction of xenon is based on the circulation of LN2 through the aluminium heat exchanger. If the LN2 supply stops, ReStoX is designed to withstand about 70 bar of pressure at an ambient temperature of ∼30 °C. This means that the 200 kg of xenon can be safely stored in the gas state. Nevertheless, if the pressure inside ReStoX exceeds the design value, a burst disk is included that irreversibly opens and releases the xenon to the outside. To prevent the complete loss of all the xenon, another safety valve has been installed. This relief valve opens proportionally with increasing pressure, discharging a small quantity of xenon. The valve closes once the pressure falls below a certain resetting value.
Similarly, the pressure inside the cryostat should not exceed 2 bar to avoid damage to either the detector or the cryostat itself. To limit the pressure inside the chamber, the same safety system as the one installed in XEMIS1 is used. A pneumatic valve and two burst disks open if the pressure reaches 1.8 bar. If that happens, the xenon is evacuated to the outside. In the same way as for ReStoX, the total loss of xenon is prevented through the GXe injection pipe. A control valve closes the purification and re-circulation loop and regulates the passage of LXe towards ReStoX. The opening and closure of this valve are regulated by a slow control interface.
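The supervision logic behind these thresholds can be summarized by a very simple monitoring loop. The sketch below is illustrative pseudologic only, not the actual XEMIS slow-control software: the actuator interface, the polling period and the 0.1 bar hysteresis are hypothetical, and only the 1.8 bar trip level of the cryostat is taken from the text.

```python
import time

CRYOSTAT_TRIP_BAR = 1.8    # pneumatic-valve / burst-disk level quoted above
HYSTERESIS_BAR = 0.1       # hypothetical re-closing margin

def supervise(read_pressure, relief_valve, period_s=1.0):
    """Toy pressure supervision loop (illustrative only).

    `read_pressure` is a callable returning the cryostat pressure in bar and
    `relief_valve` a hypothetical actuator object with open()/close() methods.
    """
    venting = False
    while True:
        p = read_pressure()
        if p >= CRYOSTAT_TRIP_BAR and not venting:
            relief_valve.open()       # evacuate GXe before the burst disks act
            venting = True
        elif venting and p < CRYOSTAT_TRIP_BAR - HYSTERESIS_BAR:
            relief_valve.close()
            venting = False
        time.sleep(period_s)
```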
Conclusions of Chapter 3
In this chapter, a detailed description of the first prototype of a LXe Compton telescope, XEMIS1, has been given. This small-dimension prototype represents the experimental evidence of the feasibility of the 3γ imaging technique. XEMIS1 consists of a time projection chamber with a 2.5 × 2.5 × 12 cm³ active volume filled with LXe. The VUV scintillation photons generated after the interaction of a γ-ray with the LXe are detected by a PMT, which has been especially designed to work at liquid xenon temperature. The ionization signal, on the other hand, is collected by a 64-pixel segmented anode. A complete description of the internal structure of the anode has also been presented in this chapter. A homogeneous electric field between the cathode and the anode is established thanks to a set of 24 copper field rings connected through a resistive chain. The charge collection system of XEMIS1 is also equipped with a Frisch grid. The properties of the four different grids tested during this thesis are presented in this chapter. The advanced cryogenic system, which has contributed to a high liquid xenon purity with a very good stability, has been described in detail. The different cryogenic processes performed before any data-taking period, such as the liquefaction and purification of the xenon, have also been presented.
In the second part of this chapter, we have presented the characteristics of the second prototype, XEMIS2, designed to image small animals. The main properties and materials of both the detector and the cryostat have been described. This new prototype is a monolithic cylindrical liquid xenon camera, which totally surrounds the small animal. XEMIS2 holds around 200 kg of liquid xenon. The active volume of the detector is completely covered by 380 1" PMTs to detect the VUV scintillation photons, allowing a pre-localization for the detection of the ionization signal. The ionization signal is collected by two segmented anodes with a total of 24000 pixels. This innovative geometry will allow the simultaneous detection of the three γ-rays with a high sensitivity and a large Field-Of-View. Moreover, in order to manage such an important quantity of liquid xenon, an innovative high-pressure subsystem called ReStoX (Reservoir Storage Xenon) has been developed and successfully installed. ReStoX allows the xenon to be maintained in the liquid state at the desired temperature and pressure, distributed into the detector and also recovered in case of necessity. A brief overview of the possible thermal losses present in the whole system has been given. Special attention has been paid to the thermal load from the front-end electronics towards the LXe, which may cause certain operational problems (see Section 6.4.1).
This new prototype targets good quality images with only 20 kBq of injected activity, 100 times less activity than a conventional small-animal functional imaging exam.

The main goal of a LXe Compton telescope is to provide information on the 3D location of each individual interaction of an ionizing particle with the LXe, as well as a precise measurement of the ionization charge produced in such an interaction. Both the scintillation light and the electron-ion pairs produced by the incoming radiation provide relevant information on the interaction in the medium. For this reason, the data acquisition system should be capable of recording, for each interaction point, both the ionization and scintillation signals simultaneously and without any dead time. In addition, the 3γ imaging technique requires very good energy and spatial resolutions in order to triangulate the position of the source. Consequently, the charge and time determination on the detected signals must be optimized. In this chapter, we focus on the measurement of the ionization signal. The design and performance of the readout front-end electronics used in XEMIS1 are presented in Section 4.1.1. A detailed study of the electronics response is necessary in order to characterize the detector performances. The study of the influence of the charge-sensitive preamplifier and shaper on the shape of the output signals is reported in Section 4.3. Charge linearity and electronic noise contributions are also discussed in this section. Moreover, the optimization of the time and amplitude measurement of the ionization signals is performed thanks to a Monte Carlo simulation. The goal of this study is to develop an advanced acquisition system for the measurement of the ionization signal in XEMIS2 (see Section 4.4). A description of the main characteristics of this new analog ASIC is presented in Section 4.6.
General overview
Incident radiation in the detector induces a current pulse on each pixel of the anode that is commonly read out by means of a charge-sensitive preamplifier (CSA) [START_REF] Spieler | Radiation Detectors and Signal Processing[END_REF]. The preamplifier generates an output pulse whose amplitude is proportional to the integral of the induced current pulse on each pixel. Figure 4.1 shows a basic preamplifier circuit composed of a feedback capacitor C_f. The capacitor integrates the input current signal i(t) and generates a voltage step pulse at the CSA output:
$$V_{out}(t) = \frac{1}{C_f}\int i(t)\,dt$$
A reset system is incorporated into the CSA to discharge the feedback capacitance in order to avoid amplifier saturation. In the configuration shown in Figure 4.1, the reset is based on a feedback resistor, R_f, placed in parallel with the feedback capacitor C_f. The resistor continuously discharges the capacitor without degrading the signal-to-noise ratio and the linearity performance of the system. Ideally, the rise time of the output pulse generated by the preamplifier depends only on the charge collection time in the detector, and is independent of the characteristics of the preamplifier and the capacitance of the detector [START_REF] Knoll | Radiation Detection and Measurements[END_REF]. On the other hand, the decay time of the output pulse is determined by the time constant of the CSA: τ_p = R_f C_f. Therefore, the output signal is proportional to the total collected charge on a pixel, as long as the duration of the input pulse, t_c, is short compared with the time constant of the preamplifier (t_c << R_f C_f). Consequently, the choice of the feedback capacitor has an influence on the charge collection efficiency and on the shape of the output signals. Large feedback capacitors, i.e. long time constants, are required to achieve a high charge collection efficiency and also to decrease the electronic noise contribution of the CSA. However, if the charge collection time in the detector is smaller than the CSA time constant, consecutive arriving pulses may overlap at the CSA output. This pile-up effect becomes more important at high event rates.
The preamplifier is also a major source of electronic noise in the system. That is why it should be placed as close as possible to the anode to increase the SNR. A brief discussion of the different sources of electronic noise is presented in Section 4.2. A pulse-shaping amplifier or shaper is generally added to the signal processing chain in order to filter and shape the CSA output signals (see Figure 4.2). The shaper generally consists of a band-pass filter that limits the available bandwidth by attenuating the undesirable spectral noise components outside the frequency band of interest and thus increases the SNR. In addition, the shaper contributes to diminishing the pile-up effect by shortening the pulse decay time while preserving the maximum of the signal. In general, shapers are based on a combination of a first-order CR high-pass filter, also called differentiator, and an n-th order RC low-pass filter or integrator (CR-RC^n filter). For example, a second-order low-pass filter with equal differentiator and integrator time constants is used in the front-end electronics of XEMIS1, called IDeF-X (see Section 4.1.1). The resulting output signals generated by the shaper have a quasi-gaussian shape with a rise time that depends on the integration time of the filter, or peaking time τ_0, which is usually established by the time constant of the low-pass filter. In the following, the peaking time refers to the time between the moment the output signal reaches 5 % of its amplitude and the maximum. On the other hand, the decay time of the pulse is usually governed by the time constant of the CR filter, which is in general lower than the CSA time constant in order to reduce the pile-up effect. The choice of the time constants of the shaping circuit affects the maximum amplitude of the output signal. The loss of amplitude of the shaper output pulse compared to the CSA output signal is known as the ballistic deficit of the electronics, and it is discussed in more detail in Section 4.3.2.
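As an illustration of this shaping chain, the following Python sketch applies an idealized CR-RC² filter (a simple stand-in, not the actual IDeF-X transfer function) to a step-like CSA output and locates the quasi-gaussian maximum; the sampling step and the time constant are illustrative choices.

```python
import numpy as np

def rc_lowpass(x, dt, tau):
    """First-order RC low-pass (integrator) applied sample by sample."""
    y = np.zeros_like(x)
    beta = dt / (tau + dt)
    for k in range(1, len(x)):
        y[k] = y[k - 1] + beta * (x[k] - y[k - 1])
    return y

def cr_highpass(x, dt, tau):
    """First-order CR high-pass (differentiator) applied sample by sample."""
    y = np.zeros_like(x)
    alpha = tau / (tau + dt)
    for k in range(1, len(x)):
        y[k] = alpha * (y[k - 1] + x[k] - x[k - 1])
    return y

def crrc2_shaper(x, dt, tau):
    """CR-RC^2 shaper: one differentiator followed by two integrators."""
    return rc_lowpass(rc_lowpass(cr_highpass(x, dt, tau), dt, tau), dt, tau)

# Illustrative numbers only: a step-like CSA output sampled at 80 ns (12.5 MHz)
dt = 80e-9
t = np.arange(0, 20e-6, dt)
csa_out = np.where(t > 1e-6, 1.0, 0.0)        # ideal step of unit amplitude at 1 us
shaped = crrc2_shaper(csa_out, dt, tau=0.7e-6)

k_max = int(np.argmax(shaped))
print("quasi-gaussian maximum at t = %.2f us" % (t[k_max] * 1e6))
```

With equal time constants τ in the differentiator and the two integrators, the analytic step response is proportional to (t/τ)² e^{-t/τ}, so the maximum appears close to 2τ after the step, in line with the quasi-gaussian behaviour described above.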
Front-end electronics: IDeF-X LXe
As discussed in Chapter 3, in the XEMIS1 TPC the ionization signal generated after the interaction of an ionizing particle with the LXe is directly collected by the anode, which means no amplification of the signal is used. This implies that a very low electronic noise is needed in order to maximize the SNR and hence to achieve a good energy resolution. The location of the electronics close to the anode also reduces the electronic noise, as well as the signal degradation during transmission. However, this implies that they should be able to tolerate temperatures close to the temperature of the LXe. In the last version of XEMIS1, the front-end electronics are placed inside the vacuum container, reaching temperatures below -60 °C. Several tests with the electronics placed directly inside the LXe have also been performed during this thesis. The coupling between the anode and the ASICs is also of critical importance to maintain the required noise levels and to withstand low temperatures.
The readout front-end electronics used in XEMIS1 consist of low-noise 32-channel analog IDeF-X HD-LXe chips (Imaging Detector Front-end for X rays) [START_REF] Lemaire | Developement of a Readout Electronic for the Measurement of Ionization in Liquid Xenon Compton Telescope Containing Micro-patterns[END_REF]. This low power ASIC was initially developed by the CEA for X-ray and γ-ray spectroscopy with a Cd(Zn)Te detector named Caliste [START_REF] Meuris | Étude et optimisation du plan de détection de haute énergie en Cd(Zn)Te de la mission spatiale d'astronomie et gamma Simbol-X[END_REF]. The initial IDeF-X HD ASIC has self-triggering capability and includes a baseline restoration circuitry, a peak detector and a discriminator [START_REF] Michalowska | IDeF-X HD: A low power Multi-Gain CMOS ASIC for the readout of Cd(Zn)Te Detectors[END_REF]. Subatech has successfully adapted the existing ASIC for the purpose of collecting the ionization signal with a LXe detector. The differential output and multiplexers present on the ASIC have been replaced by 32 analog outputs with no memorization. The segmented anode is connected to two IDeF-X HD-LXe chips of 32 channels each. Every pixel of the anode is thus connected to its own ultra-low-noise readout channel, generating a total of 64 independent analog signals. The general architecture of a readout channel of the chip is presented in Figure 4.3. Each analog channel includes a charge-sensitive preamplifier that integrates the induced signal on a pixel. To continuously discharge the integration capacitance, C_f, the CSA is provided with a reset system based on a PMOS feedback transistor, which is equivalent to a several GΩ resistor. The output of the CSA is fed to a Pole-Zero Cancellation (PZC) stage based on a PZ filter [START_REF] Geronimo | A CMOS Fully Compensated Continuous Reset System[END_REF]. The PZC is used for baseline restoration by compensating for long-duration pulse undershoots. It is also used to perform a first amplification of the CSA output. The differentiation filter is also included in this module. A shaper based on an RC² second-order low-pass filter with variable shaping times is also integrated. Each channel is provided with an injection capacitance of 50 fF, C_inj, used to inject a well known charge at the input of the CSA of each channel for test and calibration purposes. Furthermore, to emulate the detector dark current, each channel includes an adjustable internal current source i_leak to polarize the ASIC input.
The main characteristics of the IDeF-X HD-LXe are summarized in Table 4.1. Every parameter of the chip, such as the peaking time, the gain and the leakage current among others, is easily tunable with a four-wire slow control protocol that is provided with an easy and intuitive graphical user interface [START_REF] Lemaire | Developement of a Readout Electronic for the Measurement of Ionization in Liquid Xenon Compton Telescope Containing Micro-patterns[END_REF]. The time constants of the RC² filter, and hence the peaking time of the output signals, can be varied between 0.73 and 10.73 µs. The gain of each channel can be set from 50 to 200 mV/fC, which means that a dynamic range of around 1.3 MeV can be reached. Moreover, a power consumption of 800 µW per channel has been reported [START_REF] Lemaire | Developement of a Readout Electronic for the Measurement of Ionization in Liquid Xenon Compton Telescope Containing Micro-patterns[END_REF]. To reduce the total power consumption or to select only those channels that are needed, the slow control system also allows every channel to be switched off independently. The IDeF-X HD-LXe chip also includes a temperature sensor with an absolute resolution of 0.5 °C. In order to comply with the temperature and pressure requirements, the materials that compose the PCB are the same as those used to build the anode. The anode and the electronics are directly coupled through two standard 32-channel vertical mini edge card connectors. This type of connector allows for excellent electrical performance and ensures good signal transmission. Figure 4.5 shows the bottom layer of the anode with the two 32-channel connectors wire bonded directly to the pixels of the anode. A perpendicular position of the electronics with respect to the anode facilitates the wiring, as well as allowing a reduction of the pixel size. Earlier versions of the segmented anode, with a larger pixel size of 3.5 x 3.5 mm² and a 1 in² PCB board coupled to the anode through a flat Cinch connector, have also been tested during this thesis. However, this older configuration led to a higher electronic noise and a worse spatial resolution (see Section 7.6) [START_REF] Lemaire | Developement of a Readout Electronic for the Measurement of Ionization in Liquid Xenon Compton Telescope Containing Micro-patterns[END_REF].
The two vertical IDeF-X front-end ASICs are connected together through a 64-channel interface board, as can be seen in the right-side image of Figure 4.5. The 64 output signals of the interface board are transferred to an analog buffer through a ≈ 20 cm kapton bus. The buffer stage enables the transmission of the analog signals from the vacuum vessel to the outside by increasing the available current. The kapton ribbon also provides the power supply to the front-end electronics. Figure 4.6 shows the inside of the outer vessel of XEMIS1, where the kapton ribbon and the buffer are visible. Finally, the 64 analog signals from the pixels are extracted from the vacuum container to the air and transferred to the acquisition board through 64 standard wires. The signals are then digitized with a sampling rate of 12.5 MHz by an external 12-bit FADC. Thanks to the dedicated electronic setup used in XEMIS1, an electronic noise of the order of 80 electrons is achieved.
Electronic noise
The noise is the result of statistical fluctuations, caused either by the detector, the electronics or both, that are superimposed on the output signal. It is clear that the noise limits the smallest detectable charge signal, so it has an important impact on the energy resolution of the detector. Only signals with an amplitude several times larger than the noise can be clearly distinguished from random noise fluctuations. The spectral resolution of a detector therefore depends on the SNR. Likewise, timing measurements are also affected by noise. The noise produces an uncertainty or jitter in the time at which the maximum of the signal is measured, so the time resolution depends on the slope-to-noise ratio according to Equation 4.1 [START_REF] Iniewski | Electronics for Radiation Detection[END_REF]:
$$\sigma_t = \frac{\sigma_N}{dV/dt} \qquad (4.1)$$
where dV/dt is the slope of the signal when its leading edge crosses the threshold, and σ_N is the quadratic sum of all non-correlated noise sources expressed in volts.
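As a purely numerical illustration of Equation 4.1, the following few lines evaluate the jitter for rough, illustrative values inspired by numbers quoted later in this chapter (an ∼80 e⁻ noise, a 200 mV/fC gain and a pulse of a few hundred millivolts rising in about 1.4 µs); they are assumptions for the example, not measured XEMIS1 results.

```python
# Illustrative application of sigma_t = sigma_N / (dV/dt).
q_e = 1.602e-19                   # C, elementary charge
enc_electrons = 80.0              # ~80 e- noise quoted for the XEMIS1 setup
gain = 200e-3 / 1e-15             # 200 mV/fC expressed in V/C

sigma_N = enc_electrons * q_e * gain     # rms noise in volts (~2.6 mV)
slope = 0.8 / 1.39e-6                    # assumed average slope: ~0.8 V over a 1.39 us rise

sigma_t = sigma_N / slope
print("sigma_N = %.2f mV, sigma_t = %.1f ns" % (sigma_N * 1e3, sigma_t * 1e9))
```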
The electronic noise causes random fluctuations in the number of collected charges and is produced by the different components of the read-out electronic system. In general, the highest levels of noise are produced at the beginning of the electronic chain, where the signal level is small compared to the noise fluctuations [START_REF] Knoll | Radiation Detection and Measurements[END_REF]. The charge-sensitive preamplifier is, for example, a major source of electronic noise that is partly filtered by the shaper. In a CSA, the total input capacitance C_in, given by the inherent pixel or detector capacitance (C_d), the parasitic capacitance due to the connection between the pixels and the preamplifier (C_p) and the capacitance at the input of the electronic chain (C_e), is an important source of noise. C_e is the sum of the feedback capacitance C_f and all the parasitic capacitances at the input of the CSA and between the grid and the anode. To reduce the amount of electronic noise it is therefore important to minimize the total input capacitance, and to do so the connecting line from the pixel to the CSA has to be as short as possible.
In a detector, there are many different sources of electronic noise that can be classified as parallel noise or serial noise depending on their coupling with the output signal [START_REF] Radeka | Low-Noise Techniques in Detectors[END_REF]. For example, the detector leakage current shot noise and the feedback resistor R_f thermal noise are common noise contributions that are in parallel with the detector at the preamplifier input. On the other hand, the thermal noise of the first stage of the preamplifier is a type of noise that is in series with the signal source. The different noise contributions are generally expressed in terms of the voltage (e_n) or current (i_n) spectral density depending on whether the contribution comes from a voltage (series) or current (parallel) noise source [START_REF] Spieler | Semiconductor Detector Systems[END_REF]. Note that there is a great variety of noise sources in an electronic circuit, and in this section we discuss only the most relevant noise contributions related to CMOS technology. For a more complete overview of electronic noise please refer to [START_REF] Gillespie | Signal, Noise and Resolution in Nuclear Counter Amplifiers[END_REF].
The electronic noise added by the read-out electronic system is often expressed in terms of the Equivalent Noise Charge (ENC), which is defined as the charge that must be supplied to the input of the system in order to obtain an output signal equal to the root-mean-square (RMS) output level due only to noise. The ENC is commonly expressed in Coulombs, in units of electron charges (e⁻) or in equivalent deposited energy (eV).
Thermal noise, also called Johnson-Nyquist noise, is generated by the thermal agitation of charge carriers within a conductor [START_REF] Nyquist | Thermal agitation of electric charge in conductors[END_REF][START_REF] Johnson | Thermal agitation of electricity in conductors[END_REF]. It is produced in the preamplifier input stage by the input field effect transistor (FET). The thermal noise can be modeled approximately as white noise in most real electronic systems, since its Power Spectral Density (PSD) is independent of frequency. The spectral density of the thermal noise contribution is given by [START_REF] Radeka | Low-Noise Techniques in Detectors[END_REF]:
$$e^2_{thermal} = \frac{8}{3}\,\frac{kT}{g_m} \quad (\mathrm{V^2/Hz}) \qquad (4.2)$$
where k is the Boltzmann constant, T is the absolute temperature and g_m is the transconductance of the first stage of the preamplifier. The thermal noise at the preamplifier output due to the first-stage FET can also be expressed in terms of the equivalent noise charge as follows [START_REF] Radeka | Low-Noise Techniques in Detectors[END_REF]:
$$ENC^2_{thermal} = \frac{8}{3}\,\frac{kT}{g_m}\,\frac{C_{in}^2}{q^2\,\tau_0} \qquad (4.3)$$
where q is the electron charge, C_in = C_d + C_p + C_e is the total input capacitance and τ_0 is the shaping time constant. The contribution of this thermal noise can be significantly reduced by operating at low temperature.
The shot noise is due to random fluctuations of the electric current about its average value and causes fluctuations in the number of charge carriers within a semiconductor or vacuum tube [START_REF] Leach | Fundamentals of Low-Noise Analog Circuit Design[END_REF]. It comes from the sensor leakage current and, like the serial thermal noise, it also has a white spectrum. The power spectral density of the shot noise is proportional to the average current I, given by the gate leakage current of the first stage of the preamplifier and the dark current of the detector:
$$i^2_{shot} = 2qI \quad (\mathrm{A^2/Hz}) \qquad (4.4)$$
The leakage current shot noise at the preamplifier input, expressed in terms of the equivalent noise charge, is given by the following expression:
$$ENC^2_{shot} = 2qI\,\tau_0 \qquad (4.5)$$
Another significant parallel contribution to the noise is the thermal noise associated with the reset resistor R_f of the preamplifier, whose current power spectral density and ENC are given by:
$$i^2_{R_f} = \frac{4kT}{R_f} \quad (\mathrm{A^2/Hz}) \qquad (4.6)$$

$$ENC^2_{R_f} = \frac{4kT\,\tau_0}{R_f} \qquad (4.7)$$
This noise contribution decreases with increasing resistance value, and its power density is also independent of frequency. However, larger values of R_f imply longer time constants and hence longer pulse tails that may lead to an increase of pulse pile-up. The ENC of the thermal noise caused by the feedback resistance is proportional to the shaping time τ_0 but independent of the input capacitance C_in. Substituting the feedback resistor by a PMOS transistor is a good alternative to reduce this kind of noise.
Flicker noise is another source of inherent noise in a detector. Its contribution also comes from the preamplifier input transistor and may be explained by charge carriers that are trapped in impurities or imperfections of the medium and then released after a characteristic lifetime, producing fluctuations in the number of charge carriers. The PSD of the flicker noise is inversely proportional to frequency, which is why it is also called 1/f noise. The spectrum of the noise depends on the ratio of the upper to lower cutoff frequencies, rather than on the absolute bandwidth, unlike the serial thermal noise and the shot noise. The contribution of the flicker noise is given by:
$$e^2_{1/f} = K_f\,I^m\,\frac{\Delta f}{f} \quad (\mathrm{V^2/Hz}) \qquad (4.8)$$
where I is the current, K_f is the flicker-noise coefficient, m is the flicker-noise exponent and ∆f is the bandwidth in hertz over which the noise is measured [START_REF] Leach | Fundamentals of Low-Noise Analog Circuit Design[END_REF]. The ENC of the 1/f noise depends on the input capacitance of the detector (∝ C_in) and is independent of the shaping time τ_0 [START_REF] Radeka | Low-Noise Techniques in Detectors[END_REF]. Since all these contributions are uncorrelated noise sources, the total electronic noise is given by the square root of the quadratic sum of the different noise sources:
$$ENC^2_{total} = ENC^2_{parallel} + ENC^2_{series} + ENC^2_{1/f} \qquad (4.9)$$
The total electronic noise of a detector can be reduced by choosing the appropriate amplifier shaping time. Figure 4.7 shows the ENC as a function of the shaping time. In general, the series noise contribution, such as the thermal noise at the input stage of the preamplifier, dominates at short shaping times, while the parallel noise component (shot noise and thermal noise of the preamplifier feedback resistor) increases with the shaping time. Therefore, the selection of the time constants of the read-out electronics should be made under careful consideration in order to minimize the noise contribution. In theory, the minimum noise is obtained when both contributions, series and parallel noise, are equal. The flicker noise, on the other hand, is independent of the shaping time and tends to dominate at low frequencies; its contribution does, however, depend on the input capacitance. The relative contribution of the series noise also increases with the detector capacitance, while the parallel noise does not depend on C_in. Consequently, the minimum noise is usually achieved for detectors with long shaping times and small input capacitances.
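The qualitative behaviour of Figure 4.7 can be reproduced with the simple parametric model below, in which the series term scales as 1/τ₀, the parallel term as τ₀ and the 1/f term is constant. The coefficients are arbitrary illustrative values, not the fitted XEMIS1 noise parameters.

```python
import numpy as np

def enc_model(tau, a_series, b_parallel, c_flicker):
    """ENC(tau) from ENC^2 = a/tau + b*tau + c (series + parallel + 1/f terms)."""
    return np.sqrt(a_series / tau + b_parallel * tau + c_flicker)

tau = np.logspace(-7, -5, 200)            # shaping times from 0.1 to 10 us
a, b, c = 2.0e-2, 2.0e9, 4.0e3            # arbitrary illustrative coefficients (e-^2 units)
enc = enc_model(tau, a, b, c)

tau_opt_numeric = tau[np.argmin(enc)]
tau_opt_analytic = np.sqrt(a / b)         # where the series and parallel terms are equal
print("optimum shaping time: %.2f us (analytic %.2f us)"
      % (tau_opt_numeric * 1e6, tau_opt_analytic * 1e6))
```

Setting the derivative of ENC²(τ₀) to zero gives τ₀ = sqrt(a/b), i.e. exactly the point where the series and parallel contributions are equal, as stated above.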
Effect of the readout electronics on the measurement of the induced ionization signal
As discussed in Section 4.1, in order to measure the full amplitude of the induced signal on a pixel, the shaping time must be larger than the temporal width of the signal. Therefore, for high energy resolution detectors, the best method to measure the energy of an interaction is to use a charge-sensitive amplifier with large shaping times. Timing measurements, on the other hand, require short shaping times to preserve a steep signal slope and to reduce pulse pile-up. When both requirements are necessary, as in the case of a TPC, the choice of the parameters of the front-end electronics requires a thoughtful study. In this section, we study the shape of the output signal of the IDeF-X HD LXe ASIC for an injected test pulse and different peaking times (T_peak). The results are compared to the experimental output pulse for 511 keV γ-rays. Moreover, the charge loss as a function of the shaping time and the linearity of the chip are also discussed in this section.
Study of the shape of the output signals
The response of the front-end electronics used in XEMIS1 has been tested by injecting a test pulse at the input of the preamplifier. Every channel of the anode, i.e. every preamplifier, includes a 50 fF test capacitor that enables well-defined test charges to be injected and integrated by the preamplifier. All measurements were performed under realistic experimental conditions with the LXe TPC completely functional. Figures 4.8 and 4.9 show the output signal of the shaper and of the preamplifier, respectively, for a step input pulse with an amplitude of 60 mV and a 5 ns rise time. The injected test pulse was provided by a standard waveform generator (Agilent 33250A). The presented signals are the result of averaging over 2000 test pulses to reduce statistical fluctuations. The injection was performed on four different pixels of the anode, two from each IDeF-X chip. The preamplifier output signal was extracted from two of the injected channels. The gain and peaking time of the amplifier were set to 200 mV/fC and 1.39 µs, respectively.
Figure 4.9 - Output signal of the preamplifier for a 60 mV injected delta-like pulse with a 5 ns rise time.
As expected, the response of the chip to a Dirac-like current signal is a quasi-gaussian pulse that reaches its maximum value in 1.39 µs (from 5 % to 100 % of the amplitude). Similar results were obtained for different peaking times. Figure 4.10 shows the comparison of the signal at the output of the shaper for four different peaking times: 0.73 µs, 1.39 µs, 2.05 µs and 2.72 µs. Since the same time constant is used in both the differentiation and integration stages of the shaper, shorter rise times and shorter decay times are obtained for shorter peaking times. The charge loss due to the ballistic effect is discussed in the next section. On the other hand, the shape of the preamplifier signal differs from a delta-like pulse. The preamplifier generates a signal with a rise time of the order of 500 ns when a 5 ns leading-edge current pulse is injected. This behavior is due to the fact that a real CSA does not respond instantaneously to an input charge.
In order to reproduce the actual output signal of the front-end electronics when radiation interacts with the LXe, some other aspects besides the shaping time of the amplifier should be taken into account. The rise time of the output signals also depends on the charge collection time. In the case of XEMIS1 and for an ideal Frisch grid, electrons start to induce a signal on the anode from the moment they pass through the grid, so all charges travel the same distance, equal to the gap, before being collected. The effect of the inefficiency of the Frisch grid on the pulse shape is discussed in Chapter 5.
During this thesis two different gaps of 500 µm and 1 mm have been tested. In LXe, the electron drift velocity for an electric field of 1 kV/cm is around 2 mm/µs, whereas for field strengths higher than ∼3 kV/cm it saturates at around ∼2.3 mm/µs. Although the electric field in the gap is between five and six times the drift field (6 kV/cm) (see Section 5.5.1), the total electron drift time was approximated as 250 ns and 500 ns for a 500 µm and a 1 mm gap, respectively. The evolution of the preamplifier output signal for three injected delta-like pulses with different rise times of 5 ns, 250 ns and 500 ns, which simulate three different grid-anode distances, is presented in Figure 4.11. As the collection time increases, i.e. as the rise time of the injected step increases, the rise time of the signal at the output of the preamplifier also increases. This implies that, in general, larger gaps require larger shaping times in order to integrate the total charge. However, some discrepancies have been observed in the experimental data. Figure 4.12 shows the comparison between the average output signal for 511 keV events, obtained with a 100 LPI metallic woven mesh placed at 500 µm from the anode and a 6 cm long TPC (see Chapter 6, Section 6.2), and a 250 ns slope injected pulse, i.e. corresponding to a gap of ∼500 µm. The peaking time for both signals was set to 1.39 µs. For the experimental pulse, only single-cluster events, i.e. clusters with only one fired pixel, for interactions that took place between 2.6 cm and 6 cm from the grid were selected. Likewise, the signal was averaged over a large enough number of events to increase the SNR. The experimental signal shows a larger peaking time of 1.52 µs compared to the injected pulse (T_peak = 1.46 µs). The decay time of the experimental pulse is estimated to be 3.3 µs, obtained between 100 % and 5 % of the maximum. The shaped pulse from an injected 60 mV step signal with a rise time of 500 ns, corresponding to a grid-anode distance of approximately 1 mm, is also depicted in the figure. The results show that the convolution of a delta-like charge signal with a gap of 500 µm does not mimic the shape of the experimental signal; instead, a gap of 1 mm better reproduces the rise time of the pulse. A difference of ∼80 ns is measured at 5 % of the signal's maximum between the experimental pulse and a 250 ns rise time injected pulse, while no difference is measured for the 500 ns slope pulse. The comparison of the experimental output signal for two different gaps is presented in Figure 4.13. A larger pulse was expected for a larger gap distance due to the added electron drift time. However, we observed that the rise time of the signals does not change when the gap varies by a factor of two, apart from the slow rising tail that is attributed to the inefficiency of the Frisch grid, as discussed in Section 5.5. These results show that the peaking time of the amplifier and the collection time of the electrons in the gap cannot explain the shape of the output signals, and indicate the presence of an additional effect that affects the charge collection on the anode. This effect also seems independent of the physical characteristics of the Frisch grid, since a similar rise time was found for a 50.29 LPI mesh (see Figure 4.14). In fact, it points out that the collected charge depends on the position of the interaction with respect to the anode. This effect is further discussed in Chapter 5.
Ballistic deficit
In a radiation detector, the amplitude of the measured signals should be proportional to the charge produced after an ionizing particle interacts with the medium. In addition, the measured charge should be independent of the charge collection time within the detector. This means that the maximum amplitude of the preamplifier signal must be preserved after the shaper. This is possible if the shaping time constants of the pulse-shaping amplifier are large compared with the preamplifier pulse rise time. However, if the shaping time constants are not long enough, part of the collected charge is lost in the shaping process. This effect is called ballistic deficit. The amplitude loss, together with the electronic noise and the statistical fluctuations associated with the charge production process in the detector, are the major sources that limit the energy resolution of a detector. The ballistic deficit can be corrected by an adequate selection of the shaping time of the linear amplifier. However, the choice of the peaking time cannot be made arbitrarily, but should be made depending on the detector requirements. For example, in high-rate experiments short shaping times are needed in order to minimize pulse pile-up. Equally, short shaping times are necessary to spatially resolve multiple interactions, as in a Compton scattering sequence.
Variations in the rise time of the output signals can also affect the amplitude of the shaper output pulse. In an ideal gridded TPC, since all electrons start to induce a signal on the anode from the same point, the shape of the pulse at the output of the shaper is the same regardless of the position of the interaction. Thus, the ballistic deficit should only depend on the time properties of the preamplifier-shaper combination.
In this section we try to quantify the degree of ballistic deficit as a function of both the peaking time and the preamplifier output signal rise time. Figure 4.10 shows the average output signal obtained for four different peaking times when a delta-like pulse with a 5 ns slope is injected into the preamplifier input capacitance. The pulses were averaged over 2000 events. When a delta-like current pulse is injected at the input of the preamplifier, no ballistic deficit is expected for an ideal CSA, since the rise time of the output signal is, in general, very fast compared to the minimum peaking time provided by the IDeF-X LXe ASIC, which is 0.73 µs. However, as discussed in the previous section, due to the slower rise time of the preamplifier signals, which is of the order of 500 ns, a shaping time of 0.73 µs is not enough to integrate all the collected charge. The peak signal deficit increases as the peaking time decreases. Considering that for shaping times larger than 2.72 µs there is no ballistic deficit [START_REF] Michalowska | IDeF-X HD: A low power Multi-Gain CMOS ASIC for the readout of Cd(Zn)Te Detectors[END_REF], a charge loss of 6 % has been measured for a peaking time of 0.73 µs. Similarly, at 1.39 µs, 2.3 % of the total charge is lost just due to the time response of the preamplifier, and less than 1 % is expected for a peaking time of 2.05 µs.
The dependence of the ballistic deficit on the preamplifier pulse rise time is presented in Figure 4.15. Since the rise time of the preamplifier output signal depends on the charge collection time, for a fixed peaking time the maximum amplitude of the shaped pulse should depend on the grid-anode distance. The signals presented in Figure 4.15 were obtained with a constant peaking time of 1.39 µs and four different rise times: 5 ns, 250 ns, 500 ns and 750 ns, corresponding to grid-anode distances of approximately 0 mm, 0.5 mm, 1 mm and 1.5 mm for an electron drift velocity of 2 mm/µs at 1 kV/cm. The signals were also injected through the test input capacitance and averaged over 2000 events to reduce the statistical fluctuations. The results show that the amplitude deficit increases as the rise time increases. A loss of the order of 1.7 % of the maximum collected charge is observed for a gap of 1 mm (red line) compared to a gap of 500 µm (black line). On the other hand, for a shaping time of 2.72 µs, a maximum ballistic deficit of the order of 1 % is observed.

We can conclude that larger peaking times imply both less electronic noise, as discussed in Section 4.2, and less ballistic deficit. However, better timing resolutions are in general related to shorter peaking times, since the time resolution of a detector depends on the slope-to-noise ratio. In addition, we have seen that larger gaps also increase the charge loss due to the ballistic deficit. For a peaking time of 1.39 µs and a grid-anode distance of 500 µm, which are the standard experimental conditions in XEMIS1, a ballistic deficit of around 3 % is estimated. This effect adds a systematic uncertainty to the measurement of the charge in the detector. In the next section, a study of the amplitude and time measurement precision as a function of the peaking time is carried out in order to optimize the performance of the analog ASIC that will be used in XEMIS2 for the measurement of the ionization signals.
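The trend discussed above can be illustrated numerically by shaping step-like inputs with different rise times through the same idealized CR-RC² filter used in the earlier sketch. The numbers printed by this toy model will not coincide exactly with the measured 1.7-3 % values, since the real preamplifier response is not an ideal linear ramp, but the monotonic increase of the deficit with the rise time is reproduced.

```python
import numpy as np

def crrc2(x, dt, tau):
    """Compact CR-RC^2 shaper (same idealized filter as in the earlier sketch)."""
    a, b = tau / (tau + dt), dt / (tau + dt)
    hp = np.zeros_like(x); lp1 = np.zeros_like(x); lp2 = np.zeros_like(x)
    for k in range(1, len(x)):
        hp[k] = a * (hp[k - 1] + x[k] - x[k - 1])
        lp1[k] = lp1[k - 1] + b * (hp[k] - lp1[k - 1])
        lp2[k] = lp2[k - 1] + b * (lp1[k] - lp2[k - 1])
    return lp2

def ramp_step(t, t0, rise):
    """Step with a linear rise time, mimicking a finite charge-collection time."""
    return np.clip((t - t0) / rise, 0.0, 1.0)

dt = 10e-9
t = np.arange(0, 30e-6, dt)
tau = 1.39e-6                                   # peaking-time setting used in the text

ref = crrc2(ramp_step(t, 2e-6, 5e-9), dt, tau).max()       # ~instantaneous input
for rise in (250e-9, 500e-9, 750e-9):                       # ~0.5, 1 and 1.5 mm gaps
    peak = crrc2(ramp_step(t, 2e-6, rise), dt, tau).max()
    print("rise %.0f ns: ballistic deficit %.1f %%" % (rise * 1e9, 100 * (1 - peak / ref)))
```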
Charge linearity
The linearity range of the IDeF-X LXe ASIC for a gain of 200 mV/fC and a peaking time of 1.39 µs has also been studied by injecting a charge through the Frisch grid. Since the pulse shape at the output of the shaper is not well reproduced by an ideal delta-like pulse, we opted to inject the average preamplifier signal presented in Figure 4.17(a), obtained for 511 keV events. In this way, the different effects of the overall system on the output signal are taken into account. The average pulse was parametrized, defined as a piece-wise function in the pulse generator and directly injected into the grid. The measurements were taken with a 100 LPI Frisch grid at 1 mm from the anode, a 12 cm long TPC and a shaping time of 1.39 µs. Figure 4.17 shows the comparison between the experimental average pulse and the injected one at the output of both the preamplifier and the shaper. We can see that the injected pulse fairly represents the average experimental signal, and the small difference between the preamplifier signals does not affect the shaped pulse. This implies that the shaper is not very sensitive to slight fluctuations in the slope of the preamplifier signal. The amplitude of the injected pulses varied between 100 mV and 3.1 V. For each configuration, 1000 events were acquired and analyzed. The maximum amplitude of the signals was determined by a method based on a Constant Fraction Discriminator (CFD) (see Section 4.4.2). Pulse selection was made with a threshold level of 3 times the noise (∼4.5 keV). Figure 4.18 shows the maximum measured amplitude, averaged over the 1000 events, as a function of the injected charge. The data are well described by a first-order polynomial and show an excellent linear behavior over the whole amplitude range. The dynamic range completely covers the energy interval required for the calibration of the detector, and enables signals with energies up to 1.274 MeV to be measured for a gain of 200 mV/fC. The ASIC saturation is observed at ∼1.3 MeV. The same study was performed for low measured energies. In this case, the charge of the injected pulses varied between 1 mV and 1 V in steps of 5 mV. Equally, for each configuration 1000 events were acquired and analyzed using a method based on a CFD to measure the maximum of the signals. The data show a good linear behavior in most of the energy interval, which is well fitted by a first-order polynomial, as shown in Figure 4.19. A non-linear response was measured, on the other hand, at very low measured charges close to the threshold level (∼4 times the noise). This non-linearity, depicted in Figure 4.20, is not related to the response of the electronics, but is due to the electronic noise and the method used to measure the charge. Since the signals are selected with a threshold level set at 3σ_noise, only pulses with an amplitude strictly higher than this threshold are registered. The threshold effect generates a bias in the measured charges at low energies. Figure 4.21 shows the ratio between the number of measured signals and the number of injected pulses as a function of the measured charge. At amplitudes of around six times the electronic noise (∼9 keV), 100 % of the injected pulses are measured by the CFD method, whereas only 40 % are measured at the threshold level.
Note that the differences between the two curves for the same injected charge are due to the fact that the injections were performed under different experimental conditions. However, the purpose of this study is to verify the linear behavior of the electronics before saturation, and not to measure the gain (no calibration of the injected charges was made).
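The threshold bias described above can be reproduced with a toy Monte Carlo: Gaussian noise is added to a known injected amplitude and only values above 3σ_noise are kept, which both lowers the detection efficiency and pushes the mean of the accepted sample upwards near the threshold. The noise value and amplitudes below are illustrative and are not tuned to the XEMIS1 data.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_noise = 1.5                  # arbitrary noise units (think ~1.5 keV equivalent)
threshold = 3.0 * sigma_noise      # selection level used in the analysis

for true_amp in (3.0, 4.5, 6.0, 9.0, 15.0):       # injected amplitudes, same units
    measured = true_amp + sigma_noise * rng.standard_normal(100_000)
    accepted = measured[measured > threshold]      # only pulses above 3*sigma are kept
    eff = accepted.size / measured.size
    bias = accepted.mean() - true_amp
    print("true %.1f: efficiency %.0f %%, bias on the mean %+.2f"
          % (true_amp, 100 * eff, bias))
```

As in Figures 4.20 and 4.21, the efficiency approaches 100 % and the bias vanishes once the true amplitude is several σ_noise above the threshold.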
Equivalent Noise Charge
The noise performance of the IDeF-X LXe ASIC has also been studied during this thesis by measuring the ENC as a function of the peaking time. The ASIC was connected to the detector, which was fully operational and working under standard experimental conditions. The Frisch grid (100 LPI) was biased with a voltage of -300 V. The value of the input capacitance was estimated to be ∼15 pF at room temperature [START_REF] Lemaire | Developement of a Readout Electronic for the Measurement of Ionization in Liquid Xenon Compton Telescope Containing Micro-patterns[END_REF], and we can assume that this value does not change at lower temperatures of the order of -60 °C. The injection capacitance of a channel is 50 fF. The ASIC was programmed with a gain of 200 mV/fC and with the minimum available value of the leakage current (20 pA). We used a waveform generator to inject delta-like pulses of 30 mV and 60 mV. The noise was measured at the maximum of the signals, and all signals were treated after pedestal correction (see Chapter 6, Section 6.4). The conversion from volts to electrons was performed using the injection capacitance.
Figure 4.22 shows the ENC values as a function of the shaping time obtained for one of the pixels of the anode. The system shows excellent noise performance, with a minimum of ∼100 e⁻ for a shaping time of 2.05 µs. This level of noise is adequate for 3γ imaging applications. In Figure 4.22 the ENC values have been fitted to separate the different noise contributions described in Section 4.2 (see Figure 4.7). The dominant contribution to the ENC in the peaking-time range from 1.39 µs to 4.06 µs is the 1/f noise, whereas at higher shaping times the parallel noise seems to dominate. The thermal noise contribution due to the reset of the CSA is very small because of the high value of the equivalent resistance of the PMOS transistor (∼GΩ). Moreover, since the dark current of the detector is negligible and the injected leakage current is small, of the order of tens of pA, the expected relative contribution of the parallel noise should also be small. As a result, an almost flat distribution limited by the 1/f noise contribution is expected at high peaking times. However, we see in Figure 4.22 that for peaking times higher than 4.06 µs the parallel noise seems to dominate. Since the parallel noise contribution increases as the leakage current i_leak increases, the ENC values as a function of the peaking time have been calculated for two different values of i_leak. The results are shown in Figure 4.23. We can see that an increase of the leakage current of the CSA has no impact on the series noise contribution (short peaking times). A minimum noise of the order of 100 e⁻ is measured for both i_leak currents at a shaping time of 2.05 µs. Moreover, no significant difference in the noise performance is observed at long peaking times between the two i_leak values. This suggests that the contribution of a correlated noise, mostly due to capacitive coupling between the signal and the electronics grounding, cannot be neglected.
Figure 4.24 shows the same measurements after correlated noise correction (see Chapter 6, Section 6.5.2). As expected, for the smallest leakage current the series noise increases. This is because the main source of series noise is the thermal noise at the input of the CSA, which is inversely proportional to the transconductance of the input transistor, which in turn increases with the current flowing through it. As a conclusion, to minimize the contribution of the electronic noise to the energy and time resolutions, the IDeF-X LXe ASIC should operate at the smallest possible leakage current and at a peaking time of the order of 2 µs. Moreover, a shielding optimization of the chip and the connection cables will reduce the correlated noise contribution. Results of the ENC as a function of the peaking time at room temperature and as a function of the input capacitance for the IDeF-X LXe ASIC can be found in Lemaire et al. [START_REF] Lemaire | Developement of a Readout Electronic for the Measurement of Ionization in Liquid Xenon Compton Telescope Containing Micro-patterns[END_REF].
Measurement Optimization of the Ionization Signal with a CFD
Excellent energy and spatial resolutions are essential in Compton imaging, where both the deposited energy and the position of each interaction are required for the final reconstruction of the image. As discussed in the previous sections, the noise and the ballistic deficit degrade the energy and time resolutions of a detector. However, besides the statistical fluctuations in the number of produced charges, there are other intrinsic effects that may also affect the performance of the detector. For example, the spread of the electron cloud due to transverse diffusion during the drift along the TPC adds some uncertainty to the reconstructed interaction position and to the final collected charge, which in turn affects the final spatial and energy resolutions.
Because the electron cloud diffuses while it drifts towards the anode, and given the long drift distance, there is a non-negligible probability that the electron cloud fires multiple neighboring pixels. In this case, the estimated position of the interaction is calculated as the centroid of the electron cloud, while the final charge is the sum of all the individual charges. Moreover, since not all the neighboring fired pixels necessarily come from the same interaction, an additional condition on the drift time should be applied in order to discriminate different interactions that take place very close to each other. As a result, accuracy in both the time and charge measurements is indispensable to ensure good energy and spatial resolutions. The purpose of this study is to determine the optimal method for the measurement of the amplitude and the drift time of the ionization signals in the LXe. Two different methods are presented and compared. To perform the study, a complete Monte Carlo simulation of the output signal of the front-end electronics of XEMIS1 has been performed.
Electronic Noise and Ionization Signal Simulation
As discussed in Section 4.1.1, the read-out electronics of XEMIS1 generates 64 (32 × 2) independent analog signals S(t) corresponding to the 64 pixels of the anode. Each signal is registered over a total period of 102.2 µs and sampled by the FADC at a rate of 12.5 MHz. Consequently, each registered event results in a set of 64 sampled signals S(n), where n = 1,...,N denotes the sampling time index and N is the total number of samples per signal, equal to 1278.
The aim of this section is to accurately simulate the output signal of the IDeF-X ASIC, S(n), using experimental data. Assuming that the signal can be decomposed as the sum of two statistically independent components, a noiseless signal s(n) and a disturbing noise n(n), the simulation of each contribution was performed individually.
Signal simulation
The shape of the signal s(n) was reconstructed from a parametrization of the average signal obtained over a sufficiently large set of independent experimental events. The goal of using the average signal was to minimize the noise contribution per bin. The experimental data were taken for a 100 LPI mesh with a 1 mm gap, an electric field of 1 kV/cm and a shaping time of 1.39 µs. To reject noise events, a threshold of 10 times the noise (∼ 15 keV) was set on each individual signal. In this way, we ensured that only events with a large enough amplitude contributed to the final shape of s(n). The amplitude of s(n) corresponds to the voltage equivalent of one electron. As a result, a signal of any amplitude (in charge) can be generated by multiplying s(n) by a constant. Figure 4.27 shows the simulated pulse compared to the experimental averaged signal.
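The sketch below illustrates this signal model. The real template s(n) comes from the parametrization of averaged experimental pulses, which is not reproduced here; a generic CR-RC-like shape is used as a stand-in, and the volts-per-electron scale is derived from the 200 mV/fC gain mentioned earlier.

```python
import numpy as np

FS = 12.5e6            # sampling frequency [Hz]
N_SAMPLES = 1278       # samples per registered signal
T_PEAK = 1.39e-6       # shaping (peaking) time [s]

def template(t0):
    """Noiseless pulse template normalized to unit amplitude, starting at time t0 [s].
    This analytic shape is only a placeholder for the measured parametrization."""
    t = np.arange(N_SAMPLES) / FS
    dt = np.clip(t - t0, 0.0, None)
    return (dt / T_PEAK) * np.exp(1.0 - dt / T_PEAK)   # peaks at dt = T_PEAK with amplitude 1

def signal(n_electrons, t0, volts_per_electron=3.2e-5):
    """Scale the one-electron template to an arbitrary charge, as done with s(n).
    3.2e-5 V/e- corresponds to the 200 mV/fC gain times the elementary charge."""
    return n_electrons * volts_per_electron * template(t0)

s = signal(n_electrons=5000, t0=20e-6)   # example: a 5000 e- pulse starting at 20 us
```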
Noise simulation
The noise adversely affects the signal characteristics and complicates the data processing. For this reason, accounting for its contribution is very important, especially at low energies. The simulation of the noise n(n), unlike that of the noiseless signal s(n), is a more involved process that requires a thorough study.
The noise at the output of the IDeF-X LXe has a Gaussian amplitude distribution. The distribution of the noise amplitudes obtained for a set of experimental noise events is depicted in Figure 4.28. The equivalent value of the noise can be quantitatively characterized in terms of the root-mean-square (RMS) of this distribution. The experimental noise distribution was fitted with a Gaussian function with a standard deviation of σ_noise = 85.42 electrons. The time-domain representation is useful for noiseless signals; a correct simulation of the noise behavior, however, requires an analysis in the frequency domain. This technique allows extracting relevant signal features that are not perceptible in the time-domain representation. A signal can be converted between the time and frequency domains by a mathematical operation called a transformation. A common transformation used in data processing is the Fourier transform. For discrete-time, finite-duration signals, the Discrete Fourier Transform (DFT) is commonly used (4.10):
X[k] = \sum_{n=0}^{N-1} x[n]\, e^{-j k \omega_0 n}    (4.10)
where N is the total number of samples, k = 0,1,...,N-1 is the frequency bin index and ω_0 is the fundamental frequency, given by ω_0 = 2π/N. Each spectral component of the DFT, X[k], is a complex variable that can be expressed as a function of its real and imaginary components:
X[k] = ℜe[k] + i\,ℑm[k], with

ℜe[k] = \sum_{n=0}^{N-1} x[n] \cos(\omega_0 n k)    (4.11)

ℑm[k] = \sum_{n=0}^{N-1} x[n] \sin(\omega_0 n k)    (4.12)
The effect of applying the DFT over a finite time window results in large fluctuations between different events, especially at low frequencies. In fact, the smaller the time interval and the larger the sampling period, the larger the uncertainty in the results. A commonly used method to compensate for this limitation and minimize its effect is to average the DFT coefficients of each bin over a large number of events. For this study, we used a set of 12000 events per pixel recorded under standard experimental conditions over a total time window of 77.3 s. The averaged spectra of the real and imaginary parts of the DFT coefficients are reported in Figure 4.29. Only the positive half of the frequency spectrum is displayed due to the symmetry around the DC component. The highest detectable frequency, called the Nyquist frequency, is equal to half of the sampling frequency; in our case it is 6.25 MHz. Generally, the Fourier transform is expressed in terms of the magnitude A[k] and the phase φ[k], X[k] = A[k] e^{iφ[k]}, which are directly related to the real and imaginary parts of the DFT coefficients through the following formulas:

A[k] = \sqrt{ℜe[k]^2 + ℑm[k]^2}    (4.13)

φ[k] = \arctan\left(\frac{ℑm[k]}{ℜe[k]}\right)    (4.14)
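The following sketch shows how the per-bin DFT statistics described above can be accumulated; the traces are random stand-ins here, whereas the real analysis uses the recorded 1278-sample noise events.

```python
import numpy as np

rng = np.random.default_rng(0)
N, N_EVENTS = 1278, 12000
traces = rng.normal(0.0, 85.0, size=(N_EVENTS, N))   # stand-in for measured noise [electrons]

coeffs = np.fft.fft(traces, axis=1)        # X[k] for every event (numpy uses e^{-2*pi*j*k*n/N})
re_mean, re_std = coeffs.real.mean(axis=0), coeffs.real.std(axis=0)
im_mean, im_std = coeffs.imag.mean(axis=0), coeffs.imag.std(axis=0)

magnitude = np.abs(coeffs).mean(axis=0)    # averaged amplitude spectrum A[k]
# Only bins 0 .. N//2 are independent for a real signal (Nyquist frequency at 6.25 MHz).
```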
Because the real and imaginary parts of the DFT coefficients are independent, the joint probability density function can be expressed as the product of the individual PDFs: P(x, y) = P(x)P(y), where x and y represent ℜe and ℑm respectively. The index k has been dropped for simplicity. In addition, since both parts are identically distributed and modeled by a Gaussian density function, the joint PDF is given by Equation (4.15):
P(x, y) = P(x)\,P(y) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2 + y^2}{2\sigma^2}}    (4.15)
by using dx\,dy = A\,dA\,dφ, we obtain the joint PDF as a function of the magnitude and the phase. Note that A and φ are also statistically independent.
P(A, φ) = \frac{A}{2\pi\sigma^2}\, e^{-\frac{A^2}{2\sigma^2}}, \quad with A ∈ [0, ∞] and φ ∈ [-π, π]    (4.16)
The individual PDFs of the magnitude and the phase are therefore:
P(A) = \int_{-\pi}^{\pi} P(A, φ)\, dφ = \frac{A}{\sigma^2}\, e^{-\frac{A^2}{2\sigma^2}}    (4.17)

P(φ) = \int_{0}^{\infty} P(A, φ)\, dA = \frac{1}{2\pi}    (4.18)
This result shows that the magnitude obeys a Rayleigh distribution, given by Equation (4.17) [START_REF] Kulak | Accuracy and Repeatability of Noise Measurements with a Discrete Fourier Transform[END_REF], while the phase is uniformly distributed over ±π. The power spectrum obtained from experimental data is presented in Figure 4.32, whereas the experimental PDFs of the magnitude and the phase obtained for a given frequency are presented in Figure 4.33(a); both are consistent with the expected distributions. A Monte Carlo simulation was finally performed to reconstruct the noise signals n(n) using these results. A random number is generated for both coefficients (ℜe and ℑm) for each of the N bins. These random numbers are Gaussian distributed with mean and variance given by the averaged PDFs obtained for each individual bin. The simulated power spectrum of the magnitude of the DFT is presented in Figure 4.35(a). We can verify from Figure 4.35(b) that this method reproduces the experimental results very well. Moreover, the simulated amplitude distribution of noise events is depicted in Figure 4.36. A Gaussian fit to the distribution gives an estimated noise of σ_noise = 84.89 ± 0.02 e⁻, which is in good agreement with the value obtained with the experimental data. These results confirm that the simulated noise reproduces, to an excellent approximation, the electronic noise contribution.
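A minimal sketch of this noise generator is given below, assuming the per-bin means and standard deviations (re_mean, re_std, im_mean, im_std) obtained as in the previous sketch; Hermitian symmetry is enforced so that the inverse DFT yields a real-valued trace.

```python
import numpy as np

def generate_noise(re_mean, re_std, im_mean, im_std, rng):
    """One simulated noise trace: Gaussian draw of Re/Im per bin, then inverse DFT."""
    n = len(re_mean)
    spec = rng.normal(re_mean, re_std) + 1j * rng.normal(im_mean, im_std)
    # enforce the symmetry X[N-k] = conj(X[k]) of the spectrum of a real signal
    half = n // 2
    spec[-1:half:-1] = np.conj(spec[1:half])
    spec[0] = spec[0].real
    if n % 2 == 0:
        spec[half] = spec[half].real
    return np.fft.ifft(spec).real

rng = np.random.default_rng(1)
noise = generate_noise(re_mean, re_std, im_mean, im_std, rng)   # arrays from the previous sketch
```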
Time and Amplitude measurement optimization
As we have seen, the output signals of XEMIS1 consist of a total of 64 × 1278 samples for each registered ionization event, resulting, on average, in an output file of around 2 GB for a one-hour run. This type of data acquisition is no longer practical for a detector with a large number of pixels such as XEMIS2, where nearly 24000 pixels will be present (see Section 4.6). To minimize the readout data volume we have proposed some updates to the front-end electronics. Instead of continuously sampling the analog signals, the new readout system uses an analog ASIC called XTRACT, which registers only the amplitude, the time and the pixel address of those signals that trigger the discriminator. This type of recording requires much less time and power than real-time digitization. However, accurate measurements of time and amplitude are a delicate issue, mainly for very low energy signals where noise contributions become important. For this reason, a Monte Carlo simulation has been performed in order to determine the method that provides the optimum time and amplitude resolutions. The simulation can be divided into two parts. The first part consists of the generation of the output signal of the IDeF-X, as presented in Section 4.4.1. Each event corresponds to the simulation of one signal. The pulses are randomly generated inside a time window of 102.2 µs, equivalent to the registration time of one pixel. Amplitudes are simulated between 3σ_noise (8 mV) and 20σ_noise (54 mV) because we are mostly interested in performing the analysis at low energies.
The second part consists of the simulation of the XTRACT ASIC, where the time and amplitude of the signals are estimated. Two different measurement methods have been proposed and compared: a Constant Fraction Discriminator (CFD) and a peak-sensing ADC.
In addition, since the shape of the output signals depends greatly on the time-related parameters of the shaping amplifier, such as the peaking time of the shaper, two different peaking times of 1.39 µs and 2.05 µs have been tested during this analysis in order to improve the SNR and to optimize the measurement of the amplitude and time of the signal.
Time and Amplitude measurement with a Constant Fraction Discriminator
The concept of the constant fraction triggering technique is illustrated in Figure 4.38. The input signal V_a is split into two parts. One part is inverted and attenuated to a certain fraction k of the original amplitude, V_c = -kV_a, and the other part is delayed by a time τ_d. These two signals are added to form a bipolar pulse V_out with a zero-crossing point that is independent of the amplitude of the signal and always occurs at the same point, corresponding to the optimum fraction of the signal height [START_REF] Leo | Techniques for Nuclear and Particle Physics Experiments[END_REF]. Our purpose is to determine this zero-crossing point accurately, at the moment when the original signal reaches its maximum value. Therefore, with the CFD method, the time and amplitude of the signals are determined at the zero-crossing point. Since the precision is related to the proper selection of the delay τ_d and the constant fraction k, different combinations of these two parameters have been considered.
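The sketch below is a minimal digital version of this technique (not the XTRACT implementation): the pulse is delayed by an integer number of samples, an attenuated inverted copy is added, and the first negative-to-positive zero crossing gives the trigger time; the amplitude is then read from the original signal at that sample, since the delay and fraction are tuned so that the crossing coincides with the pulse maximum.

```python
import numpy as np

def cfd_waveform(v, delay_samples, k):
    """Bipolar CFD signal: delayed pulse minus k times the prompt pulse."""
    delayed = np.concatenate([np.zeros(delay_samples), v[:-delay_samples]])
    return delayed - k * v

def cfd_measure(v, delay_samples, k, start=1):
    """Return (time index, amplitude) at the first negative-to-positive zero crossing."""
    vout = cfd_waveform(v, delay_samples, k)
    for i in range(max(start, 1), len(vout) - 1):
        if vout[i] < 0.0 <= vout[i + 1]:
            return i, v[i]          # amplitude taken on the original signal
    return None                     # no crossing found (e.g. pure noise)
```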
In general, the event selection is done using a leading-edge discriminator [START_REF] Leo | Techniques for Nuclear and Particle Physics Experiments[END_REF]. Only those signals with an amplitude higher than a certain threshold value are registered. In particular, for this study the discriminator level was set at 3σ_noise to optimize the background rate (see Section 4.5). A second threshold may be included on the trailing edge of the pulses to reduce the number of noise triggers. Moreover, in the CFD method, the threshold can be set either on the original signal (S1) or directly on the constant-fraction signal (S2) (see Figure 4.39, where S1 denotes the original signal and S2 the CFD signal). Both cases are considered in this study.
Setting the discriminator threshold on S2 requires, however, an additional study of the noise in order to determine the optimum threshold level. The CFD amplitude distribution of the noise depends on the chosen values of the time delay and the attenuation fraction. For a delay of τ_d = 12 channels and a factor k = 1.436, an increase of the order of 45.6 % of the noise on the S2 signal was measured compared to S1. For this particular case, the equivalent of the 3σ_noise threshold level on S2 was found to be 3.1σ′_noise. This value was estimated from the CFD noise distribution at the point where the number of triggers per second agrees with the counting rate at the 3σ_noise level of the S1 noise distribution.
The worsening of the SNR on the CFD signal (S2) implies an important efficiency decrease at low energies compared to the result obtained with a 3σ_noise threshold set on S1 (see Figure 4.40). For an input signal with an amplitude equal to 3σ_noise the efficiency should be around 50 %. However, by setting the discriminator on S2, less than 20 % of the simulated signals are measured. This efficiency loss is not compensated by the slight improvement observed at low energies in the time and amplitude resolutions, as shown in Figures 4.41 and 4.42 respectively. The decrease in the amplitude resolution observed for simulated amplitudes lower than 5σ_noise is due to a bias introduced by the method: since the threshold is set at 3σ_noise, for a signal of amplitude 3σ_noise only half of the amplitude distribution is measured, and therefore the mean of the distribution is shifted. As a consequence, the CFD method with the discriminator set on the original signal was considered the best option. To discriminate noise events, an additional condition on the zero-crossing point was included: if the zero crossing occurs inside the time window that starts when the input pulse rises above the threshold (beg) and ends when the trailing edge of the signal crosses the threshold again (end), the amplitude and the time of the signal are measured; otherwise, the event is rejected (see Figure 4.39(a)).
Peaking time optimization:
Another aspect to take into account is the role of the peaking time in the time and amplitude resolutions. As we have seen, the minimum ENC value is obtained for a shaping time of the order of 2 µs. However, a better time resolution is expected at lower values of the peaking time. In this section, we compare two different values of the shaping time: 1.39 µs and 2.05 µs. In both cases, the CFD parameters were selected to obtain the zero-crossing point at the exact moment where the signal reaches its maximum. For a shaping time of 1.39 µs the delay was set to τ_d = 9 channels (720 ns) and the attenuation fraction to k = 1.5, while for a peaking time of 2.05 µs the delay was τ_d = 12 channels and the attenuation fraction k = 1.436. Figure 4.43 shows the time resolution, i.e. the difference between the simulated and measured time, for the two values of the peaking time. Greater accuracy in the measurement of the time is observed for a peaking time of 1.39 µs, even at high amplitudes. On the other hand, no significant difference was observed in either the measurement of the amplitude or the efficiency, as shown in Figures 4.44 and 4.45.
CFD parameters optimization:
The results are conditioned by the values of the delay and the CFD attenuation fraction. Ideally, these two parameters should be chosen in such a way that the zero crossing happens at the point of maximum slope. However, when a non-negligible level of electronic noise is present, fluctuations of the time at which the signal crosses the threshold introduce an uncertainty, or jitter, in the measurement of the time. The timing uncertainty caused by noise-induced jitter is illustrated in Figure 4.46. This contribution is inversely proportional to the slope of the CFD signal at the zero-crossing point and, in general, the greater the slope, the smaller the uncertainty introduced by this effect [START_REF] Ortec | Application note AN42[END_REF]. Equation (4.19) shows the relation between the timing error and the slope:
\sigma_t = \frac{\sigma_{N_{cfd}}}{\left. \frac{dV_{cfd}}{dt} \right|_{t=t_0}}    (4.19)
where σ_{N_cfd} is the RMS value of the noise distribution of the constant-fraction signal V_cfd and t_0 is the zero-crossing time. The slope was calculated for the ideal case of a noiseless signal with an amplitude of 20σ_noise and a peaking time of 1.39 µs. A maximum was found for a delay of around 28 and 25 channels for an attenuation fraction of 1 and 1.5 respectively. However, the important noise contribution at low energies distorts the shape of the input signal, offsetting the reduction of the jitter contribution achievable at maximum slope. As we can see from Figure 4.48(a), as the delay τ_d increases the noise (RMS value) also increases. In addition, the smaller noise contribution found for smaller delays is associated with a worsening of the SNR. The dependence of the timing uncertainty σ_t on the delay was also studied. In both cases the zero-crossing point occurs after the input signal reaches its maximum, so an additional delay, called numerical delay, was necessary to ensure that the measurement of the time is performed at the maximum of the signal. The results for a different value of k (k = 1.5) are also presented; in this case, the crossover point occurs at the exact moment where the signal reaches its maximum and thus no additional delay is needed. In all cases, a better precision in the measurement of time and amplitude is achieved for a delay of 9 channels and a gain of 1.5.
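As a purely illustrative application of Equation (4.19), the numbers below are hypothetical and only chosen to show the order of magnitude of the noise-induced jitter.

```python
# A CFD noise RMS of 3 mV with a slope of 0.06 V/us at the zero crossing gives a jitter
# of 50 ns, comparable to the CFD time resolution quoted in the next section.
sigma_n_cfd = 3e-3            # RMS noise on the constant-fraction signal [V] (hypothetical)
slope_at_zero = 0.06 / 1e-6   # slope of the CFD waveform at the crossing [V/s] (hypothetical)
sigma_t = sigma_n_cfd / slope_at_zero
print(sigma_t)                # 5e-08 s, i.e. 50 ns
```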
Time and Amplitude measurement with a Peak-sensing ADC
The second method to measure the time and the amplitude of the signals is based on a peak-sensing ADC [START_REF] Leo | Techniques for Nuclear and Particle Physics Experiments[END_REF], in which the amplitude and the time are directly measured at the maximum of the signal, as illustrated in Figure 4.51. For simplicity we will refer to this method as Max.
Figure 4.51 -Time and amplitude measurement on a simulated signal using the method of a peak sensing ADC. The leading threshold was set on 3σ noise , whereas the trailing edge threshold was 2σ noise . The maximum was found at 0.18 V (red star) corresponding to a drift time of 53 µs (670 channels).
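A minimal sketch of this "Max" method is given below (a simplified model, not the actual peak-sensing ADC): once the signal exceeds the leading-edge threshold, the amplitude is the sample maximum of the pulse and the time is the index of that maximum converted with the 80 ns sampling period.

```python
import numpy as np

def max_method(v, threshold, fs=12.5e6):
    """Return (amplitude, time in s) of the first above-threshold pulse, or None."""
    above = np.flatnonzero(v > threshold)
    if above.size == 0:
        return None
    start = int(above[0])
    end = start
    # follow the pulse until it drops back below the threshold (a lower trailing-edge
    # threshold could be used here instead, as discussed in Section 4.5)
    while end < len(v) and v[end] > threshold:
        end += 1
    peak = start + int(np.argmax(v[start:end]))
    return v[peak], peak / fs
```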
Figures 4.52 and 4.53 present the comparison between both techniques. The peaking time of the input signal was set to 1.39 µs and the CFD was performed with a delay of 9 channels and an attenuation fraction of 1.5. A better time resolution has been observed for the CFD method at all amplitudes. The best result obtained with the Max technique was around 100 ns, compared with a resolution of 50 ns obtained with the CFD. Additionally, a better amplitude resolution has been obtained with the CFD method over the entire amplitude interval, especially at low energies. The worsening of the time and charge resolutions with the Max technique is due to the fact that this method depends on the amplitude of the input signal; this effect is most important at low energies. The detection efficiencies obtained with both methods are shown in Figure 4.54.
Noise counting rate
The noise not only affects the time and amplitude resolutions, but also determines the minimum detectable signal threshold. The discriminator threshold should be as low as possible to ensure the best resolution and the maximum detection efficiency, but it should also be compatible with an acceptable noise rate. High background rates may cause, besides a huge readout data volume, an important degradation of the quality of the acquired data, as well as an increase of the dead time. If the time interval between two consecutive triggers is smaller than the time required by the electronics to process the information of the first signal, the second pulse will be ignored. This could lead to the loss of relevant information coming from a real interaction and to the addition of noise events. The aim of this section is to determine the optimal threshold level by measuring the electronic noise rate at different discriminator threshold levels.
Since the amplitude distribution of the noise is Gaussian with a standard deviation given by σ_noise (see Section 4.4.1), the dependence of the noise counting rate on the threshold can be estimated with Rice's formula [START_REF] Spieler | Semiconductor Detector Systems[END_REF]:
f_n = f_{n0}\, e^{-\frac{V_{th}^2}{2\sigma_{noise}^2}},    (4.20)
where f_{n0} represents the counting rate at zero threshold and V_th is the threshold level. Assuming a positive threshold, a trigger will be registered only when a signal crosses the threshold with a positive slope; see the example in Figure 4.55. As a result, the noise rate f_{n0} is half of the frequency at zero threshold f_0, and its value depends on the timing characteristics of the electronics. For a fast amplifier with peaking time τ_0, the noise rate at zero threshold can be approximated by the following expression [START_REF] Sorokin | Rice Formula Applicability for Noise Rate Estimation in the CBM and other experiments with self-triggered electronics: comparing the calculation to a measurement on example of N-XYTER CHIP[END_REF]:
f_{n0} = \frac{1}{2\tau_0},    (4.21)
The result of the noise counting rate as a function of the threshold is reported in Figure 4.56. The threshold scan was performed over a total period of 7 s. The result shows that the experimental counting rate distribution is in very good agreement with the theoretical prediction and can be described by a Gaussian distribution with excellent precision up to ∼ 5σ.
f_{n0} = \frac{1}{2\tau_0} = \frac{1}{2 \times 1.39\,\mu s} = 360 \text{ kHz},    (4.22)
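A short worked application of Equation (4.20) with this value of f_n0 shows how steeply the expected noise rate drops with the threshold expressed in units of σ_noise.

```python
import math

f_n0 = 360e3  # counting rate at zero threshold [Hz], from Equation (4.22)
for n_sigma in (1, 2, 3, 4, 5):
    rate = f_n0 * math.exp(-n_sigma ** 2 / 2.0)
    print(f"{n_sigma} sigma: {rate:.0f} Hz")
# 3 sigma gives ~4000 triggers/s, the working threshold discussed below; 5 sigma gives ~1 Hz.
```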
Threshold (SNR)   Threshold (mV)   Exp. rate (Hz)   Rice's formula (Hz)
0                 0                365×10³          365×10³

The noise counting rate at very low threshold is too high, which implies the registration of a large number of noise events and an important increase of the dead time. The minimum acceptable threshold level is 3 times the noise, which results in a noise rate of around 4000 triggers/s. This threshold is low enough to take into account almost all physical signals, even those with low amplitude, without saturating the readout electronics.
However, the results obtained for the noise counting rate do not reproduce the real behavior of the discriminator. Figure 4.57 shows the time difference between two consecutive triggers for a 3σ_noise threshold. Because of the time discretization, the minimum acceptable time difference is 2 time channels (160 ns), since the signal needs to go below the threshold to re-trigger the discriminator. This high frequency comes from the fact that the ADC also discretizes in amplitude: the registered values of the charge depend on the amplitude resolution of the ADC, and the higher the resolution, the higher the fluctuation in the amplitude. To compensate for this effect we introduced a lower threshold on the trailing edge of the signal. The difference between these two thresholds is called hysteresis [START_REF] Riegler | MDT Resolution Simulation Frontend Electronics Requirements[END_REF].
Taking into account that one ADC channel corresponds to ∼ 0.2σ_noise, the minimum trailing-edge threshold should be set at (V_th - 0.4)σ_noise, where V_th is the leading-edge threshold. The noise counting rate obtained by including this second threshold is presented in Figure 4.58. Note that in this case the noise rate distribution is no longer modeled by a Gaussian distribution. The shape of the distribution is due to the asymmetric threshold, which means that the discriminator is never centered around zero. By including a second threshold level, the time interval between two consecutive triggers for a threshold of 3σ_noise is, in general, longer than τ_0 (see Figure 4.59). The noise rate for a leading-edge threshold of 3 times the noise is now ∼ 3300 triggers/s. This value is significantly reduced by using the CFD method described in the previous section. For an asymmetric threshold of 3σ_noise-2σ_noise, the number of noise events is reduced by ∼10 % with the CFD (∼ 1790 triggers/s) compared to the standard peak-sensing ADC. In the following, the trailing-edge threshold is set at (V_th - 1)σ_noise.
XTRACT: A New Front-End Electronics for XEMIS2
The continuous sampling of the analog signal performed in XEMIS1 for the data acquisition provides a complete picture of what happens at every moment in time, allowing robust event identification, pedestal and common noise corrections and other analysis-related capabilities. However, in XEMIS2, due to the large number of channels used to collect the ionization signal, the data output from the whole system would be difficult to manage in real time.
Assuming that each signal is sampled at a frequency of 12.5 MHz with a resolution of 12 bits, with a trigger rate of 2 Hz and a total of 24000 pixels, the readout system would produce a data volume of around 10 Tb for a 20 min medical imaging exam. This enormous volume of data is impossible to handle with a standard computer.
For detectors with a large number of readout channels, the solution is to reduce to the minimum the information necessary to describe an interaction inside the chamber. In addition, when a large number of channels is present, efficient power supply and data traffic management are also required. In this section, we introduce a new circuit called XTRACT (Xenon TPC Readout for extrAction of Charge and Time), which has been developed for the data acquisition of XEMIS2 with the goal of extracting the amplitude and time information of each signal delivered by the IDeF-X LXe ASIC. Moreover, this new front-end electronics makes it possible to reduce the number of connections between the inside and the outside of the chamber, which is of crucial importance in a system in which thermal losses may affect the performance of the detector and the risk of a leak from the outside through the vacuum enclosure is high.
XTRACT is a low-power 32-channel front-end ASIC designed to withstand temperatures of the order of -80 • C. This new ASIC is coupled to the 32 output channels of the IDeF-X LXe chip through a standard connector. This implies that in XEMIS2 around 700 XTRACTs will be present inside the chamber (350 chips per anode). Each XTRACT provides only the timing and charge information of those signals delivered by the IDeF-X that exceed a certain threshold level, together with the address of the pixel in which the charge is collected. The value of this threshold can be fixed externally for each individual pixel via a slow control interface, which allows the configuration of each ASIC independently. Other parameters such as the DC offset are also accessible via the slow control protocol. Individual selection of the 32 channels of a given XTRACT is also possible. Channel 31 of each ASIC includes analog and digital test points to verify the proper functioning of the chip, in addition to providing information on its temperature. The XTRACTs are grouped together in blocks of 8 ASICs through a PU card, which is responsible for reading out the data. A total of 46 PU boards are used per anode. The time, amplitude and pixel address of each detected event are extracted from the PU card to the outside of the cryostat via high-speed LVDS lines at a rate of 96 Mb/s, to be directly stored on disk. Figure 4.60 shows a diagram of the XEMIS2 data acquisition system. This design was made to optimize power consumption and communication schemes by reducing the number of output channels from 24000 to 94 digital outputs. As discussed in Section 4.4.2, a CFD is the best option to determine the amplitude and time of the collected ionization signals. The CFD block was designed in order to meet the time and amplitude resolutions reported in Section 4.4.2. The output signals of the shaper located inside the IDeF-X LXe chip are fed to the input of the CFD module. The shaper is set with a peaking time of 1.39 µs. An example of the CFD method is illustrated in Figure 4.62. The analog signal is split and sent, on one side, to a voltage comparator that verifies whether the pulse exceeds the discriminator threshold level; on the other side, the analog signal is delayed by a second-order Bessel filter and attenuated to 30 % of its maximum amplitude by a voltage divider. The delay introduced by the filter is around 600 ns, in order to achieve a timing resolution of 300 ns at 3σ_noise. If the input signal crosses the threshold, the outputs of both the voltage comparator and the filter are sent to a zero-crossing detector circuit (ZCD). The ZCD comparator subtracts the two input signals and produces an output at the moment the generated bipolar pulse crosses a reference value. The DC offset of the ZCD circuit is set by a 6-bit digital-to-analog converter (DAC) adjustable between ±43 mV via the slow control. A different value of this offset can be set for each pixel. A second threshold, normally at (n - 1) times the noise, where n is the leading-edge discriminator level, is added to avoid a new CFD trigger on the same pixel outside the interval of interest. A veto is included to lock the CFD output, so that the amplitude and time of a signal can be directly registered independently of the CFD. If the CFD module detects a pulse, the flag signal of this particular pixel changes from 0 to 1, and a trigger is sent to the rest of the electronic chain to inform it that a pulse has been detected. At this point, a voltage ramp with constant slope is generated.
The ramp generator (T2AC in Figure 4.61) is based on a capacitor that is charged linearly through a constant-current circuit until the input current ends. The output of the ramp generator is then a voltage pulse, which provides information on the arrival time of the events inside the detector relative to the first detected event. The time t_0 is, therefore, the arrival time of the first event or trigger event. The ramp returns to its zero value the moment the trigger signal ends. The ramp lasts between 7 and 10 µs, and its duration can be adjusted with the slow control interface.
Equally, at the moment the CFD block detects a signal, the time and amplitude information is stored in an analog memory. Each analog memory contains two different cells, one for the time and another for the charge. The value of the amplitude corresponds to the analog value of the charge at the zero-crossing point, whereas the time is the analog value delivered by the voltage ramp. Each pixel has its own analog memory.
Every time the flag signal of a pixel is set to 1, it is sent to and stored in the derandomizer. This block handles the arrival of information in case several pixels are fired, preventing a loss of information if a new flag arrives during the reading procedure. The derandomizer also provides addressing information to the control unit. This module performs a logical OR operation between the received flags and sends a trigger signal to the PU card presented in Figure 4.63. When the PU card is ready, one of the eight XTRACTs driven by the same board is selected by a Chip Select (CS) signal. Afterwards, a reading order is sent to retrieve the information stored in the analog memory via a multiplexer. When a pixel is read, the control unit sends a reset to the derandomizer, and the flag of the pixel in question changes from 1 to 0. In the example of Figure 4.64, three different pixels, corresponding to channels 22, 3 and 4, collect a signal. When the first arriving pulse is detected by the CFD module, a trigger signal is generated and the flag of pixel 22 is set to 1. At this moment the voltage ramp generator also starts. If another pulse is detected while the ramp generator is still on (pixels 3 and 4 in Figure 4.64), its arrival time is obtained from the analog value of the ramp at the zero-crossing point. This time value is not the absolute time of the interaction inside the TPC but is relative to the moment the first pulse is detected (trigger). The flag signals of the three fired pixels are also sent to the control unit to be read later by the PU card. The trigger warns the PU board that at least one event has been detected. When the card is ready, a CS is sent in order to start reading the information stored in the analog memory. The derandomizer selects the reading sequence, which can be different from the detection one, i.e. the pixels may be read out in a different order than that of detection. In the example presented in Figure 4.64, the channels are read out starting from pixel 3 up to pixel 22. When a pixel is read by the PU card, its flag signal is set back to 0. A new reading order is sent from the PU board to the derandomizer until the last flag changes to 0. At this moment, the trigger and CS signals return to their zero value and the PU board is free to start reading another XTRACT ASIC. A reset signal is also sent to the ramp generator, which remains at zero until a new event is detected. As discussed in the previous chapters, a very low threshold level, of the order of three times the electronic noise, is required in order to perform the Compton sequence reconstruction and triangulate the position of the radioactive source. A low threshold, however, is accompanied by a high noise trigger rate that must be handled by the readout system. The data acquisition system of XEMIS2 was designed to support a noise rate of around 4096 triggers per second and per pixel at 3σ_noise. At this trigger rate, each XTRACT will receive, on average, an event every 7.6 µs. Since the ASICs are grouped in sets of 8 XTRACTs, the PU board will receive, on average, a noise trigger every 950 ns. With a 3 MHz reading rate, XEMIS2 is capable of reading without dead time 11700 events per second and per pixel. On average, ∼7400 of the registered events will come from an interaction of an ionizing particle inside the TPC.
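The short back-of-the-envelope check below reproduces the rates quoted above, assuming 4096 noise triggers per second and per pixel at 3σ_noise, 32 pixels per XTRACT, 8 XTRACTs per PU card and a 3 MHz reading rate.

```python
noise_rate_per_pixel = 4096          # triggers / s / pixel at 3 sigma
pixels_per_xtract = 32
xtracts_per_pu = 8
reading_rate = 3e6                   # reads / s handled by one PU card

rate_per_xtract = noise_rate_per_pixel * pixels_per_xtract    # ~131 kHz
rate_per_pu = rate_per_xtract * xtracts_per_pu                # ~1.05 MHz

print(1e6 / rate_per_xtract)   # ~7.6 us between triggers seen by one XTRACT
print(1e9 / rate_per_pu)       # ~950 ns between triggers seen by one PU card
print(reading_rate / (pixels_per_xtract * xtracts_per_pu))    # ~11700 reads / s / pixel
```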
Conclusions Chapter 4
In this chapter we have presented the performance of the readout front-end electronics used in XEMIS1 for the measurement of the ionization signal. In order to optimize the signal extraction, we have carried out a detailed study of the electronics response. The influence of the charge sensitive preamplifier and the shaper on the shape of the output signals has been reported in Section 4.3, as a function of both the shaping time of the linear amplifier and the collection time of the electrons in the TPC. Charge linearity and electronic noise contributions have also been discussed in that section.
In a large number of applications, precise information on the arrival time of the electrons in the detector is of particular interest. When timing information is the major goal, pulses are often handled differently than when accurate charge measurement is the purpose. The accuracy with which timing and amplitude measurements can be performed depends both on the properties of the detector and on the performance of the electronics used to process the signal. In 3γ imaging, both good energy and spatial resolutions are required in order to reconstruct the 3D distribution of a radioactive source. Consequently, the measurement of the charge and time of the detected signals must be optimized. In this chapter, we have presented a complete study of the measurement of the timing and amplitude information of the ionization signals, based on a Monte Carlo simulation that reproduces the output signal of the IDeF-X LXe front-end electronics. Special attention has been paid to the simulation of the noise. The study has been performed for two different methods, based on a Constant Fraction Discriminator (CFD) and on a peak-sensing ADC. The results obtained with the CFD method, for an optimized selection of the CFD parameters, showed a clear improvement of the time and amplitude resolutions in comparison with a peak-sensing ADC. We have shown that the Max method is not suitable for measuring the amplitude of low energy signals because of the important amplitude fluctuations caused by the electronic noise. Moreover, we have also found that a small peaking time improves the time and amplitude resolutions regardless of the slightly higher ENC. Taking into account these results, together with the electronic limitations, we found that the best results are obtained for a delay of 9 channels (720 ns) and an attenuation fraction of 1.5.
The results obtained with this study have contributed to the development of a specific acquisition system for the measurement of the ionization signal in XEMIS2. A description of the main characteristics of this new analog ASIC called XTRACT has been presented in Section 4.6.
Chapter 5
Study of the Performances of a Frisch Grid

When radiation passes through the LXe, it ionizes the medium producing a track of electron-ion pairs. The resulting charge carriers rapidly recombine unless an external electric field is applied. In this case, electrons and ions immediately drift in opposite directions under the action of the electric field. In order to determine the energy deposited by the incoming radiation, the detector should be sensitive to the charge carriers produced in the interaction. Ionization detectors, such as ionization chambers, Geiger-Müller tubes and proportional counters, have been commonly used since the first half of the 20th century to detect ionizing particles [START_REF] Sauli | Gaseous Radiation Detectors. Fundamentals and Applications[END_REF]. In an ionization detector, the formation of a signal is caused by the charge induced on one or more electrodes. The induced signal is in fact produced by the displacement of electrons and positive ions through the medium. Considering the simplest structure of an ionization detector, which is based on two parallel electrodes separated by a distance d and immersed in a dielectric medium, the total induced charge depends on the distance travelled by the charges before being collected. This implies that the collected signal depends on the position of the interaction in the active zone of the detector with respect to the collecting electrode. To overcome this dependence of the pulse amplitude on the position of the interaction, a third electrode known as a Frisch grid is usually incorporated between the two electrodes [START_REF] Frisch | Isotope analysis of uranium samples by means of their α-ray groups[END_REF]. Gridded ionization chambers are commonly used in nuclear and particle physics to measure ionizing radiation. More sophisticated designs including strip electrodes or pixelated anodes make this kind of ionization detector interesting not only for γ spectroscopy but also for position determination. The XEMIS camera is based on the principle of a Frisch grid ionization chamber.
In this chapter, we discuss the basic theory of signal induction in a parallel plate ionization chamber. The advantages of using a gridded ionization chamber and the basic principle of signal formation are discussed in Section 5.1. The signal generated in an ionization detector depends on the transport of the charge carriers through the active volume. For this reason, when using a Frisch grid ionization chamber the properties of the collected signal also depend on the characteristics of the grid. In order to improve the performance of future devices, during this work we have tried to understand and study the main effects associated with a gridded ionization chamber. Electron transparency and charge collection efficiency of a Frisch grid are discussed in Sections 5.2 and 5.3, respectively. Charge sharing between neighboring pixels is discussed in Section 5.4. The theoretical discussion is supported by experimental results and by simulation. The results obtained with XEMIS1 for different Frisch grids are reported in Section 5.5.
Theoretical background
The phenomenon of charge induction by moving charges in an ionized medium is well described in the literature [START_REF] Knoll | Radiation Detection and Measurements[END_REF][START_REF] Spieler | Semiconductor Detector Systems[END_REF][START_REF] Rossi | Ionization Chambers And Counters[END_REF][START_REF] Blum | Particle Detection with Drift Chambers[END_REF]. However, the commonly used term charge collection instead of charge induction sometimes leads to misinterpretation. In the presence of an electric field, the displacement of electrons and ions in a detector induces a current signal on the readout electrodes. This implies that the signal on an electrode is formed from the moment the charges start to move, and not upon the actual collection of the charges when they arrive at the electrode.
In order to better understand the physics of signal formation on an electrode, a general overview of the theory of charge induction due to the motion of charge carriers in an ionization detector is given in this section. A brief introduction to the Shockley-Ramo theorem is presented in Section 5.1.1. Furthermore, more specific examples of charge induction in a parallel plate ionization chamber and in a Frisch grid ionization chamber are discussed in Sections 5.1.2 and 5.1.3 respectively.
Charge induction and the Shockley-Ramo theorem
The current induced on an electrode by a single charge produced in a detector can be determined from the Shockley-Ramo theorem [START_REF] Shockley | Currents to Conductors Induced by a Moving Point Charge[END_REF][START_REF] Ramo | Currents induced by Electron Motion[END_REF]. This theorem was first developed for charge induction in vacuum tubes but it has been demonstrated that it can be applied to any detector configuration, from gas ionization chambers to semiconductor detectors. The Shockley-Ramo theorem states that the instant current induced on a single electrode can be evaluated as follows:
i = -q\, \vec{v} \cdot \vec{E}_w    (5.1)
where v is the velocity of the charge q in the medium and E w is the weighting field at the position of q. Similarly, the charge Q induced on an electrode due to the movement of a point charge q can be determined by the following expression [START_REF] Shockley | Currents to Conductors Induced by a Moving Point Charge[END_REF]:
Q = -q\, \Delta\varphi_w    (5.2)
where ∆ϕ w represents the weighting potential difference. E w and ϕ w are, respectively, the electric field and the electric potential at the position of q when the electrode that collects the charge is set to a potential of 1 V, all other electrodes are grounded and all charges are removed [START_REF] Blum | Particle Detection with Drift Chambers[END_REF][START_REF] He | Review of the Shockley-Ramo theorem and its application in semiconductor gamma-ray detectors[END_REF]. The weighting field and weighting potential only depend on the geometry of the detector and in general, they differ from the actual applied electric field and electric potential except for the basic case of a two infinite parallel plate ionization chamber.
According to the Shockley-Ramo theorem, while the trajectory of the charge q follows the real electric field lines (if diffusion is neglected), the charge Q induced on the electrode of interest can be obtained from the weighting potential. The distribution of the weighting potential does not influence the motion of the charge but it represents the electrostatic coupling between the moving charge and the collecting electrodes. The weighting potential is determined by solving the Laplace equation ∇ 2 ϕ w = 0 with spatial boundary conditions that only depend on the detector's geometry. As a result, the value of the charge induced by the motion of q does not depend on the potential applied to the electrodes, but it only depends on the position of q with respect to the collecting electrode.
Principle of a parallel plate ionization chamber
One of the simplest methods to measure the charge produced in a liquefied noble gas detector is to use a parallel plate ionization chamber. Its basic design consists of two parallel plane electrodes (anode and cathode) separated by a certain distance d and filled with a suitable medium, normally a gas or a liquid. The electrodes are maintained at a potential difference V_b in order to create an electric field between them. The distance between the electrodes should be small with respect to the length and width of the electrodes to generate a uniform electric field. A schematic drawing of a parallel plate ionization chamber is illustrated in Figure 5.1. The cathode is generally kept at a potential of -V_b, while the anode or collecting electrode is grounded through a resistance R. Electrons and ions generated after the passage of radiation through the medium immediately drift apart under the action of the electric field. The current induced on the anode is converted into an electrical pulse by an external electronic chain usually composed of a charge sensitive preamplifier. The Shockley-Ramo theorem states that the weighting field of the anode can be calculated by setting the anode at unit potential and the cathode to ground. Solving the Laplace equation for a two parallel plate detector with these boundary conditions yields a weighting field, E_w, that is uniform between the electrodes over the drift length d and zero outside [START_REF] Spieler | Semiconductor Detector Systems[END_REF]:
\vec{E}_w = \frac{1}{d}\, \hat{z}    (5.3)
where \hat{z} represents the drift direction of the charges in the detector. Applying Equation 5.1, the current induced on the anode by a single electron of charge q is given by:
i = -q\, \frac{v^-}{d}    (5.4)
where v⁻ is the electron velocity in the medium. Equation 5.4 shows that the current induced on the anode only depends on the actual applied electric field (through v⁻) and on the distance between the electrodes.
If we now consider a certain number N⁻ of drifting electrons produced by ionization after the passage of radiation through the space between the plates, Equation 5.4 results in:
i^-(t) = -q\, N^-\, \frac{v^-(t)}{d}    (5.5)
Assuming that all charges are produced at the same position, that both the electric and the weighting fields are uniform between the electrodes and that the electrons move at constant velocity, Equation 5.5 implies that the current induced on the anode by the displacement of the N⁻ electrons in the drift region is constant during the time the electrons drift towards the anode, and becomes zero the moment they reach the electrode.
After a γ-ray of energy E_γ interacts with the LXe, a number N of electron-ion pairs is produced according to N = E_0 / W, where W is the average energy required to produce an electron-ion pair and E_0 is the energy deposited in the interaction. As for the electrons, the current induced on the anode due to the displacement of N⁺ positive ions can be expressed as:
i^+(t) = -q\, N^+\, \frac{v^+(t)}{d}    (5.6)
where v + is the velocity of ions in the medium. Since ions also drift with constant speed in a homogeneous electric field, the current induced on the anode by the displacement of positive ions between the two plates is also constant during the drift time and it ceases the moment the charges reach the cathode. Considering that the same number of ions and electrons is produced with the same absolute charge, the total induced current by the movement of both electrons and ions can be approximated as:
I(t) = i^-(t) + i^+(t) = q\, N\, \frac{v^-(t) + v^+(t)}{d}    (5.7)
If the interaction occurs at a position z from the anode (Figure 5.1), the electrons will drift a distance z before being collected, while the positive ions will travel a distance (d - z) to the cathode. Likewise, the traveled distances can be expressed as z = v⁻ t⁻ and (d - z) = v⁺ t⁺, where t⁻ and t⁺ are the drift (or collection) times of the electrons and positive ions respectively, i.e. the time required by the charges to reach the electrodes from the point of interaction. The induced charge is then obtained by integrating the induced current over the collection time, as presented in Equations 5.8 and 5.9:
Q^-(t) = \int_0^{t^-} i^-(t)\, dt = -q\, N^-\, \frac{z}{d}    (5.8)

Q^+(t) = \int_0^{t^+} i^+(t)\, dt = -q\, N^+ \left(1 - \frac{z}{d}\right)    (5.9)
The sum of these two contributions gives the total charge induced on the anode, q(t) = Q^-(t) + Q^+(t), and the corresponding voltage difference between the anode and the cathode is given by V(t) = q(t)/C, where C is the capacitance between the two electrodes.
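The sketch below illustrates Equations 5.8-5.10 in terms of fractions of the total charge qN: once each carrier species has been fully collected, the electron and ion contributions depend on the interaction depth z, but their sum does not. The interaction depths and gap used here are arbitrary illustrations.

```python
def induced_charge_fractions(z, d):
    """(electron fraction, ion fraction, total) of the induced charge, in units of q*N."""
    f_electrons = z / d        # Eq. 5.8: electrons drift a distance z to the anode
    f_ions = 1.0 - z / d       # Eq. 5.9: ions drift the remaining d - z to the cathode
    return f_electrons, f_ions, f_electrons + f_ions   # Eq. 5.10: the sum is 1 for any z

for z in (0.002, 0.006, 0.010):          # interaction depths [m] in a d = 12 mm gap
    print(induced_charge_fractions(z, d=0.012))
```

If only the electron component is integrated by the readout, as discussed below, the measured fraction reduces to z/d and the z-dependence reappears.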
Signal shape:
The slope of the induced signal depends on the drift velocity of the charges, and the signal duration depends on the position of the interaction. Since the mobility of ions is about three orders of magnitude lower than that of electrons, all electrons reach the anode in a short time compared to the ions and therefore the ion motion can be almost neglected while the electrons drift towards the anode. As a result, the induced signal has a double contribution, with a fast rise time due to the drifting electrons and a slower component that comes from the fact that the ions are still traveling after all electrons have been collected. An example of the current and voltage induced on a parallel plate ionization chamber as a function of time is illustrated in Figure 5.2. Even though both electrons and positive ions induce a signal on the collecting electrode during the electron drift time t⁻, the fast rise time observed in the output signal is mostly due to the migration of electrons, since the slow drift of the positive ions provides an almost negligible contribution to the signal. Once all electrons are collected, a more gradual rise is observed in the output signal because the ions still induce a voltage \frac{qN}{dC}(v^+ t + z) during a time t⁺, which is much longer than t⁻. Finally, when all charge carriers are collected, the maximum induced charge on the anode is achieved:
Q = q\, N\, \frac{(d - z) + z}{d} = q\, N    (5.10)
The total induced charge on the anode is independent of the interaction depth and only depends on the number of electron-ion pairs, which is proportional to the deposited energy. However, in real experimental conditions, only the contribution to the induced signal due to electron motion is registered, whereas the contribution of positive ions is generally suppressed by the integration time of the external electronic readout system. As a result, the total induced charge becomes dependent on the position of the interaction with respect to the anode according to Equation 5.11:
Q \approx q\, N\, \frac{z}{d}    (5.11)
Depending on where the electron-ion pairs are created, the amplitude of the induced pulse will vary from 0 to V_max ≈ qN/C. This variation of the signal amplitude with the position of the interaction severely degrades the detector's energy resolution [START_REF] He | Review of the Shockley-Ramo theorem and its application in semiconductor gamma-ray detectors[END_REF]. To avoid this z-dependence of the induced signals, the incorporation of a third electrode between the cathode and the anode was proposed by Frisch [START_REF] Frisch | Isotope analysis of uranium samples by means of their α-ray groups[END_REF]. The principle of a Frisch grid ionization chamber is presented in the following section.
Frisch Grid Ionization Chamber
In order to improve the performance of an ionization chamber and remove the position dependence of the induced signal due to the motion of charge carriers in the medium, a gridded electrode is incorporated between the cathode and the anode. This method was first proposed by O. Frisch in 1944 for a gas ionization chamber [START_REF] Frisch | Isotope analysis of uranium samples by means of their α-ray groups[END_REF]. Figure 5.6 shows a schematic diagram of a Frisch grid ionization chamber. This third electrode, known as the Frisch grid, is placed at a distance p close to the anode and set at an intermediate potential between the potentials of the cathode and the anode. Under these conditions, the grid shields the anode from the induction of charges generated in the region between the cathode and the grid. Ideally, this configuration divides the active area of the detector into two independent regions: the drift region and the gap. Since the gap is small compared to the drift region, most of the interactions take place between the cathode and the grid. Electrons and ions created in this region migrate in opposite directions under the action of the electric field. While the positive ions induce a current on the cathode in the same way as explained in the previous section, the electrons, on the other hand, pass through the grid. The actual signal induced on the anode starts from the moment the electrons cross the grid and stops when they reach the collecting electrode. Positive ions, on the other hand, are shielded by the grid and hence induce no current on the anode. The charge induced on the anode can also be determined from the Shockley-Ramo theorem. The weighting potential of the anode is obtained by applying a potential of 1 V to the electrode and setting both the cathode and the Frisch grid to ground. An illustration of the weighting potential for an ideal Frisch grid ionization chamber as a function of the distance between electrodes is presented in Figure 5.4. The weighting potential is zero in the region between the cathode and the grid, and varies linearly to 1 between the grid and the anode. As a result, the total induced charge does not depend on the position of the interaction because now all the electrons travel the same distance p within the detector. Figure 5.5 shows an example of the induced voltage on a Frisch grid ionization chamber. Compared to Figure 5.2, the induced signal is zero while the electrons migrate towards the grid, followed by a fast rise time from the moment the electrons pass through the grid. The amplitude of the output signal is now proportional to the number of collected electrons, since carriers created at any position in the cathode-grid region induce the maximum signal on the anode as long as they all pass through the grid. Moreover, the signal is independent of whether or not the positive ions are collected [START_REF] Luke | Unipolar Charge Sensing with Coplanar Electrodes Application to Semiconductor Detectors[END_REF]. Since the slow moving ions do not affect the output signal, the rise time of the induced pulse only depends on the distance between the Frisch grid and the anode and on the electronics used to integrate the induced current. In general, for the same electronics system, smaller gap distances result in faster rise times.
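The following sketch expresses the ideal Frisch-grid weighting potential just described and the induced-charge fraction it implies through the Shockley-Ramo theorem. The 1 mm grid-anode gap is taken from the setup described earlier; the interaction depths are illustrative.

```python
def weighting_potential(z, p):
    """Anode weighting potential at a distance z from the anode, for a grid at z = p:
    zero in the drift region, rising linearly to 1 at the anode."""
    return 0.0 if z >= p else 1.0 - z / p

def induced_fraction(z_start, p):
    """Magnitude (in units of e) of the charge induced on the anode by an electron
    drifting from z_start to the anode: the full charge whenever it starts beyond the grid."""
    return 1.0 - weighting_potential(z_start, p)

P_GAP = 1e-3                                # grid-anode distance [m]
print(induced_fraction(0.05, P_GAP))        # interaction in the drift region -> 1.0
print(induced_fraction(0.0005, P_GAP))      # interaction halfway inside the gap -> 0.5
```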
Taking this into account, with an ideal Frisch grid placed between the cathode and the anode, the amplitude of the induced signal is directly proportional to the energy deposited in the detector, although the total induced charge still depends on the number of collected electrons. The fraction of electrons that reaches the anode depends on many factors such as the detector design, the purity of the medium and the applied electric field. Under real experimental conditions, a certain number of electrons may be collected by the grid, reducing the total number of collected charges. This effect is related to the electron collection efficiency of the grid. In addition, the shielding of the Frisch grid against the movement of charges in the drift region is not perfect: electrons induce a current on the anode before they pass through the grid. This effect, referred to as the inefficiency of the Frisch grid, affects both the total induced charge and the shape of the output signals. A more detailed description of the electron transparency and of the inefficiency of the Frisch grid is presented in Sections 5.2 and 5.3, respectively.
Electron collection by the Frisch Grid
The electron transparency of a grid refers to the fraction of the electrons produced after the interaction of an ionizing particle with the medium that passes through the grid under the influence of an electric field. If the transport properties of the grid are good, no electrons are trapped during their migration between the point of interaction and the grid, and hence the amplitude of the signal measured at the anode does not depend on the position of the interaction. On the other hand, if a fraction of the electrons is collected before reaching the anode, the amplitude of the output signals is reduced, degrading the energy resolution of the detector [START_REF] Bunemann | Design of Grid Ionization Chambers[END_REF]. The electron transparency of a grid depends mostly on the choice of the potentials applied to the electrodes of the detector, but it also depends on the gap and on the geometrical characteristics of the grid, as discussed in this section. The reduction in the number of drifting electrons due to recombination is omitted in this chapter.
The grid is maintained at a potential intermediate between those of the cathode and the anode. With an appropriate biasing of the electrodes, the overall electric field within the chamber remains substantially uniform, so electrons can pass through the grid with high efficiency. For this to happen, the electric field in the gap, E_gap, should be higher than the electric field in the drift region, E_drift. When the electric field ratio E_gap/E_drift is large enough, all field lines pass through the grid and the electrons, which (if diffusion is neglected) drift along the electric field lines, arrive at the anode without being collected by the grid. On the other hand, if the ratio is not high enough, some electric field lines may terminate on the grid and a fraction of the electrons is collected before actually crossing the grid. Bunemann et al. [START_REF] Bunemann | Design of Grid Ionization Chambers[END_REF] established a minimum bias condition that should be satisfied in a Frisch grid ionization chamber in order to avoid electron collection:
\frac{E_g}{E_d} \ge \frac{1 + \dfrac{2\pi r}{a}}{1 - \dfrac{2\pi r}{a}} \qquad (5.12)
where r is the grid wire radius and a is the grid pitch, i.e., the center-to-center distance between adjacent wires. The geometry of a Frisch grid ionization chamber is shown schematically in Figure 5.6. It should be noted that Equation 5.12 is valid for a specific kind of grid, namely a 1D parallel wire grid. In our LXe TPC, the Frisch grid is a woven mesh consisting of a set of parallel and perpendicular wires. Hence, although this condition cannot be directly applied to our detector, it gives an idea of the direct relationship between the electric fields required to avoid electron collection by the grid and the characteristics of the experimental setup and the physical properties of the grid. Equation 5.12 relates, in fact, to the fraction of field lines that end on the collecting electrode, where E_g and E_d do not directly represent the electric fields in the gap and in the drift region, but rather the number of field lines per unit area that end on and leave the grid, respectively. Substituting E_g and E_d by the actual potential differences between the electrodes, Equation 5.12 becomes:
\frac{V_{anode} - V_{grid}}{V_{grid} - V_{cathode}} \ge \frac{p + p\rho + 2l\rho}{d - d\rho - 2l\rho} \qquad (5.13)
where p is the grid-anode distance, ρ = 2πr/a, l = (a/2π)(ρ²/4 − log ρ), and V_cathode, V_anode and V_grid are the voltages applied to the cathode, the anode and the Frisch grid, respectively [START_REF] Bunemann | Design of Grid Ionization Chambers[END_REF]. The local distribution of the electric field lines that pass through the grid and terminate on the anode therefore depends on the geometry of the grid. By decreasing the wire radius or increasing the pitch of the grid, a smaller electric field ratio is required to achieve maximum electron collection efficiency.
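As a rough numerical illustration of Equations 5.12 and 5.13 (a sketch only: the formulas strictly apply to a 1D parallel-wire grid, the natural logarithm is assumed in l, and the geometry values are examples rather than an exact description of our meshes), the following snippet evaluates the minimum field and voltage ratios for a given wire radius, pitch, gap and drift length.

```python
import math

# Sketch: minimum bias conditions for full electron transparency of a
# 1D parallel-wire Frisch grid (Bunemann et al.), Eqs. 5.12 and 5.13.
# Example geometry only; the natural logarithm is assumed in l.

r = 25e-6     # wire radius (m), example value
a = 254e-6    # grid pitch, centre-to-centre wire distance (m)
p = 0.5e-3    # grid-anode distance (m)
d = 6e-2      # cathode-grid (drift) distance (m)

rho = 2.0 * math.pi * r / a
l = (a / (2.0 * math.pi)) * (rho**2 / 4.0 - math.log(rho))

# Eq. 5.12: minimum ratio of gap field to drift field
field_ratio_min = (1.0 + rho) / (1.0 - rho)

# Eq. 5.13: minimum ratio of potential differences across gap and drift region
voltage_ratio_min = (p + p * rho + 2.0 * l * rho) / (d - d * rho - 2.0 * l * rho)

print(f"rho = {rho:.3f}, l = {l * 1e6:.1f} um")
print(f"minimum E_gap/E_drift           : {field_ratio_min:.2f}")
print(f"minimum (V_a - V_g)/(V_g - V_c) : {voltage_ratio_min:.4f}")
```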
Moreover, according to Equation 5.13, for the same applied drift field and the same Frisch grid, halving the gap reduces by a factor of two the potential that must be applied to the grid in order to obtain the same electron transparency. However, smaller gaps impose severe mechanical constraints. Although a mesh grid is in general mechanically resistant, handling the grid to achieve good flatness and parallelism with respect to the anode and cathode planes can be quite challenging. When an intense potential is applied to the grid, the electric field pulls the mesh down. In order to maintain the flatness of the grid, and hence the gap distance between the grid and the anode, the grid should be uniformly stretched over a frame. A small deflection of the grid with respect to the anode plane may cause capacitance variations and thereby increase the electronic noise. Moreover, small gap distances and high electric fields imply large capacitances per unit area, which make the grid very sensitive to mechanical vibrations. A small variation of the distance between the grid and the anode produced by a vibration may create a large transitory fluctuation of the capacitance, producing a charge pulse spike at the input of the readout electronics. In the worst-case scenario, if the grid-anode distance is too small and the grid is not well stretched, a mechanical vibration may bring the electrodes into direct contact, producing irreversible damage to the electronics.
In addition, the possibility of sparks in the gap when working at high voltages and small gap distances should be taken into account. This effect limits the maximum potential that can be applied to the Frisch grid and hence limits the maximum electric drift field.
Frisch Grid Inefficiency
As discussed in Section 5.1.3, the Frisch grid should act as an electrostatic shield between the drift region and the gap, removing the position dependence from the charge induced on the collecting electrode. If the shielding is perfect, electrons induce a current on the anode only from the moment they pass through the grid, while no signal is induced by the positive ions. However, under real experimental conditions, a slight position dependence of the induced signals is always present due to the limited shielding of the grid. This effect, known as the inefficiency of the Frisch grid, implies that electrons start inducing a current on the anode before they actually pass through the grid. If so, the weighting potential of the anode is not strictly zero in the drift region, as presented in Section 5.1.3. Numerical calculations of the weighting potential of a Frisch grid ionization chamber have been reported in [START_REF] Göök | Application of the Shockley-Ramo theorem on the grid inefficiency of Frisch grid ionization chambers[END_REF].
The authors show that the weighting potential of the anode at distances far from the grid, i.e. z > a where a represents the grid pitch, is not zero but increases slowly as the distance to the grid decreases. In the vicinity of the grid its value deviates abruptly from zero and increases rapidly as the distance to the anode decreases. Figure 5.7 shows an example of the weighting potential reported in [START_REF] Göök | Application of the Shockley-Ramo theorem on the grid inefficiency of Frisch grid ionization chambers[END_REF] for a parallel wire grid. The weighting potential of the anode can be approximated as a function of z according to Equation 5.14:
\varphi_{anode}(z) =
\begin{cases}
\sigma \left(1 - \dfrac{z}{d}\right), & \text{if } p < z < d \\
\sigma + (1 - \sigma)\,\dfrac{p - z}{p}, & \text{if } 0 < z < p
\end{cases}
\qquad (5.14)
The inefficiency of the Frisch grid is determined by a linear extrapolation of the obtained weighting potential distribution to the position of the grid. These expressions describe fairly well the weighting potential in the regions between the electrodes, although they do not reproduce the inhomogeneities observed around the Frisch grid. Equation 5.14 assumes that the weighting potential only varies with depth. However, in the proximity of the grid the weighting potential fluctuates slightly in the region between the wires. As shown in Figure 5.7, this lateral dependence disappears almost completely at distances from the grid larger than the grid pitch (z > a). These fluctuations, however, strongly depend on the gap, increasing as the gap decreases. The approximation is therefore valid for detectors with gaps that are large compared to the pitch of the grid; otherwise the lateral dependence of the weighting potential should be taken into account in the induced signal.
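To make the content of Equation 5.14 concrete, the short sketch below (with an example inefficiency factor and example dimensions, not fitted detector values) evaluates the approximate anode weighting potential and the induced charge for electrons created at different depths, showing that the residual position dependence is of the order of σ.

```python
# Sketch: approximate anode weighting potential of a non-ideal Frisch-grid
# chamber (Eq. 5.14) and the resulting induced charge vs. interaction depth.
# sigma, p and d below are example values, not measured detector parameters.

sigma = 0.05   # grid inefficiency factor (example)
p = 0.05       # grid-anode gap (cm)
d = 6.0        # cathode-anode distance (cm)

def phi_anode(z):
    """Approximate anode weighting potential, Eq. 5.14 (z measured from the anode)."""
    if p < z <= d:
        return sigma * (1.0 - z / d)
    return sigma + (1.0 - sigma) * (p - z) / p   # 0 <= z <= p

def induced_charge(z0, q=1.0):
    """Charge induced on the anode by electrons drifting from depth z0 to the anode."""
    return q * (phi_anode(0.0) - phi_anode(z0))

for z0 in (0.5, 2.0, 4.0, 5.5):
    print(f"z0 = {z0:4.1f} cm -> Q_ind = {induced_charge(z0):.4f} q")
# The induced charge now varies slightly with z0 (by roughly sigma), which is
# the residual position dependence attributed to the grid inefficiency.
```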
In Equation 5.14, σ is the inefficiency factor for a parallel-wire grid calculated by Bunemann et al. The inefficiency parameter σ depends on the grid-anode distance p, the wire radius r and the pitch between wires a according to:
\sigma \approx \frac{a}{2\pi p}\,\log\!\left(\frac{a}{2\pi r}\right) \qquad (5.15)
Values of the grid inefficiency factor σ determined experimentally for a parallel wire grid and for a mesh grid are also reported in [START_REF] Göök | Application of the Shockley-Ramo theorem on the grid inefficiency of Frisch grid ionization chambers[END_REF]. The results are in good agreement with the values of σ obtained by numerical calculations and with those predicted by Equation 5.15 for the case of a parallel wire grid. The authors show that for both kinds of grids the inefficiency factor σ does not vary linearly with the ratio a/p. For constant values of r and a, when the gap is large compared to the grid pitch, the grid inefficiency increases by almost a factor of two when the gap is reduced from 10 mm to 6 mm, as expected from Equation 5.15. However, when the gap distance becomes comparable to the pitch, a larger increase of σ is observed for the same gap variation. This increase is nevertheless less important for a mesh of crossed wires than for a parallel wire grid, although the variation is smaller than the one expected if both sets of parallel and perpendicular wires acted as independent shields of the anode. Based on the results obtained by Göök et al., for a metallic woven mesh with a 254 µm pitch and a 25 µm bar thickness placed at 500 µm from the anode, an inefficiency of around 5 % is estimated (a/p ≈ 0.5). On the other hand, an inefficiency factor of the order of 2 % is estimated for the same grid with a 1 mm gap. The choice of the Frisch grid and of the gap distance in a gridded ionization chamber should therefore be a compromise between improving the grid shielding efficiency and the requirements related to a good electron collection efficiency, as discussed in Section 5.2. In Section 5.5 a detailed study of the effect of the inefficiency of the Frisch grid on the shape of the output signals for different types of grid geometries and grid-anode distances is presented.
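For orientation, the following sketch evaluates Equation 5.15 for the mesh dimensions quoted above. Treating half the 25 µm bar thickness as an effective wire radius and using the natural logarithm are assumptions of this illustration, and the formula strictly applies to a 1D parallel-wire grid, so the output should be read only as an order-of-magnitude check against the values cited from Göök et al.

```python
import math

# Order-of-magnitude evaluation of Eq. 5.15 (parallel-wire grid formula).
# Assumptions of this sketch: natural logarithm, and an effective wire
# radius equal to half the 25 um bar thickness of the woven mesh. For a
# crossed-wire mesh the formula tends to overestimate sigma.

def grid_inefficiency(a, p, r):
    """Bunemann inefficiency factor: sigma ~ a / (2*pi*p) * ln(a / (2*pi*r))."""
    return a / (2.0 * math.pi * p) * math.log(a / (2.0 * math.pi * r))

a = 254e-6      # mesh pitch (m)
r = 12.5e-6     # assumed effective wire radius (m)
for p in (0.5e-3, 1.0e-3):                     # grid-anode distances (m)
    sigma = grid_inefficiency(a, p, r)
    print(f"p = {p * 1e3:.1f} mm -> sigma ~ {sigma:.1%} (parallel-wire estimate)")
```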
Charge induction on a pixelated anode
So far we have discussed the charge induction on a planar electrode, for which the total charge collected by the anode is always equal to the charge produced in the interaction (if the electrons are neither collected by the grid nor trapped by the impurities of the medium). However, in our LXe TPC the collecting electrode is a segmented anode divided into 3.125 × 3.125 mm² pixels. The advantage of using a pixelated anode is that it provides two-dimensional information on the position of the interaction in the detector, so that we can measure both the energy and the position. However, pixelated electrodes are also sensitive to other effects, such as charge sharing between neighboring pixels or indirect charge induction, which affect the performance of the detector.
Charge sharing between adjacent pixels is mainly related to the transverse diffusion of electrons in the medium, which determines the size of the electron cloud. If the pixel size is smaller than the size of the electron cloud, the charge will always be collected by several pixels. Otherwise, the charge sharing between pixels depends on the relative position of the cloud with respect to the center of the pixel. X-ray emission after a photoelectric interaction and the mean free path of the primary electrons may also contribute to charge sharing between neighboring pixels. Either way, charge sharing increases the number of fired pixels per interaction. Multiple-pixel events, i.e. events with at least one cluster composed of more than one fired pixel, can be used to improve the spatial resolution by measuring the center of gravity of the interaction. However, multiple-pixel events also have poorer energy resolution. Charge sharing implies lower-amplitude signals collected per pixel, which makes the device more sensitive to the noise discrimination level. Very low threshold levels are therefore required in order to collect all the charge from a single electron cloud shared between several pixels. The number of triggered pixels depends on the pixel size (see Section 7.3). In this respect, larger pixel dimensions are recommended for better energy measurement performance.
In LXe, the spread of the electron cloud due to transverse diffusion is ∼200 µm/√cm for an applied electric field of 1 kV/cm. The simulated lateral extension of the electron cloud at 511 keV was found to be around 200 µm, and the mean free path of K-shell X-rays is ∼400 µm. Note that the last two effects are small compared to a 3.1 × 3.1 mm² pixel size, so their contribution can be neglected. The lateral spread of the electron cloud is only significant if the distance between the point of interaction and the adjacent pixel boundary is smaller than 3 times the transverse diffusion spread. Other effects such as electronic noise or charge induction on the neighboring pixels may also generate multiple-pixel events even though the charge produced in the interaction is collected by a single pixel of the anode. These two effects degrade the energy resolution of the detector, since an additional charge is added to the real charge produced by the ionizing particle. The effect of the electronic noise is discussed in Section 6.4. In this section, we study the charge induced on a pixel that neighbors a directly collecting electrode.
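A simple way to see how strongly the 3.1 mm pixel size dominates over the diffusion spread is sketched below: assuming a purely Gaussian cloud with transverse spread 200 µm·√(z/cm) (a 1D approximation, with example depths and lateral offsets), the fraction of charge falling on the central pixel is computed from the error function.

```python
import math

# Sketch: fraction of a Gaussian electron cloud collected by a 3.1 mm pixel,
# considering transverse diffusion only (1-D projection, an approximation).
# sigma_t ~ 200 um * sqrt(z[cm]) is the transverse spread quoted in the text.

PIXEL = 3.1e-3            # pixel size (m)
DIFF = 200e-6             # transverse diffusion spread per sqrt(cm) (m)

def gauss_cdf(x, mu, s):
    return 0.5 * (1.0 + math.erf((x - mu) / (s * math.sqrt(2.0))))

def fraction_on_pixel(x0, z_cm):
    """Charge fraction inside [-PIXEL/2, +PIXEL/2] for a cloud centred at x0 (m)
    produced at depth z_cm (cm) above the anode."""
    s = DIFF * math.sqrt(z_cm)
    return gauss_cdf(PIXEL / 2, x0, s) - gauss_cdf(-PIXEL / 2, x0, s)

for x0_mm in (0.0, 1.0, 1.4, 1.55):           # lateral offset from pixel centre
    f = fraction_on_pixel(x0_mm * 1e-3, z_cm=6.0)
    print(f"x0 = {x0_mm:4.2f} mm -> fraction on central pixel = {f:.2f}")
# Significant sharing only appears when the cloud centre lies within a few
# sigma_t of the pixel boundary, as stated in the text.
```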
The charge induced on a pixelated anode by a moving charge q differs in general from the one induced on a planar electrode. Figure 5.8 shows the weighting potential distribution for a parallel-plate electrode and for two different pixel pitches. As we have seen, in a two-parallel-plate detector the weighting potential is a linear function of the distance from the electrode surface. On a pixelated anode, however, the weighting potential starts to bend in the proximity of the pixel. The inhomogeneities of the weighting potential close to the pixel become more significant as the size of the pixel decreases [START_REF] Rossi | Pixel Detectors: From Fundamentals to Applications[END_REF]. According to the Shockley-Ramo theorem, the weighting potential of a given pixel of the anode is obtained by solving the Laplace equation ∇²ϕ_w = 0 with the pixel of interest set at unit potential and the rest of the pixels, the Frisch grid and the cathode grounded [START_REF] He | Review of the Shockley-Ramo theorem and its application in semiconductor gamma-ray detectors[END_REF]. If the pixel size is much larger than the gap distance, the weighting potential can be approximated as a linear function of depth, so it increases linearly from zero to unity as the electrons drift between the grid and the anode. Under these conditions, the same charge is induced independently of the position of the interaction [START_REF] Rossi | Pixel Detectors: From Fundamentals to Applications[END_REF]. However, if the pixel size is comparable to or smaller than the gap, the weighting potential is no longer linear with distance, but shows a gradient that becomes steeper in the immediate vicinity of the pixel. The induced current is small when q is far from the pixel, i.e. z > s where s is the size of the pixel, due to the charge sharing among many pixels of the anode, and it becomes significant only when the charge is very close to the pixel. This non-linear behavior of the weighting potential on a segmented electrode varies with the pixel size: pixels that are small with respect to the detector dimensions show a more pronounced deviation from linearity. This effect is known as the small pixel effect [START_REF] He | Review of the Shockley-Ramo theorem and its application in semiconductor gamma-ray detectors[END_REF].
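The boundary-value problem described above can be illustrated with a coarse finite-difference relaxation. The sketch below (an idealized 2D toy geometry without a Frisch grid, with arbitrary dimensions; not the Elmer computation used later in this chapter) solves the Laplace equation with one pixel at unit potential and shows how the weighting potential deviates from the linear parallel-plate behaviour, rising steeply only close to the pixel.

```python
# Sketch: 2-D finite-difference (Jacobi) solution of the Laplace equation for
# the weighting potential of one pixel of a segmented anode. Idealized toy
# geometry (no Frisch grid, coarse grid, arbitrary dimensions), purely to
# illustrate the small pixel effect; not the Elmer/Garfield++ chain used
# in this work.
import numpy as np

NX, NZ = 121, 81          # lateral and vertical grid points
DX = 0.1                  # grid step (mm)
PIXEL = 3.1               # pixel size (mm)

phi = np.zeros((NZ, NX))
x = (np.arange(NX) - NX // 2) * DX

# Boundary conditions: pixel of interest at 1, rest of the anode plane (row 0)
# and the opposite (cathode) plane at 0.
pixel_mask = np.abs(x) <= PIXEL / 2
phi[0, pixel_mask] = 1.0

for _ in range(20000):                     # Jacobi relaxation
    new = phi.copy()
    new[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                              phi[1:-1, :-2] + phi[1:-1, 2:])
    new[0, :] = 0.0
    new[0, pixel_mask] = 1.0               # re-impose electrode potentials
    new[-1, :] = 0.0
    new[:, 0] = new[:, 1]                  # crude open lateral boundaries
    new[:, -1] = new[:, -2]
    phi = new

# Weighting potential above the pixel centre vs. distance z from the anode:
for k in (5, 10, 20, 40, 70):
    z = k * DX
    linear = 1.0 - z / ((NZ - 1) * DX)     # parallel-plate reference
    print(f"z = {z:4.1f} mm -> phi_w = {phi[k, NX // 2]:.3f} (linear: {linear:.3f})")
```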
Since the weighting potential is not uniform on a pixelated anode and bends around the pixel, it reaches into the volume of the neighboring pixels. This leads to the induction of a transient signal on a neighboring pixel even if the charge drifts only towards the actual collecting pixel along the electric field lines. Since these neighboring pixels do not collect charge, the weighting potential seen along the electron trajectory returns to zero: it rises, reaches a maximum value and then drops back to zero, as illustrated in Figure 5.9 [START_REF] Knoll | Radiation Detection and Measurements[END_REF]. The transient signal is a bipolar pulse, which is positive as the electrons pass close to the non-collecting pixel and becomes negative as the electrons are collected by the collecting pixel, leading to a net zero integral. If the shaper has a long integration time, the induced transient signal gives a zero output pulse. However, if the integration time is short, the output of the electronics may give a positive pulse with enough amplitude to trigger the threshold level. Consequently, these transient signals are counted as real charge deposited in the interaction and added to the total energy of the cluster. In some cases, the transient signal is mixed with directly collected charge, resulting in deformed pulses with biased values of the time and the amplitude. The contribution of the charge induced on a non-collecting electrode depends on the weighting potential. Smaller pixel sizes lead to stronger charge induction in the neighboring pixels due to the small pixel effect, but also give rise to multiple-pixel events due to diffusion and the charge cloud distribution. Moreover, the amplitude of the transient signal induced on a non-collecting electrode depends on the lateral position of the electron cloud with respect to the pixel center.
Results and Discussion
Measurement of the electron transparency of the Frisch grid
In LXe, after the interaction of a 511 keV γ-ray, around 27200 electrons are produced in the medium when an electric drift field of 1 kV/cm is applied [START_REF] Oger | Dévelopment expérimental d'un télescope Compton au xénon liquide pour l'imagerie médicale fonctionnelle[END_REF]. For an ideal Frisch grid, and if none of the electrons are collected by the grid, i.e. if the transparency of the grid is 100 %, the total charge collected by the anode is equal to the number of electrons produced in the interaction. However, if the transport properties of the grid are not good, a fraction of the electrons is collected before passing through the grid and the total induced charge on the anode is reduced. Since the transparency of a Frisch grid depends on the fraction of electric field lines intercepted by the grid [START_REF] Bunemann | Design of Grid Ionization Chambers[END_REF], it can be measured by comparing the 511 keV photoelectric peak for different electric field ratios (E_gap/E_drift).
The electron collection efficiency of a Frisch grid can thus be determined by keeping the potential applied to the cathode constant, i.e. the same electric drift field, keeping the anode at ground, and varying the potential applied to the grid. In the particular case of XEMIS1, to maintain the same electric field along the drift region, the voltages applied to the grid, the cathode and the first electric field ring must vary together while the anode is kept at ground. The results presented in this section were obtained for a constant electric drift field of 1 kV/cm with a 6 cm TPC. The cathode was biased from -6300 V to a maximum voltage of -7000 V, while the potential applied to the Frisch grid varied from 50 V to 500 V, corresponding to electric field ratios from 2 to 8. The Frisch grid used for this study was a 100 LPI metallic woven mesh placed at 500 µm from the anode. The experimental set-up used to obtain a mono-energetic beam of 511 keV γ-rays and the data acquisition system are introduced in Chapter 6. Figure 5.10 shows two examples of the 511 keV γ-ray spectrum obtained with a low activity 22Na source at 1 kV/cm. In Figure 5.10(a) the Frisch grid was set to a potential of 50 V, corresponding to an electric field ratio of 2, while the spectrum shown in Figure 5.10(b) was obtained for a ratio of 6. In both cases the event selection was performed according to the method presented in Chapter 6. Only those events that take place at least 5 mm from the grid were selected for the analysis. Moreover, the measured energies were corrected for electron attenuation as discussed in Chapter 7 (Section 7.1). The position of the maximum of the photoelectric peak was obtained by fitting the distribution with a Gaussian function. The mean value of the peak represents the average charge measured by the detector after a 511 keV energy deposit in the TPC. We can observe from Figure 5.10 that the position of the 511 keV peak varies with the voltage applied to the grid. When the ratio is low, the mean of the photoelectric peak is around 0.75 V. However, by increasing the electric field in the grid-anode region by a factor of 3, the collected charge increases to 0.91 V. This implies that a fraction of the electrons is lost during the motion towards the anode when an electric field of 2 kV/cm is applied in the gap region, compared to an electric field of 6 kV/cm.
In order to determine the optimal bias voltage that should be applied to the grid to obtain a 100 % electron transparency, the same study was performed for different electric field ratios. Figure 5.11 shows the evolution of the amplitude of the photoelectric peak as a function of the electric field ratio for a constant electric drift field of 1 kV/cm. The collected charge was normalized to the maximum charge measured for a field ratio of 8. We can see that the maximum electron collection efficiency is already achieved for a ratio of 5.5. Lower electric fields in the gap imply that a fraction of the produced electrons is lost before reaching the anode. For an electric field ratio of 2 the loss is of the order of 18 %, and it decreases rapidly as the ratio increases. The statistical error, estimated from the Gaussian fit of the photoelectric peak, is too small to be seen in the figure. As discussed in Section 5.2, the optimal biasing of the electrodes depends both on the physical properties of the grid and on the distance between the grid and the anode. The results presented in Figure 5.11 were obtained for a 100 LPI metallic woven mesh, which has a 50 µm thickness and a 254 µm pitch between wires. Figure 5.12 shows the comparison of the electron collection efficiency of this kind of Frisch grid for two different gap sizes as a function of the potential applied to the grid. As expected, for a 1 mm gap the same levels of electron transparency are obtained by increasing the potential applied to the grid by a factor of two. Likewise, the collected charges were normalized to the maximum charge measured at the highest electric field ratio.
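The analysis just described reduces, for each field ratio, to a Gaussian fit of the photopeak followed by a normalization to the plateau value. The sketch below reproduces that procedure on synthetic toy spectra (the peak positions are invented to mimic the trend of Figure 5.11, not measured XEMIS1 data).

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the transparency analysis described above: fit the 511 keV
# photoelectric peak with a Gaussian and take its mean as the collected
# charge for each field ratio. The "spectra" below are synthetic toy data,
# not XEMIS1 measurements.
rng = np.random.default_rng(1)

def gaussian(x, amp, mu, sig):
    return amp * np.exp(-0.5 * ((x - mu) / sig) ** 2)

def peak_position(samples, nbins=100):
    counts, edges = np.histogram(samples, bins=nbins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    p0 = [counts.max(), samples.mean(), samples.std()]
    popt, _ = curve_fit(gaussian, centres, counts, p0=p0)
    return popt[1]                       # fitted mean of the photopeak

ratios = [2, 3, 4, 5, 6, 7, 8]
toy_peaks = [0.75, 0.82, 0.87, 0.90, 0.91, 0.91, 0.91]  # toy plateau (V)
collected = []
for mu in toy_peaks:
    spectrum = rng.normal(mu, 0.05 * mu, 5000)           # toy photopeak events
    collected.append(peak_position(spectrum))

norm = np.array(collected) / max(collected)
for r, q in zip(ratios, norm):
    print(f"E_gap/E_drift = {r} -> normalised collected charge = {q:.3f}")
```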
This study was performed for each of the grids tested during this thesis. For example, the results obtained for a 50.29 LPI mesh are shown in Figure 5.13. For this mesh, whose pitch (505 µm) is about twice that of the 100 LPI mesh, the optimal electric field ratio is 3 instead of 6 for the same electric drift field of 1 kV/cm. In this case, the influence of the thickness is assumed to be negligible since the variation is small (50 µm and 60 µm thick, respectively). Therefore, increasing the mesh hole size by a factor of 2 halves the electric field in the gap required to achieve an electron transparency of 100 %. This reduction could be interesting when high electric drift fields are required, as in the case of XEMIS2 where electric fields up to 3 kV/cm are expected.

Figure 5.12 - Collected charge for 511 keV events as a function of V_grid, for a constant electric drift field of 1 kV/cm. The results were obtained for a 100 LPI Frisch grid for two different gaps of 500 µm and 1 mm.

Figure 5.13 - Collected charge for 511 keV events as a function of the ratio between the electric field in the gap and the electric drift field, for a constant electric drift field of 1 kV/cm. The results were obtained for a 50.29 LPI Frisch grid located at 1 mm from the segmented anode.
Influence of the Frisch grid inefficiency on the pulse shape
In Section 4.3.1, we studied the shape of the output signal for 511 keV events in order to understand the slower rise time of the signals compared to an ideal step-like pulse injected into the front-end electronics. A rise time of 1.52 µs was measured regardless of the gap distance between the grid and the anode, while a rise time of 1.39 µs is expected from the shaping time of the electronics. Our hypothesis is that this increase of the pulse rise time is due to the inefficiency of the Frisch grid. Moreover, in addition to the larger rise time, a long rising edge is also observed on the pulses, which is likewise attributed to the inefficiency of the grid.
The charge induced on the anode of an ideal Frisch grid ionization chamber is zero in the drift region and increases linearly as the electrons move from the grid to the anode. In this case, the total collected charge is equal to the number of generated electrons, and the shape of the signal is given by the collection time of the electrons in the detector and the integration time of the electronics. This is the case for an ideal step-like pulse injected into the injection capacitance of the IDeF-X chip, as presented in Figure 4.8, where the signal reaches its maximum value in 1.39 µs. However, as reported in Section 5.3, the shielding of the grid against the motion of charges in the drift region is not perfect due to the inefficiency of the grid, so electrons start inducing a signal on the anode before they pass through the grid [START_REF] Bunemann | Design of Grid Ionization Chambers[END_REF]. Figure 5.14 shows the comparison between the average output signal for 511 keV photoelectric events measured with a 100 LPI metallic woven grid placed at 500 µm from the anode, and the output signal obtained after the injection of a test pulse into the front-end electronics through the 50 fF test capacitance. To simulate the drift in the gap, the test signal was a step pulse with a slope of 250 ns, which corresponds to the pre-amplifier output for a drift over a gap of 500 µm (for an electron drift velocity of 2 mm/µs at an electric field of 1 kV/cm). The injection was performed on two different pixels of the anode, one from each IDeF-X chip, under the standard experimental conditions. The injected signal presented in Figure 5.14 corresponds to the average pulse over 2000 injected signals, in order to reduce statistical fluctuations. For the 100 LPI output signal, only single-cluster events were selected, i.e. events with only one reconstructed cluster, corresponding to a unique signal with an amplitude of 511 keV collected by just one pixel of the anode. Isolated pixels refer to interactions in which the electron cloud drifts just above the center of the pixel that collects the charge. Likewise, the signal was averaged over a sufficient number of events to increase the SNR. In addition, to remove the z-dependence, only signals originating between 2.6 cm and 6 cm from the anode were considered. The 511 keV anode signal presents an early rising edge that reaches 5 % of the maximum amplitude. This early rising edge suggests that electrons start to induce a current on the anode around 3.5 µs before they pass through the grid, which is equivalent to a distance of 7 mm (at 1 kV/cm).
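To see qualitatively how a small pre-grid induction changes the shaped pulse, the sketch below builds a toy induced-current waveform with an inefficiency fraction σ spread over the drift towards the grid, followed by the main induction across the gap, and convolves it with a generic CR-RC⁴ shaping response. The shaping order, time constants and drift times are illustrative values, not the IDeF-X LXe parameters.

```python
import numpy as np

# Sketch: effect of grid inefficiency on the shaped pulse. A toy induced
# current is built from (i) a small fraction sigma induced while electrons
# approach the grid and (ii) the main induction across the gap, then
# convolved with a simple CR-RC^4 shaping response. All values illustrative.
dt = 0.01                     # time step (us)
t = np.arange(0.0, 20.0, dt)

sigma = 0.05                  # toy grid inefficiency
t_drift = 3.5                 # time spent approaching the grid (us)
t_gap = 0.25                  # electron transit time across the gap (us)

current = np.zeros_like(t)
current[t < t_drift] = sigma / t_drift                  # slow pre-grid induction
in_gap = (t >= t_drift) & (t < t_drift + t_gap)
current[in_gap] = (1.0 - sigma) / t_gap                 # main induction in the gap
# the integral of 'current' is 1, i.e. the full charge is collected

n, tau = 4, 0.35                                        # CR-RC^n toy shaper
h = (t / tau) ** n * np.exp(-t / tau)                   # delta response of the chain
pulse = np.convolve(current, h)[: t.size] * dt
pulse /= pulse.max()

i_grid = np.searchsorted(t, t_drift)
t10 = t[np.argmax(pulse >= 0.10)]
t90 = t[np.argmax(pulse >= 0.90)]
print(f"fraction of the amplitude reached at the grid crossing: {pulse[i_grid]:.2f}")
print(f"10-90% rise time of the shaped pulse: {t90 - t10:.2f} us")
```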
By direct extrapolation, the rising edge of the pulse matches a grid inefficiency of 5 %, compatible with the value of the inefficiency factor σ estimated in Section 5.3. Equation 5.15 and the results obtained by Göök et al. also show that the grid inefficiency factor decreases as the ratio a/p (grid pitch over gap) decreases. This means that, if the long rising edge is actually due to the inefficiency of the Frisch grid, it should vary with the type of grid. Figure 5.15 shows the average output signal for 511 keV photoelectric events for four different Frisch grids (see Tables 3.2 and 3.3). The grid-anode distance was set to 1 mm in all cases. Clearly different pulse shapes are obtained for the different types of meshes. As expected, the smallest inefficiency factor, i.e. the smallest rising edge, is obtained for the 500 LPI micro-mesh, which has the smallest pitch of 50.8 µm (a/p = 0.05). On the other hand, the largest inefficiency, of the order of 10 %, is estimated for the 70 LPI electroformed micro-mesh, which has a large pitch of 362 µm and a small thickness of 5 µm. The 50.29 LPI mesh has the largest pitch (505 µm) but presents an intermediate inefficiency factor of 8 % due to its larger thickness. Consequently, unlike the electron collection efficiency, which increases as the pitch of the mesh increases, the best Frisch grid in terms of shielding efficiency is the one with the smallest pitch. Please note that the inefficiency factors are only estimates, since our experimental set-up is not optimal for inefficiency measurements. The differences in the trailing edge of the pulses are attributed to the different AC/DC coupling selected during the consecutive data-taking periods.

Figure 5.14 - Comparison between the output signal of the shaper for 511 keV events with a 100 LPI Frisch grid placed at 500 µm from the anode (red line) and a 60 mV injected step-like pulse with a slope of 250 ns (black line). The peaking time was set to 1.39 µs.
As discussed in Section 5.3, the inefficiency of a Frisch grid not only depends on the physical properties of the grid but also on the grid-anode distance. Figure 5.15 also shows the comparison of the output signal for the 100 LPI mesh for two different gaps. As expected, the inefficiency of the grid increases as the gap decreases. In addition, by comparing the 100 LPI mesh at 500 µm from the anode with the 50.29 LPI mesh at 1 mm, we can state that the inefficiency loss due to a reduction of the gap by a factor of 2 is less significant than the loss due to an increase of the pitch by the same factor, as expected.
Frisch grid inefficiency as a function of the distance from the anode
In an ionization chamber, the induced signals on the anode show a strong dependency on the z-position: as the distance between the interaction point and the anode increases, the amount of induced current decreases due to the weighting potential. In Figure 5.16, the shaped signals of charge clouds for different z-intervals above the pixelated anode are shown. These results were obtained for a 100 LPI Frisch grid located at 500 µm from the anode. The rise time increases with increasing z-position. At positions close to the Frisch grid, the shape of the pulse matches that of the injected pulse convolved with the drift over a 500 µm gap. As the distance from the anode increases, the shape of the signals departs from the ideal pulse. No significant variation is, however, observed beyond distances of around 7 mm from the grid. The early rising edge also varies with the position: close to the grid no early rising edge is observed, while it increases as the z-position increases. These results support the hypothesis that both the larger pulse width and the early rising edge are due to a defective shielding of the grid against the drifting electrons.
The resulting signals can be integrated over time to obtain the total induced charge. To minimize the charge loss at large distances due to diffusion, the pulses shown in Figure 5.16 were integrated over a virtual cluster formed by 9 pixels. We considered only events with a total charge of 511 keV collected by the 3 × 3 virtual electrode. The parasitic signal reported in Section 6.4.2 was corrected for, in order to avoid a bias due to the non-collecting pixels. Figure 5.17 shows the pulse integral as a function of drift time for different z-intervals. The total integrated charge as a function of the average position of the z-distribution obtained for each position interval is depicted in Figure 5.18. The maximum of the induced charge is a function of the interaction depth, due to the dependence of the weighting potential on the position. The collected charge increases with increasing distance from the anode, and it tends to saturate at a distance of the order of 7 mm. This is most likely due to the ballistic deficit produced by the charge induced before the grid. Therefore, assuming a 100 % electron transparency, we can state that the difference in the pulse shape, i.e. in the collected charge, with the z-position can be associated with the inefficiency of the grid.
Charge sharing and charge induction between neighboring pixels
In a segmented anode, multiple-pixel events are produced either by charge sharing from a single electron cloud or by multiple γ-ray interactions in the detector. Due to the nature of the interactions in LXe, for 511 keV γ-rays 2.1 pixels are triggered per cluster on average, for an energy threshold level of 3σ_noise. If these multiple-pixel events are actually produced by a single electron cloud, the arrival time of the electrons on the pixels, determined from the maximum of the shaped signals, should be the same. In our case, the maximum of the signals is determined from the CFD time, as reported in Section 4.4.2. The study of the CFD time of the signals from pixels that form the same cluster is, in fact, a useful tool to determine the minimum timing separation needed to resolve two different interactions that occur close in space, such as a Compton scattering followed by a photoelectric absorption. This timing condition, also called the cluster time window, takes into account the precision of the time measurement, which can depend on the collected charge.
Charge induction study with a 1 mm gap
The simulation of the output signal of the IDeF-X LXe chip presented in Section 4.4 showed a timing resolution of the order of 260 ns for an amplitude of 3 times the electronic noise (σ_noise). This result was obtained with the CFD method and an optimal configuration of the CFD parameters, namely the time delay and the attenuation fraction. For the same measurement conditions, however, larger time differences have been measured from real data between the pixels of the same cluster. Figure 5.19 shows the total time difference distribution between the pixels of the same cluster for a total measured charge of 511 keV. The cluster time window was fixed to 2 µs, i.e. if two neighboring signals have a drift time difference smaller than 2 µs, both signals are grouped into the same cluster. The value of ∆t was obtained as the difference between the CFD time of the pixel with maximum amplitude and the CFD time of each of the other pixels of the cluster. The distribution is asymmetric and is not consistent with a normal distribution: it shows a sort of double-peak structure centered at ∆t ≈ 0, with a plateau at positive time differences and a second peak at around 9 time channels. To better understand this result, we obtained the time difference ∆t distribution as a function of the amplitude of the neighboring pixel (see Figure 5.20).
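For reference, the timing extraction used throughout this analysis is a digital constant-fraction discrimination. The sketch below shows the principle on a toy sampled pulse (pulse model, sampling period, fraction and delay are arbitrary illustration values, not the optimized settings quoted above): the zero crossing of the attenuated-minus-delayed signal is, for a noiseless pulse, independent of the amplitude.

```python
import numpy as np

# Sketch of digital constant-fraction discrimination (CFD) on a sampled
# pulse: the bipolar signal s(t) = f*x(t) - x(t - delay) crosses zero at a
# time that is, ideally, independent of the pulse amplitude.
dt = 0.08                      # sampling period (us), example
t = np.arange(0.0, 15.0, dt)

def toy_pulse(amplitude, t0=2.0, tau=0.35, n=4):
    x = np.clip(t - t0, 0.0, None)
    p = (x / tau) ** n * np.exp(-x / tau)
    return amplitude * p / p.max()

def cfd_time(x, fraction=0.3, delay_samples=6):
    """Zero-crossing time of the CFD signal, with linear interpolation."""
    delayed = np.roll(x, delay_samples)
    delayed[:delay_samples] = 0.0
    s = fraction * x - delayed
    idx = np.where((s[:-1] > 0) & (s[1:] <= 0))[0]   # positive-to-negative crossing
    i = idx[0]
    frac = s[i] / (s[i] - s[i + 1])                  # linear interpolation
    return (i + frac) * dt

for amp in (1.0, 5.0, 50.0):
    print(f"amplitude = {amp:5.1f} -> CFD time = {cfd_time(toy_pulse(amp)):.3f} us")
# For a noiseless pulse the CFD time does not depend on the amplitude; noise
# and induced transient signals are what shift it at low amplitudes.
```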
For simplicity, we will refer to the amplitude of the neighboring pixels as A_neighbor. For signal amplitudes of 3 times the electronic noise, an average time difference of 720 ns (9 time channels) is observed, which is around 3 times larger than the value estimated by simulation. Moreover, a cluster of events is also visible around this value at higher amplitudes. The distribution of the average values of ∆t is depicted in the bottom part of Figure 5.20. The mean value of ∆t was measured by dividing the scatter plot presented in the top part of Figure 5.20 into slices of 1σ_noise up to an amplitude of 18σ_noise, and into slices of 10σ_noise for higher amplitudes to increase the statistics per charge interval. Each of the charge distributions was fitted by a double-Gaussian function with a constant background (see Figure 5.21). The value of ∆t is directly deduced from the mean of the fit. The experimental time difference between adjacent pixels decreases exponentially as the amplitude of the signals increases and tends to zero for values of A_neighbor higher than 50σ_noise. On the other hand, according to the simulation results, a time delay of less than 80 ns was expected at amplitudes higher than 10σ_noise. The observed decay was well fitted by a triple exponential function (blue line) that can be used to correct the systematic time shift of low-energy signals. The accumulation of events observed at low energies at around 9 time channels is due to a double-peak structure observed for very low amplitude signals. In fact, for amplitudes up to ∼10σ_noise, a fraction of the detected signals is delayed by around 720 ns with respect to the pixels of reference. This behavior was not observed in the simulated data. Figure 5.21 shows two ∆t distributions for constant amplitudes of 3 and 6 times σ_noise. For amplitudes of 3σ_noise, a unique peak with a mean value of 9 channels (720 ns) stands out from the background. At 6σ_noise, on the other hand, the mean value of the distribution is shifted to 7 channels, while a constant peak with mean at 9 channels remains present. This constant peak is still present at higher amplitudes, while the mean of the distribution moves towards zero. As the amplitude of the neighboring signals increases, the fraction of events delayed by 720 ns decreases until the peak is no longer visible in the ∆t distribution. This important difference in the electron arrival time between neighboring pixels cannot be explained by the presence of the electronic noise, the size of the electron cloud or the fluorescence X-ray emission. A time difference of 720 ns represents a separation of almost 1.5 mm for an electric field of 1 kV/cm. The time difference between triggered pixels of the same cluster also varies with the depth of interaction: we observed that the time delay decreases as the distance from the anode increases. Figure 5.22 shows the average time delay as a function of A_neighbor for three different slices of 1 cm each along the drift length. This effect is consistent with the increase of the spread of the electron cloud due to lateral diffusion, and suggests that as the directly collected charge per pixel increases, the delay between adjacent pixels decreases. As discussed below, this effect is due to the indirect current induction between neighboring pixels. On the contrary, no significant variation was observed as a function of the total charge per cluster.
Simulation of charge induction in a segmented anode
In a real detector, the impact of charge sharing is difficult to isolate from other contributions, which makes simulation an excellent tool for studying these effects. In this section we present a simulation study performed with the aim of better understanding charge sharing and charge induction between neighboring pixels in a detector with the same characteristics as XEMIS. It is important to note that this study does not attempt a detailed simulation of the charge induction on the anode, but rather aims to provide a better understanding of charge sharing effects and of the indirect charge induction on the adjacent pixels. Thanks to this study, some questions related to the experimental setup of XEMIS1 have been clarified, which has contributed to the optimization of the future detector XEMIS2.
For the simulation, we defined the geometry of 9 adjacent pixels, each of them defined as a unit cell with an area of 3.1 × 3.1 mm². The cathode was defined as a plane electrode of 9.3 × 9.3 mm² placed at 0.5 mm or 1 mm from the anode surface. Since the goal of the simulation is to study the effect of charge induction on a pixelated anode, the Frisch grid was not included. The geometry definition and the 3D finite element mesh generation were performed using Gmsh [START_REF] Geuzaine | Gmsh: A 3-D finite element mesh generator with built-in pre-and post-processing facilities[END_REF] (see Figure 5.23). According to the Shockley-Ramo theorem, the charge induction on a pixelated anode can be determined from the weighting potential distribution along the drift length. The maps of the electric field, the potential and the weighting field were calculated with a finite element electrostatic solver, Elmer [215]. The drift electric field was fixed at 1.0 kV/cm. To calculate the weighting field and weighting potential, the central pixel was set to unit potential while the rest of the pixels and the cathode were grounded. The electric field and the weighting potential are identical for every pixel with identical geometry, so the field maps of the rest of the detector surface are directly determined by symmetry.
The electric field and weighting potential maps for the central pixel were exported to Garfield++ [216]. The transport of the electrons through the LXe and the current signals induced by the drifting electrons in the gap were then calculated with a homemade simulation based on Garfield++. For every simulated charge, the instantaneous current induced on the central pixel is calculated, at every time step until the charge reaches the anode, as the charge multiplied by the scalar product of its drift velocity and the weighting field, according to Equation 5.1. The resulting pulse corresponds to the current signal at the entrance of the preamplifier. The total induced charge is then determined as the integral of the induced current along the total drift length for all the simulated charges, assuming a perfect current integration by the front-end electronics. This signal is finally convolved with the transfer function of the shaper used in the IDeF-X LXe ASIC to generate the signal at the output of the front-end electronics.
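A condensed, stand-alone version of this induced-current loop is sketched below (it is not the Garfield++/Elmer code of this work: the weighting field is replaced by a uniform parallel-plate stand-in, and the drift velocity and cloud size are round example numbers). It steps the electron cloud across the gap, accumulates the Shockley-Ramo current at each step and integrates it into the induced charge.

```python
# Stand-alone sketch of the induced-current calculation described above
# (not the Garfield++/Elmer code used in this work). The weighting field is
# a simple parallel-plate stand-in for the exported Elmer field map.
DRIFT_VELOCITY = 2.0e-3   # electron drift velocity (m/us), ~2 mm/us at 1 kV/cm
GAP = 0.5e-3              # entry plane (grid) to anode distance (m)
Q_E = 1.602e-19           # electron charge (C)
N_ELECTRONS = 10000       # electrons in the simulated cloud

def weighting_field(z):
    """Stand-in weighting field of the collecting electrode (1/m): uniform
    across a parallel-plate gap. A pixel field map would depend on (x, y, z)."""
    return 1.0 / GAP

dt = 1e-3                 # time step (us)
z = GAP                   # start at the gap entrance
time, current = 0.0, []
while z > 0.0:
    # Shockley-Ramo: i = q * v . E_w  (all electrons move together here)
    current.append(N_ELECTRONS * Q_E * DRIFT_VELOCITY * weighting_field(z))
    z -= DRIFT_VELOCITY * dt          # drift one step toward the anode
    time += dt

induced_charge = sum(i * dt for i in current)      # integral of i(t) dt
print(f"transit time across the gap : {time:.3f} us")
print(f"induced charge              : {induced_charge:.3e} C "
      f"(expected {N_ELECTRONS * Q_E:.3e} C)")
```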
In a segmented anode with pixels much smaller than the gap, the weighting potential becomes denser close to the electrode. The smaller the pixels, the more closely packed the weighting potential lines, and thus charges moving far from the anode have nearly no influence on the induced signal: most of the signal is induced by the charges moving close to the electrode. However, this also means that the weighting potential lines extend beyond the pixel area and reach into the adjacent pixels. Figure 5.24 shows the weighting potential distribution in the x-z plane (y = 0) for two different gap distances of 0.5 mm and 1 mm, obtained with Elmer. For the larger gap, the inhomogeneities of the weighting potential in the vicinity of the electrode are more important, and the potential lines reach the surface of the two nearest neighboring pixels. Weighting potential cross-talk between pixels leads to charge induction on the adjacent pixels, even if the charge carriers drift only towards the central pixel. This effect almost disappears for a gap of 0.5 mm. The ratio between the pixel size and the gap is therefore crucial for the shape of the weighting potential and, hence, for the charge induction on the non-collecting electrodes. To better understand the effect of charge induction on the neighboring pixels, we calculated the amplitude of the induced signals as a function of the interaction position along the x-axis (y = 0). The electron cloud was placed at different positions inside the pixel volume, starting at the center of the pixel (x = 0). Since moving charges can also induce a current on the neighboring pixels, the maximum distance was chosen to be equal to the pixel dimension (x = 3.1 mm). In order to correctly simulate the charge sharing, a lateral electron diffusion of 200 µm was included in the simulation. The range of the primary electrons was also taken into account. Each electron cloud consisted of 10000 ionization electrons uniformly distributed within a sphere of radius < 300 µm. The simulation was performed over 1000 events.
In Figure 5.25, the induced signal on the central pixel is plotted as a function of the distance to the pixel center (x = 0). The amplitude was determined at the maximum of the signal. The impact of the weighting potential cross-talk between adjacent pixels increases as the charge gets closer to the pixel boundary. When the electron cloud is produced at the center of a pixel, the charge is fully collected by the electrode and no charge is induced on the neighboring pixels. However, as the electron cloud moves towards the border of the pixel, the collected charge decreases. Half of the charge is collected by the central pixel when the cloud is located between two pixels, which means that the other half is induced on the nearest neighboring pixels. In the most extreme case, when the charge carriers drift into the neighboring pixel (x > 1.55 mm), some charge is still induced on the central pixel. The amplitude of the induced signal decreases as the electron cloud moves towards the center of the neighboring pixel. This effect is more significant for a bigger gap. The transient signal induced on the pixel should result in a zero net charge. However, due to the fast shaping time of the amplifier, the induced signal may result in an additional charge that biases the total measured charge per cluster. This effect is almost negligible for a gap of 0.5 mm.
The shape of the resulting signal at the output of the front-end electronics depends on the amount of directly collected charge with respect to the amplitude of the transient induced signal: the higher the collected charge, the smaller the contribution of the induced signal. To study the effect of charge induction on the collected signal, we took as reference the signal produced by an interaction at the center of a pixel. Figure 5.26 shows the difference between the time of the induced signals and the time of the reference signal as a function of the relative amplitude. The time was measured at the maximum of the induced signal and represents the collection time of the electrons on the anode. When the amplitude of the transient signal is small and all the measured charge is due to direct collection, i.e. the electron cloud drifts onto the central pixel surface, no discrepancies are observed between the measured and the reference times. However, as the position of the interaction approaches the boundary of the pixel, the time difference increases. A maximum time difference of 800 ns (10 time channels) is obtained when almost no direct charge is collected by the central pixel. This means that the integration of both the directly collected and the induced charges by the front-end electronics deforms the shape of the output signal with respect to the ideal pulse. In order to estimate the contribution of lateral diffusion to the induced signal, the simulation was repeated for a transverse diffusion of 300 µm. The value of the lateral diffusion is directly related to the position of the interaction with respect to the anode: in LXe, the transverse diffusion is σ_x,y ≈ 200 µm/√cm. Figures 5.27 and 5.28
show the amplitude and time of the induced signals as a function of the interaction position along the x-axis. In this case, the amplitude of the induced signal decreases more rapidly as the electron cloud approaches the pixel boundary. On the contrary, the time difference decreases slowly with the position of the interaction. Both effects are directly related to the charge sharing between neighboring pixels due to diffusion: as the amount of directly collected charge increases, the effect of charge induction on the shape of the collected signals decreases. The results of the simulation showed that charge sharing and weighting potential cross-talk between neighboring pixels affect the performance of a pixelated detector. The signal induced on a pixel depends on the position of the electron cloud with respect to the center of the pixel. As the charge carriers move away from the pixel surface, the amount of collected charge decreases and the charge induced on the neighboring pixels increases. The induction of a transient signal on a pixel affects the shape of the output signal and introduces a bias in the measured time and amplitude of the signals. The simulation showed a maximum time delay of 800 ns, which is consistent with the results obtained with real data. Both the time difference and the induced signal increase as the ratio of the pixel size to the gap decreases. Moreover, we have seen that charge sharing between adjacent pixels due to lateral diffusion reduces the effect of charge induction on a non-collecting electrode. This result is also consistent with the variation of the time difference with the depth of interaction observed in the experimental data.
Charge induction study with a 500 µm gap
According to the simulation, a significant reduction in the time difference between the pixels of a cluster should be observed for a gap of 0.5 mm with respect to the results obtained for a 1 mm gap (see Figure 6.42). To corroborate these results we installed the 100 LPI Frisch grid in XEMIS1 at 0.5 mm from the segmented anode. Figure 5.29 shows the time difference ∆t between the pixel of reference and the rest of the pixels of a cluster as a function of the amplitude of the neighboring pixel A_neighbor, for a total measured charge of 511 keV. The population of events at around 9 time channels (720 ns) is no longer present; instead, for small amplitudes the value of ∆t becomes negative due to the presence of the parasitic signal presented in Section 6.4.2. This negative signal explains the lack of events observed at positive values of ∆t. This result implies that for a gap of 0.5 mm the effect of charge induction on the neighboring pixels has been significantly reduced with respect to a gap of 1 mm, and other effects such as baseline fluctuations come to light at very low energies. At amplitudes higher than 10σ_noise, a time delay of less than 80 ns is observed, instead of the 240 ns measured with a gap of 1 mm. The comparison of the mean value of ∆t as a function of A_neighbor for the two different gaps is shown in the right part of Figure 5.29.
Since the parasitic signal is only present within the same IDeF-X LXe chip, to remove its impact from the distribution we selected only those clusters with at least two triggered pixels, each of them on a different chip. As we can see in Figure 5.30, the negative values of ∆t at low amplitudes become positive, the lack of statistics at low energies and positive time differences disappears, and the ∆t distribution follows an exponential function with a value of 240 ns (3 time channels) for an amplitude of 4σ_noise. The comparison between clusters whose pixels are shared by the two IDeF-X LXe ASICs and clusters whose pixels all belong to the right-side chip is shown in Figure 5.31. These results suggest that for low-energy signals the contribution of the parasitic signal dominates over the indirect charge induction. No significant variation has been observed, on the other hand, with a gap of 1 mm (see Figure 5.32). Consequently, with a gap of 0.5 mm and no contribution of the parasitic signal, the time difference between the pixels of the same cluster is less than 1 time channel for amplitudes of the neighboring pixel higher than 10 times the electronic noise. These results agree with the results obtained with the simulation (see Section 4.4).
Conclusions of Chapter 5
Understanding signal formation in a detector is crucial to optimize the measurement of the time, energy and position of the detected signals. In this chapter we discussed the principle of signal induction in a detector. We have seen that the total charge induced on an electrode by a moving charge can be determined from the Shockley-Ramo theorem, which relies on two crucial quantities, the weighting field and the weighting potential. In a parallel-plate ionization chamber the total induced charge depends on the drift distance of the electrons. This position dependence can be efficiently removed by including a third electrode, called the Frisch grid, between the cathode and the anode. In this chapter, we focused on the principle of a gridded ionization chamber and on the effects of the Frisch grid on the collected signals. We have seen that the properties of the induced signals depend on the characteristics of the grid. For example, the electron transparency of a Frisch grid is directly related to the number of collected charges per interaction and affects the energy resolution of the detector. An optimized set-up requires an adequate biasing of the electrodes in order to maximize the charge collection, which in turn depends on the geometrical properties of the grid. Detailed measurements for two different types of grids are presented in this chapter. In XEMIS1, and for a 100 LPI mesh, the electric field in the gap should be at least five times the electric drift field. The electric field ratio depends on the gap distance and on the pitch and thickness of the grid. Larger gaps require higher bias voltages to obtain the same electron transparency conditions. On the contrary, a smaller electric field ratio is sufficient with a more open grid.
Another aspect that affects the performance of a gridded ionization chamber is the inefficiency of the grid. An ideal Frisch grid shields the anode from the movement of positive ions in the space between the cathode and the grid, and removes the position dependence of the induced signals. Theoretically, a charge is induced on the anode only from the moment the electrons start passing through the grid. As a result, the slope of the signals depends only on the grid-anode distance and on the timing characteristics of the front-end electronics used to process the information. However, under real experimental conditions, the electrons start inducing a signal on the anode before they actually pass through the grid. Although our experimental set-up is not optimal for the determination of the inefficiency of a Frisch grid, its impact is clearly visible on the shape of the output signals. To investigate this effect we compared the shape of the output signals for a set of different meshes and different gap-pitch ratios. The results presented in this chapter provide experimental confirmation of the effect of the inefficiency of the Frisch grid on the induced signals. A rough estimation of the inefficiency of the Frisch grid has been performed based on the results of other authors [START_REF] Bunemann | Design of Grid Ionization Chambers[END_REF][START_REF] Göök | Application of the Shockley-Ramo theorem on the grid inefficiency of Frisch grid ionization chambers[END_REF]. A grid inefficiency of the order of a few percent is expected for all the kinds of meshes tested during this thesis.
Charge sharing between several pixels due to lateral diffusion, the size of the primary electron cloud or the emission of fluorescence X-rays is useful to improve the spatial resolution of a pixelated detector. However, multiple-pixel events degrade the energy resolution. The probability of charge sharing depends on many factors such as the pixel size and the electronic noise. To better understand the physics of charge induction on a pixelated anode, we presented a simulation study. We have shown that the shape of the output signals changes depending on the relative position of the electron cloud inside the pixel, due to the weighting potential cross-talk between neighboring pixels. Charge clouds produced close to the boundaries of a pixel may induce a signal on the neighboring pixel even if all the ionization electrons are collected by the central pixel. The transient signal induced on the adjacent pixels introduces a bias in the amplitude and time of the shaped signals. These effects become more important when the pixel size is small compared to the gap distance. Moreover, the results obtained by simulation confirm the time difference between the triggered pixels of the same cluster observed in the real data.
The results presented in this chapter have been essential to understand the effect of signal formation in a gridded ionization chamber with a segmented anode for charge collection. These results are crucial to improve the performance of the future Compton camera XEMIS2.

The small-dimension prototype XEMIS1 was developed with the purpose of testing the feasibility of the 3γ imaging technique with a LXe Compton telescope. The design of XEMIS1 is not optimal for Compton tracking, but it provides relevant information about the potential of a LXe Compton camera for 3γ imaging. In this chapter, we present the experimental set-up used to detect the 511 keV γ-rays generated after the annihilation of a β+ with an electron. In order to carry out the characterization of XEMIS1 a low-activity 22Na source was used. The data acquisition and trigger systems used to register the data are presented in Section 6.2. The efficiency of the trigger for 511 keV events as a function of the characteristics of the TPC is also studied in this section. Furthermore, a detailed analysis and calibration of the noise for each individual pixel is discussed in Section 6.4. The results obtained from this analysis are used to correct the raw data and to set a threshold level for event selection. Finally, the off-line method used for data analysis and clustering is presented in Section 6.5.
Experimental setup
In order to test the feasibility of using a LXe TPC for 3γ imaging, we carried out the performance characterization of XEMIS1 with a 511 keV γ-ray source. A schematic view of the experimental setup is shown in Figure 6.1. It includes the XEMIS1 TPC described in Section 3.1, the charge and light detection systems and an external PMT used for triggering. The detector is calibrated using a low-activity 22Na source of about 10 kBq. 22Na is a 3γ-emitting radionuclide with characteristics similar to those of 44Sc: it emits a 1.274 MeV γ-ray and a positron in quasi-coincidence. The source is encapsulated in a 1 mm-thick plastic casing with a diameter of 2 cm and placed inside a 15 mm-diameter stainless steel hollow tube. The tube, visible in Figure 6.1, is located outside the vacuum enclosure and in front of the cryostat entrance. To minimize the γ-ray attenuation before entering the TPC, the entrance flange consists of a 1 mm aluminium wall placed at around 15 cm from the anode. The position of the source with respect to the TPC, located inside the inner vessel, can be varied thanks to a sample holder within the external tube. The detector energy response and time resolution were first calibrated using the two 511 keV γ-rays produced after the annihilation of a β+ with an electron. A coincidence trigger between the TPC and a BaF2 scintillation crystal coupled to a PMT was set to trigger on the two back-to-back 511 keV γ-ray events. The PMT is biased at a high voltage of -850 V. Both the BaF2 crystal and the PMT are located inside the stainless steel tube after the source holder. Two collimators made of lead and antimony with an external diameter of 4 cm and a total length of 3.5 cm are placed between the BaF2 crystal and the source. The collimator coupled to the BaF2 crystal is a cone-shaped hole collimator with a top diameter of 1 mm and a cone base diameter of 1.5 mm. The one coupled to the 22Na source, on the other hand, is a parallel-hole collimator with an internal diameter of 2 mm. The collimators are used to optimize the solid angle of the beam source covered by the BaF2 crystal. Moreover, the ensemble of BaF2, collimators and source is positioned in such a way that the solid angle subtended by the beam completely covers the active area of the TPC, so that for almost 100 % of the 511 keV γ-rays detected by the BaF2, the other 511 keV photon deposits its energy inside the TPC. With respect to the anode, the beam is centered in the x-y plane to maximize the charge collection. To prevent bad energy reconstruction, only the 36 central pixels of the segmented anode are considered during data analysis. For this reason, to minimize event loss by border rejection, a calibration of the position of the beam with respect to the anode is made at the beginning of each data-taking period. Additionally, the TPC has been calibrated for different electric fields varying between 0.25 kV/cm and 2.5 kV/cm. Higher electric fields result in better energy resolution [START_REF] Oger | Dévelopment expérimental d'un télescope Compton au xénon liquide pour l'imagerie médicale fonctionnelle[END_REF]. However, it also implies that greater voltages must be applied to the Frisch grid to ensure a 100 % electron transparency. As discussed in Section 5.2, the transparency of a Frisch grid depends on the ratio between the drift field and the collection field, which in turn depends on the characteristics of the grid.
A 100 % transparency requires, in some cases, very high bias voltages that can in fact limit the performance of the detector. By decreasing the grid-anode distance we can also reduce the applied potential. Halving the gap implies a reduction by a factor of two in the voltage required by the Frisch grid for the same electric drift field. However, some mechanical constraints come into play when the gap is too small. Consequently, the maximum electric drift field in a LXe TPC is in part limited by the Frisch grid and the mechanical design of the TPC. The cathode, on the other hand, is an electroformed 70 LPI micro-mesh that supports voltages from 0 V down to -24 kV. A more detailed description of the performance of the TPC as a function of the electric drift field for different configurations is reported in Chapter 7.
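To make the scaling between gap size and grid bias explicit, the short Python sketch below evaluates the minimum potential difference across the grid-anode gap, using the factor of five quoted above for the 100 LPI mesh and assuming uniform fields on both sides of the grid. It is an order-of-magnitude estimate, not the exact transparency condition.

def min_gap_voltage_kv(drift_field_kv_cm, gap_cm, field_ratio=5.0):
    """Minimum potential difference (kV) across the grid-anode gap so that the
    collection field is at least `field_ratio` times the drift field,
    assuming uniform fields on both sides of the grid."""
    return field_ratio * drift_field_kv_cm * gap_cm

for gap_mm in (1.0, 0.5):
    for e_drift in (0.25, 1.0, 2.5):       # drift fields in kV/cm
        v = min_gap_voltage_kv(e_drift, gap_mm / 10.0)
        print(f"gap {gap_mm:3.1f} mm, drift field {e_drift:4.2f} kV/cm "
              f"-> at least {v:.3f} kV across the gap")

As expected, halving the gap from 1 mm to 0.5 mm halves the required potential difference for every drift field.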
Data Acquisition and Trigger Description
A diagram of the triggering system used in XEMIS1 for the 511 keV performance characterization is shown in Figure 6.2. Signals from both the BaF2 scintillation crystal and the LXe PMT are sent to an electronic logic chain where they are discriminated and logically combined. A trigger requires the coincidence of both PMT signals within a time window of 10 ns. To select the 511 keV photons, a low discriminator threshold is used on both PMTs to suppress noise events. On the other hand, no high threshold has been used to reject high-energy signals that exceed the energy of a 511 keV γ-ray. The slow decay component (45 ns at 173 K) of LXe due to recombination can introduce a considerable time delay between the arriving photoelectrons that form the output signal [START_REF] Hitachi | Effect of ionization density on the time dependence of luminescence from liquid argon and xenon[END_REF]. This results in signals with long decay tails, as shown in Figure 6.3(b). The impact of this slow decay component depends on the applied electric field, decreasing as the electric field increases [START_REF] Kubota | Dynamic behavior of free electrons in the recombination process in liquid argon, krypton and xenon[END_REF][START_REF] Oger | Dévelopment expérimental d'un télescope Compton au xénon liquide pour l'imagerie médicale fonctionnelle[END_REF]. For example, for an electric field of 2 kV/cm, 63 % of the scintillation light is emitted due to direct excitation with a time constant of 2.2 ns or 27 ns, depending on whether the emission comes from the de-excitation of the singlet or triplet states respectively [START_REF] Kubota | Dynamic behavior of free electrons in the recombination process in liquid argon, krypton and xenon[END_REF][START_REF] Kubota | Dynamic behavior of free electrons in the recombination process in liquid argon, krypton and xenon[END_REF]. A non-negligible 37 % of the scintillation yield comes, however, from recombination and is emitted with a slower decay time of 45 ns. The distribution of photon arrival times in LXe can be roughly approximated by a triple exponential decay corresponding to the three LXe lifetimes:
\[
A_f \, e^{-t/\tau_f} + A_s \, e^{-t/\tau_s} + A_r \, e^{-t/\tau_r}
\tag{6.1}
\]
where A_f = 0.03, A_s = 0.60 and A_r = 0.37 are the relative scintillation rates of the fast, slow and recombination components respectively at an electric field of 2 kV/cm, and τ_f, τ_s and τ_r are the scintillation decay times of the three components. The long tail in the photoelectron arrival time distribution implies that photons may arrive at the detector with a delay of several hundred ns. Since we work at the minimum possible threshold level, single photons from the signal tail may re-trigger the discriminator even though they come from the same interaction. This can increase the number of accidental triggers. For this reason, in order to avoid the possibility of re-triggering, a veto of 50 µs is set on both PMT signals. This 50 µs logic gate amply covers the photoelectron emission governed by the 45 ns recombination component.
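As an illustration of equation (6.1), the Python sketch below draws photon arrival times from the three-component decay using the relative rates and time constants quoted above. It is only a toy model of the scintillation pulse shape, not the simulation used in the analysis.

import numpy as np

# Relative scintillation rates at 2 kV/cm (fast, slow, recombination)
weights = np.array([0.03, 0.60, 0.37])
# Decay constants in ns: singlet (2.2 ns), triplet (27 ns), recombination (45 ns)
taus = np.array([2.2, 27.0, 45.0])

def sample_arrival_times(n_photons, rng=np.random.default_rng(0)):
    """Draw photon arrival times (ns) from the triple-exponential decay."""
    # Choose a component for each photon according to its relative rate
    comp = rng.choice(3, size=n_photons, p=weights)
    # Exponential decay time for the chosen component
    return rng.exponential(taus[comp])

times = sample_arrival_times(100_000)
for t_cut in (10, 50, 100, 500):
    frac = np.mean(times <= t_cut)
    print(f"fraction of photons arriving within {t_cut:4d} ns: {frac:.3f}")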
Since the scintillation light yield due to direct excitation does not depend on the applied electric field, triggering on the fast component of the scintillation light minimizes the electric field dependence of the trigger system, in addition to allowing small coincidence time windows of the order of a few ns. However, due to the small number of photons that reach the LXe PMT, the trigger window should be large enough to cover the photoelectron distribution almost completely. In order to suppress accidental coincidences after the veto signal, the resulting pulses from both PMTs are transformed into standard logic pulses of 10 ns and 80 ns width for the LXe PMT and the BaF2 respectively, before entering the coincidence module. The width of the windows was optimized to reduce the number of accidental triggers while preserving a high trigger efficiency. An event is then accepted if the two narrowed pulses arrive within a time window of 10 ns. Figure 6.4 shows the time distribution of the PMT signals after a coincidence. Most of the events are triggered by the fast scintillation component, whereas a small fraction of the coincidences are triggered by the recombination component.
Since the principle of the TPC is to detect both ionization and scintillation signals, a unique light pulse should be associated with each ionization signal. For this reason, to guarantee that only one trigger is accepted during the time the signals are still being recorded by the Data Acquisition System (DAQ), a 120 µs logic gate is established after the logical AND of the two PMTs. This window is enough to cover the time needed to record both signals (102 µs for the ionization signal and 2 µs for the scintillation light). The 120 µs logic pulse does not act as a veto; instead, if an event happens while the window is still open, the event is not only missed but also re-opens a new 120 µs logic gate, ensuring that no new information is registered until the current readout is complete. This paralyzable behavior maintains the synchronization between charge and light. If a trigger is accepted, signals from both scintillation (PMT) and ionization (anode) processes are recorded. The scintillation light waveforms from the PMT immersed in the LXe are continuously fed into a CAEN v1720 Flash ADC where they are digitized with 12 bit precision at a sampling rate of 250 MHz. Figure 6.5 shows an example of typical scintillation and ionization waveforms of a 511 keV γ-ray event from the 22Na source. The digitized samples pass through a ring buffer where they are sequentially stored. After a trigger, a block of data containing 512 samples is automatically transferred from the ring buffer to an event buffer where the event is stored for further processing. The event buffer can store a maximum of 1024 events of 2 µs each before being read. This means that readout and signal processing are in general dead-time free, unless the trigger rate is so high that the entire memory fills up. An additional 200 ns of baseline presamples are registered to ensure that the event is completely stored regardless of the delay added by the electronic chain itself. Besides the scintillation signal, the trigger logic pulse is also stored and digitized every 4 ns in one of the channels of the FADC to correct the jitter introduced by the acquisition system relative to the trigger start time.
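The 120 µs gate therefore behaves as a paralyzable dead time: any trigger arriving while the gate is open is lost and extends the gate. A minimal sketch of this behavior, assuming Poisson-distributed triggers and using the gate length quoted above, is given below; it is only meant to show that the losses are negligible at the trigger rates of a few events per second encountered here.

import numpy as np

def accepted_fraction(rate_hz, gate_s=120e-6, t_total=3600.0, seed=1):
    """Simulate a paralyzable dead time: an event arriving while the gate
    is open is lost and re-opens the gate for another full gate length."""
    rng = np.random.default_rng(seed)
    n_events = rng.poisson(rate_hz * t_total)
    times = np.sort(rng.uniform(0.0, t_total, n_events))
    accepted = 0
    gate_end = -np.inf
    for t in times:
        if t >= gate_end:
            accepted += 1          # gate closed: event accepted
        gate_end = t + gate_s      # any event (accepted or not) extends the gate
    return accepted / max(n_events, 1)

for rate in (2, 20, 200, 2000):    # triggers per second
    print(f"{rate:5d} Hz -> accepted fraction {accepted_fraction(rate):.4f}")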
The charges collected by the segmented anode are processed by the two 32-channel IDeF-X ASICs as explained in Section 3.1.3. Output analog signals from 63 pixels are registered during a time window of 102.2 µs by a waveform digitizer (CAEN v1740 Flash ADC) at a sampling rate of 12.5 MHz with a resolution of 8 bits. The registration period is large enough to cover the entire drift length of the TPC, which corresponds to a drift time of the order of 60 µs for an electron drift velocity of 2 mm/µs (electric field of 1 kV/cm). As for the scintillation signal, the charge collection waveforms are continuously digitized and written in a circular memory buffer. When a trigger signal arrives at the FADC, the event is transferred and stored into the event buffer. Likewise, data are stored starting from 50 samples before the trigger time. The event buffer can hold 1024 events of 1278 samples each, which in principle can be read with negligible dead time if the trigger rate is below 20 Hz. Since the charge FADC only has 64 available channels, signals collected by pixel 29, placed at the border of the anode (see Appendix B), are not registered. Instead, a tag signal called TTT (Trigger Time Tag) is stored for further jitter correction during data analysis. The TTT signal keeps track of the timing information of the trigger and is generated by integrating the coincidence output pulse in a preamplifier-shaper module.

Trigger Efficiency and Calibration for 511 keV γ-ray events
The trigger system should be able to distinguish physically interesting events from background.
In particular, the discriminator threshold level used to select the 511 keV events has a big impact on the trigger efficiency, so its value should be optimized to decrease the number of accidental triggers. For the BaF2 signal, a discriminator threshold was set to 10 mV, equivalent to 12σ_noise, where σ_noise represents the RMS of the electronic noise, in order to trigger on the 511 keV signals. The value of the noise was calculated from the amplitude distribution of the PMT output signal when no threshold is applied. The amplitude of the LXe PMT signals is, however, dominated by solid angle effects, resulting in only a small fraction of UV photons reaching the PMT photocathode. A detailed simulation of the geometry of XEMIS1 and the scintillation light collection efficiency is presented in [START_REF] Hadi | Simulation de l'imagerie à 3γ avec un télescope Compton au xénon liquide[END_REF]. The results show an important decrease of the number of photons detected by the PMT as the distance from the interaction point to the PMT increases. Figure 6.6 shows the distribution of detected photons as a function of the distance of the interaction point with respect to the PMT. Note that in this simulation the quantum efficiency of the PMT (35 % for the Hamamatsu R7600-06MOD-ASSY PMT) and the optical transparency of the set of meshes placed just before the PMT surface (81 %) are not taken into account. In addition, the results were obtained for a zero electric drift field. For an electric field of 1 kV/cm, 3.5 photons are detected on average by the PMT if the interaction occurs close to the anode (13.5 cm from the PMT). This strong dependence of the light collection efficiency on the position of the interaction inside the TPC requires a very low threshold level in order to ensure a relatively uniform response of the detector along the drift direction. Adequate results were obtained for a threshold of 5 mV (5σ_noise). Figure 6.7 shows the z-dependence of the light yield for 511 keV events for an electric drift field of 1 kV/cm. The amplitude of the PMT signals is calculated at the maximum of the pulse over a time window of 10 ns from the moment the signal crosses a fixed threshold. The TPC has an active length of 12 cm. The value of z = 0 cm on the distribution corresponds to the position of the grid. The end of the fiducial volume is at z = 12 cm. The results were obtained for a 100 LPI metallic woven Frisch grid placed at 1 mm from the anode. As expected, the light detection efficiency increases as the distance to the PMT decreases. The shape of the distribution is related to the geometry of the TPC and the position of the source with respect to the PMT, the solid angle subtended by the PMT decreasing with decreasing z. At low z-positions (z ≤ 3 cm) a deterioration of the light collection efficiency is observed. This is because at distances far enough from the PMT the solid angle effect becomes important and a fraction of the events may be rejected due to the discriminator threshold level.
Trigger efficiency can be improved, on the one hand, by decreasing the threshold level and, on the other hand, by decreasing the active length of the TPC. Lowering the threshold increases the number of accidental triggers without a significant improvement in the efficiency. Figure 6.8 shows the amplitude distribution of the LXe PMT signals for a threshold level of 5 mV. At lower threshold levels, we observed an increase of the dead time of the detector without a significant increase in the trigger efficiency.
A better option to improve the efficiency of the trigger system is to reduce the length of the chamber. A rough calculation indicates that halving the size of the TPC may improve the light detection efficiency by a factor of 4. Figure 6.9 shows the corresponding light amplitude distribution.

The characteristics of the new TPC are exactly the same as those of the 12 cm long TPC described in Chapter 3, except for the active length of the detector, which is reduced from 12 cm to 6 cm. The materials and the distances between the rest of the components of the TPC remain unchanged, and the light and charge collection systems are also the same. The Frisch grid is also a 100 LPI metallic woven mesh, but it is placed at 500 µm from the anode. Figure 6.10 shows the geometry of the new 6 cm TPC. A negligible rate of accidental coincidences (20 accidental triggers in 5 hours) has been measured for a 5 mV threshold. This result is in good agreement with the expected value for a 10 ns coincidence window, obtained from the individual counting rates of both PMTs. A trigger rate of ∼ 120 triggers/s was measured for the BaF2 (N_BaF2) and ∼ 1200 triggers/s for the LXe PMT (N_LXe):
\[
N_{acc} = N_{LXe} \cdot N_{BaF_2} \cdot 10~\mathrm{ns} \simeq 1.4 \times 10^{-3}~\mathrm{triggers/s}
\tag{6.2}
\]
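As a back-of-the-envelope cross-check of equation (6.2) against the quoted 20 accidentals in 5 hours, the expected number of accidental coincidences can be computed directly from the measured singles rates (a simple estimate, not part of the analysis chain):

n_lxe, n_baf2 = 1200.0, 120.0      # measured singles rates (triggers/s)
window_s = 10e-9                   # coincidence window (s)

n_acc = n_lxe * n_baf2 * window_s  # equation (6.2)
print(f"accidental rate: {n_acc:.1e} triggers/s")
print(f"expected accidentals in a 5 h run: {n_acc * 5 * 3600:.0f}")

The result, of the order of 25 accidental coincidences in 5 hours, is of the same order as the measured value.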
Data processing
After a valid trigger, data from both ionization and scintillation processes are stored in the event buffer before being written to disk. Once in the buffer, data are read by the DAQ computer via an optical fiber link, starting from the first registered event. The optical link supports data transfers of 80 MB/s for both FADCs. The DAQ software is written in LabVIEW using the software libraries provided by CAEN. The software can be used as a second-level trigger, since it was designed to perform a rough analysis of the raw data by applying topological cuts to the events. The program includes a simple framework which allows starting/ending a run and applying cuts to the signal waveforms. Charge and light signals can be treated and registered independently. Moreover, rough noise calculations are implemented to monitor the overall state of the electronics. The noise is determined from the amplitude distribution obtained for the first registered sample (∼ 200 ns before the trigger). Additionally, preliminary baseline suppression can be performed on the individual ionization signals, so that a threshold can be used for event selection, allowing a reduction of the data volume. The pedestal is calculated by averaging over the first 50 samples before the trigger start time. Cuts on the total charge deposited inside the TPC during an event, or an individual selection of the pixels that are actually written to disk, are also possible. In addition, real-time monitoring of the detector is included, which generates plots of the charge and light waveforms, a hit map of the anode and the trigger rate. Data collected by the DAQ software are then saved in two independent binary files (one for the charge and another one for the light). Relevant data acquisition information such as the registration time window, sampling rate, shaper peaking time, etc. is also stored in the header of the files. The software is programmed to generate files with a maximum size of 2 Gb. The DAQ software is able to read out data at a rate of 20 Hz without adding dead time, enough to support the trigger rate of 2 events/s obtained with the current trigger system. A one-hour run generates 2 Gb of data with around 12000 registered events. This means that around 1 Tb is registered for a one-month data-taking period.
Noise Analysis and Calibration
In a pixelated detector with individual read-out channels, differences in noise behavior, baseline and gain between the channels are unavoidable. These variations come from the fabrication process of the electronics and affect the event selection in terms of collected charge and signal threshold. For this reason, an individual characterization of each pixel of the anode is necessary in order to study and, in some cases correct, the influence of these variations before starting the event reconstruction.
To perform a precise estimation of the noise and pedestal per pixel, we used a set of experimental data that contains only noise events. The data acquisition and data processing are performed according to the method reported in this chapter, but with a random external trigger. In the following, this kind of data acquisition will be referred to as a noise run. A complete noise run consists of a 2 Gb binary file with around 12000 events on average. For each event, 63 waveforms corresponding to 63 pixels of the anode are recorded over a time window of 102.2 µs. Pixel 64 is used to register the TTT signal (see Appendix B). The results presented in this section were taken with an electric field of 1 kV/cm and a peaking time of 1.39 µs. Figure 6.11 shows a diagram that illustrates the procedure followed for the noise analysis. To keep a constant alignment of the source with respect to the anode, all the noise runs were taken with the 22Na source. However, the presence of the source during the noise runs implies that some pulses are registered in coincidence with the random external trigger (see Figure 6.12(a)). These signals would bias the measured noise distribution, providing a wrong value of the pedestal and noise per pixel. To remove the contribution from charge-generating particles, an initial peak selection is performed. A preliminary estimate of the pedestal and noise is determined signal by signal from the first 50 and last 50 samples of the waveform. A relatively high threshold of around 10 times the noise (∼ 30 mV) is set for peak selection to ensure that only physical events are rejected. On average, the number of rejected pulses per noise file is less than 2 %. Figure 6.12(b) shows the noise events after pulse rejection. As we can see, there is a border effect due to the pulse search method. To avoid this effect, we exclude the last 8 µs of the time window from the noise distribution study.
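This pre-selection can be summarized in a few lines of Python: the sketch below estimates a preliminary pedestal and noise from the first and last 50 samples of each waveform and flags waveforms containing a pulse above roughly ten times the noise. Array shapes, thresholds and variable names are illustrative placeholders, not the actual analysis code.

import numpy as np

def preselect_noise_waveforms(waveforms, n_edge=50, n_sigma=10.0):
    """waveforms: array of shape (n_signals, n_samples) in volts.
    Returns the per-signal pedestal/noise estimates and a boolean mask
    of waveforms kept for the noise study (no pulse above n_sigma)."""
    edges = np.concatenate([waveforms[:, :n_edge], waveforms[:, -n_edge:]], axis=1)
    pedestal = edges.mean(axis=1)                 # preliminary pedestal per signal
    noise = edges.std(axis=1)                     # preliminary noise per signal
    # Flag waveforms whose excursion from the pedestal exceeds the threshold
    excursion = np.abs(waveforms - pedestal[:, None]).max(axis=1)
    keep = excursion < n_sigma * noise
    return pedestal, noise, keep

# Example with fake data: 1000 pure-noise waveforms of 1278 samples
rng = np.random.default_rng(0)
fake = -0.5 + 0.0027 * rng.standard_normal((1000, 1278))
ped, sig, keep = preselect_noise_waveforms(fake)
print(f"kept {keep.mean():.1%} of the waveforms")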
Pedestal and Noise determination
We can see from Figure 6.12 that the preamplifier produces an offset of the order of -0.5 V in the recorded signals, with a certain baseline dispersion for the same read-out channel. To correct this baseline shift, the mean value of the pedestal per pixel should be estimated. Figure 6.13 shows a typical amplitude noise distribution of two pixels of the anode. Assuming that the distribution follows, to a good approximation, a Gaussian probability density, the pedestal is obtained from the mean value of the distribution, whereas the noise is determined from the standard deviation (RMS) of the pedestal data sample. The distribution shown in Figure 6.13 was measured for a random time sample over the ∼12000 raw signals collected in the noise file. By fitting the distributions with a Gaussian function, mean pedestal values of -0.51 V and -0.50 V, and electronic noise values of 2.59 mV and 2.72 mV, were obtained for the two pixels respectively. Pedestal values for the rest of the pixels of the anode are presented in Figures 6.14 and 6.15. The offset is not uniform over the anode. A maximum pixel-to-pixel dispersion of the order of 15 mV was measured. The mean pedestal values for the two IDeF-X ASICs separately are shown in Figure 6.17. The relative position of the pixels with respect to the read-out electronic channels is illustrated in Figure 6.16, where each IDeF-X is connected to 32 different pixels. The baseline is almost uniform among the pixels of the left-side IDeF-X chip, with a dispersion of less than 5 mV. On the other hand, a higher baseline dispersion of the order of 8 mV was obtained for the right-side chip. The pedestal dispersion between the two chips is within 7 %, which is mostly associated with the connector.

Figure 6.17 - Pedestal value per pixel for the two IDeF-X LXe ASICs. Each chip is coupled to 32 pixels of the anode. The ASICs are identified according to the configuration shown in Figure 6.16.
In principle, pedestal subtraction can be made by either using a constant value per pixel or by using the same average value per IDeF-X. However, since the pedestal dispersion between the different pixels of the same IDeF-X is bigger than the final electronic noise measured once the pedestal correction is applied, the correction should be made by using the mean pedestal value per pixel.
The noise map is presented in Figure 6.18. Higher values of the noise are obtained at the edges of the anode and between the two connectors. This noise distribution is due to the length of the electronic tracks with respect to the pixels: longer tracks imply higher electronic noise. In addition, the noise increase observed at the four corners of the anode is due to the position of a ground connector, as we can see in Figure 6.16. Figure 6.19 shows the noise distribution for the 63 pixels. The noise is expressed in units of electrons. The electron-to-volt conversion was made using the 511 keV photoelectric peak, as described in Section 7.4. We measured an average noise of ∼ 85 e-.
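As an illustration of the per-pixel extraction, the sketch below histograms the baseline samples of one pixel and fits them with a Gaussian; the toy data, binning and starting values are placeholders, and the conversion to electrons (which uses the 511 keV calibration of Section 7.4) is not reproduced here.

import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def pedestal_and_noise(samples, n_bins=200):
    """Fit the amplitude distribution of baseline samples (volts) of one pixel.
    Returns (pedestal, noise) as the mean and width of the fitted Gaussian."""
    counts, edges = np.histogram(samples, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = [counts.max(), samples.mean(), samples.std()]   # starting values
    popt, _ = curve_fit(gaussian, centers, counts, p0=p0)
    return popt[1], abs(popt[2])

# Toy example: one pixel with a -0.51 V offset and 2.6 mV of noise
rng = np.random.default_rng(1)
samples = -0.51 + 0.0026 * rng.standard_normal(100_000)
ped, noise = pedestal_and_noise(samples)
print(f"pedestal = {ped:.4f} V, noise = {noise * 1e3:.2f} mV")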
Rejection of fluctuating baselines
In general, each preamplifier, i.e. each pixel of the anode, has a baseline value that should be constant over time and independent of temperature. However, thermal fluctuations inside the LXe produce prompt baseline variations (see Section 6.4.1). These pedestal fluctuations distort the average value of the pedestal and noise per pixel. Several signals with a baseline away from the average value are visible in Figure 6.20. To remove those events with a bad pedestal, we calculate the baseline rejection interval per pixel from the distribution of the mean values calculated for all the raw signals registered for each individual pixel. Figure 6.21 shows the mean distribution for two pixels of the anode. Unlike the amplitude noise distribution presented in Figure 6.13, the distribution of the mean pedestal value per signal does not have a Gaussian shape, but can instead be described by a double Gaussian function defined as follows:
\[
A(x) = A_{core} \exp\left(-\frac{(x-\mu_{core})^2}{2\sigma_{core}^2}\right) + A_{tail} \exp\left(-\frac{(x-\mu_{tail})^2}{2\sigma_{tail}^2}\right)
\tag{6.3}
\]
where µ and σ are the mean and the width of the two Gaussian functions, identified as core and tail (red and green lines in Figure 6.21). As we can see, both distributions have a zero mean value. The mean noise distribution can therefore be described by three parameters: σ_core, σ_tail and the relative share between the two Gaussian functions. The wider tail, with a σ_tail value of ∼0.7 mV, is mostly due to the presence of bad pedestals, but it can also be due to undesired large signals that were not rejected during peak selection and to correlated noise. All these contributions cause fluctuations in the mean value of the pedestal. Figure 6.22(a) shows the ratio between the widths of the two Gaussian functions for all the pixels of the anode. An almost uniform distribution is observed over the anode. Higher values of σ_tail are, on the other hand, measured at the upper part of the anode, which implies a correlation between these kinds of events and the position of the read-out channels. The red line in Figure 6.21 represents the final double Gaussian fit of the mean noise distribution. The results of the fit are also shown in the figure. A fixed criterion of a maximum rejection of 1/1000 in total due to statistical fluctuations is established, which corresponds to ∼ 1.6 × 10^-3 % of rejected events per pixel. The vertical dashed lines in Figure 6.21 represent the rejection interval due to this cut. The analysis is made in such a way that the integral of the distribution outside these bounds must be larger than the statistical cut. For simplicity, in the following this offline cut is called the bad pedestal interval. To reject the bad baselines per event, we calculate the median value of the pedestal per pixel, i.e. per individual signal, and we check whether this value falls outside the bad pedestal interval. If so, the entire event is excluded from the analysis. The cut is applied on the median value of each individual signal to ensure that only pedestals out of range are rejected, and that the selection is not biased by the presence of large pulses. On average, less than 10 % of the events were rejected due to pedestal fluctuations. Figure 6.22(b) shows the number of rejected pedestals per pixel. As we can observe, most of the rejected baselines are at the top of the anode. This result is consistent with the fact that these baseline fluctuations are due to the presence of bubbles in the liquid xenon, as reported in Section 6.4.1. Important improvements have been achieved by including MLI insulation around the front flange of the chamber and around the electronics, which have led to a pedestal rejection of less than 2 % instead of 10 % in the absence of MLI. The value of the pedestal per pixel is finally obtained after bad pedestal rejection. This value is then used in the data analysis. In addition, the final value of the noise per pixel is also estimated from the baseline fluctuations after pedestal subtraction. The final charge distribution is described by a Gaussian function with zero mean, as shown in Figure 6.23. The standard deviation of the distribution gives an electronic noise of the order of 88 e- (85 e- if the borders of the anode are excluded). Event selection depends on the threshold level as much as on the shape of the baseline over the time window. Irregularities in the baseline caused by a trigger perturbation may bias the number of selected events. Figure 6.24 shows the mean pedestal for all the pixels.
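A possible implementation of this bad-pedestal rejection is sketched below: the per-signal median baselines of one pixel are fitted with the double Gaussian of equation (6.3) and the cut is placed where the expected statistical leakage of the core component falls to the 1/1000 level. The toy data, binning and starting values are illustrative only.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def double_gaussian(x, a_core, s_core, a_tail, s_tail):
    """Equation (6.3) with both components centered at zero."""
    return (a_core * np.exp(-0.5 * (x / s_core) ** 2)
            + a_tail * np.exp(-0.5 * (x / s_tail) ** 2))

def bad_pedestal_interval(medians, max_leak=1e-3, n_bins=200):
    """Fit the distribution of per-signal median baselines (pedestal subtracted,
    in volts) of one pixel and return the symmetric bounds outside which a
    baseline is tagged as bad."""
    counts, edges = np.histogram(medians, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = [counts.max(), 0.5 * medians.std(), 0.05 * counts.max(), 2.0 * medians.std()]
    (_, s_core, _, _), _ = curve_fit(double_gaussian, centers, counts, p0=p0)
    # Place the cut where the expected two-sided statistical leakage of the
    # core (well-behaved) component equals the allowed fraction max_leak
    bound = norm.isf(max_leak / 2.0) * abs(s_core)
    return -bound, bound

# Toy data: mostly well-behaved baselines plus a wider "bad pedestal" tail
rng = np.random.default_rng(2)
medians = np.concatenate([0.3e-3 * rng.standard_normal(20_000),
                          0.7e-3 * rng.standard_normal(2_000)])
low, high = bad_pedestal_interval(medians)
print(f"bad pedestal interval: [{low * 1e3:.2f}, {high * 1e3:.2f}] mV")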
A reduction of the mean value of the pedestal is observed over a time interval of around 16 µs (200 time channels), with a maximum deviation of the order of 0.25 ADC channels at 20 µs. This baseline reduction is due to the trigger's time position with respect to the data acquisition window, and it varies from pixel to pixel. Although this fluctuation is almost negligible, at very low threshold levels it produces a small reduction in the number of triggered events due to the slightly higher equivalent threshold value.
This noise analysis provides the values of the pedestal and electronic noise per pixel that will be used in the data analysis of XEMIS1 and in the data acquisition of XEMIS2. For this reason, the results should be stable over time. Figure 6.25 shows the values of the pedestal and noise obtained during several months of data acquisition for one of the read-out channels. The noise is quite stable over time, with a maximum variation of around 5 electrons. Furthermore, no significant variations have been observed in the pedestal value. Similar results have been obtained for the rest of the pixels of the anode.
Temperature effect on the measured signals
The LXe inside the TPC is in conditions of vapor-liquid equilibrium, which means that the temperature of the liquid inside the cryostat depends on the pressure according to the saturation curve (see Figure 6.26). The amount of heat needed to change from the liquid state to the gas state is given by the specific latent heat L_v. In the particular case of Xe, a small latent heat of 95.587 kJ/kg at a pressure of 1 bar is required in order to change phase [217]. Moreover, vapor bubble generation inside a liquid is possible if the temperature of the surface that contains the liquid is warmer than the saturation temperature of the medium. For this reason, the generation of vapor bubbles in LXe is more important than in conventional liquids such as water. In XEMIS1 the main heat transfer is due to radiation and thermal conduction between the chamber walls and the LXe. Although the design of XEMIS1 was made in order to minimize the contact surface between the TPC and the rest of the components of the detector, thermal conduction through the electronics used to read out the ionization signal constitutes an inevitable source of heat flow. The most sensitive part of XEMIS1 to heat transfer is the anode. Ideally the anode, which is in contact with the LXe, is kept at the same temperature as the liquid. However, an external heat flow, mainly coming from radiation and to a lesser extent from the electronic connectors, increases the temperature of the anode surface. A small temperature variation of 0.1 °C is enough to produce spontaneous bubble formation in LXe.
The presence of bubbles inside the TPC may cause important operational problems that degrade the performance of the detector. Spontaneous bubble formation on the anode surface may affect the stability of the liquid in the collection region. Presumably, the motion of gas bubbles around the anode causes liquid perturbations that in turn produce transient variations of the detector capacitance, which give rise to charge induction on the anode. Since the induced signals have a very long tail due to the slow motion of the bubbles in the liquid, after being integrated by the preamplifier they appear as long-term baseline fluctuations away from the average value, as presented in Figure 6.20. In the worst-case scenario, bubble accumulation between the grid and the anode may be a possible cause of discharges due to the high bias voltages usually applied to the grid. This effect is most likely to occur when working with micro-meshes. Small-pitch grids facilitate bubble accumulation in the gap, since bubbles are prevented from escaping towards the bulk volume, and such grids require, in general, higher voltages to fulfill the electron transparency requirements. The issue of vapor bubble formation inside LXe has also been reported by other authors [START_REF] Denat | Generation of bubbles in liquid argon and nitrogen in divergent electric fields[END_REF][START_REF]Direct observation of bubble-assisted electroluminescence in liquid xenon[END_REF].
The evidence leading to the bubble hypothesis has been verified by studying the response of the TPC to pressure and temperature variations. Changes in the cryostat pressure and temperature can be induced by modifying the temperature of the cold finger (see Section 3.1.4). The dynamics of the bubbles were observed for some tens of minutes. Since the migration of the bubbles inside the LXe is slow, the study should be made over long registration periods. For each event the 63 waveforms corresponding to the 63 pixels of the anode are recorded over a time window of 15.36 µs at a frequency of 2.5 kHz. The signals are registered in loops of 1000 events, resulting in 400 ms of data without dead time. The sequence is repeated every 20 s, which is the time necessary to empty the buffer and start a new acquisition process. A total time window of ∼13 minutes is registered in a 2 Gb data file. Due to the limitations of the data acquisition system used in XEMIS1 it is not possible to calculate the fraction of bubbles per event; instead, we estimated the number of times the average baseline exceeds a certain value. The cut interval is obtained from the distribution of the median value of each event, according to the method presented in the previous section. The rejection interval for each pixel was determined under experimental conditions in which the rate of bubble formation is minimal. If the median of a baseline deviates by more than ±7 times this reference value, the signal is rejected. Figure 6.27 shows the evolution of the median value of the pedestal as a function of the event number obtained under standard experimental conditions for one of the pixels of the anode. The large prompt pulses are important fluctuations of the baseline that occur during several ms, which are associated with the formation of a bubble inside the chamber. These kinds of signals populate the tail of the median distribution presented in Figure 6.28. The dashed lines represent the rejection interval. About 13.5 % of the registered events are excluded by the baseline rejection cut. The same fraction of bad pedestals has been estimated for different runs under the same experimental conditions. The two black dashed lines represent the rejection interval obtained from the method presented in Section 6.4.
Figure 6.29 shows the median baseline per event over the total registration time window for the eight pixels of the same column in the anode. Each color represents a given pixel. The vertical axis does not indicate the actual offset of the signals; a fixed separation of 0.05 V was introduced between them to ease the representation. As we can see from Figure 6.30, the prompt signals follow a clear pattern, starting from the bottom of the anode towards the top. This behavior is consistent with the dynamics of bubbles inside the LXe. Bubbles created at the bottom of the TPC detach from the nucleation site and reach the bottom surface of the anode. The bubbles continue ascending until they collapse within a few minutes. In addition, we have also observed that the duration of the baseline fluctuation, represented by the amplitude of the signal, increases while ascending towards the top of the anode. This behavior is also consistent with the enlargement of the bubbles during their migration.
Bubble formation can be inhibited by a rapid increase of the cryostat pressure. This can be done either by injecting more xenon inside the cryostat, by increasing the recirculation rate or by changing the system temperature. When the temperature of the cold finger increases, for example from 164 K to 165 K, the pressure inside the cryostat starts rising immediately in order to maintain the system in thermal equilibrium. The already existing bubbles formed under normal experimental conditions should disappear, whereas the formation of new bubbles is avoided due to the pressure difference between the bottom part of the cryostat and the liquid surface. Under pressurization conditions, the number of rejected baselines is reduced to 1.3 %. The bubbles reappear immediately after the pressure returns to its normal value.
The formation of bubbles, on the other hand, can be enhanced by decreasing the system pressure, i.e. by decreasing the temperature of the cold finger. When the temperature of the PTR decreases by at least 1 K, the pressure inside the cryostat decreases adiabatically below the vapor pressure. The xenon is still in the liquid state, but at a pressure where it should be gaseous. Under these thermal conditions, a small heat transfer to the LXe may trigger a phase transition from the liquid to the gas state with the consequent formation of a bubble. Figure 6.31(a) shows the median baseline evolution as a function of the event number for the same column of pixels when the temperature of the cold finger was set to 163 K. We can observe an important increase in the number of bubbles in the chamber compared to pressurization conditions (Figure 6.31(b)). At low pressure, 25 % of the events were rejected by the baseline rejection cut. Since working under pressurization conditions with the aim of reducing the presence of bubbles in the liquid is not an option, we studied the response of the system as a function of the temperature of the anode. Figure 6.32 shows the experimental set-up used to modify and monitor the temperature of the anode. To decrease the temperature we used a continuous liquid nitrogen cooling system. The liquid nitrogen was stored in an external container at atmospheric pressure. A vacuum pump was used to pump the nitrogen toward the cryostat through a stainless steel tube. Once inside the cryostat, the liquid nitrogen circulated through a copper structure that was screwed onto the front flange of the TPC in such a way that it was in direct contact with the anode and the electronics. The temperature of the copper piece and hence the temperature of the anode decreased as the liquid nitrogen passed through the copper tube placed inside the piece. A sheet of indium metal was included between the copper and the stainless steel to improve the thermal conductivity between the surfaces. A temperature sensor located between the copper piece and the flange allowed for temperature monitoring. An external temperature regulator was used to fix and keep the temperature of the anode at a certain value. A valve placed before the pump controlled the flow of nitrogen towards the chamber. The valve closed the moment the temperature of the anode decreased below the fixed value. Likewise, the valve re-opened once the temperature rose above the set point.
This study showed that the presence of strong baseline fluctuations varies with the pressure and temperature conditions of the system, which points to the formation of bubbles inside the LXe. The number of bubbles inside the TPC decreased as the temperature of the anode decreased. A reduction by a factor of two in the number of events rejected by the out-of-range baseline cut was observed when the temperature of the anode was reduced to 169 K, compared to standard experimental conditions (172 K). Moreover, only 1.6 % of the events were rejected at a temperature of 168 K. Due to the limitations of the set-up, the temperature could not be reduced below this value. However, we can presume from these results that the complete inhibition of bubble generation inside the TPC can be achieved by keeping the surface of the anode at the temperature of the liquid. Very satisfactory results were obtained with the electronics completely immersed in the LXe and with a more complete insulation of the front part of the TPC using multilayer insulation (MLI).

Figure 6.32 - Front view of the XEMIS1 TPC. The copper structure placed around the electronic cards was installed to reduce the temperature of the anode and electronics thanks to a liquid nitrogen circuit.
Charge-induced perturbations of the baseline
An additional effect has also been identified on the baseline when a relative high charge is deposited in one of the readout chips. As shown in Figure 6.33, a parasitic signal is observed in all the pixels of the same ASIC after an energy deposition. If all the charge is collected by one of the two IDeF-X LXe chips, no effect has been observed on the other ASIC.
This parasitic signal has a bipolar shape. The negative pole has an average amplitude of the order of 0.7 % of the total charge collected by a given chip. A variation of the average amplitude of around 30 % has, however, been observed between the two IDeF-X LXe ASICs. The minimum of the negative pulse arrives around 560 ns before the maximum of the real collected signal. The positive pole, on the other hand, has a much smaller amplitude, which has almost no effect on the measured signals. The shape and amplitude of the perturbation depend on the peaking time, increasing as the peaking time decreases.
We have noticed that only at low signal amplitudes, of the order of 3 to 5 times the electronic noise, does the presence of the parasitic signal result in an appreciable loss of the collected charge, decreasing the number of triggered events. In addition, we observed a distortion of the shape of the measured signal, which affects the measured drift time. At high amplitudes the effect of the perturbation is small and can be neglected. A complete simulation of the response of the IDeF-X LXe front-end electronics showed that this baseline perturbation is associated with the amplifier that fixes the gain conversion, which is common to all the channels of the same chip. It is important to notice that this baseline perturbation has been observed exclusively in the IDeF-X LXe version of the chip. This effect is currently being corrected, so it will no longer be present in future versions.
Data Analysis
This section describes the data analysis steps performed before event reconstruction. A schematic diagram of the data analysis procedure is illustrated in Figure 6.34. The method consists of four main steps: pedestal subtraction, gain correction, pulse selection and clusterization. We use the pedestal and noise results obtained per pixel from the noise analysis to correct the pedestal per signal and to find the pulses. In general, each run with data from the radioactive source is alternated with a noise run, in order to monitor possible variations in pixel pedestal and noise. A threshold level is established for pulse discrimination. The amplitude and time of the signals are calculated using the CFD method. After pulse finding, the detected signals from the same interaction are combined to form a cluster. Several topological cuts are finally applied to these clusters in order to make the final data selection.
Baseline subtraction
As presented in Section 6.4, every signal suffers from an offset, which in general comes from the front-end electronics. Each pixel of the anode can be characterized by a constant pedestal value which is used to correct the data. The pedestal per pixel is obtained from a noise run by averaging over a sufficient number of events. Afterwards, in order to estimate the charge collected in a given pixel, the pedestal is subtracted from the raw signal to compensate for this constant offset. If the probability of the presence of bubbles inside the chamber is high, baseline exclusion can be performed using the rejection interval calculated from the noise events. The median of each individual signal is determined from the first 50 and the last 50 samples, and compared to the baseline rejection limits. If the median is above or below three times the rejection cut value, the entire event is eliminated from the analysis. Thanks to an adequate insulation of the detector, we have seen that the formation of bubbles inside the LXe can be neglected and thus this pedestal cut is no longer necessary.
Common noise correction
A common baseline variation may occur in all pixels at the same time, typically due to an imperfect shielding of the components of the detector. Unlike the pedestal, which is stable over long time periods, common-mode noise might vary between consecutive events. Therefore, the second step is the common noise correction, applied event by event. To estimate its contribution, the median value of each sample over all the pixels of the anode is determined event by event. These values are then subtracted from the raw signal of each pixel. The correction is made sample by sample.
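A minimal sketch of this correction is shown below, assuming the event is stored as a (pixels × samples) array; the per-sample median over the pixels is computed and subtracted from every pixel. The toy data are illustrative only.

import numpy as np

def common_noise_correction(event):
    """event: array of shape (n_pixels, n_samples), pedestal-subtracted, in volts.
    Subtract, sample by sample, the median over all pixels (common-mode noise)."""
    common = np.median(event, axis=0)      # one value per time sample
    return event - common[None, :]

# Toy example: 63 pixels, 1278 samples, with a shared sinusoidal pickup
rng = np.random.default_rng(3)
n_pix, n_samp = 63, 1278
pickup = 0.002 * np.sin(2 * np.pi * np.arange(n_samp) / 100.0)
event = 0.0027 * rng.standard_normal((n_pix, n_samp)) + pickup[None, :]
corrected = common_noise_correction(event)
print(f"RMS before: {event.std():.4f} V, after: {corrected.std():.4f} V")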
The presence of this kind of correlated noise is very sensitive to the grounding of the components, as well as to the filtering of the Frisch grid and of the first field ring. Important improvements were carried out during the course of the thesis, so the implementation of this correction is no longer necessary. Figure 6.35 shows the common noise contribution compared to the raw noise and to the noise after pedestal correction, before the improvement of the system filtering.
Gain Correction
So far we have seen that not all pixels of the anode have the same behavior in terms of DC offset and baseline fluctuations. These variations are mostly due to production differences during the fabrication process, where not all the components of the front-end electronics are exactly the same. Likewise, pixel-to-pixel variations of the gain are usually present. In this section, the gain of each individual pixel is calibrated for further correction of the measured amplitude per signal. To obtain an accurate value of the gain of each pixel of the anode, a uniform beam is required. To do so, one of the lead collimators placed between the BaF2 scintillation crystal and the 22Na source was removed in order to increase the solid angle coverage seen by the anode. In addition, to reduce the relative error on the mean value of the amplitude distribution, a large number of collected events per pixel was necessary.
We obtained the energy spectrum of each pixel. Only clusters with a total measured energy of 511 keV in which 95 % of the charge is collected by a single pixel were considered. The measured charge per cluster was corrected for electron attenuation to remove the z-position dependence (see Section 7.1). The photoelectric peak obtained per pixel was fitted with a Gaussian function, and the gain of each individual pixel was obtained from the mean value of the Gaussian, which corresponds to the amplitude measured for an energy deposit of 511 keV. The standard deviation, on the other hand, gives the energy resolution per pixel for 511 keV events at 1 kV/cm. Figure 6.36 shows the mean value of the photoelectric peak for the 35 central pixels of the anode. The two IDeF-X ASICs are represented by different colors. The edges of the anode were removed from the analysis to avoid a possible bias in the collected charge. A maximum pixel-to-pixel difference of 4 % has been measured, whereas maximum dispersions of 2 % and 2.3 % have been estimated between the pixels of the same IDeF-X LXe chip, respectively. A constant fit per IDeF-X gives average values of 0.915 V and 0.930 V respectively, resulting in a difference of the order of 1.6 %.
The position of the photoelectric peak averaged over all the pixels of the same column has also been measured in order to study the contribution of the parasitic signal (see Section 6.4.2). The parasitic signal is proportional to the charge deposited per chip and therefore its contribution depends on the position of the pixels of a cluster with respect to the maximum energy deposition. For a cluster whose pixels are shared between both ASICs, we might expect a non-uniform response of the measured energy. The gain as a function of the column is shown in Figure 6.37. No significant deviation of columns 3 and 4, i.e. the columns at the edge between both chips, is observed with respect to the rest of the columns of the same IDeF-X. A constant fit per IDeF-X shows the same dispersion between ASICs as the one obtained by fitting the value of the gain per pixel. Gain correction can be applied either pixel by pixel or chip by chip, depending on the dispersion between the pixels of the same ASIC. Although a pixel-wise correction seems preferable, it requires very high statistics per pixel to ensure a correct calibration. To obtain the same precision in the measurement of the gain as the one achieved in XEMIS1, we need at least 1000 entries in the photoelectric peak per pixel. This is hardly achievable in XEMIS2, where 24000 electronic channels are present. For this reason, we opted to perform an ASIC-wise correction, using as average values the ones obtained by fitting the two IDeF-X chips independently. The measured energy per digit is then normalized by 0.914 V or 0.928 V before clusterization, depending on whether the charge is deposited on the left- or right-side IDeF-X respectively.
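A sketch of this ASIC-wise normalization is given below; the two constants are the values quoted above, while the pixel-to-chip mapping is a placeholder (the real mapping is the one of Figure 6.16).

# ASIC-wise gain normalization (values quoted in the text, in volts at 511 keV)
GAIN_LEFT, GAIN_RIGHT = 0.914, 0.928

def chip_of_pixel(pixel_id):
    """Placeholder mapping: assume pixels 0-31 are read by the left-side chip
    and pixels 32-63 by the right-side chip (the real mapping is Figure 6.16)."""
    return "left" if pixel_id < 32 else "right"

def gain_corrected_amplitude(amplitude_v, pixel_id):
    """Normalize a digit amplitude so that the 511 keV peaks of both chips
    line up, before clusterization."""
    gain = GAIN_LEFT if chip_of_pixel(pixel_id) == "left" else GAIN_RIGHT
    return amplitude_v / gain

print(gain_corrected_amplitude(0.914, pixel_id=5))    # -> 1.0 (511 keV equivalent)
print(gain_corrected_amplitude(0.928, pixel_id=40))   # -> 1.0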
Signal selection and Clustering
The next step in the data analysis after pedestal (and common noise) correction is signal finding. The goal is to determine the number of interactions per event detected above an energy threshold, which are spatially resolved. A digit is found if the amplitude of the pulse is higher than this threshold. In our case, the signal-to-noise ratio must satisfy SNR > 3 (or 4), i.e. the pulse amplitude must exceed 3σ_noise (or 4σ_noise). Such a low threshold is necessary because of the charge spread between adjacent pixels, mainly due to electron transverse diffusion and the size of the primary electron cloud. A hysteresis from nσ_noise down to (n-1)σ_noise was used to determine the CFD time.
The energy threshold implies an unavoidable loss of detection efficiency. The amount of shared charge depends both on the position of the electron cloud with respect to the pixel edge, and on the distance of the interaction in the active volume with respect to the anode. In most cases, a high charge is collected by a certain pixel while a very small amount of the total charge is deposited on one or more neighboring pixels. A low threshold level is then required in order to recover all the deposited charge shared between the different pixels. During this thesis two different thresholds of 3 and 4 times the noise have been considered as the best compromise between charge recovery and noise counting rate (see Section 4.5). A 3σ_noise threshold implies that an energy deposit of at least around 4.5 keV inside the fiducial volume of the detector is necessary for a signal to be measured. Higher thresholds, on the other hand, would reduce the amount of collected charge per interaction, which would deteriorate both the energy and transverse spatial resolutions of the detector, especially for low energy deposits. Figure 6.38 shows the position of the photoelectric peak as a function of the energy selection threshold. A reduction of around 0.3 % of the collected charge has been calculated with an energy threshold of 10σ_noise. The measured charge at 3 and 4σ_noise is compatible within the statistical uncertainties. Each of the selected digits contains the information on the time (z-coordinate), charge, x-y position, pixel address and time over threshold (TOT). The amplitude and time of the signals are calculated using the CFD method with a delay of 9 channels and a gain of 1.5. The zero-crossing point of the CFD signal directly gives the position of the maximum amplitude. The jitter introduced in the time measurement by the data acquisition system is corrected using the TTT signal. Moreover, the amplitude of the signals is corrected for gain linearity (see Section 6.5.3). The TOT is calculated between the nσ_noise threshold and a second threshold at (n-1)σ_noise. Only pulses whose CFD time lies inside this double-threshold interval are kept. About 2 % of the signals are lost due to the hysteresis cut. No further assumptions are considered for the peak selection.
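For illustration, a minimal digital CFD along these lines is sketched below: a copy of the shaped waveform delayed by 9 samples is amplified by 1.5, the prompt waveform is subtracted from it, and the zero crossing of the resulting bipolar signal is interpolated. This is a generic textbook implementation with assumed sign conventions, not the exact XEMIS code.

import numpy as np

def cfd_time(waveform, delay=9, gain=1.5):
    """Digital constant fraction discriminator.
    Builds gain * waveform[t - delay] - waveform[t] and returns the
    linearly interpolated zero crossing (in samples) after the minimum
    of the bipolar signal. Sign conventions are an assumption."""
    delayed = np.zeros_like(waveform)
    delayed[delay:] = waveform[:-delay]
    bipolar = gain * delayed - waveform
    i_min = np.argmin(bipolar)                 # negative lobe preceding the crossing
    for i in range(i_min, len(bipolar) - 1):
        if bipolar[i] < 0.0 <= bipolar[i + 1]:
            # linear interpolation of the zero crossing between samples
            return i + bipolar[i] / (bipolar[i] - bipolar[i + 1])
    return None

# Toy shaped pulse peaking around sample 60
t = np.arange(200)
pulse = np.exp(-0.5 * ((t - 60) / 8.0) ** 2)
print(f"CFD zero crossing at sample {cfd_time(pulse):.2f}")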
Clustering
In the next step, digits coming from the same interaction should be aggregated into the same cluster. Pixel clusters are formed by combining neighboring pixels with charge above the pixel threshold. Both side- and corner-adjacent pixels are included in the cluster. The list of digits is sorted by signal amplitude in descending order. Starting from the first digit, i.e. the one with the highest amplitude, a list of neighboring fired pixels is created. The pixel with the maximum amplitude is taken as the pixel of reference. Only pixels which are closest neighbors, by side or by corner, to the pixel of reference are possible candidates for the same cluster. The clusterization algorithm loops over all the detected digits and searches for unclustered digits within a pre-defined time window, until no more digits are found in the vicinity of the pixel of reference. The time condition between adjacent pixels is necessary since not all the neighboring fired pixels necessarily come from the same interaction. The selection of this cluster time window is explained below. After a cluster is formed, the digits associated with this cluster are removed from the list before the remaining cluster search in the same event.
The next maximum digit in the list becomes the pixel of reference of a new cluster and the search for possible neighboring pixels restarts. The procedure continues until no digits are left in the list. Figure 6.39 shows an example of a photoelectric interaction where the total charge is shared by four different pixels. All the signals are collected on the anode after the same drift time of the order of 43 µs (∼540 time channels). The waveforms are presented in Figure 6.40. An almost uniform charge sharing is obtained between the four pixels, which implies that the charge cloud is located close to the intersection point of the four pixels. The total charge of a cluster is then calculated as the sum of all the individual charges Q_i. The time of the interaction, i.e. the z coordinate, is given by the time measured by the CFD for the pixel of reference. The x and y positions of the interaction, on the other hand, depend on the number of pixels inside the cluster, also called multiplicity. If the cluster has a multiplicity equal to one pixel, the position of the interaction is taken at the center of the pixel. However, if the charge is shared by more than one pixel, the position is calculated, as a first approximation, as the centroid position of the electron cloud:
x_{rec} = \frac{\sum_i Q_i \, x_i}{\sum_i Q_i} \qquad (6.4)

y_{rec} = \frac{\sum_i Q_i \, y_i}{\sum_i Q_i} \qquad (6.5)
where x i and y i are the x and y coordinates of the center of each pixel. The center of gravity method is not a good estimator of the true position since the pixel size is larger than the electron cloud. A correction is then necessary in order to obtain a more realistic position distribution. A more detailed study of the x-y position reconstruction is performed in Section 7.6.
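As an illustration of the clustering procedure described above and of the centroid of Equations 6.4 and 6.5, a simplified sketch is given below. It is not the thesis code: the digit representation (dictionaries with pixel indices, charge, CFD time and pixel-centre coordinates) and the function names are assumptions, and the time-matching criterion is passed in as a generic function so that either a fixed or a variable window can be used.

```python
import numpy as np

def build_clusters(digits, time_window):
    """Greedy clustering of digits (dicts with 'pixel' = (row, col),
    'charge', 't_cfd', 'x', 'y'): digits are sorted by amplitude, the
    largest one seeds a cluster (pixel of reference), and side/corner
    neighbours whose CFD time is within `time_window(ref, neigh)` of
    the reference are attached. A fixed window of 1.5 channels can be
    emulated with `lambda r, n: 1.5`."""
    remaining = sorted(digits, key=lambda d: d["charge"], reverse=True)
    clusters = []
    while remaining:
        ref = remaining.pop(0)
        cluster, keep = [ref], []
        for d in remaining:
            dr = abs(d["pixel"][0] - ref["pixel"][0])
            dc = abs(d["pixel"][1] - ref["pixel"][1])
            adjacent = max(dr, dc) == 1          # side or corner neighbour
            in_time = abs(d["t_cfd"] - ref["t_cfd"]) <= time_window(ref, d)
            (cluster if (adjacent and in_time) else keep).append(d)
        remaining = keep
        clusters.append(cluster)
    return clusters

def cluster_observables(cluster):
    """Total charge, reference time (z) and centroid x-y position
    (Equations 6.4 and 6.5); pixel centres are assumed to be stored in
    'x' and 'y' in millimetres."""
    q = np.array([d["charge"] for d in cluster])
    x = np.array([d["x"] for d in cluster])
    y = np.array([d["y"] for d in cluster])
    return {"Q": q.sum(),
            "t": cluster[0]["t_cfd"],            # pixel of reference
            "x": np.dot(q, x) / q.sum(),
            "y": np.dot(q, y) / q.sum(),
            "multiplicity": len(cluster)}
```

The amplitude-dependent time window of Equation 6.6 can be plugged in as the `time_window` argument, as sketched further below.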
After clustering, interactions are classified into single-cluster events, double-cluster events, and more generally multi-cluster events, according to the number of clusters per event. This classification is essential for the subsequent Compton sequence reconstruction.
Cluster Time Window
Clustering is one of the most delicate parts of the data analysis given the complexity and variety of possible event topologies. A misassociation of digits inside the clusters leads to a degradation of both the energy and spatial resolutions.
The adjacency condition between pixels is not always enough to distinguish two interactions produced by the same ionizing particle. For this reason, an additional matching based on the drift time is applied. Two digits measured in two adjacent pixels belong to the same cluster if the difference between their CFD times is smaller than a certain value. If a cluster candidate has more than two pixels, the time difference is always calculated with respect to the pixel of reference, i.e. the pixel with the maximum amplitude. Clustering with respect to the pixel of reference is an efficient way to prevent random noise digits from being grouped with real charge deposits, in addition to enhancing the separation of clusters that come from two different interactions produced close in time and space.
To determine the optimum value of the cluster time window, we performed a preliminary study in which a large fixed window of 2 µs was set, i.e. if two adjacent digits have a drift time difference smaller than 2 µs, both signals are assumed to come from the same interaction point. The results presented in this section were obtained with the 6 cm TPC, a 100 LPI Frisch grid placed at 500 µm from the anode and an electric drift field of 1 kV/cm. Only single-cluster events with a total energy of 511 keV are selected. No additional condition is set on the cluster multiplicity. The cumulated time difference distribution between two pixels of the same cluster is shown in Figure 6.41. The value of ∆t is calculated as the CFD time of the pixel of reference minus the CFD time of a neighboring pixel of the cluster. The sharp peak centered at ∆t = 0 is due to well-reconstructed digits and, less likely, to two different interactions at the same z-position. A constant background caused by noise digits is observed in the tail of the distribution. The cumulated time difference distribution is presented in Figure 6.42 as a function of the neighbor amplitude expressed in units of SNR. For simplicity, we will refer to the amplitude of the neighboring pixels as A_neighbor. For A_neighbor higher than 20σ_noise the distribution appears uniform, with a time difference between digits of less than 1 time channel (one channel is equivalent to 80 ns). The dispersion around zero is due to the limited precision of the time sampling. At amplitudes smaller than 20σ_noise, the time difference between adjacent pixels increases as the amplitude of the neighboring pixel decreases. At very low A_neighbor, of the order of the pulse selection threshold, the distribution is dominated by noise digits. The lack of events at ∆t > 0 is probably due to the parasitic signal reported in Section 6.4.2. Figure 6.43(a) shows the time difference distribution for an amplitude of 4σ_noise. A peak with a delay of ∼-4 time channels (∼320 ns) with respect to ∆t = 0, exceeding the constant background, is clearly visible. The peak was fitted with a Gaussian function plus a background. The time delay of 4 channels disappears at higher amplitudes, as presented in Figure 6.43(b). Under these experimental conditions, this time delay is most probably due to the presence of the parasitic signal (see Section 6.4.2). Other possible causes of signal delay, such as indirect signal induction, are discussed in Chapter 5. The average time difference as a function of A_neighbor is depicted in Figure 6.42. The value of ∆t was obtained by fitting the time difference distribution for each value of A_neighbor with a Gaussian function. Moreover, the timing resolution as a function of the deposited energy was deduced from the standard deviation of the Gaussian fit (see Figure 6.44). The higher time dispersion observed at low amplitudes is due to the precision of the CFD method under the influence of the electronic noise. For small-energy signals, a resolution of the order of 400 ns was obtained. The cluster time window can be set either to a fixed value (σ_mean ≈ 0.5 channels), given by the average standard deviation of the ∆t distribution as in [START_REF] Oger | Dévelopment expérimental d'un télescope Compton au xénon liquide pour l'imagerie médicale fonctionnelle[END_REF], or to a variable value which depends on the amplitude of the digits of a cluster.
Due to the strong amplitude dependence of the precision of the time measurement, a variable time window seems the best option. The dependence of the timing resolution on the amplitude is obtained by fitting the distribution presented in Figure 6.44 with a triple exponential. The cluster time window is then given by the relation TW = ±3σ_TW, where σ_TW is given by Equation 6.6:
\sigma_{TW} = \sqrt{\sigma_{Pr}^2 + \sigma_{Pa}^2} \qquad (6.6)
where σ_Pr and σ_Pa are the values of the time resolution obtained from the triple exponential fit for the pixel of reference and the adjacent pixel, respectively. Since the ∆t distribution is well described by a Gaussian, a ±3σ time window accounts for 99.7 % of the events for a given collected charge. The comparison between the two time windows is presented in Figure 6.45. For a constant time window of ±3σ_mean, the photoelectric peak is less symmetrical in the region close to the Compton edge. This asymmetry is partly due to a bad clustering of two different interactions with similar energies that are grouped together in the same cluster. Moreover, the number of single-cluster events is reduced by ∼5 % when a fixed cluster time window of 1.5 time channels (120 ns) is used. A misassociation of the interaction points has detrimental effects on the Compton tracking reconstruction, where the first and second interaction points must be well defined. For this reason, the results presented in this thesis have been obtained using a variable cluster time window according to the dependence presented in Figure 6.44.
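A possible implementation of the variable window is sketched below. The triple-exponential parameters are placeholders (the fitted values of Figure 6.44 are not reproduced here), the digits are assumed to carry their own noise value, and the helper is written so that it can be passed directly to the clustering sketch given earlier.

```python
import numpy as np

def timing_resolution(amplitude_snr, params):
    """Timing resolution (in time channels) versus pulse amplitude in
    units of SNR, parametrised as a sum of three exponentials as in the
    fit of Figure 6.44. `params` = (a1, b1, a2, b2, a3, b3) are the
    fitted coefficients (illustrative, not the thesis numbers)."""
    a1, b1, a2, b2, a3, b3 = params
    a = np.asarray(amplitude_snr, dtype=float)
    return a1 * np.exp(-b1 * a) + a2 * np.exp(-b2 * a) + a3 * np.exp(-b3 * a)

def cluster_time_window(ref, neigh, params):
    """Amplitude-dependent matching window (Equation 6.6):
    sigma_TW = sqrt(sigma_Pr^2 + sigma_Pa^2), window = +-3 sigma_TW.
    The 'sigma_noise' entry of each digit is an assumed field."""
    s_ref = timing_resolution(ref["charge"] / ref["sigma_noise"], params)
    s_adj = timing_resolution(neigh["charge"] / neigh["sigma_noise"], params)
    return 3.0 * np.hypot(s_ref, s_adj)

# Usage with the clustering sketch given earlier:
# clusters = build_clusters(digits, lambda r, n: cluster_time_window(r, n, params))
```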
Off-line Analysis: event selection
Border Rejection
A series of topological cuts was added to the data analysis in order to make an adequate selection of the reconstructed clusters. First of all, clusters whose center of gravity falls on one of the pixels situated at the borders of the anode are excluded from the analysis. The borders consist of the first and last rows and columns of the segmented anode. This cut is necessary because for this kind of event we cannot guarantee that all the charge was deposited inside the active zone of the detector. Similarly, clusters with a reconstructed position around the bottom-right corner of the anode are also rejected. The pixel placed at this position is used to register the TTT signal and hence no real charge is collected there. On average, less than 2 % of the measured clusters are removed by these two cuts.
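A minimal sketch of this border cut is shown below. The 8 × 8 pixel layout (64 pixels of 3.125 mm pitch) follows the anode description used elsewhere in this work, while the exact address of the TTT pixel and the coordinate convention are assumptions of the sketch.

```python
def passes_border_cut(cluster, pitch=3.125, n_pix=8, ttt_pixel=(7, 7)):
    """Reject clusters whose reconstructed centre of gravity falls on a
    border pixel (first/last row or column of the 8x8 anode) or on the
    corner pixel used to register the TTT signal. Positions are assumed
    to be in mm, with the origin at the centre of the anode."""
    col = min(int((cluster["x"] + 0.5 * n_pix * pitch) // pitch), n_pix - 1)
    row = min(int((cluster["y"] + 0.5 * n_pix * pitch) // pitch), n_pix - 1)
    on_border = row in (0, n_pix - 1) or col in (0, n_pix - 1)
    return not on_border and (row, col) != ttt_pixel
```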
Event Topology and Energy Cut
The number of clusters per event is related to the nature of the interactions inside the detector. For 511 keV events, the probability that a γ-ray undergoes a photoelectric effect is ∼20 %, while the probability of Compton scattering is ∼80 %. In some cases, photons interacting within the LXe may undergo multiple Compton scattering before being fully absorbed by the medium. Consequently, we expect to reconstruct, on average, around two clusters per event.
The number of detected clusters per event may be affected both by an imperfect clustering algorithm and by the presence of electronic noise. Moreover, not every scattering point is necessarily registered by the detector. This effect is directly related to the size of the active area of the detector. When two interactions are spatially close, the clustering algorithm may not be able to separate them, and a single cluster is registered instead. A bad clustering affects the measurement of the position and energy of the γ-ray interactions. In addition, if an interaction point does not deposit sufficient energy to pass the minimum detection threshold, it is treated as electronic noise and the interaction is not recorded. All these factors affect the event topology and reduce the actual number of clusters per event. On the contrary, a low threshold level increases the number of clusters per event due to the incorporation of noise clusters. The blue distribution in Figure 6.46(a) shows the number of clusters per event for a charge detection threshold of 3σ_noise. On average, 14 clusters are reconstructed after the interaction of a 511 keV γ-ray inside the detector, with no single-cluster events, which prevents the calibration of the detector. The shape of the distribution is consistent with the number of noise triggers per event. If we consider that, for the CFD method, the number of noise triggers per second and per pixel for a 3σ_noise threshold is around 2200, the average value of the total number of noise triggers in the anode during the 102 µs time window is ∼14. The number of clusters per event is strongly reduced for a 4σ_noise threshold, as shown in the blue distribution of Figure 6.46(b).
To reduce the presence of noise events due to the low threshold level, a cut on the cluster energy is necessary. Isolated clusters with an amplitude smaller than 20 keV are excluded. The limit of 20 keV was chosen because of the important loss of angular resolution at low energies. This energy limitation is crucial for Compton tracking. However, to avoid the loss of true events, a more refined cut based on the cluster multiplicity is applied. Figure 6.47(a)
shows the energy distribution of the clusters rejected by the energy cut for a threshold of 3σ_noise. As we can observe, three clear peaks are present in the distribution. These peaks correspond mostly to clusters with one, two and three pixels per cluster, respectively. The multiplicity distribution for the three peaks is depicted in Figure 6.47(b). For the events with energies within the third peak, around 65 % of the clusters have three triggered pixels, whereas 25 % have multiplicity one. Clusters with around 20 keV deposited on a single pixel cannot be considered as noise clusters and should not be rejected. Therefore, an additional condition on the number of pixels per cluster was included to improve the quality of the selection. If the 20 keV of energy are collected by a single pixel, the cluster is kept. Otherwise, the cluster is considered as noise and it is removed from the analysis. For a 3σ_noise threshold, 94 % of the reconstructed clusters are rejected by the energy cut, while for a threshold of 4σ_noise the cluster rejection is reduced by almost a factor of 2 (55 %). Figure 6.48 shows the number of clusters rejected by the energy cut as a function of the electron drift time. As expected, since most of the rejected clusters are isolated noise triggers, they are randomly distributed along the time window. The z-dependence of rejected events is almost flat, with a slight fluctuation due to the presence of correlated noise.
A small reduction of the number of triggers is observed at the beginning of the time window, which is consistent with the position of the trigger as reported in Section 6.4. Finally, after the border and energy cuts, about 6 % of the registered events are, on average, removed from the analysis, while almost 96 % of the clusters are dropped for a 3σ_noise threshold level. For a 4σ_noise threshold level, only 60 % of the clusters are removed from the analysis. The green distribution in Figure 6.46 shows the number of clusters per event after the different selection cuts. On average, we expect 1.3 clusters per event for both threshold levels, which is more consistent with the nature of 511 keV γ-ray interactions.
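The energy and multiplicity conditions described above can be condensed into a single predicate. The cluster representation is the same illustrative one used in the previous sketches, and the 20 keV limit is the value quoted in the text.

```python
def passes_energy_cut(cluster, e_min_kev=20.0):
    """Keep clusters above 20 keV; below that, keep only those whose
    whole charge is collected by a single pixel, as described above."""
    return cluster["energy_kev"] >= e_min_kev or cluster["multiplicity"] == 1

def select_clusters(clusters):
    """Combine the border and energy cuts applied before calibration.
    passes_border_cut is the helper sketched in the Border Rejection
    subsection above."""
    return [c for c in clusters
            if passes_border_cut(c) and passes_energy_cut(c)]
```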
Ionization and Scintillation Light Correlation
The correlation between light and ionization can be used to reject 511 keV events which are not related to the charge deposited inside the TPC after a trigger. These events lie outside the physical dimensions of the TPC and outside the average band of the light-charge distribution, as shown in Figure 6.49. The red dashed line in the figure represents a cut along the scintillation light profile used to reject events uncorrelated with the ionization charge as well as noise events; all events outside the cut are excluded from the analysis.
Conclusions Chapter 6
In this chapter we have presented the experimental set-up used to measure the 511 keV γ-rays for the performance characterization of XEMIS1. A coincidence trigger between the TPC and a BaF_2 scintillation crystal coupled to a PMT is used to trigger on the two back-to-back 511 keV γ-rays emitted by a low-activity 22 Na source. The optimal coincidence time window and threshold level for the LXe PMT have been studied in order to maximize the trigger efficiency. We have confirmed that the trigger efficiency for 511 keV events improves when the active length of the TPC is reduced by a factor of two, from 12 cm to 6 cm, due to the geometry of the chamber with respect to the position of the PMT inside the LXe. This drift length of 6 cm is used from now on for the calibration of the detector.
We have presented a precise study of the noise with the aim of correcting the raw data for the DC offset and setting the optimal threshold level for pulse finding. The noise and pedestal calibration is performed pixel by pixel. An average noise of around 80 e- is measured thanks to the low-noise front-end electronics used to collect the ionization signal. Each run with data from the radioactive source was alternated with a noise run, in order to monitor possible variations in pixel baseline and noise, and to calculate the pedestal. A method to reject out-of-range pedestals due to the presence of bubbles inside the LXe has also been introduced. We have shown that an adequate insulation of the detector significantly reduces the fraction of bubbles inside the liquid, which corroborates the hypothesis that the origin of these bad pedestals is heat transfer from the outside towards the LXe.
A very low threshold level, of the order of 3 to 4 times the value of the noise, is set on the pedestal-corrected signals to measure very small charge deposits in the detector. The amplitude and time of the signal that triggers the discriminator are measured using the CFD method with a constant delay of 720 ns and a gain of 1.5. Since the deposited charge can be collected by more than one pixel, an event reconstruction algorithm is required. The selected pulses are then clustered in order to regroup digits that come from the same interaction point. A cluster is therefore a set of neighboring pixels that have collected an amount of charge higher than a certain threshold. The total charge collected by all the pixels in a cluster is proportional to the deposited energy. The optimal time window to match two different signals in the same cluster was deduced from the time difference distribution between the pixels of the cluster. An amplitude-dependent time window is the best option to avoid the misassociation of digits, which leads to a degradation of the energy and spatial resolutions. Additional topological cuts on the selected events were included in the data analysis to perform a more accurate characterization of the TPC. In Chapter 7 the results obtained using the experimental set-up and methods presented in this chapter are presented.
In this chapter, the results obtained from the calibration of the response of XEMIS1 are presented and discussed. These results are a compilation of the work performed during this thesis. All aspects related to the characterization of a LXe Compton camera have been identified and addressed. Energy, timing, position and angular resolutions are studied with a monochromatic beam of 511 keV γ-rays emitted from a low-activity 22 Na source. The evolution of the energy resolution and ionization charge yield with the applied electric field and the drift length is analyzed in detail. Transport properties of electrons in LXe, such as the electron drift velocity and the cluster multiplicity, have also been studied. Finally, in Section 7.5, we present a Monte Carlo simulation that helps to understand the effect of charge sharing in a pixelated detector and to estimate the position resolution.
The results presented in this chapter were obtained with the last experimental configuration of XEMIS1, based on a 6 cm long TPC to increase the trigger efficiency. We used a 100 LPI metallic woven Frisch grid with an electric field ratio of 6, to achieve 100 % electron transparency, and a gap of 500 µm. Any other configuration will be explicitly indicated. Most of the results were obtained for an applied electric field of 1 kV/cm; measurements at different electric fields are expressly mentioned. Noise determination and pedestal subtraction per pixel are performed according to the method reported in Section 6.4. Signals are selected with a 4σ_noise energy threshold, equivalent to around 6 keV. The deposited energy and the drift time per interaction are determined with the CFD method, with a delay of 720 ns (9 time channels) and an attenuation fraction of 1.5. The measured charge is corrected for the gain variations of the IDeF-X LXe readout electronics. Cluster formation is performed with the algorithm presented in Section 6.5.4, with a cluster time window that depends on the energy deposited per pixel. Good events are selected according to the offline selection cuts described in Section 6.5.5. Only single-cluster events are selected for the analysis unless otherwise mentioned. Due to the inhomogeneities close to the Frisch grid, the first 5 mm after the grid are excluded from the analysis. Finally, the collected charge is corrected for electron attenuation using the method presented in Section 7.1.
Measurement of the liquid xenon purity and attenuation length determination
As discussed in Chapter 2, the purity of the LXe is an important factor that affects the charge collection measurement. The attachment of ionization electrons to electronegative impurities may result in an important reduction of the collected charge, which deteriorates the energy resolution of the detector. The electron lifetime, τ, is inversely proportional to the concentration of electronegative impurities present in the medium [START_REF] Bakale | Rate constant for Electron Attachment to O2, N2O and SF6 in LXe at 165K[END_REF]. Therefore, the measurement of the electron lifetime, or of the electron attenuation length λ, gives an indication of the purity of the LXe during data taking. The determination of the electron attenuation length is also essential to correct the measured charges as a function of the position of the interaction. If the charges were not corrected, even a few percent dependence of the collected charge on the drift distance would degrade the energy resolution. XEMIS is designed to reduce the concentration of electronegative impurities to less than 1 ppb O_2 equivalent, to ensure the detection of very low energy deposits regardless of the interaction point inside the fiducial volume. A detector with a maximum drift distance of the order of 12 cm requires an attenuation length of drifting electrons longer than 1 m, which keeps the charge loss below 10 %. An attenuation length of 1 m is equivalent to an electron lifetime of the order of 500 µs at 1 kV/cm, i.e. ∼8 times the full drift time between the cathode and the Frisch grid.
To measure the electron attenuation length in our short TPC, we used the ionization signal produced by 511 keV γ-rays emitted from a 22 Na source. The goal of this study is to measure the evolution of the 511 keV photoelectric peak as a function of the distance from the anode. The amplitude distribution as a function of the drift time (i.e. z) was divided into 20 time intervals with equal numbers of events. The collected charge, Q, is then obtained by fitting the 511 keV photoelectric peak with a Gaussian function in each of the time slices. In the presence of electronegative impurities, the charge Q induced at a certain distance from the anode follows an exponential decay as a function of the drift time according to the relation:
Q(t) = Q_0 \, e^{-t/\tau} \qquad (7.1)
where Q_0 is the total deposited charge at time t = 0. The attenuation length is directly obtained by fitting the photopeak position vs. z distribution with Equation 7.1.
To correct the possible attenuation inside each time interval, an iterative algorithm was used. A first value of τ was obtained from the position of the photoelectric peak as a function of the drift time. This value was used to correct the measured charge per bin, providing a second value of the attenuation length:
Q(t) = Q_0 \, e^{-(t - \langle t \rangle)/\tau} \qquad (7.2)
where <t> is the average drift time of each slice, weighted by the exponential shape of the γ-ray attenuation. This process is repeated until convergence is achieved. The converged value is usually reached after three iterations, with a variation of less than 5 % between the first and last attenuation lengths. Attenuation lengths higher than 1 m have been obtained after one week of circulation with the current purification system at 1 kV/cm. The scatter plot of the measured amplitude for single-cluster events as a function of the drift time at 1 kV/cm is shown in Figure 7.1. We can see that the 511 keV photoelectric peak moves to lower pulse heights for longer drift times. The black points represent the mean position of the photoelectric peak per slice and the red line is the resulting fit. The best fit gives an attenuation length of 1742.11 ± 97.58 mm at 1 kV/cm, which ensures less than 3 % of charge loss for events occurring close to the cathode.
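The iterative procedure can be sketched as follows. This is a simplified, event-level version written for illustration only: the Gaussian photopeak fit is reduced to a histogram fit, the attenuation-weighted <t> is approximated by the mean drift time of the events in each slice, and the 500 µs starting value of the lifetime is just an initial guess.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_photopeak(charges):
    """Fit the 511 keV photopeak of a charge sample with a Gaussian and
    return its mean (simplified: histogram + curve_fit)."""
    counts, edges = np.histogram(charges, bins=60)
    centers = 0.5 * (edges[1:] + edges[:-1])
    guess_mu = centers[np.argmax(counts)]
    popt, _ = curve_fit(gaussian, centers, counts,
                        p0=(counts.max(), guess_mu, 0.05 * guess_mu))
    return popt[1]

def attenuation_length(t_drift, charge, n_slices=20, n_iter=3, tau0=500.0):
    """Iterative estimate of the electron lifetime tau (Equations 7.1/7.2).
    `t_drift` and `charge` are per-event numpy arrays; events are split
    into slices of equal statistics, the photopeak is fitted per slice,
    and the charges are referred to the slice mean <t> at each iteration."""
    order = np.argsort(t_drift)
    slices = np.array_split(order, n_slices)
    tau = tau0
    for _ in range(n_iter):
        t_mean = np.array([t_drift[s].mean() for s in slices])
        q_peak = np.array([fit_photopeak(charge[s] *
                                         np.exp((t_drift[s] - t_mean[i]) / tau))
                           for i, s in enumerate(slices)])
        (q0, tau), _ = curve_fit(lambda t, q0, tau: q0 * np.exp(-t / tau),
                                 t_mean, q_peak, p0=(q_peak[0], tau))
    return q0, tau   # attenuation length = tau * v_drift
```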
To remove the dependence of the signal amplitude on the distance from the anode, the correction for electron attachment to electronegative impurities is applied to the cluster energy during event reconstruction. This correction significantly improves the spectral performance of the detector. The technique reported in this section provides an accurate method for the estimation of the attenuation length in a LXe TPC. The precision on the value of the attenuation length is limited by fluctuations in the measurement of the electron charge, which is related to the signal amplitude.
Drift time distribution and measurement of the electron drift velocity
The drift time distribution of 511 keV photoelectric events at 1 kV/cm is shown in Figure 7.2. The majority of the interactions are produced near the anode due to the position of the radioactive source with respect to the TPC. The observed time-dependent rate follows an exponential decay consistent with the absorption of γ-rays in the medium. An exponential fit to the distribution gives a mean free path of 3.23 ± 0.24 cm, in good agreement with the theoretical value of λ = 3.5 cm for 511 keV γ-rays in LXe at a temperature of 168 K [13].
We emphasize that both edges of the TPC are sharply defined, with some random events present outside the boundaries of the chamber. The beginning of the TPC is systematically corrected using the absolute position of the anode with respect to the trigger start time.
The z-coordinate of each interaction point is determined from the electron drift time measured with respect to the light trigger. Therefore, an accurate knowledge of both the electron drift time and the electron drift velocity is essential for event reconstruction. The measurement of the drift time is mainly limited by the CFD-based measurement method and by the SNR. The timing resolution worsens for low-energy signals. According to the results presented in Section 6.5.4, a timing resolution of at most 400 ns is expected for signals with an amplitude of 4σ_noise, whereas for pulse heights larger than 20σ_noise the precision on the CFD time measurement is better than 80 ns.
The electron drift velocity can be directly inferred from the drift time distribution depicted in Figure 7.2. By measuring the beginning and the end of the TPC we can determine the total collection time needed by the electrons to drift along the full drift length of the detector. The size of the TPC divided by this total collection time gives an estimate of the electron drift velocity at a given electric field. The mechanical length of the TPC between the Frisch grid and the cathode is well known, 60.8 ± 0.5 mm. The error was directly calculated from the uncertainty in the measurement of the different components of the TPC (see Table 3.1). Moreover, a size variation of the order of ∼80 µm has been estimated for the 6 cm long TPC due to the thermal contraction of the materials at 168 K compared with their length at room temperature. This value has been obtained from the thermal expansion coefficients of the different materials.
The positions of the beginning and the end of the TPC were measured by fitting the electron drift time distribution for 511 keV photoelectric events with an error (Erf) function:

\frac{p_0}{2}\left(1 + \mathrm{Erf}\!\left(\frac{t - p_1}{\sqrt{2}\, p_2}\right)\right),

as shown in Figure 7.3. The positions of the anode and of the cathode were found to be at -0.085 µs and 29.19 µs, with a precision of the order of 40 ns and 100 ns, respectively. The determination of the beginning of the TPC is also limited by the start-time correction due to the offset induced by the DAQ. The standard deviations of the two fits also give an estimate of the timing resolution for 511 keV signals of around 44.4 ± 3.0 ns, obtained as the weighted arithmetic mean of the two measurements. This value is independent of the applied electric field.
Two effects have, however, been identified that may bias the measurement of the total drift length. As seen in Section 2.3.1, the range of the primary electrons in LXe introduces a shift of around 100 µm, for an incoming energy of 511 keV, in the position of the electron cloud with respect to the interaction vertex. This means that the measured position is systematically shifted from the actual interaction point, which introduces an error in the determination of the electron drift time. The difference between the real and measured positions depends on the energy of the ejected electron and on the incoming angle of the γ-ray. In addition, a border effect due to the extension of the electron cloud is expected at both ends of the TPC. Indeed, the simulation of the extension of the primary electron cloud reported in Section 2.3.1 also shows that part of the ionization cloud is not collected by the anode if the interaction occurs very close to the cathode. Some of the electrons may escape the drift region towards the gap between the cathode and the PMT. Figure 7.4 shows the collected charge as a function of the z-position obtained by simulation for the last 2 mm of the TPC. The TPC was simulated with a total length of 6 cm between the anode and the cathode. The collected charge is normalized to the total charge at infinite electric field for an incident energy of 511 keV (Q_0 = E_γ/W). At 511 keV, the measured position at the end of the TPC is always smaller than 6 cm. Only for small deposited energies, when the electron mean free path is small enough, can we measure interactions exactly at the position of the cathode. Including the error in the measured position due to the shift of the barycenter of the charge cloud, a difference of 200 µm is estimated between the measured end of the TPC and the real position of the cathode for an energy of 511 keV. Likewise, a similar effect is observed close to the Frisch grid. Comparing the experimental data to the simulation results, we can deduce the absolute position of the Frisch grid. Both effects have been taken into account when calculating the electron drift velocity. The inhomogeneities in the region close to the Frisch grid, due to the mesh inefficiency, the electron transparency and the indirect charge induction, were not included in the simulation. Nevertheless, these results can be considered a good approximation for the determination of the size of the detector.
The ratio between the length of the TPC and the time difference between the end and the beginning of the TPC results in an electron drift velocity of 2.07 ± 0.01 mm/µs at an electric field of 1 kV/cm. This value is consistent with the results previously reported by other authors [START_REF] Aprile | Measurement of the lifetime of conduction electrons in liquid xenon[END_REF][START_REF] Ichige | Measurement of attenuation length of drifting electrons in liquid xenon[END_REF]. Figure 7.5 shows the electron drift velocity for electric fields from 0.25 kV/cm up to 2.5 kV/cm. The statistical error bars are too small to be visible. As expected, the electron drift velocity increases with the electric field, with a slower growth above an electric field of the order of 2 kV/cm. The results reported in this section were taken at a LXe temperature of 168 K. Slight variations of the electron drift velocity are expected with temperature [158].
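The edge-fit procedure lends itself to a compact sketch. The histogram binning, the fit ranges (first and last 10 % of the drift-time range) and the starting values are choices made here for illustration, not those of the actual analysis; with the drift time expressed in µs the function returns the velocity in mm/µs.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def edge(t, p0, p1, p2):
    """Error-function edge used to locate the anode/cathode positions:
    p0/2 * (1 + Erf((t - p1) / (sqrt(2) p2)))."""
    return 0.5 * p0 * (1.0 + erf((t - p1) / (np.sqrt(2.0) * p2)))

def drift_velocity(t_drift, tpc_length_mm=60.8, bins=600):
    """Estimate the electron drift velocity from the rising (anode) and
    falling (cathode) edges of the drift time distribution."""
    counts, bin_edges = np.histogram(t_drift, bins=bins)
    centers = 0.5 * (bin_edges[1:] + bin_edges[:-1])
    # rising edge near the anode (first ~10 % of the drift-time range)
    lo = centers < np.percentile(t_drift, 10)
    (a0, t_anode, s_a), _ = curve_fit(edge, centers[lo], counts[lo],
                                      p0=(counts.max(), centers[lo].mean(), 0.1))
    # falling edge near the cathode: fit 1 - Erf by flipping the sign of t
    hi = centers > np.percentile(t_drift, 90)
    (c0, t_cathode, s_c), _ = curve_fit(
        lambda t, p0, p1, p2: edge(-t, p0, -p1, p2),
        centers[hi], counts[hi],
        p0=(counts[hi].max(), centers[hi].mean(), 0.1))
    return tpc_length_mm / (t_cathode - t_anode)
```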
Event topology and cluster multiplicity
LXe has a high atomic number (Z = 54) [START_REF] Liu | The XMASS 800 kg detector[END_REF] and a high density (∼2.92 g/cm³ at 168 K), which makes it very efficient at stopping penetrating radiation. The total mass attenuation coefficient for 511 keV γ-rays in LXe is 0.097 cm²/g [13], corresponding to a mean free path of around 3.5 cm. With a detector drift length of 6 cm, the probability that a 511 keV γ-ray interacts at least once inside the active volume of the detector is around 82 %. Compton scattering is the dominant process for 511 keV γ-rays in LXe, with an interaction probability of 60 %.
Due to the experimental configuration of XEMIS1, all γ-rays from the source enter the TPC with approximately the same incoming angle. At 511 keV, Compton interactions with small scattering angles (< 45°) are most likely, which means that the deflected photon moves forward along the direction of the incoming particle. If the scattered photon deposits its energy in the detector bulk, both interaction points will be close in the xy-plane. Moreover, due to the small mean free path of 511 keV γ-rays in LXe, timing information is not always enough to separate the two interactions. Transverse diffusion increases the lateral spread of the charge cloud during the drift. This means that the number of triggered pixels per interaction increases with the electron drift time, which hinders the separation of vertices. The complexity of the event topology requires a high-granularity collecting anode. In addition, a very good spatial resolution along the three directions is necessary in order to fully reconstruct multiple-site events with several fired pixels per cluster. The number of pixels per reconstructed cluster, also called cluster multiplicity, is strongly dependent on the energy selection threshold and on the cluster reconstruction algorithm. A low energy threshold results in large cluster sizes, since pixels with small charge deposits are also included in the cluster.
The cluster multiplicity along the y-coordinate as a function of the drift length for 511 keV events at 1 kV/cm is shown in Figure 7.6. The slices of variable size along the drift length, chosen to keep constant statistics, were selected in the same way as reported in Section 7.1. To avoid confusion, Figure 7.7 shows a schematic illustration of the definition of multiplicity in the XY transverse plane. When photons interact very close to the Frisch grid, the generated electrons are mostly collected by a single pixel, resulting in a single-pixel cluster. The size of the reconstructed clusters increases as the electron drift time increases. This result is consistent with the spread of the electron cloud due to transverse diffusion as the electrons drift towards the anode. On average, the number of pixels per cluster along each direction (x and y), with a signal selection threshold of 4σ_noise, is around 1.5, with a total fraction of single-pixel clusters of 70 %. Close to the Frisch grid, the fraction of clusters with only one pixel is of the order of 50 %, as we can see from Figure 7.8, and it decreases as the distance from the anode increases due to the lateral diffusion of the ionization cloud.
Measurement of the ionization charge yield and the energy resolution
The pulse height spectrum of 511 keV γ-rays from the 22 Na source has been measured at different electric drift fields between 0.25 and 2.5 kV/cm. Figure 7.9 shows the results obtained at two different field values. The photoelectric peak at 511 keV is clearly seen, and the backscatter peak and the Compton edge are also well identified. The charge yield and peak width are obtained by fitting the photoelectric peak with a Gaussian function. Due to the difficulty of performing an absolute calibration of the measured charge, we assumed that the number of electrons per energy deposit for 511 keV γ-rays at 1 kV/cm is 27200 electrons [START_REF] Aprile | Performance of a liquid xenon ionization chamber irradiated with electrons and gamma-rays[END_REF]. This V-to-electron conversion has been used throughout this work. The noise contribution, which is directly inferred from the analysis of a noise run, is negligible (about 80 electrons) and is therefore ignored in the energy resolution determination. The contributions of the inefficiency of the Frisch grid and of the dependence of the pulse rise time on the position of the interaction are substantially reduced by rejecting interactions that take place close to the Frisch grid.

We can conclude that the Thomas and Imel model does not reproduce our experimental data. The hypothesis that recombination limits the energy resolution in LXe seems realistic, since it better reproduces the experimental results and points in the right direction to explain the electric field and energy dependence of the charge and light yields in LXe.
Discrepancies between the existing recombination models and the experimental data have been pointed out by many authors [START_REF] Dahl | The physics of background discrimination in liquid xenon, and first results from XENON10 in the hunt for WIMP Dark Matter[END_REF][START_REF] Aprile | Performance of a liquid xenon ionization chamber irradiated with electrons and gamma-rays[END_REF][START_REF] Szydagis | NEST: A Comprehensive Model for Scintillation Yield in Liquid Xenon[END_REF], which has forced a reinterpretation of the models. For example, Dahl [START_REF] Dahl | The physics of background discrimination in liquid xenon, and first results from XENON10 in the hunt for WIMP Dark Matter[END_REF] proposed a new recombination model that attempts to reproduce the results obtained for both electronic and nuclear recoils at low energies in LXe. In addition, the Noble Element Simulation Technique (NEST) has recently been developed to provide a complete modeling of the response of liquid noble gas detectors, in particular LXe, with a comprehensive description of both the ionization and scintillation yields, based on the study carried out by Dahl [START_REF] Dahl | The physics of background discrimination in liquid xenon, and first results from XENON10 in the hunt for WIMP Dark Matter[END_REF] [START_REF] Szydagis | NEST: A Comprehensive Model for Scintillation Yield in Liquid Xenon[END_REF][START_REF] Lenardo | A Global Analysis of Light and Charge Yields in Liquid Xenon[END_REF]. The key feature of NEST is that it considers two different models to describe the recombination probability, depending on the energy. For long particle tracks, i.e. in the high-energy region, it uses the Doke-Birks approach, which depends directly on the linear energy transfer (LET), dE/dx. For short tracks, on the other hand, the recombination probability is calculated using the Thomas-Imel box model, which is independent of dE/dx. A more detailed description of the recombination process in LXe can be found in Chapter 2, Section 2.2.1. According to NEST, tracks are considered short when they are shorter than the mean thermalization distance of ionization electron-ion pairs in LXe (4.6 µm). Thus, the Thomas and Imel model dominates at energies below 15 keV.
The recombination probability is indeed responsible for the non-linear dependence of the charge and scintillation yields on energy, electric field and type of incident particle observed in LXe. The use of a two-model approach to explain the scintillation and ionization yields in LXe is justified by the experimental results observed in the low-energy region. For low-energy deposits (< 10 keV) the scintillation yield decreases instead of increasing. This suggests that at low energies the probability of recombination becomes independent of the ionization density and that the Thomas-Imel model provides a more suitable description of the recombination rate [START_REF] Dahl | The physics of background discrimination in liquid xenon, and first results from XENON10 in the hunt for WIMP Dark Matter[END_REF]. The model provided by NEST can be implemented in a Geant4 simulation and may in the future be an excellent tool to test our own experimental data. So far we have assumed no difference between electronic recoils and γ-rays. The relative ionization yield presented in this section was obtained by assuming an electron recoil of 511 keV after the interaction of a 511 keV γ-ray with the LXe. However, it is well known that after a photoelectric absorption a 29.8 keV Kα1 X-ray is emitted with a branching ratio of 85 %. This would not be an issue if the recombination probability were linear with energy. However, the non-linear response of the ionization and scintillation yields in LXe with respect to energy (see Section 7.4.2) results in a variation of the number of collected electrons depending on the type of interaction, i.e. the number of electrons collected after a photoelectric effect with the emission of a K-shell X-ray differs from that measured if no X-ray is emitted. Moreover, this implies that for the same incoming particle and the same incident energy, the ionization yield in LXe is different for a photoelectric absorption than for a Compton interaction, where the probability of emission of an X-ray is less than 4 %. A future calibration of our detector with electronic recoils, especially at low energies, is therefore necessary to fully understand the performance of XEMIS1 as a Compton camera.
Energy dependence of ionization yield and energy resolution
The charge yield and energy resolution were studied for different γ-ray energies at two different applied electric fields. The radioactive sources used in this study are 137 Cs and 22 Na, with lines at 662 keV and at 511 keV and 1274 keV, respectively. For the 511 keV and 662 keV energy calibrations, the sources were collimated with a lead collimator and placed at a distance of around 13 cm from the center of the TPC. Unlike the 511 keV events, the 662 keV γ-rays from the 137 Cs source were triggered using exclusively the scintillation signal in LXe. To avoid saturation of the DAQ due to the low discriminator threshold level on the PMT signals, a dead time was included to ensure a maximum trigger rate of 2 Hz.
To measure the third photon of energy 1274 keV emitted by the 22 Na source, the BaF_2 scintillation crystal coupled to the PMT was displaced laterally with respect to the radioactive source, as illustrated in Figure 7.12. The source was kept in front of the entrance window without any collimator, at the center of the anode but to one side of the BaF_2 crystal. With this new experimental configuration, if one of the two 511 keV γ-rays is detected by the BaF_2 crystal, and since the other 511 keV photon is emitted essentially back-to-back (180°), we expect the signal on the LXe PMT to come from the third γ-ray. The coincidence between the external PMT and the LXe PMT triggers the registration of the signal (see Section 6.2). The pulse height spectra for the γ-ray lines of 511 keV, 662 keV and 1274 keV at 1.5 kV/cm are shown in Figure 7.13. For the 137 Cs, since no coincidence trigger is used, the background was subtracted from the charge spectrum. The photoelectric peak is fitted with a Gaussian function, and both the pulse height and the width are deduced from the fit. The measured charge was corrected for the electron attenuation obtained with the 511 keV events.
The measured energy dependence of the energy resolution and of the charge yield at two different applied electric fields is shown in Figures 7.14 and 7.15, respectively. The results are listed in Table 7.1. The energy resolution improves for higher-energy γ-rays. Likewise, a better energy resolution was measured at 1.5 kV/cm than at 1 kV/cm, although the improvement with increasing electric field seems less marked at higher energies.
Assuming that the ionization yield is proportional to the deposited energy, the collected charge for 1274 keV γ-rays is around 60 % larger than that for 511 keV at an electric field of 1.5 kV/cm. This difference is the result of electron-ion recombination, since it is more significant at 1 kV/cm. In fact, the ionization yield decreases with increasing dE/dx due to an increase in the recombination rate. The energy loss dE/dx generally increases with decreasing energy for electrons with energies below 1 MeV [START_REF] Szydagis | NEST: A Comprehensive Model for Scintillation Yield in Liquid Xenon[END_REF]. Thus, the ionization yield should decrease with decreasing energy, which is consistent with our results. The statistical errors of less than 0.5 % are deduced from the fit. The collected charge vs. energy was fitted with a first-degree polynomial. As expected, the ionization charge yield is non-linear with energy, the effect being more significant at lower field strength.
Table 7.1 – Photopeak amplitude (V) and energy resolution (σ/E) for each source and γ-ray energy (keV) at 1 kV/cm and 1.5 kV/cm.

This non-linear response of the collected charge with energy was also identified when two-cluster 511 keV events were selected instead of single-cluster events. We observed that the position of the photoelectric peak for 511 keV γ-rays was shifted by around 4 %, which indicates a loss of collected charge (see Figure 7.16). The two populations of events in Figure 7.16, at 0.65 V and 0.32 V, correspond to backscattering events and the subsequent photoelectric absorption, respectively. The non-linear energy dependence of the ionization and scintillation yields for both electronic and nuclear recoils in LXe may be disadvantageous, especially at low energies [START_REF] Lin | Scintillation and ionization responses of liquid xenon to low energy electronic and nuclear recoils at drift fields from 236 V/cm to 3.93 kV/cm[END_REF][START_REF] Akimov | Experimental study of ionization yield of liquid xenon for electron recoils in the energy range 2.8-80 keV[END_REF]. At energies of the order of 30 keV, a non-linearity of several tens of percent has been identified by other authors [START_REF] Szydagis | NEST: A Comprehensive Model for Scintillation Yield in Liquid Xenon[END_REF]. Understanding the response of LXe at low energies is of crucial importance for low-background experiments such as direct dark matter searches and neutrino detection. That is why a precise measurement of the charge and light yields in LXe for low-energy electronic recoils has become of great interest in the last few years. The calibration of XEMIS with a lower-energy source is also important to extend the study of the response of XEMIS as a Compton camera. As discussed in Section 1.3.6, for an incoming γ-ray of energy 1157 keV, fair values of the angular resolution are obtained for scattering angles between ∼10° and 60°, which translates into electronic recoils in the energy range of 40 keV to 610 keV, with a maximum of the differential cross section at a scattering angle of 26.4°. Therefore, the experimental study of the ionization yield and energy resolution of electron recoils with energies below 200 keV should be performed in the future.
Ionization yield and energy resolution as a function of the drift length
The ionization yield and the energy resolution as a function of the drift time for an applied electric field of 1 kV/cm are shown in Figures 7.17 and 7.18, respectively. The drift time (i.e. z) distribution was divided into 13 time intervals with an equal number of events per bin.
The pulse height for a given drift time is obtained from the 511 keV photoelectric peak of the energy spectrum in each selected slice. Each pulse height spectrum is fitted with a Gaussian function, and the collected charge and the energy resolution are deduced from the mean and standard deviation of the fit. As we can see, both the ionization charge yield and the energy resolution are constant with the drift length within the uncertainties. Therefore, the attenuation length correction of the experimental data successfully removes the dependence on the drift length. A systematic error of 3 % for the energy resolution and of 1 % for the measurement of the yield has been deduced from the accuracy of the Gaussian fit to the full-energy peak by changing the fit boundaries around the mean value. The systematic uncertainty is taken as the maximum difference between the most discrepant results.
Charge collection efficiency
Charge collection efficiency can be defined as the ability of the segmented anode to collect all the charge deposited by a single interaction, regardless of the number of triggered pixels in the anode. To verify that no charge is lost depending on the topology of the events, we calculated the position of the photoelectric peak for 511 keV single-cluster events as a function of the cluster multiplicity. From this study, we obtained a difference of less than 1 % in the collected charge between one- and four-pixel clusters. A slightly larger charge loss of 1.5 % was, however, observed for clusters with a multiplicity of 3. This difference is most likely due to a bias caused by the threshold level. Based on these results, no additional correction as a function of the cluster topology is required. In addition, no significant variations in the energy resolution were observed for the different multiplicity configurations.
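A sketch of this cross-check is given below; it reuses the illustrative fit_photopeak helper from the attenuation-length sketch and the cluster dictionaries introduced earlier, and it is not the code actually used for the thesis results.

```python
import numpy as np

def photopeak_vs_multiplicity(clusters, fit_photopeak):
    """Compare the fitted 511 keV photopeak position for clusters of
    multiplicity 1 to 4 and return the relative shift with respect to
    single-pixel clusters, in percent (assumes enough statistics in
    each multiplicity class)."""
    peaks = {}
    for m in (1, 2, 3, 4):
        charges = np.array([c["Q"] for c in clusters if c["multiplicity"] == m])
        peaks[m] = fit_photopeak(charges)
    ref = peaks[1]
    return {m: 100.0 * (q - ref) / ref for m, q in peaks.items()}
```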
Monte Carlo simulation of the response of XEMIS1 to 511 keV γ-rays
In order to have a complete understanding of the detector performance, a simulation of the detector response was carried out and compared to the experimental data. The simulation focuses on charge carrier transport in LXe and on the ionization signal formation. It is performed by means of a stand-alone code using ROOT. To reduce the computational time, each interaction point is simulated as a point-like cloud, i.e. the mean free path of the primary electrons is neglected and the energy is assumed to be deposited at a single point inside the detector. The goal of this study is to understand the effect of charge sharing between neighboring pixels and to estimate the spatial resolution of the detector. A point-like source simulation also allows an indirect study of the effect of the mean free path of the primary electrons and of the charge induction in the neighboring pixels.
The active zone of the TPC is simply defined as 12 × 2.5 × 2.5 cm³, with 64 pixels of 3.125 × 3.125 mm² defined to collect the simulated ionization signal. Neither the cathode nor the Frisch grid was included in the simulation. The distribution of the interaction position, z_0, follows an exponential decay with a characteristic linear attenuation length of 3.4 cm. In order to simulate the electron transport inside the LXe, energy fluctuations, the spread of the electron cloud due to lateral diffusion and electron attenuation are considered.
Electron transport simulation in LXe
A stand-alone code using ROOT has been developed during this thesis in order to simulate the transport of electrons inside the LXe and the signal formation in the segmented anode. This program takes into account the energy fluctuations arising from the charge density fluctuations along the track of the initial electron, which result in variable ionization densities. This is simulated by applying the Thomas and Imel model of charge recombination (see Section 2.2.1).
Each energy deposition is transformed into an electron charge cloud. The energy fluctuations predicted by the Thomas and Imel model are taken into account through Equation 2.7, where σ_E represents the energy resolution. The parameters of the model, ξ_0, ξ_1, a and b, were deduced from the combined fit of the charge yield and of the energy resolution as a function of the electric field, E, for an incident energy of E_p = 511 keV. The values of the parameters are: ξ_0 = (4.87 ± 0.51)/E, ξ_1 = (0.098 ± 0.008)/E, a = 0.207 ± 0.012 and b = 1.705 ± 0.04. E_0 was fixed to 4.2 eV, and E_1 and E_2 are deduced from the values of a and b, being 4.55 ± 0.73 keV and 16.16 ± 1.06 keV, respectively. The number of electrons per interaction is generated as a random number from a Gaussian distribution with a mean value given by the energy deposited in the interaction, E_p, and a standard deviation given by σ_E.
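In a very reduced form, this sampling step looks as follows. The sketch is schematic: σ_E is taken as an externally provided value (the Equation 2.7 parametrisation is not reproduced here), and the conversion to a number of electrons through a W-value of 15.6 eV is an assumption that ignores the recombination losses the full code folds into the calibration (27200 e- at 511 keV and 1 kV/cm).

```python
import numpy as np

rng = np.random.default_rng(seed=1)
W_EV = 15.6   # assumed W-value of LXe in eV; recombination not included here

def n_ionization_electrons(e_dep_kev, sigma_e_kev):
    """Draw the fluctuating deposited energy from a Gaussian of mean E_p
    and width sigma_E (Thomas-Imel parametrisation, Equation 2.7), then
    convert it to a number of electrons with the W-value."""
    e_fluct = rng.normal(e_dep_kev, sigma_e_kev)
    return max(int(round(e_fluct * 1000.0 / W_EV)), 0)
```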
The number of ionization electrons generated after the absorption of a 511 keV γ-ray in LXe is around 27200 at 1 kV/cm [START_REF] Aprile | Performance of a liquid xenon ionization chamber irradiated with electrons and gamma-rays[END_REF]. Therefore, transporting the charge along the TPC electron by electron requires a very long computational time. In order to reduce the simulation time, we performed a simplified Monte Carlo simulation. As discussed in Chapter 2, charge carrier transport is governed by the electron drift velocity in the medium and by the transverse diffusion, both of which depend on the applied electric field. Instead of drifting the electrons individually towards the anode, knowing the initial position of the interaction z_0 (t_drift = z_0/v_drift), the number of triggered pixels is deduced from the direct projection of the electron cloud onto the anode surface according to Equation 7.3:
f(x, y) = \frac{1}{4\pi D_T\, t_{drift}} \exp\!\left(-\frac{x^2 + y^2}{4 D_T\, t_{drift}}\right) \qquad (7.3)
where D_T is the transverse diffusion coefficient and t_drift is the electron drift time (t_drift = z_0/v_drift). The longitudinal diffusion along the drift direction is neglected. Each pixel with at least one collected electron is identified from the fraction of the Gaussian distribution that falls onto the pixel surface. Statistical fluctuations of the number of electrons per pixel are ignored. The electron drift time is calculated from the position of the interaction z_0 and the electron drift velocity for a given electric field strength. The number of electrons that reach the anode is attenuated according to Equation 7.1, in order to reproduce the electron attachment to electronegative impurities during the drift. The simulation of the electronics and readout system is carried out separately, in two steps, as reported in Section 4.4. In the first step, an 80-electron random noise signal is created for each pixel. Assuming a linear behavior of the electronics response, for each collected electron the output signal of the IDeF-X LXe readout chip is generated. Moreover, the parasitic signal reported in Section 6.4.2 is added to every waveform to account for its effect, especially for low-amplitude signals. The position of the maximum of each pulse is defined by the initial simulated position; the effect of the mean free path of the primary electrons is thus neglected. The resulting signal of a given pixel is proportional to the number of electrons collected by that pixel.
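The projection of Equation 7.3 onto the pixel grid amounts to a product of two one-dimensional Gaussian integrals over the pixel boundaries. A compact sketch is shown below; the array layout and function names are ours, while the default pitch, diffusion and drift velocity are the values quoted in this work.

```python
import numpy as np
from scipy.special import erf

def pixel_charge_fractions(x0, y0, z0, n_electrons,
                           pitch=3.125, n_pix=8, d_t=230e-3, v_drift=2.07):
    """Project a point-like electron cloud onto the pixelated anode.
    The lateral spread is Gaussian with sigma = d_t * sqrt(z0), with
    d_t in mm/sqrt(cm) and z0 in cm; the fraction falling on each pixel
    is the product of two 1-D Gaussian integrals evaluated with erf.
    Returns the per-pixel charge (rows = y, columns = x) and the drift
    time in microseconds (v_drift in mm/us)."""
    sigma = d_t * np.sqrt(z0)                              # mm
    edges = (np.arange(n_pix + 1) - n_pix / 2.0) * pitch   # pixel boundaries (mm)

    def cdf(e, mu):                                        # Gaussian CDF at edges
        return 0.5 * (1.0 + erf((e - mu) / (np.sqrt(2.0) * sigma)))

    fx = np.diff(cdf(edges, x0))                           # fraction per column
    fy = np.diff(cdf(edges, y0))                           # fraction per row
    charge = n_electrons * np.outer(fy, fx)                # electrons per pixel
    t_drift = 10.0 * z0 / v_drift                          # cm -> mm, then us
    return charge, t_drift
```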
The final step of the simulation consists of generating an output file with exactly the same characteristics as the experimental file obtained with the DAQ of XEMIS1. The waveforms of the 64 pixels are stored in a binary file. Both experimental and simulated data are treated with the same analysis code, with exactly the same initial conditions and offline treatment.
Simulation results and Comparison with experimental Data
In this section we present the results obtained with the simplified Monte Carlo simulation for a point-like 511 keV electron cloud uniformly distributed over the surface of a pixel placed at the center of the anode. The comparison of the experimental drift length distribution with the simulation is depicted in Figure 7.19. An exponential distribution of the charge cloud along the length of the TPC with a slope of 3.4 cm successfully reproduces the evolution of the photoelectric interaction points with distance observed in the experimental data. Figure 7.20 shows excellent agreement between the measured and simulated pixel multiplicity as a function of the drift length at 511 keV and 1 kV/cm, for different energy threshold levels. These results were obtained for a transverse diffusion of 230 µm √(z/cm), compatible with the published values [3]. The transport of electrons along the TPC was attenuated with an attenuation length of 1750 mm, similar to that measured experimentally. During the data treatment the same attenuation correction was applied to both simulated and measured signals. As expected, when the γ-ray interacts close to the Frisch grid, the electrons drift a short distance before being collected by the anode and thus the contribution of diffusion to charge sharing is small. The simulation also shows remarkable agreement in the fraction of single-pixel clusters as a function of the z-position for different threshold levels (see Figure 7.21). Close to the grid and for a pulse selection threshold of 4σ_noise, the fraction of full-energy-peak events collected by a single pixel is 54 %, whereas it increases to 60 % for a 10σ_noise threshold.
These results indicate that charge sharing between neighboring pixels can be described by a Gaussian spread that varies with the drift distance as √(z/cm). Other factors, such as the range of the primary electrons and the transport of fluorescence X-rays, can be neglected in the simulation at distances of at least 5 mm from the Frisch grid. Close to the Frisch grid, where the lateral diffusion of the drifting electrons becomes small, the size of the primary charge cloud may no longer be negligible, which means that there is a non-zero probability that the electron cloud will be collected by multiple pixels, increasing the cluster multiplicity. A further simulation including both effects should be performed in the future. The evolution of the time difference distribution (∆t) between the pixels of the same cluster with the collected charge has also been studied with the simulation. The results showed that the simulation of the parasitic signal is essential to reproduce the shape of the ∆t distribution for low-amplitude signals when all the pixels are read out by the same IDeF-X LXe ASIC (see Figure 5.32). If the baseline variation was not included, a zero time difference was obtained even for amplitudes close to the threshold level. Figure 7.22 shows the comparison of the time difference distribution with the prediction from the simulation. The distribution is presented as a function of the neighbor amplitude (A_neighbor), selected with respect to the pixel of reference, i.e. the pixel with the maximum collected charge in the cluster. The experimental data were measured with a 100 LPI Frisch grid located at 500 µm from the anode. Good agreement between simulated and experimental data is found for low-amplitude signals, whereas the simulation underestimates the time difference between pixels at amplitudes higher than 15 times the electronic noise (∼22 keV). These discrepancies are most likely due to the absence of indirect charge induction in the simulation, since the difference between data and simulation decreases at higher z-values. Similarly, the timing resolution as a function of the measured charge is shown in Figure 7.23. The simulation suggests a better timing resolution for all measured amplitudes, and agrees better with the experimental results for interactions that take place far from the Frisch grid. As reported in Chapter 5, indirect charge induction in non-collecting electrodes, due to the weighting potential crosstalk between adjacent pixels, plays an important role in the formation of the ionization signal in a pixelated detector. Charge sharing affects not only the timing distribution between the pixels of a cluster, but also the cluster multiplicity at distances close to the anode. A detailed simulation including all the above-mentioned effects should therefore be performed in order to fully understand the signal formation in the vicinity of the Frisch grid.
Position Reconstruction and Spatial Resolution
A cluster is defined as an interaction that is distributed over one or more pixels in the anode. Therefore, the reconstructed position of a cluster depends on the cluster's multiplicity. When a cluster is formed by just one triggered pixel, and assuming a uniform pixel response, the reconstructed position is given by the center of the pixel, and the transverse spatial resolution depends only on the effective pixel size s:
$\sigma_{x,y} = \frac{s}{\sqrt{12}} \qquad (7.4)$
For a pixel of 3.125 × 3.125 mm², the theoretical transverse spatial resolution is about 0.9 mm. If the charge is shared by more than one pixel, the cluster position is reconstructed from the center of gravity. With this method the cluster center is determined by weighting the position of every pixel of the cluster by the fraction of collected charge (Equations 6.4 and 6.5).
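For reference, a minimal sketch of the charge-weighted barycenter used here is given below. It is an illustrative implementation of the standard center-of-gravity estimator, assumed to correspond to Equations 6.4 and 6.5, which are not reproduced in this chapter.

```python
import numpy as np

def center_of_gravity(x, y, q):
    """Charge-weighted cluster position (barycenter) from the triggered pixels.

    x, y : pixel-center coordinates of the pixels in the cluster (mm)
    q    : collected charge per pixel (any common unit)
    """
    x, y, q = map(np.asarray, (x, y, q))
    return np.sum(q * x) / np.sum(q), np.sum(q * y) / np.sum(q)

# For a single-pixel cluster the position is the pixel center,
# with an intrinsic resolution of pitch / sqrt(12) (Equation 7.4).
```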
Figure 7.24 shows a histogram of the experimental reconstructed y-position obtained with the center of gravity method. The entries at around ±1.56, ±4.69, ±7.81 and ±10.94 mm represent the central positions of the pixels, which correspond to the reconstructed positions of single-pixel clusters. Due to the applied energy threshold for hit selection and the size of the electron cloud, a minimum distance between the reconstructed positions of single-pixel clusters and multiple-pixel clusters is naturally expected. The position of clusters with more than one triggered pixel should be reconstructed towards the edges of a pixel, with an almost uniform distribution over the entire surface. However, with the center of gravity method, the position of multiple-pixel events is mostly distributed around the center of the pixel with a reduced number of entries between two neighboring pixels. This result is a clear example of the fact that the center of gravity method provides a biased reconstructed position for multiple-pixel clusters.
Figure 7.24 - Reconstructed position along the y-coordinate using the center of gravity method for 511 keV events at 1 kV/cm and for all drift times.
In order to study the non-linearity effect observed for multiple-pixel clusters, and to estimate the transverse spatial resolution, we used the simulated data reported in the previous section. A point-like energy deposit is uniformly distributed over the four central pixels of the anode, within a total surface equal to the size of a pixel. The position of the interaction is uniformly simulated along the length of the TPC with a transverse diffusion of 230 µm √z (z in cm).
Figure 7.25 shows the residual distribution, defined as the difference between the position measured with the center of gravity method and the real position of the interaction (simulated position), y_cog - y_0, for an energy deposit of 511 keV and four different initial positions along the drift length of the TPC (z_0). The double peak structure is due to multiple-pixel clusters. For interactions close to the anode (z_0 = 10 cm), the fraction of multiple-pixel clusters is small and the distribution of the residuals is almost uniform due to the dominant contribution of single-pixel clusters. As z increases, the fraction of clusters with more than one triggered pixel increases due to diffusion and thus the spatial resolution improves, i.e. the residuals tend to zero. This effect becomes clearer in the residual distribution as a function of the reconstructed position presented in Figure 7.26. The two vertical lines at y = -1.56 mm and y = 1.56 mm correspond to clusters with only one triggered pixel. The position y = 0 corresponds to the center of the anode, located in the middle between the two central pixels. The pixels have a size of 3.125 × 3.125 mm². As we can observe, as the distance from the anode increases, the deviation of the residuals from zero decreases due to the increase of the charge sharing between adjacent pixels. However, even for interactions close to the cathode the reconstructed position differs from the simulated position. This supports the hypothesis that estimating the transverse position as the centroid of the triggered pixels is not accurate enough. Moreover, this effect depends on the electron cloud to pixel size ratio, being smaller for smaller pixel sizes. To improve the calculation of the position of multiple-pixel clusters along the x and y coordinates, we studied two possible correction methods.
Gaussian Correction
The center of gravity method, which improves the position reconstruction by weighting the position with the charge collected per pixel, assumes that the position can be obtained from a linear interpolation of the charge distribution between two neighboring pixels, i.e. the charge sharing between adjacent pixels is assumed to be uniform, and the collected charge per pixel is uniformly distributed over the surface of a pixel. However, due to the transverse diffusion that causes the spread of the electron cloud, the charge distribution is more accurately described by a Gaussian probability distribution. The correction from a linear distribution between adjacent pixels to a Gaussian distribution is applied to the experimental data. This correction is exclusively applied to those clusters with a maximum of two pixels in the x or y direction. Multiple-pixel clusters with more than two pixels along the same direction are not corrected and their position is determined by the center of gravity method. The position of single-pixel clusters is given by the center of the pixel.
The first step in the correction method consists of calculating the total collected charge per column or row, depending on whether the correction is applied along the x or y direction respectively. The pixels of the anode are grouped together in strips along the direction perpendicular to the one in which the correction is applied. The total charge per strip is calculated as the sum of all the individual charges per pixel above the energy threshold. A pixel hit contains the coordinate of the pixel in which the charge is collected. Assuming a linear system, where the probability of the highest amplitude pixel being to the left or to the right of the edge between two neighboring pixels should be equal, the center of the left pixel is taken as the position of reference. The charge collected per cluster is approximated by a Gaussian distribution with mean value at the center of the pixel of reference, and a standard deviation given by the lateral diffusion coefficient, which depends on the position of the interaction. The charge in each pixel can therefore be estimated from how much of the Gaussian distribution falls into each of the pixels (see Figure 7.27). The correction factor depends on the fraction of collected charge per pixel:
$\eta = \frac{Q_{right}}{Q_{right} + Q_{left}} \qquad (7.5)$
The fraction of charge collected by a pixel is calculated by integrating the Gaussian distribution over the size of the pixel, which is expressed through the error function Erf. The η distribution goes from 0, when all the charge is collected by the left pixel, to 1 in the case when no charge is collected by the left pixel and the whole signal is induced in the right pixel. Assuming that the size of the electron cloud is much smaller than the pixel size, the η distribution can be expressed in terms of the error function according to the following expression:
$\eta = \frac{1 + \mathrm{Erf}(x)}{2} \qquad (7.6)$
In addition, the correction function can be expressed in terms of the equivalent number of sigmas of the Gaussian distribution that falls in the left pixel:
$n_{\sigma} = \mathrm{Erf}^{-1}(2\eta - 1) \qquad (7.7)$
Finally, the corrected positions x_η and y_η are given by Equation 7.8, where x_IP is the inter-pixel position:
$x_{\eta} = x_{IP} - \sqrt{2}\, n_{\sigma}\, \sigma_{diffusion} \qquad (7.8)$
In Equation 7.8, σ_diffusion represents the transverse diffusion, estimated as 230 µm √z (z in cm). Figure 7.28 shows the experimental reconstructed y-position after the η correction. No entries are reconstructed at a certain distance from the center of the pixels due to the energy threshold effect. Moreover, the position distribution appears more uniform between pixels. This shows that charge sharing mainly occurs across the pixel borders, whereas for positions reconstructed at the pixel center no charge sharing occurs. The reconstructed position after correction is in better agreement with the expected positions. Even though Figure 7.28 is for the y-coordinate, the correction procedure is also applied to the x-coordinate. The accuracy of the Gaussian correction can be studied by means of the simulation. The residual distribution as a function of the y-position is depicted in Figure 7.29. Compared to Figure 7.26, we can see a clear improvement of the reconstructed position, with a flatter distribution around zero. This translates into an improvement of the position resolution. A small non-linear effect is however still present between pixels, most likely due to those events with more than two pixels along the same direction, which are not corrected.
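A compact sketch of the η correction defined by Equations 7.5-7.8 is given below. It is an illustrative Python implementation, not the analysis code of the thesis: the diffusion coefficient of 230 µm √z is taken from the text, the clipping of η away from 0 and 1 is a numerical safeguard added here, and the sign convention simply follows Equation 7.8 as written (it depends on the chosen left/right and axis orientation).

```python
import numpy as np
from scipy.special import erfinv

SIGMA0 = 0.230   # transverse diffusion coefficient, mm per sqrt(cm)

def eta_corrected_position(q_left, q_right, x_interpixel, z_cm):
    """Gaussian (eta) correction for a cluster sharing charge between two pixels.

    Implements Equations 7.5-7.8: the charge fraction eta is inverted through the
    error function to obtain the number of sigmas of the Gaussian cloud lying on
    one side of the inter-pixel border, which fixes the corrected position.
    """
    eta = q_right / (q_right + q_left)                  # Eq. 7.5
    eta = np.clip(eta, 1e-6, 1 - 1e-6)                  # avoid infinities at eta = 0 or 1
    n_sigma = erfinv(2.0 * eta - 1.0)                   # Eq. 7.7
    sigma_diff = SIGMA0 * np.sqrt(z_cm)                 # transverse diffusion width (mm)
    # sign as written in Eq. 7.8 (flips with the left/right and axis conventions)
    return x_interpixel - np.sqrt(2.0) * n_sigma * sigma_diff
```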
Polynomial Correction
The Gaussian correction improves the reconstruction of the cluster positions with respect to the center of gravity method. However, this technique is only applicable to clusters with a multiplicity of two along the same direction. In an attempt to improve the spatial resolution, another correction method has been tested. Figure 7.30 shows the mean of the residuals obtained by simulation, as a function of the reconstructed position using the center of gravity method. The error bars represent the RMS value. The distribution is fitted by a third-degree polynomial, so the cluster position can be directly deduced from Equation 7.9, where p_0, p_1 and p_2 are the parameters of the fit:
$x_c = x_{cog} - p_0\, x - p_1\, x^2 - p_2\, x^3 \qquad (7.9)$
The effectiveness of this correction method is illustrated in Figure 7.31. A zero value of the residuals is obtained for multiple-pixel clusters whose charge distribution falls close to the inter-pixel position. However, when the charge ratio between pixels is large, i.e. most of the charge is collected by one of the pixels while almost no signal is collected by the other pixels of the cluster, this method is not able to properly reconstruct the position of the cluster. Nevertheless, a better spatial resolution is expected with respect to the center of gravity method.
Because the transverse spatial resolution depends on the energy (the higher the energy, the larger the number of electrons in the electron cloud and thus the higher the probability of firing more than one pixel) and on the source position (the transverse diffusion is proportional to the square root of the distance between the interaction point and the anode), we studied the polynomial correction for different positions of the source inside the chamber and different energies. Figure 7.32 shows the mean values of the center of gravity residual distribution as a function of the measured position, obtained for multiple-pixel clusters and six different z positions along the drift length. As we can see, a flatter distribution is obtained with increasing z. This effect is directly related to the spread of the electron cloud due to the lateral diffusion. However, even at high z-values the center of gravity method does not reconstruct the position of the clusters well. A different correction is therefore required as a function of the position of the interaction with respect to the anode. On the other hand, no significant variations were observed as a function of the energy (see Figure 7.33).
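The polynomial correction can be summarized by the following hedged sketch: the center-of-gravity residuals are fitted, in a given bin of drift length, by the third-degree polynomial of Equation 7.9 and then subtracted from the reconstructed position. The least-squares fit shown here is an assumption about the fitting procedure; the function and variable names are illustrative.

```python
import numpy as np

def fit_polynomial_correction(x_cog, x_true):
    """Fit the center-of-gravity residual versus reconstructed position with a
    third-degree polynomial without constant term (Eq. 7.9), typically done
    separately in bins of drift length since the correction depends on z."""
    x_cog, x_true = np.asarray(x_cog), np.asarray(x_true)
    design = np.vstack([x_cog, x_cog**2, x_cog**3]).T      # residual = p0*x + p1*x^2 + p2*x^3
    p, *_ = np.linalg.lstsq(design, x_cog - x_true, rcond=None)
    return p                                               # (p0, p1, p2)

def apply_polynomial_correction(x_cog, p):
    """Corrected cluster position x_c = x_cog - p0*x - p1*x^2 - p2*x^3 (Eq. 7.9)."""
    p0, p1, p2 = p
    return x_cog - p0 * x_cog - p1 * x_cog**2 - p2 * x_cog**3
```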
Position Resolution
A non-linear interpolation between two adjacent pixels seems a more accurate way to reconstruct the position of an interaction inside the chamber. The spatial resolution is defined as the RMS of the 1D distribution of the residuals. Figure 7.34 shows the results obtained with the center of gravity method and the two correction methods presented in this chapter along the transverse y-axis. The results were obtained for a pixel size of 3.125 × 3.125 mm². The position resolution is significantly improved after the correction of the reconstructed position of multiple-pixel clusters. Moreover, even though the polynomial correction gives a flatter distribution of the residuals around zero, since it is not able to properly reconstruct those interactions whose barycenter lies close to the center of a pixel, the Gaussian correction provides, on average, a better position resolution. In addition, since the lateral diffusion term already takes into account the position of the interaction along the drift length, the Gaussian correction is easier to apply than the polynomial correction. After correction, a spatial resolution of the order of 500 µm is obtained for interactions at 10 mm from the anode, whereas a resolution of 100 µm is deduced close to the cathode. The effective pixel size becomes approximately 1.73 mm and 350 µm, respectively, for multiple-pixel clusters. Please note that not all possible effects, such as the contribution of the electron range, are included in the simulation. These results show that larger cluster sizes provide a better position resolution, since the charge sharing between pixels allows a more accurate determination of the cluster position.
Figure 7.35 shows the same results for different energy values after the Gaussian correction. The position resolution improves as the energy increases, since the fraction of multiple-pixel clusters increases with the number of generated electrons. An RMS value of 700 µm was measured for a 50 keV energy deposit close to the anode, compared to 500 µm measured for the same energy but at 6 cm. We can conclude that for a pixel size of 3.125 × 3.125 mm² a spatial resolution of less than 1 mm, along both the x and y directions, is expected regardless of the deposited energy and the position of the interaction.
During the course of this thesis, two different pixel sizes have been tested: 3.125 × 3.125 mm² and 3.5 × 3.5 mm². As the pixel pitch increases, the position resolution degrades. For a pitch of 3.5 mm we measured a resolution of ∼200 µm for a simulated energy of 511 keV close to the cathode (6 cm), which is around 50 % worse than the one obtained for the 3.125 mm pixel size. This difference decreases as the distance from the anode decreases, being of the order of 10 % at 1 cm.
LOR Reconstruction
To prove the feasibility of the 3γ imaging reconstruction algorithm with XEMIS1, we modified the experimental set-up presented in Chapter 6. To measure the third photon of energy 1274 keV emitted by the 22Na source we used the experimental set-up described in Section 7.4.2.
The BaF2 crystal gives temporal information on the 511 keV events but it does not provide information about the position of the interaction. For this reason, we reconstruct the point of intersection using a virtual LOR traced at the position of the radioactive source, which is located at about 13 cm from the center of the TPC. For each event, a virtual LOR is generated with a fixed angle θ and a random angle φ to enhance the intersection between the LOR and the cone. For the LOR reconstruction in XEMIS2, please refer to [Grignon; Hadi].
Cone Reconstruction
The next step in the 3γ reconstruction process is to build a cone with the 1274 keV γ-ray. A cone is defined only for those events with at least two measured clusters. In the case of XEMIS1, and for an energy selection threshold of 4σ_noise (6 keV), most of the events end up with only one measured interaction. This result is mostly related to the small dimensions of the active zone of the detector. For a large fraction of the multi-site events, i.e. a Compton scattering followed by a photoelectric absorption, one of the interactions deposits its energy outside the fiducial volume. This leads to reduced statistics for the final position reconstruction. Among the retained multiple-cluster events, the fraction of events with three or more interaction points is very small.
As discussed in Chapter 1 (Section 1.3.6), the aperture angle of the cone is obtained from the Compton kinematics (Equation 1.18), whereas the axis of the cone is defined by the line joining the first and second interaction points of the 1274 keV photon (see Figure 7.37). One of the major difficulties of the method is, in fact, the identification of the tracking sequence. Both the aperture angle and the apex of the cone depend on the deposited energy and position of the first interaction vertex, respectively, which means that a correct identification of the first and second interactions is mandatory to perform the Compton image reconstruction. Due to the complexity of the interaction point identification, as a first approximation, two different cones are reconstructed for each selected group composed of two different clusters. If the group is composed of more than two clusters, all possible combinations of two hits are considered. Some criteria based on the Compton kinematics can also be applied for hit identification. A cone is accepted if the Compton kinematics is respected. Otherwise, the cone is directly rejected from the reconstruction process. If more than one combination is a valid candidate, they can be evaluated through a statistical hypothesis test based on a chi-square, χ², test. The most likely first hit in a tracking sequence is the one that gives the smallest χ² value. However, there are some limitations directly associated with the Compton kinematics. In certain cases, the two possible orderings of the first and second interactions are equally valid, producing an ambiguity in the hit selection. The aperture angle for which a tracking ambiguity may exist depends in fact on the energy of the incoming γ-ray. In order to minimize the presence of ambiguities in the Compton sequence reconstruction, very good energy and spatial resolutions are required.
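A minimal sketch of the cone-building step is shown below. The aperture angle is computed from the standard Compton kinematics, which is assumed here to be equivalent to Equation 1.18 of Chapter 1; only the kinematic validity check described above is implemented, not the χ² ranking of ambiguous orderings. Function names and the cluster format are illustrative.

```python
import numpy as np

ME_C2 = 511.0   # electron rest energy in keV

def compton_cos_theta(e0_kev, e1_kev):
    """Cosine of the Compton scattering angle for an incoming photon of energy e0
    depositing e1 in the first interaction (standard Compton kinematics).
    Returns None if the ordering is kinematically forbidden."""
    e_scattered = e0_kev - e1_kev
    if e_scattered <= 0:
        return None
    cos_theta = 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e0_kev)
    return cos_theta if -1.0 <= cos_theta <= 1.0 else None

def candidate_cones(clusters, e0_kev=1274.0):
    """Build both orderings of every two-cluster combination and keep the ones
    allowed by Compton kinematics. Each cluster is (x, y, z, energy_keV)."""
    cones = []
    for i in range(len(clusters)):
        for j in range(len(clusters)):
            if i == j:
                continue
            first, second = clusters[i], clusters[j]
            cos_t = compton_cos_theta(e0_kev, first[3])
            if cos_t is None:
                continue                       # ordering rejected by kinematics
            apex = np.array(first[:3], dtype=float)
            # axis pointing back along the scattered photon path: directions from
            # the apex toward possible source positions make an angle theta with it
            axis = apex - np.array(second[:3], dtype=float)
            cones.append((apex, axis / np.linalg.norm(axis), cos_t))
    return cones
```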
Cone-LOR Intersection
The position of the radioactive source is finally calculated from the intersection point between the Compton cone and the LOR. A schematic diagram of the cone-LOR intersection process is presented in Figure 7.38. For each event, all the intersections between the LOR and the candidate cones are calculated. The cone-LOR intersection is also useful for cone selection: those cones that do not intersect the LOR are directly rejected. Moreover, those intersection points that lie outside a defined FOV are also removed from the final 3D position reconstruction. Finally, only events with one validated intersection point are kept. Otherwise, the event is dismissed.
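Geometrically, the cone-LOR intersection amounts to solving a quadratic equation for the LOR parameter. The sketch below is one possible implementation, written for illustration and not taken from the thesis software; the rejection of the unphysical nappe of the double cone and of degenerate configurations follows the usual treatment.

```python
import numpy as np

def cone_lor_intersections(apex, axis, cos_theta, p0, u):
    """Intersection points between a LOR (p0 + t*u, u a unit vector) and a Compton
    cone (apex, unit axis, half-angle theta). Solves the quadratic obtained from
    ((x - apex).axis)^2 = cos^2(theta) * |x - apex|^2 and keeps only the solutions
    lying on the physical half-cone."""
    d = p0 - apex
    c2 = cos_theta**2
    a = np.dot(u, axis)**2 - c2
    b = 2.0 * (np.dot(u, axis) * np.dot(d, axis) - c2 * np.dot(u, d))
    c = np.dot(d, axis)**2 - c2 * np.dot(d, d)
    disc = b * b - 4.0 * a * c
    if abs(a) < 1e-12 or disc < 0:
        return []                               # no (or degenerate) intersection
    roots = [(-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a)]
    points = []
    for t in roots:
        x = p0 + t * u
        # keep the nappe whose opening matches the sign of cos(theta)
        if np.dot(x - apex, axis) * np.sign(cos_theta) >= 0:
            points.append(x)
    return points
```

Events whose surviving intersection points fall outside the chosen FOV, or which yield more than one validated point, would then be discarded as described above.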
One of the advantages of the 3γ imaging technique compared to other functional imaging techniques is the rejection of those events in which at least one of the emitted photons undergoes a scattering process before reaching the detector. Photon scattering is in fact one of the major drawbacks of PET imaging. In the 3γ imaging technique, the cone-LOR reconstruction is a natural method to reject an important fraction of these scattered events.
Angular Resolution
As discussed in Section 1.3.6, the precision on the intersection point (∆L in Figure 7.38) depends on many factors, such as the spatial resolution of the two first interaction points, the distance between the two vertices, the energy resolution of the first interaction point and the angle between the LOR and the cone surface. Certain selection cuts may be directly applied to these parameters in order to increase the resolution along the LOR. For example, a cut on the scattering angle can be implemented to enhance the angular resolution. As shown in Figure 1.25, a good angular resolution is expected for scattering angles between ∼ 10° and 60°, which translates into an interval of the deposited energy of the first hit between 40 keV and 610 keV. Moreover, a minimum 3D separation between the vertices of about 1.5 cm can be imposed in order to reduce the contribution of the spatial resolution to ∆L.
A more sophisticated estimation of the error on the localization of the intersection point, σ_I, can be numerically computed using error propagation. Every contribution to the error is evaluated by an infinitesimal variation of each measured parameter:
$\sigma_I = \sqrt{\sum_i \left( \frac{\Delta I(v_i)}{\Delta v_i}\, \sigma_{v_i} \right)^2} \qquad (7.10)$
where I represents the intersection point and v_i runs over the measured parameters of the first and second interactions of the Compton sequence, v_i ∈ {x_i, y_i, z_i, E_i}. For each variation, a new cone is reconstructed and a new intersection point is calculated. No correlation between the measured parameters is assumed in the calculation. Moreover, we assumed that the uncertainty in the LOR reconstruction is negligible. A detailed study of the spatial resolution along the LOR was performed in [Hadi] using simulated data. The results showed that σ_I can indeed be considered as a good estimator of the spatial resolution along the LOR, ∆L, with an almost linear relationship between both parameters. The resolution along the LOR is directly related to the angular resolution, which in turn is one of the most important parameters characterizing the performance of the 3γ imaging technique. The difference between the real position of the source, given by the known incident direction of the γ-ray, and the reconstructed point defines an angle α (see Figure 1.24). This angle is directly related to the resolution along the LOR. A first estimation of the angular resolution has been performed with XEMIS1. Figure 7.39 shows the α-distribution obtained for an electric field of 0.75 kV/cm. An angular resolution of 4° has been measured, which implies a resolution along the LOR smaller than 10 mm for a source placed 5 cm away. Unfortunately, the design of XEMIS1 is not optimal to perform an accurate measurement of the angular resolution. Due to the small dimensions of the active zone of the chamber, the number of events with an adequate topology for the Compton sequence reconstruction is extremely reduced. Moreover, for those events with at least two interactions inside the fiducial volume, the angular resolution is limited by the distance between the interactions. For this reason, no significant variations have been observed in the angular resolution with the applied electric field. A detailed simulation of the response of XEMIS2 shows very promising results for the angular resolution and sensitivity of the detector [Hadi]. Therefore, considering the excellent performance of the detector in terms of energy and spatial resolution, we expect to obtain a very good angular resolution with XEMIS2.
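A hedged numerical implementation of Equation 7.10 is sketched below: each measured parameter of the two interaction vertices is varied by a small step, the full cone and cone-LOR reconstruction (wrapped in a user-supplied callable, assumed here) is repeated, and the resulting shifts of the intersection point are summed in quadrature with the measurement uncertainties. Correlations between parameters and the uncertainty on the LOR are neglected, as in the text; how the per-coordinate result is combined into a single σ_I (for example by projecting along the LOR) is left as an assumption.

```python
import numpy as np

def sigma_intersection(reconstruct, params, sigmas, eps=1e-3):
    """Numerical propagation of the measurement uncertainties to the cone-LOR
    intersection point (Eq. 7.10).

    reconstruct : callable mapping the parameter vector (x1, y1, z1, E1, x2, ...)
                  to the reconstructed intersection point (3-vector); assumed to
                  wrap the full cone construction and cone-LOR intersection.
    params      : measured parameters of the two Compton interactions
    sigmas      : 1-sigma uncertainties on those parameters
    """
    params = np.asarray(params, dtype=float)
    i0 = np.asarray(reconstruct(params))
    var = np.zeros(3)
    for k, (p, s) in enumerate(zip(params, sigmas)):
        step = eps * max(abs(p), 1.0)
        shifted = params.copy()
        shifted[k] = p + step
        dI = (np.asarray(reconstruct(shifted)) - i0) / step   # finite-difference dI/dv_k
        var += (dI * s) ** 2                                  # quadratic sum, no correlations
    return np.sqrt(var)        # per-coordinate uncertainty of the intersection point
```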
Conclusions of Chapter 7
In this chapter we have presented and discussed the results obtained with XEMIS1 for the performance characterization of the detector. We have addressed all the main aspects related to the development of a Compton camera dedicated to 3γ medical imaging. A full understanding of the properties of LXe as a γ-ray detection medium is essential for the development of a larger scale detector, where some additional challenges will be present. For this reason, in this chapter, we have tried to study the main features related to electron transport and ionization signal extraction in a LXe TPC.
A fundamental requirement for a LXe TPC is that the electrons produced by ionization must travel undisturbed over relatively long distances inside the detector. For this reason, the concentration of electronegative impurities diluted in the medium should be reduced to very low levels. With the purification and circulation systems used in XEMIS1, attenuation lengths higher than 1 m are achieved after one week of circulation. The correction of the collected charges for attenuation is essential to improve the spectral performance of the detector. To this end, we showed that after correction a constant collected charge is measured along the length of the detector within the uncertainties.
A timing resolution of 44.4 ± 3.0 ns for 511 keV photoelectric events has been estimated from the drift length distribution, equivalent to a spatial resolution of 100 µm along the z-axis. To determine the beginning and the end of the TPC, two effects have been identified and taken into account. The range of the primary electrons in the LXe introduces a shift in the measured position along the z-axis. A Monte Carlo simulation showed that this deviation is of the order of 100 µm at 511 keV, and that it varies with the energy and the incoming angle of the γ-ray. Moreover, the extension of the electron cloud introduces a bias in the position of the ends of the TPC, which was considered for the determination of the total length of the chamber. Using the measurement of the drift time, we have determined the electron drift velocity as a function of the electric field. At 1 kV/cm we obtained a drift velocity of 2.07 mm/µs, which is consistent with the reported values [Aprile; Ichige].
The electric field and energy dependence of the ionization charge yield is a complicated issue that has been addressed by many authors in the last few decades, and unfortunately it is still not fully understood. A realistic model of charge recombination proposed by Thomas and Imel points in the right direction to explain the worse intrinsic energy resolution of LXe with respect to the Fano limit and Poisson expectations. In this chapter, we have studied the evolution of the energy resolution and ionization charge yield with the applied electric field, the drift length and the γ-ray energy. The collected charge increases with the applied electric field and the γ-ray energy. These effects are directly related to fluctuations in the recombination rate along the track of the primary electrons. The obtained results are consistent with those published by other authors [Aprile], showing some discrepancies with the theoretical model of Thomas and Imel. Moreover, we observed a non-linear response of the collected charge with the energy, which is more significant at lower field strengths.
The non-linearity of the ionization charge yield with the energy is an important subject that should be studied in more detail in the future. A small deviation of around 1 % was found at high energies (1274 keV). However, a higher deviation of tens of percent is expected at low energies (∼ 30 keV), which may impact the Compton sequence reconstruction. It should be pointed out that the calibration presented in this chapter was made for γ-rays as incident particles. The non-linear response of the ionization and scintillation yields in LXe with respect to the energy results in a variation in the number of collected electrons depending on the type of interaction. The X-ray emission from the K-shell, with a branching ratio of 85 %, implies that the number of ionization electrons produced after a photoelectric absorption differs from the one produced after a Compton scattering. This is especially important for the Compton sequence reconstruction. The calibration of XEMIS for electronic recoils, in particular at energies below 200 keV, should be performed in the future to extend the understanding of the response of the detector.
We also observed an improvement of the energy resolution with the electric field strength and the energy of the γ-ray. Both effects are also related to the electron-ion recombination rate. For an electric field of 2.5 kV/cm we measured an energy resolution of 3.9 % (σ/E) for 511 keV γ-rays. For an energy of 1274 keV and an electric field of 1.5 kV/cm, an energy resolution of 2.85 % (σ/E) is achieved.
The cluster multiplicity has been studied as a function of the drift length for 511 keV events at 1 kV/cm. We showed that when photons interact very close to the Frisch grid, the generated electrons are mostly collected by a single pixel, whereas as the electron drift time increases, the number of triggered pixels per cluster also increases. These results are consistent with the spread of the electron cloud as the electrons drift towards the anode due to the transverse diffusion. The evolution of the cluster multiplicity with the z-position has been successfully reproduced by a Monte Carlo simulation of the transport of electrons inside the TPC. The variation of the number of triggered pixels per cluster and the fraction of single-pixel clusters are remarkably well reproduced by the simulation as a function of the energy selection threshold. These results indicate that charge sharing between neighboring pixels can be easily described by a Gaussian spread that varies with the drift distance as √z (z in cm). The time difference distribution of the pixels of the same cluster is, however, not fully reproduced by the simulation. Indirect charge induction in non-collecting electrodes due to the inhomogeneities of the weighting potential between adjacent pixels, the range of the primary electrons and the X-ray emission should be included in a future simulation to account for other effects that affect the process of signal induction in a detector with the characteristics of XEMIS.
The presented simulation has also helped to estimate the transverse spatial resolution. The spatial resolution of a pixelated detector is limited by single-pixel clusters, for which the position is irremediably reconstructed at the center of the pixel. The position of multiple-pixel clusters is, in general, reconstructed by means of the center of gravity method, which uses the collected charge per pixel to weight the pixel positions. However, this method is not a good estimator of the real position, since the size of the pixels is bigger than the electron cloud. To improve the spatial resolution along the x and y axes, two correction methods have been tested with the simulation. The Gaussian correction method, which assumes a Gaussian distribution of the charge cloud, provides better results. For an anode segmented in pixels of size 3.125 × 3.125 mm², we found a position resolution of the order of 100 µm for an energy deposit of 511 keV at 6 cm from the anode. The spatial resolution degrades for low energies and small z-positions. We conclude that for a pixel size of 3.125 × 3.125 mm² a spatial resolution of less than 1 mm, along both the x and y directions, is expected regardless of the deposited energy and the position of the interaction.
Finally, we presented the Compton reconstruction algorithm used to triangulate the position of the source in 3D. This algorithm was originally developed for simulation purposes, showing very promising results [Grignon; Hadi], but we have shown that it can be successfully applied to experimental data. An angular resolution of 4° is measured for an electric field of 0.75 kV/cm. Despite the constraints of XEMIS1 for performing the Compton sequence reconstruction, we have shown the potential of the 3γ imaging technique.
Conclusion and Outlook
The rapid evolution of the technologies associated with the development of liquid xenon-based detectors in the past few years has made this fascinating material one of the principal choices as radiation detection medium in many state-of-the-art experiments in the fields of particle physics, astrophysics and medical imaging. The use of liquid xenon for medical applications is not new. In fact, the first attempts to develop a device for functional medical imaging date back to the 1970s, and from the very beginning the potential of liquid xenon as detection medium was clearly revealed.
In the context of modern medicine, new challenges continuously arise with the same purpose of increasing the quality of life and welfare of the patients. For example, in recent years, the radiation exposure of patients during a medical exam has become a hot topic. It is around this subject that many groups from around the world pursue efforts to improve and develop new technologies in nuclear medicine. The 3γ imaging technique is a clear example of an innovative imaging modality, whose main purpose is to reduce the activity injected to the patient to unprecedented limits. The development of the 3γ imaging technique requires the collaboration between different research fields, from the development of a new detector system and the technologies associated with the use of liquid xenon, through new radiopharmaceuticals labeled with Sc-44, to the development of new reconstruction algorithms providing a direct 3D reconstruction of the distribution of the source inside the patient. The fundamentals of the 3γ imaging technique are based on the reconstruction of the position of the radioactive source by applying Compton scattering kinematics to the three γ-rays emitted by a specific 3γ-emitter radionuclide. Compton imaging has proven to be an extremely useful tool in γ-ray spectroscopy, and it also seems a perfect candidate for medical applications.
The experimental demonstration of the potential of 3γ imaging has been performed with a small-dimension Compton camera, XEMIS1, which holds 30 kg of liquid xenon. In this thesis, we have presented, studied and discussed the main performances of XEMIS1. This work is mainly focused on the detection of the ionization signal in liquid xenon and on the optimization of the device. To this end, we have described in Chapter 2 the main steps related to the production, transport and detection of the ionization electrons in liquid xenon. The discussion is supported by experimental results reported by other authors as well as by our own. The studies presented here are complementary to those performed by T. Oger, Grignon and Hadi. A detailed description of the experimental characteristics of XEMIS1 is carried out in Chapter 3. The basic cryogenic stages involved in setting up the detector before a data-taking run, such as xenon liquefaction and the liquid xenon purification and circulation processes, are also presented in this chapter.
The main challenge associated with the development of a detector of the characteristics of XEMIS is that it requires information about the deposited energy, position and drift time of every individual interaction inside the detector with very good energy and spatial resolutions, both of them necessary for the Compton sequence reconstruction. These criteria demand an extremely low electronic noise. We have achieved an electronic noise below 100 electrons thanks to an ultra-low noise front-end electronics, IDeF-X LXe, which has been adapted to work at the liquid xenon temperature. The ASIC shows good performances in terms of gain linearity (in the energy range up to 2 MeV), electronic noise and baseline stability. A detailed study of the response of the IDeF-X LXe chip is presented in Chapter 4. In addition, we presented the simulation of the output signal of the IDeF-X LXe chip, that includes a precise simulation of the electronic noise. The simulated data have been extremely useful to study the optimal threshold level for the data acquisition and to optimize the measurement of the amplitude and drift time of the registered signals. A constant fraction discriminator provides the best values in terms of timing and amplitude resolutions. These results have been used in the development of a new front-end electronics called XTRACT. This new ASIC is especially developed to measure the charge and time of the ionization signals in the second prototype of a liquid xenon Compton camera developed for preclinical applications.
The main characteristics of this new camera, called XEMIS2, are also presented in Chapter 3. This new prototype is a monolithic cylindrical liquid xenon camera that holds 200 kg of liquid xenon. The geometry of XEMIS2 is optimized to provide a full coverage of the small animal thanks to its 24 cm axial field of view. To detect both the ionization signal and the VUV scintillation photons produced in the liquid xenon after the interaction of ionizing radiation, the active volume of the detector is covered by 380 1-inch PMTs and two segmented anodes at both ends, with a total number of 24000 pixels. To meet the requirements of a detector designed for future clinical applications in hospital centers, particular emphasis was placed on the development of a very compact liquid xenon cryogenic infrastructure and a fast data acquisition system. The work presented in this document has contributed to substantial advancements in our understanding of the detector performances, which has led to the final design and construction of this second prototype.
One of the most delicate parts of the experimental configuration of XEMIS is the Frisch grid. It is obvious that the incorporation of a grid between the cathode and the anode is necessary to remove the position dependence of the collected signals. However, some less obvious effects are associated with the use of a gridded ionization chamber. Charge loss is one of the major constraints that may limit the spectroscopic performance of a detector. During the development of XEMIS, all the aspects related to the loss of collected charge have been minimized. Electron attachment to electronegative impurities has been reduced thanks to an advanced purification system presented in Chapter 3. Attenuation lengths higher than 1 m are achieved after a short circulation period, which translates into a good charge collection uniformity along the drift length after correction. The method used to estimate the attenuation length is presented in Chapter 7. The loss of charge carriers due to recombination can be reduced by increasing the applied electric field. During this work, the performances of XEMIS1 have been tested for different electric field strengths from 0.25 kV/cm to 2.5 kV/cm. Another aspect that affects the charge collection is the transparency of the Frisch grid to electrons. A good electron transparency requires an adequate biasing of the electrodes. However, this depends not only on the experimental configuration of the TPC, i.e. the gap distance between the grid and the anode, but also on the geometrical characteristics of the grid. The pitch of the grid as well as its thickness affects the required voltages. Larger gaps require higher bias voltages to obtain the same electron transparency conditions. On the contrary, a smaller electric field ratio between the drift region and the gap is necessary with a more open grid. A more detailed description of the electron transparency of a Frisch grid and the experimental results obtained with XEMIS1 are presented in Chapter 5.
The imperfect shielding of the Frisch grid causes electrons to start inducing a signal on the anode before they actually pass through the grid. This affects the shape of the processed signals and introduces a certain dependence of the pulse shape on the position of the interaction. Better performances in terms of efficiency are achieved for larger gap distances and smaller pitch grids. The requirements necessary to reduce the inefficiency of the Frisch grid are opposed to those necessary to increase the electron transparency. Therefore, an exhaustive study is required to achieve the best possible compromise between both effects. The results presented in Chapter 5 provide experimental evidence of the impact of the Frisch grid on the collected signals. The study has been performed on four different kinds of meshes and two different gaps.
We presented a simulation study that helped to understand the effect of charge induction on the non-collecting pixels due to the weighting potential cross-talk between neighboring pixels. We showed that the transient signal induced on the adjacent pixels introduces a bias on the amplitude and time of the shaped signals. This effect becomes more important for pixel sizes that are small compared to the gap distance. The results obtained by simulation confirm the time difference observed between the triggered pixels of the same cluster in the real data. This time difference considerably hinders the clusterization process and increases the possibility of mixing different interaction vertices. Thanks to this detailed study of the Frisch grid, some issues related to the experimental set-up of XEMIS1 have been brought to light, which has contributed to the optimization of the future detector XEMIS2.
In Chapter 6 we have introduced the experimental set-up used to measure the 511 keV γ-rays emitted from a low activity 22Na source for the performance characterization of XEMIS1. We have presented a precise study of the noise, with the aim of correcting the raw data for the DC offset and setting the optimal threshold level for pulse finding. The noise and pedestal calibration is made pixel by pixel. A very low threshold level, of the order of four times the value of the noise, is set on the pedestal-corrected signals to measure very small charge deposits in the detector. We presented the event reconstruction algorithm used to regroup those signals that come from the same interaction point but are collected by more than one pixel. A study to determine the optimal time window to match two different signals in the same cluster has been carried out. The results showed that an amplitude-dependent time window is the best option to avoid the misassociation of signals that leads to energy and spatial resolution degradations.
Finally, in Chapter 7 we have presented and discussed the results obtained with XEMIS1 for the performance characterization of the detector. We have addressed all the main aspects related to the development of a Compton camera dedicated to 3γ medical imaging. A timing resolution of 44.3 ± 3.0 ns for 511 keV photoelectric events has been estimated from the drift length distribution, equivalent to a spatial resolution along the z-axis of the order of 100 µm. To determine the beginning and the end of the TPC, two effects have been identified and taken into account. The range of the primary electrons in the LXe introduces a shift in the measured position along the z-axis. A Monte Carlo simulation showed that for 511 keV γ-rays this deviation is of the order of 100 µm, and that it decreases with decreasing energy.
Moreover, the extension of the electron cloud introduces a bias in the position of the ends of the TPC. Both effects were considered in the determination of the total length of the chamber, which was used to determine the electron drift velocity as a function of the electric field. At 1 kV/cm we obtained a drift velocity of 2.07 mm/µs.
We studied the electric field and energy dependence of the ionization charge yield and energy resolution for 511 keV γ-rays. The collected charge increases with the applied electric field and the γ-ray energy. We also observed an improvement of the energy resolution with the electric field strength and the energy of the γ-ray. These effects are directly related to fluctuations in the recombination rate along the track of the primary electrons. For an electric field of 2.5 kV/cm, we measured an energy resolution of 3.9 % (σ/E). For an energy of 1274 keV and an electric field of 1.5 kV/cm, an energy resolution of 2.85 % (σ/E) is achieved. We observed a non-linear response of the collected charge with the energy, which is more significant at lower field strengths.
The results of the cluster multiplicity as a function of the drift length for 511 keV events at 1 kV/cm and different threshold levels are consistent with the spread of the electron cloud as the electrons drift towards the anode due to the transverse diffusion. These results indicate that charge sharing between neighboring pixels can be easily described by a Gaussian spread that varies with the drift distance as √z (z in cm). This dependence has been successfully reproduced by a Monte Carlo simulation of the electron transport inside the TPC. The simulation has also been used to estimate the transverse spatial resolution of the detector. As a first approximation, the reconstructed position of clusters with more than one fired pixel is deduced from the center of gravity method. However, this technique is not a good estimator of the real position, since the size of the pixels is bigger than the electron cloud. To improve the spatial resolution along the x and y axes, two correction methods have been tested. The Gaussian correction method, which assumes a Gaussian distribution of the charge cloud, provides better results than the polynomial correction method. For a segmented anode with pixels of size 3.125 × 3.125 mm², we found a position resolution of the order of 100 µm for an energy deposit of 511 keV at 6 cm from the anode. The spatial resolution degrades for low energies and small z-positions. We conclude that for a pixel size of 3.125 × 3.125 mm² a spatial resolution of less than 1 mm, along both the x and y directions, is expected regardless of the deposited energy and the position of the interaction.
Finally, we presented the Compton reconstruction algorithm used to triangulate the position of the source in 3D. Despite the constraints of XEMIS1 for performing the Compton sequence reconstruction, we have shown the potential of the 3γ imaging technique. An angular resolution of 4° is measured for an electric field of 1.5 kV/cm.
The results presented in this document provide a recapitulation of the main characteristics of the XEMIS detector, which provides the proof of concept of the 3γ imaging technique with a liquid xenon Compton camera. A very low electronic noise, good energy and spatial resolutions and a very promising angular resolution for 511 keV γ-rays are compatible with the necessary requirements.
During this work, some limitations of the camera have also been identified, allowing the optimization of XEMIS2. The use of a Frisch grid is necessary to remove the position dependence of the signals induced on the anode. However, some mechanical constraints may arise when migrating to a larger scale detector. The handling of the grid to achieve good flatness and parallelism with respect to the anode-cathode plane may be quite challenging for an anode of the dimensions of XEMIS2. A new anode that includes vertical metallic columns will be tested for the first time. The pillars will support the grid, allowing the mesh to be placed directly over the anode with high precision. This system will allow smaller gaps of less than 200 µm. Moreover, the advanced cooling system of the electronics installed in XEMIS2 and a complete insulation of the detector will reduce the probability of the presence of bubbles inside the fiducial volume, which will allow the use of a Frisch grid with a small pitch of less than 100 µm. This experimental configuration is optimal to reduce the inefficiency of the Frisch grid and the effect of charge induction.
During this work, we have focused on the detection of the ionization signal. The UV scintillation photons emitted after the interaction of an ionizing particle with the medium are exclusively used to provide triggering capabilities to the detector. In XEMIS2, the full coverage of the active zone with PMTs can be used to reduce the pile-up that comes from two different decays of the radioactive source. This is possible by performing a pre-localization of the ionization signals inside the detector. A simulation of the geometry of XEMIS2 and of the response to the scintillation signal is being carried out to optimize the scintillation signal extraction.
We have shown that charge sharing between neighboring pixels can be described by a Gaussian distribution whose width varies with the drift distance as 230 µm √z (z in cm). However, we have also shown that the simulation is not able to fully reproduce the time difference distribution of the pixels of the same cluster. In this thesis, we have identified three factors that may affect the collection of the ionization signal in the detector: indirect charge induction on non-collecting electrodes due to the inhomogeneities of the weighting potential between adjacent pixels, the range of the primary electrons and the X-ray emission. Experimentally, the impact of these factors is very difficult to isolate from other contributions. That is why all these aspects should be considered and included in the simulation. A simulation of this level is, however, a very difficult task, since it should account for the physics of particle interactions in LXe, the simulation of atomic de-excitation, the transport of electrons in LXe, and the simulation of the electric field and of the charge induction.
The non-linearity of the ionization charge yield with the energy is an important subject that should be studied in more detail in the future. During this work, we found a small deviation of around 1 % at high energies (1274 keV) at 1 kV/cm. However, at low energies (< 30 keV), we expect a non-linearity of the order of tens of percent, which may impact the Compton sequence reconstruction. As discussed in Chapter 2, for an incoming γ-ray of energy 1157 keV coming from the decay of 44Sc, a good angular resolution is obtained for scattering angles between ∼ 10° and 60°. Applying Compton kinematics, a scattering angle in this range implies an electronic recoil in the energy range between 40 keV and 610 keV, with a maximum of the differential cross section at a scattering angle of 26.4°. This means that the calibration at energies below 200 keV should be carried out in the future.
In addition, the non-linear response of the ionization and scintillation yields in LXe with respect to the energy results in a variation in the number of collected electrons depending on the type of interaction. The results presented in this document exclusively refer to the interaction of a γ-ray with the LXe. However, the X-ray emission from the K-shell, with a branching ratio of 85 %, suggests that the number of ionization electrons produced after a photoelectric absorption differs from the one produced after a Compton scattering, where the probability of emission of an X-ray is less than 4 %. The distinction between a photoelectric absorption and an electronic recoil can substantially help the Compton sequence reconstruction. Therefore, a calibration of XEMIS for electronic recoils, especially at low energies, is necessary to fully understand the performance of the detector as a Compton camera, and to exploit the benefits of Compton imaging. Due to the quantity of matter present between the outside and the liquid xenon, a low energy source should be placed inside the liquid xenon in order to avoid γ-ray attenuation. However, even a small deposit inside the TPC would disturb the electric field. The Compton coincidence technique could be a good option [Valentine]. This method is based on measuring the energy of the Compton scattered γ-rays emitted from a 22Na or 137Cs source at different deflection angles. Using the formulas of the Compton kinematics, we can determine the geometrical scattering angle from the Compton sequence reconstruction and deduce the energy of the recoiling electron. The calculated energy can thus be compared with the measured energy. Due to the continuous spectrum of Compton electrons, the Compton coincidence technique can provide a wide range of energies from different scattering angles.
The XEMIS2 camera should be completely qualified this year and it will be operational and available from 2017 for preclinical research at the Center for Applied Medical Research (CIMA) located in the Nantes Hospital. XEMIS2 will provide the first images of a small animal obtained with a liquid xenon Compton camera. These images will be the conclusive evidence of the potential of the 3γ imaging technique in nuclear medicine. The principle of 3-photon imaging is based on the use of a specific radioactive isotope, 44Sc, which emits a positron and a 1.157 MeV photon in spatial and temporal coincidence. After the annihilation of the positron with an electron encountered inside the patient's body, two 511 keV γ-rays are emitted at 180° from each other. The simultaneous detection of these two photons allows a line to be traced between the two interactions, called the line of response (LOR). The aim of 3-photon imaging is to directly locate the position of the emission source in three dimensions, for each measured decay, through the intersection between the LOR and a Compton cone. This cone is obtained from the interaction, with a Compton telescope, of the 1.157 MeV photon emitted during the decay of 44gSc. The additional information provided by this third photon makes it possible to locate the position of the emitter along the LOR and thus to obtain the 3D distribution of the source. The benefit of this new technique translates directly into a reduction of the number of decays required to obtain an image, thereby reducing the examination time and/or the activity injected to the patient.
Optimization of a liquid xenon Compton camera
In order to consolidate and provide an experimental demonstration of the use of a liquid xenon Compton camera for 3γ imaging, a first research and development (R&D) phase was carried out. This first phase represents the starting point of the XEMIS (XEnon Medical Imaging System) project born at Subatech. It involves fundamental research as well as the implementation of innovative technologies. A first prototype of a liquid xenon Compton telescope, called XEMIS1, has been successfully developed by the Subatech laboratory. The choice of liquid xenon as detection medium is motivated by the fact that the detection techniques currently available in imaging, based on scintillating crystals for the detection of γ-rays, are not suited to 3-photon imaging. Moreover, the fundamental physical properties of liquid xenon, its high density and high atomic number, give it a high stopping power for ionizing radiation, which makes liquid xenon a very good candidate as a γ-ray detector in the energy range from a few tens of keV to several tens of MeV. Liquid xenon is both an excellent active medium for the detection of ionizing radiation and an excellent scintillator, with the advantage of making it possible to build large detectors with a homogeneous sensitive volume. These are the main reasons why liquid xenon has been chosen as detection medium, not only for medical imaging, but also in other fields such as particle physics and astrophysics.
The work presented in this document was carried out at the Subatech laboratory under the scientific advice of Dr. Jean-Pierre Cussonneau and the supervision of Dr. Ginés Martinez. This document is divided into seven chapters made as independent as possible. This manuscript details the characterization and optimization of a single-phase liquid xenon Compton camera for 3γ imaging. It provides the experimental proof of its feasibility through a small-scale prototype, XEMIS1. This work has been focused on the extraction of the ionization signal produced in liquid xenon and on the optimization of the detector. The results obtained have contributed to important advances in the detector performance and in the ionization signal extraction, which has led to the design and construction of a second prototype dedicated to small-animal imaging. This larger device, named XEMIS2, is a monolithic cylindrical camera filled with liquid xenon and placed around the small animal. The geometry of XEMIS2 has been optimized for the simultaneous measurement of the three γ-rays coming from 44gSc, with a very high sensitivity and a large field of view.
Chapter 1 is devoted to an introduction to the general properties of liquid xenon as a detection medium for ionizing radiation. We present in a general way the physics of the interaction of particles with liquid xenon, and the production of the ionization and scintillation signals. A general overview of various liquid xenon detectors used in different fields of experimental research is given. The chapter continues with a brief introduction to nuclear medical imaging, and in particular to the two most widely used functional imaging techniques, Single Photon Emission Tomography (SPECT) and Positron Emission Tomography (PET). It then introduces the basics of Compton imaging and gives a detailed description of the principle of the 3γ imaging technique. Finally, the basic requirements of a liquid xenon Compton camera dedicated to medical imaging are laid out.
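Since Compton imaging is central here, it is worth spelling out the standard kinematic relation used to build the cone: the scattering angle follows from the energy deposited by the recoil electron through cos(theta) = 1 - me*c^2*(1/E' - 1/E). A minimal sketch (illustrative, not the thesis code):

```python
import numpy as np

ME_C2 = 0.511  # electron rest energy in MeV

def compton_angle(e0, e_dep):
    """Scattering angle (rad) of a photon of energy e0 (MeV) that deposits
    e_dep (MeV) on the recoil electron, from standard Compton kinematics."""
    e1 = e0 - e_dep                                  # scattered photon energy
    cos_theta = 1.0 - ME_C2 * (1.0 / e1 - 1.0 / e0)
    if abs(cos_theta) > 1.0:
        raise ValueError("energies are not kinematically consistent")
    return np.arccos(cos_theta)

# Example with the 1.157 MeV photon of 44gSc and an illustrative 0.35 MeV deposit:
print(np.degrees(compton_angle(1.157, 0.35)))
```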
In Chapter 2 we present the basic principle of a liquid xenon time projection chamber. Time projection chambers are among the most promising technologies for the study of rare phenomena, such as the search for dark matter or the detection of neutrinos. XEMIS is a single-phase time projection chamber whose design has been optimized for medical applications. This chapter presents the basic principle and the advantages of this type of detector for medical imaging. The different mechanisms that can affect the production and detection of the ionization signal in liquid xenon, such as diffusion, recombination and the presence of impurities, are discussed. Finally, the chapter gives a brief summary of the formation of the ionization signal on the segmented anode, from the interaction of an ionizing particle in the detector to the collection of the signal by the front-end electronics. The discussion is supported by our experimental results as well as by results from the literature.
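To illustrate how a single-phase TPC turns the measured drift time into a depth and corrects the collected charge for attachment on impurities, here is a minimal sketch; the drift velocity and electron lifetime used below are placeholder values, not XEMIS measurements.

```python
import numpy as np

def reconstruct_z_and_correct_charge(t_drift_us, q_meas,
                                     v_drift_mm_per_us=2.0,   # assumed drift velocity
                                     tau_e_us=500.0):          # assumed electron lifetime
    """Toy TPC reconstruction: z = v_drift * t_drift and Q0 = Q * exp(t_drift / tau),
    i.e. the inverse of the exponential attachment loss on impurities."""
    t = np.asarray(t_drift_us, dtype=float)
    z_mm = v_drift_mm_per_us * t
    q_corr = np.asarray(q_meas, dtype=float) * np.exp(t / tau_e_us)
    return z_mm, q_corr

z, q0 = reconstruct_z_and_correct_charge(t_drift_us=[5.0, 30.0], q_meas=[1000.0, 950.0])
print(z, q0)
```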
Le Chapitre 3 donne une description détaillée de la caméra XEMIS1. Cela comprend une description des systèmes de détection de la lumière et de collection de charges, ainsi que de l'infrastructure cryogénique mise au point pour liquéfier et maintenir le xénon dans des conditions de température et de pression stables pendant de longues périodes de prise de données. La pureté du xénon liquide est une préoccupation majeure dans des détecteurs où les électrons doivent parcourir de longues distances sans rencontrer d'impuretés. Le système de purification et de circulation utilisé dans XEMIS est présenté dans ce chapitre. Ensuite, nous présentons les principales caractéristiques de la nouvelle caméra Compton au xénon liquide, XEMIS2. Ce nouveau prototype est une caméra cylindrique monolithique qui contient ∼200 kg de xénon liquide. La géométrie de XEMIS2 est optimisée pour fournir une couverture complète des petits animaux grâce à son champ de vue axiale de 24 cm. Afin de détecter à la fois le signal d'ionisation et les photons VUV de scintillation produits dans le xénon liquide après l'interaction des rayonnements ionisants, le volume actif du détecteur est couvert par 380 1' PMTs et menu deux anodes segmentées avec un nombre total de 24000 pixels. La construction et l'exploitation d'un détecteur de grande envergure pour des applications médicales soulève un ensemble de défis. Pour répondre aux exigences de ce détecteur dont l'utilisation est envisagée dans des centres hospitaliers, un effort particulier a été porté sur le développement d'une infrastructure très compacte autour de la cryogénie du xénon liquide et d'un système d'acquisition de données rapide et performant.
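To give a rough feel for what such a geometry covers, the sketch below (not from the manuscript) evaluates the solid-angle fraction seen by a point source at the centre of a cylindrical detector; the 24 cm axial field of view is taken from the text, while the inner radius is purely an assumed, illustrative number.

```python
import numpy as np

def barrel_coverage(axial_fov_cm, inner_radius_cm):
    """Solid-angle fraction of 4*pi subtended by the barrel of a cylinder of
    half-length h = axial_fov/2 and radius R, for a point source at its centre:
    coverage = h / sqrt(h**2 + R**2)."""
    h = 0.5 * axial_fov_cm
    return h / np.hypot(h, inner_radius_cm)

# 24 cm axial FOV (from the text); the 7 cm inner radius is an assumed value.
print(f"geometric coverage ~ {barrel_coverage(24.0, 7.0):.0%}")
```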
Chapter 4 is devoted to the data acquisition system used with XEMIS1. The system was developed to record both the ionization and scintillation signals with the lowest possible dead time. The design and performance of the front-end electronics used in XEMIS1 are discussed, together with a detailed study of the electronics response. The ASIC shows excellent properties in terms of gain linearity (over an energy range up to 2 MeV), baseline stability and electronic noise. My thesis work focused in particular on the measurement of the ionization signal, and special attention is paid to the optimization of the amplitude and time measurements. In this thesis, a Monte Carlo simulation of the output signal of the IDeF-X LXe chip was implemented. The results obtained contributed to the development of an advanced acquisition system for the measurement of the ionization signal in XEMIS2. Finally, the chapter describes the main characteristics of this new analog ASIC, called XTRACT.
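The time measurement relies on a constant-fraction discriminator (CFD): an attenuated (or amplified) and delayed copy of the pulse is subtracted from the original, and the zero crossing of the resulting bipolar signal gives a time that is largely independent of the pulse amplitude. A toy digital version is sketched below; the parameters and the 80 ns sampling period are illustrative values, not the XEMIS implementation.

```python
import numpy as np

def cfd_zero_crossing(samples, delay, fraction, threshold, dt=80e-9):
    """Toy digital CFD: build s[n] - fraction*s[n-delay] and return the
    interpolated time of its positive-to-negative zero crossing."""
    s = np.asarray(samples, dtype=float)
    delayed = np.zeros_like(s)
    delayed[delay:] = s[:-delay]
    bipolar = s - fraction * delayed          # bipolar CFD signal
    armed = False
    for n in range(1, len(s)):
        if s[n] > threshold:                  # arm only once the pulse is above threshold
            armed = True
        if armed and bipolar[n - 1] > 0.0 >= bipolar[n]:
            frac = bipolar[n - 1] / (bipolar[n - 1] - bipolar[n])
            return (n - 1 + frac) * dt        # interpolated zero-crossing time (s)
    return None

t = np.arange(200)
pulse = np.where(t > 50, (t - 50) * np.exp(-(t - 50) / 20.0), 0.0)  # toy shaped pulse
print(cfd_zero_crossing(pulse, delay=12, fraction=1.4, threshold=0.5))
```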
One of the most sensitive parts of the XEMIS experimental setup is the Frisch grid. Inserting a grid between the cathode and the anode is necessary to remove the dependence of the collected signals on the interaction position. However, some unwanted effects are associated with the use of a gridded ionization chamber and with the characteristics of the grid itself. A complete study of the performance of a Frisch-grid ionization chamber is presented in Chapter 5. In the course of this work, three effects were identified as possible factors with a direct impact on the extraction of the ionization signal: the electron transparency of the grid, the Frisch grid inefficiency, and the indirect charge induction on adjacent pixels. These processes, and the studies carried out in this thesis, are explained in detail in that chapter. Experimental data were taken in order to set an upper limit on their impact on the quality of the collected signals.
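The analysis of signals induced on neighboring pixels rests on the Shockley-Ramo theorem: the net charge induced on an electrode by a carrier of charge q moving between two points is -q times the change of that electrode's weighting potential between those points. A minimal sketch, with purely illustrative weighting-potential values:

```python
def induced_charge(q, phi_w_start, phi_w_end):
    """Shockley-Ramo theorem: net charge induced on an electrode when a carrier of
    charge q moves from a point with weighting potential phi_w_start to one with
    phi_w_end (charges in units of the elementary charge)."""
    return -q * (phi_w_end - phi_w_start)

e = -1.0  # electron charge in units of the elementary charge
# Collecting pixel: the weighting potential rises from ~0.05 (illustrative) to 1.
print(induced_charge(e, 0.05, 1.0))   # ~ +0.95: almost the full charge is induced
# Neighboring pixel: the weighting potential returns to 0 when the electron lands elsewhere,
# so the net induced charge is near zero and only a transient current is seen.
print(induced_charge(e, 0.05, 0.0))   # ~ -0.05
```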
The loss of collected charge is one of the main constraints that can limit the spectroscopic performance of a detector. During the development of XEMIS, all the aspects related to this loss were minimized. The presence of impurities was reduced thanks to an advanced purification system. Attenuation lengths greater than 1 m are reached after a short period of circulation, which translates into a good uniformity of the charge collection over the whole drift length of the chamber. The method used to estimate the attenuation length is presented in Chapter 7. The loss of charge carriers due to recombination can be reduced by increasing the electric field; in this work, the performance of XEMIS1 was evaluated for field strengths ranging from 0.25 kV/cm to 2.5 kV/cm. Another phenomenon affecting the charge collection is the electron transparency of the Frisch grid. A good electron transparency requires an appropriate biasing of the electrodes. However, it depends not only on the experimental configuration of the TPC, i.e. the distance between the grid and the anode, but also on the geometrical characteristics of the grid: the grid pitch and thickness affect the voltages that must be applied. A more detailed description of the electron transparency of a Frisch grid, together with the results obtained with XEMIS1, is given in Chapter 5. An imperfect shielding of the Frisch grid induces a signal on the anode from the drifting electrons before they pass through the grid. This affects the shape of the processed signals and introduces a dependence of the pulse shape on the interaction position. Since the factors that improve the Frisch grid efficiency degrade the electron transparency, an exhaustive study is needed to find the best possible compromise between the two effects. The results presented in Chapter 5 provide the experimental evidence of the impact of the Frisch grid on the collected signals. The study was carried out with four different mesh types and two different grid-anode gaps. Finally, we presented a simulation study that helps to understand the effect of charge induction on neighboring pixels. We showed that the transient signal induced on adjacent pixels introduces a bias on both the amplitude and the time of the signals. This effect becomes more important when the pixel size is small compared to the distance between the grid and the anode. The simulation results confirm the time difference observed between pixels of the same cluster in the experimental data. This time difference significantly affects the clustering process and increases the probability of mixing different interaction vertices. Thanks to this detailed study of the Frisch grid, several issues related to the experimental configuration of XEMIS1 were brought to light, which contributed to the optimization of the future XEMIS2 detector.
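The attenuation length itself is commonly obtained by fitting the collected charge versus drift distance with an exponential, Q(z) = Q0 * exp(-z / lambda). A minimal fitting sketch with hypothetical data points (not XEMIS data):

```python
import numpy as np
from scipy.optimize import curve_fit

def charge_vs_depth(z, q0, lam):
    """Exponential attenuation of the collected charge with drift distance z."""
    return q0 * np.exp(-z / lam)

# Hypothetical (z, Q) points in cm / arbitrary charge units.
z = np.array([1.0, 3.0, 6.0, 9.0, 12.0])
q = np.array([995.0, 980.0, 955.0, 930.0, 905.0])

(q0_fit, lam_fit), _ = curve_fit(charge_vs_depth, z, q, p0=(1000.0, 100.0))
print(f"Q0 = {q0_fit:.0f}, attenuation length = {lam_fit:.0f} cm")
```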
In Chapter 6 we present a detailed description of the experimental setup and of the acquisition trigger used for the detection of 511 keV γ-rays from a 22Na source with XEMIS1. The data acquisition protocol and the data processing of XEMIS1 are described. This part includes a complete presentation of the analysis and calibration method developed during this thesis to determine the noise of each pixel. The results obtained in this study are used to correct the raw data and to set a threshold level for the event selection. A very low threshold, of the order of four times the noise value, is applied to the baseline-corrected signals in order to measure very small charge deposits in the detector. The event reconstruction algorithm used to group the signals coming from the same interaction point but collected by several pixels is presented. A study was performed to determine the optimal time window for discriminating two different signals detected in the same cluster. The results showed that a window that varies with the signal amplitude is the best option to avoid wrong signal associations and thus avoid degrading the energy and spatial resolutions. […] as a function of the drift length z (cm). These results were successfully reproduced by a Monte Carlo simulation of the electron transport inside a TPC. The simulation was also used to estimate the transverse spatial resolution of the detector. To first approximation, the reconstructed position of clusters with more than one pixel is deduced from the center of gravity. However, this technique does not provide a good estimate of the true position, since the pixel size is larger than the size of the electron cloud. In order to improve the spatial resolution along the x and y axes, two correction methods were tested. The Gaussian correction method, which assumes a Gaussian distribution of the charge cloud, gives better results than the polynomial correction method. For a segmented anode with 3.125 × 3.125 mm² pixels, a position resolution of the order of 100 µm was obtained for a 511 keV energy deposit at 6 cm from the anode. The spatial resolution degrades for lower energies and for small positions along the z axis. In conclusion, for a pixel size of 3.125 × 3.125 mm², a spatial resolution better than 1 mm along the x and y directions is expected regardless of the deposited energy and the interaction position. Finally, the Compton reconstruction algorithm used to triangulate the source position in 3D is presented. Despite the geometrical constraints of XEMIS1 for the Compton reconstruction, the potential of the 3γ imaging technique was demonstrated. An angular resolution of 4° was measured for an electric field of 0.75 kV/cm.
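A minimal sketch of the charge-weighted centre-of-gravity estimate used as the starting point of the cluster position reconstruction (the 3.125 mm pixel pitch is quoted above; the cluster itself is hypothetical):

```python
import numpy as np

PITCH = 3.125  # pixel pitch in mm, as quoted for the segmented anode

def cluster_centroid(pixels):
    """Charge-weighted centre of gravity of a cluster.
    'pixels' is an iterable of (column, row, charge); positions returned in mm."""
    cols, rows, q = np.array(pixels, dtype=float).T
    x = np.sum(cols * PITCH * q) / np.sum(q)
    y = np.sum(rows * PITCH * q) / np.sum(q)
    return x, y

# Toy 2-pixel cluster sharing the charge 70/30:
print(cluster_centroid([(10, 5, 700.0), (11, 5, 300.0)]))
```

The Gaussian correction mentioned above then refines this centroid by assuming a Gaussian charge cloud, which matters precisely because the pixel is larger than the cloud.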
In conclusion, the work carried out during this thesis has made it possible to reach a very low electronic noise (below 100 electrons), a time resolution of 44 ns for 511 keV photoelectric events, an energy resolution of 4% (σ/E) at 511 keV for an electric field of 2.5 kV/cm, and a transverse spatial resolution better than 1 mm. All these results are compatible with the requirements for small-animal imaging with XEMIS2, and they are very promising for the future of 3γ imaging.
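As a simple cross-check (not taken from the manuscript), the 44 ns time resolution can be converted into an equivalent uncertainty on the reconstructed depth z using an assumed electron drift velocity; the value used below is only indicative of liquid xenon at fields of a few kV/cm.

```python
def z_resolution_mm(sigma_t_ns, v_drift_mm_per_us=2.0):
    """Longitudinal position uncertainty implied by the timing resolution:
    sigma_z = v_drift * sigma_t.  The drift velocity is an assumed value."""
    return v_drift_mm_per_us * sigma_t_ns * 1e-3

print(z_resolution_mm(44.0))  # ~0.09 mm for sigma_t = 44 ns
```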
Optimization of a single-phase liquid xenon Compton camera for 3γ medical imaging
Keywords
Liquid xenon, Compton camera, medical imaging, 3γ imaging, TPC, 44Sc.
Abstract
The work described in this thesis is focused on the characterization and optimization of a single-phase liquid xenon Compton camera for medical imaging applications. The detector has been conceived to exploit the advantages of an innovative medical imaging technique called 3γ imaging, which aims to obtain a precise 3D location of a radioactive source with high sensitivity and an important reduction of the dose administered to the patient. The 3γ imaging technique is based on the detection in coincidence of three γ-rays emitted by a specific (β+, γ) emitter radionuclide, 44Sc. A first prototype of a liquid xenon Compton camera has been developed by the Subatech laboratory within the XEMIS (Xenon Medical Imaging System) project, to prove the feasibility of the 3γ imaging technique. This new detection framework is based on an advanced cryogenic system and an ultra-low-noise front-end electronics operating at liquid xenon temperature. This work has contributed to the characterization of the detector response and the optimization of the ionization signal extraction. A particular interest has been given to the influence of the Frisch grid on the measured signals. First experimental evidence of the Compton cone reconstruction using a 22Na source (β+, Eγ = 1.274 MeV) is also reported in this thesis, demonstrating the proof of concept of 3γ imaging. The results reported in this thesis have been essential for the development of a larger-scale liquid xenon Compton camera for small-animal imaging. This new detector, called XEMIS2, is now under construction.
Key Words
Liquid xenon, Compton camera, medical imaging, 3γ imaging, TPC, 44Sc.
Lucía Gallego Manzano
Contents
1.1 Fundamental properties of liquid xenon as radiation detection medium
1.1.1 Main properties of liquid xenon
1.1.2 Response of liquid xenon to ionizing radiation
1.1.3 Ionization properties
1.1.4 Scintillation mechanism
1.2 Next generation of LXe detectors
1.3 3γ imaging: a new medical imaging technique
1.3.1 A brief introduction to nuclear medical imaging
1.3.2 Single Photon Emission Tomography
1.3.3 Positron Emission Tomography
1.3.4 Future trends in functional imaging
1.3.5 Medical imaging with a Compton Camera
1.3.6 The 3γ imaging technique
1.4 Conclusions Chapter 1
[Figures 1.1-1.27 (Chapter 1), captions only; images not reproduced: simulated 511 keV electron recoil track; stopping power and CSDA range for electrons in xenon; photoelectric, Compton and pair-production cross sections in xenon; illustration of the photoelectric effect; scintillation mechanism in LXe; charge and scintillation light yields versus electric field; dual-phase xenon TPC principle and the XENON100, LZ, XMASS, LXeGRIT and EXO-200 detectors; Anger scintillation camera principle; SPECT collimators; photomultiplier-tube operation; PET principle and the main coincidence event types; conventional PET versus TOF-PET; basic principle of a Compton camera; principle of the 3γ imaging technique with a LXe Compton telescope; recoil-electron energy versus scattering angle; decay scheme of 44gSc.]
[Figures 2.1-2.7 (Chapter 2), captions only; images not reproduced: the TPC first proposed by D. Nygren in 1974; schematic principle of a TPC; fits of the Thomas-Imel recombination model to the charge yield and energy resolution for 570 keV and 662 keV γ-rays versus electric field; scintillation and ionization yields versus drift field for 662 keV γ-rays from 137Cs; anti-correlation between scintillation and ionization signals for a 207Bi source at 4 kV/cm; electron drift velocity in liquid and gaseous xenon versus reduced electric field.]
Figures 2.7 and 2.8 also show that the electron drift velocity in LXe is higher than that in the gaseous state over the whole field range presented in Figure 2.7, whereas higher values of v_drift are obtained for SXe. This effect can be explained by the dependence of the electron mobility on density. The drift velocity of a charge carrier depends on the number of collisions per unit length. The probability that an electron undergoes a collision along its path is given by the scattering cross section, which is inversely proportional to the drift velocity. The scattering cross section depends on both the energy of the electron and the number of atoms per unit volume N, i.e. the density of the medium. For xenon, the […]
[Figures 2.8-2.21 (Chapter 2), captions only; images not reproduced: electron drift velocity in liquid and solid xenon versus electric field and temperature; electron mobility in liquid/solid xenon and liquid argon versus temperature; positive hole mobility in LXe versus temperature; transverse and longitudinal diffusion coefficients versus electric field; transverse diffusion measured with XEMIS1; attachment rate constant of electrons in LXe for three contaminants; simulated 511 keV electron recoil track and electron cloud (CASINO); electron radial distribution in the primary electron cloud; Compton energy-loss distribution and differential cross section; angular resolution versus incoming γ-ray energy for Xe, Si and Ge.]
Chapter 3 XEMIS: A liquid xenon Compton telescope for 3γ imaging
Contents
3.1 XEMIS1: First prototype of a liquid xenon TPC for 3γ imaging
3.1.1 Detector description
3.1.2 Light detection system
3.1.3 Charge collection system
3.1.4 Cryogenics Infrastructure
3.2 XEMIS2: A small animal imaging LXe detector
3.2.1 Detector description
3.2.2 Light detection system
3.2.3 Charge collection system
3.2.4 Cryogenics Infrastructure. ReStoX: Recovery and Storage of Xenon
3.3 Conclusions Chapter 3
[Figures 3.1-3.15 (Chapter 3), captions only; images not reproduced: general view of the XEMIS1 set-up at Subatech (cryostat, injection panel, heat exchanger and pulse-tube refrigerator, DAQ, control panel, rescue tank); views of the XEMIS1 TPC and its resistive divider chain; VUV-sensitive Hamamatsu R7600-06MOD-ASSY PMT; woven and electroformed Frisch meshes; the 51 x 51 mm² anode segmented into 3.1 x 3.1 mm² pixels; transversal cut and layer structure of the segmented anode; schematic of the XEMIS1 cryogenic system; Iwatani PC150 pulse-tube cryocooler and its cooling power; cross-section of the cooling tower; vacuum-insulated vessel; temperature and pressure profiles during pre-cooling; xenon mass and LXe level during liquefaction.]
[…] [186]. The pump can provide a maximum circulation rate of 50 NL/min.
[Figures 3.16-3.18 (Chapter 3), captions only; images not reproduced: XEMIS1 rare-gas purifier; oil-free membrane pump used to recirculate the xenon during purification; XEMIS1 coaxial heat exchanger.]
Figures 3.20 and 3.21 show the evolution of the LXe temperature and pressure, respectively, during several months of continuous operation.
[Figures 3.19-3.26 (Chapter 3), captions only; images not reproduced: heat-exchanger efficiency and estimated cooling power versus gas flow; temperature and pressure inside the internal cryostat during circulation; high-pressure storage bottles; LXe level during cryopumping; slow-control screen; pressure security systems; 4 m³ rescue tank.]
Figure 3.27 shows the XEMIS2 cryostat and the cryogenic infrastructure used to store, liquefy and recover the xenon. The purification and recirculation systems are presented in Figure 3.28.
[Figures 3.27-3.39 (Chapter 3), captions only; images not reproduced: general views of the XEMIS2 set-up with the ReStoX xenon recovery and storage system and the purification system; mechanical design of the XEMIS2 active zone and its dimensions; external stainless-steel field rings; PMT mounting bracket; frontal view of the cathode; segmented anode and its front-end connectors; cooling system between the electronics and the LXe; internal part of the XEMIS2 cryostat; liquid nitrogen container; schematic diagrams of the LXe filling/evacuation and of the purification and re-circulation processes.]
[Figures 4.1-4.23 (Chapter 4), captions only; images not reproduced: charge-sensitive preamplifier with and without CR-RCⁿ filter; schematic and technical details of the IDeF-X HD-LXe ASIC; 32-channel front-end ASIC and anode connectors; kapton bus and buffer inside the outer vessel; equivalent noise charge versus shaping time; shaper and preamplifier output signals for injected pulses with different rise times, peaking times, Frisch grids and grid-anode gaps; comparison between experimental and injected signals; gain linearity of the IDeF-X LXe chip over the full dynamic range and at low injected amplitudes; deviation from linearity; trigger efficiency versus measured charge; ENC versus shaping time for leakage currents of 20 pA and 100 pA.]
Figure 4.25 shows the ENC versus shaping time with and without correlated-noise correction for the two different values of i_leak. The estimated contribution of the correlated noise as a function of the peaking time is presented in Figure 4.26.
[Figures 4.24-4.29 (Chapter 4), captions only; images not reproduced: ENC versus shaping time after correlated-noise correction for leakage currents of 20 pA and 100 pA; correlated-noise contribution; comparison between the normalized averaged experimental signal and the simulated signal; experimental noise-amplitude distribution with Gaussian fit; average probability density functions of the real and imaginary parts of the DFT coefficients of the noise.]
Figure 4.30 illustrates the probability density functions (PDF) of the real and imaginary parts of the Fourier-transform coefficients for a given frequency, obtained with experimental data. Both coefficients obey a zero-mean Gaussian probability distribution with equal variance. This follows from the stochastic character of the x[n] samples and is explained by the Central Limit Theorem [200]. In addition, both coefficients are statistically independent, and hence uncorrelated, as can be seen in Figure 4.31.
[Figures 4.30-4.31 (Chapter 4), captions only; images not reproduced: probability density functions of the real part ℜe[k] and imaginary part ℑm[k] of the DFT coefficients for the same frequency (k = 21), with Gaussian fits; distribution of the imaginary part versus the real part.]
Figure 4.33(b) reveals an almost uniform distribution of the phase. Discrepancies with the expected distribution should be associated with the limitations of the method.
[Figures 4.32-4.42 (Chapter 4), captions only; images not reproduced: power spectrum of the DFT magnitude from experimental noise; probability density functions of the DFT magnitude and phase; example of a simulated noise signal; comparison of experimental and simulated noise power spectra and amplitude distributions; simulated output signal with amplitude 20σ_noise; principle of the constant-fraction discriminator (CFD) and its application to a simulated signal (1 time channel = 80 ns); CFD efficiency, time resolution and amplitude ratio versus signal-to-noise ratio.]
The corresponding comparisons are shown in Figures 4.43, 4.44 and 4.45, respectively. Based on these results, a shaping time of 1.39 µs is considered the best option for the measurement of the drift time of the ionization signals in the detector.
[Figures 4.43-4.45 (Chapter 4), captions only; images not reproduced: time resolution, amplitude difference and efficiency obtained for two different values of the peaking time.]
Figure 4.47 shows the slope of the constant-fraction signal at the zero-crossing point as a function of the delay for two different values of the attenuation fraction. […]
[Figures 4.46-4.47 (Chapter 4), captions only; images not reproduced: noise-induced time jitter (adapted from [202]); slope of the constant-fraction signal at the zero-crossing point versus time delay τ_d for attenuation fractions k = 1.5 and k = 1.]
As shown in Figure 4.48(b), the best σ_t is obtained over a large range of delay values, which implies that several combinations of the CFD delay and attenuation fraction may provide the same time resolution. Comparisons of the time and amplitude resolutions for three different configurations of the CFD parameters are shown in Figure 4.49 and Figure 4.50, respectively. For the same attenuation fraction (k = 1), two different values of the delay were tested.
[Figures 4.48-4.60 (Chapter 4), captions only; images not reproduced: signal-to-noise ratio of the constant-fraction signal and σ_t versus delay for two attenuation fractions; comparison of the time resolution for the Max and CFD methods; example of a typical noise signal and trigger definition; counting rate versus discriminator threshold with Gaussian fit; time interval between consecutive threshold crossings for a 3σ_noise threshold, with and without a 2.6σ_noise trailing-edge threshold; schematic diagram of the XEMIS2 data acquisition system (IDeF-X read-out, XTRACT, PU cards, LVDS link and storage).]
Figure 4.61 shows a schematic diagram of the XTRACT ASIC. It consists of five main blocks: a constant-fraction discriminator (CFD), a ramp generator for event tagging, an analog memory circuit per channel to store the time and amplitude of each event, a time derandomizer or Asynchronous Binary Tree Multiplexer (ATBM), and a digital control module.
[Figures 4.61-4.63 (Chapter 4), captions only; images not reproduced: illustration of the XTRACT architecture; example of a signal measured by the CFD method (an attenuated and delayed copy is subtracted from the original to produce a bipolar pulse whose zero crossing marks the pulse maximum, gated by a threshold); schematic diagram of the PU card.]
Figure 4.64 illustrates an example of the XTRACT detection and read-out processes. In the example, three different pixels, corresponding to channels 22, 3 and 4, collect a signal. When the first arriving pulse is detected by the CFD module, a trigger signal is generated and the flag of pixel 22 is set to 1. At this moment the voltage ramp generator also starts. If another pulse is detected while the ramp generator is still running (pixels 3 and 4 in Figure 4.64), its arrival time is obtained from the analog value of the ramp at the zero-crossing point. This time value is not the absolute time of the interaction inside the TPC but is relative to the moment the first pulse is detected (trigger). The flag signals of the three fired pixels are also sent to the control unit to be read later by the PU card. The trigger warns the PU board that at least one event has been detected. When the card is ready, a CS is sent in order to start reading the information stored in the analog memory. The derandomizer selects the reading sequence, which can differ from the detection sequence, i.e. the pixels may be read out in a different order than that of detection. In the example presented in Figure 4.64, channels are read out starting from pixel 3 to pixel 22. When a pixel is read by the PU card, its flag is set back to 0. A new reading order is then sent from the […]
Figure 4.64 - Example of the detection and read-out process of three different signals.
Contents
5.1 Theoretical background
5.1.1 Charge induction and the Shockley-Ramo theorem
5.1.2 Principle of a parallel plate ionization chamber
5.1.3 Frisch Grid Ionization Chamber
5.2 Electrons collection by the Frisch Grid
5.3 Frisch Grid Inefficiency
5.4 Charge induction on a pixelated anode
5.5 Results and Discussion
5.5.1 Measurement of the electron transparency of the Frisch grid
5.5.2 Influence of the Frisch grid inefficiency on the pulse shape
5.5.3 Charge sharing and charge induction between neighboring pixels
5.6 Conclusions Chapter 5
Figure 5 . 1 -
51 Figure 5.1 -Schematic drawing of a conventional parallel plate ionization chamber.
Figure 5 . 2 -
52 Figure 5.2 -Illustration of the time development of the induced current (a) and voltage (b) on the anode on a two infinite parallel plate detector. In the figure, t -and t + are the electron and positive ion collecting times respectively.
Figure 5 . 3 -
53 Figure 5.3 -Illustration of a conventional Frisch plate ionization chamber. The grid is represented as a dashed line close to the anode.
Figure 5 . 4 -
54 Figure 5.4 -Illustration of the weighting potential of the anode for an ideal Frisch grid ionization chamber. The grid is placed at a distance 1-P from the anode [135].
Figure 5 . 5 -
55 Figure 5.5 -Example of the output voltage as a function of time in a Frisch-gridded ionization chamber. Only the signal induced by electrons is considered. In figure, t - c-g and t - g-a are the electron drift time from the point of interaction to the Frisch grid and the drift time between the grid and the anode respectively.
Figure 5 . 6 -
56 Figure 5.6 -Schematic illustration of the geometry of a Frisch grid ionization chamber.
Figure 5 . 7 -
57 Figure 5.7 -Left: Weighting potential distribution of the Frisch grid detector along the drift direction. The physical properties of the grid are listed in the figure, where D and p are cathode-grid and anode-grid distances respectively, d is the distance between the grid elements and r is the wire radius. Right: 2-D map of weighting potential around the Frisch grid. Image courtesy of[START_REF] Göök | Application of the Shockley-Ramo theorem on the grid inefficiency of Frisch grid ionization chambers[END_REF]
Figure 5 . 8 -
58 Figure 5.8 -Weighting potential of an infinite parallel plate electrode and two different pixel pitch of 55 µm and 100 µm.Figure taken from [213].
Figure 5.8 -Weighting potential of an infinite parallel plate electrode and two different pixel pitch of 55 µm and 100 µm.Figure taken from [213].
Figure 5 . 9 -
59 Figure 5.9 -Weighting potential of a neighbor pixel along the drift direction. The dashed line shows the same variation of a pixel that is located two pixels away of the actual collecting pixel. Figure taken from [20].
Figure 5 . 10 -
510 Figure 5.10 -Energy spectrum of the 511 keV γ-rays obtained with a 100 LPI mesh placed at 500 µm from the anode at an electric drift field of 1 kV/cm and two different electric field ratios of (a) R = 2 and (b) R = 6.
Figure 5 . 11 -
511 Figure 5.11 -Collected charge for 511 keV events as a function of the electric field ratio for a constant electric drift field of 1 kV/cm. The results were obtained for a 100 LPI Frisch grid located at 500 µm from the segmented anode.
Figure 5
5 Figure5.12 -Collected charge for 511 keV events as a function of V grid , for a constant electric drift field of 1 kV/cm. The results were obtained for a 100 LPI Frisch grid for two different gaps of 500 µm and 1 mm.
Figure 5
5 Figure 5.13 -Collected charge for 511 keV events as a function of the ratio between the electric field in the gap and the electric drift field for a constant electric drift field of 1 kV/cm. The results were obtained for a 50.29 LPI Frisch grid located at 1 mm from the segmented anode.
Figure 5
5 Figure 5.14 -Comparison between the output signal of the shaper for 511 keV events with a 100 LPI Frisch grid placed at 500 µm from the anode (red line) and a 60 mV injected step-like pulse with a slope of 250 ns (black line). The peaking time was set to 1.39 µs.
Figure 5 . 15 -
515 Figure 5.15 -Average output signal for 511 keV events and four different Frisch grids in linear and logarithmic scales. The peaking time was set to 1.39 µs.
Figure 5 . 16 -
516 Figure 5.16 -Average output signal for 511 keV events obtained with the 100 LPI Frisch grid located at 500 µm from the anode, as a function of the drift time for different z-intervals along the drift length. The signals are shown in linear (left) and logarithmic (right) scales. The peaking time was set to 1.39 µs.
Figure 5 . 17 -
517 Figure 5.17 -Pulse integral as a function of the drift time for different z-intervals along the drift length. Figure on the right is a zoom of the figure on the left.
Figure 5 . 18 -
518 Figure 5.18 -Total integrated charge as a function of the distance from the anode.
Figure 5 . 19 -
519 Figure 5.19 -Time difference between the pixels of the same cluster, for cluster with a total measured energy of 511 keV and a cluster time window of 2 µs.
Figure 5 . 20 -
520 Figure 5.20 -Time difference distribution between the pixels of the same cluster as a function of A neighbor . Only cluster with a total measured energy of 511 keV are included in the distribution. In the bottom figure the mean value of ∆t per slice in charge is represented.
Figure 5 . 21 -
521 Figure 5.21 -Example of the ∆t between pixels of the same cluster for a total charge of 511 keV and a A neighbor of 3σ noise and 5σ noise respectively.
Figure 5 . 22 -
522 Figure 5.22 -Time difference between the pixels of the same cluster as a function of A neighbor for three different interaction positions with respect to the anode.
Figure 5.23 - Simulation of the geometry of 9 adjacent pixels of 3.1 × 3.1 mm² obtained with gmsh. The cathode is considered a plane electrode of 9.3 × 9.3 mm² located at 1 mm from the segmented electrode.
Figure 5.24 - Weighting potential distribution for a 3.1 × 3.1 mm² pixel size and two different gaps. The distribution was obtained with Elmer by setting the pixel of interest at unity potential and the rest of the pixels and the cathode to ground.
Figure 5.25 - Amplitude of the induced signal as a function of the interaction position along the x-axis (y = 0), for a simulated transverse diffusion of 200 µm.
Figure 5.26 - Time difference between the signal of reference measured at the center of the collecting pixel and the induced signal as a function of the relative amplitude, for a simulated transverse diffusion of 200 µm.
Figure 5.27 - Amplitude of the induced signal as a function of the interaction position along the x-axis (y = 0), for a simulated transverse diffusion of 300 µm.
Figure 5.28 - Time difference between the signal of reference measured at the center of the collecting pixel and the induced signal as a function of the relative amplitude, for a simulated transverse diffusion of 300 µm.
Figure 5.29 - (Left) Time difference distribution between the pixels of the same cluster as a function of A_neighbor for a gap of 0.5 mm. (Right) Comparison of the average ∆t as a function of A_neighbor for two different gap distances.
Figure 5.30 - Mean time difference between the pixels of the same cluster as a function of A_neighbor, for clusters with a total measured energy of 511 keV whose pixels are shared between the two IDeF-X LXe front-end electronics.
Figure 5.31 - Average time difference between the pixels of the same cluster as a function of A_neighbor, for clusters with a total measured energy of 511 keV and a Frisch grid located at 500 µm from the anode, for two different pixel configurations.
Figure 5.32 - Average time difference between the pixels of the same cluster as a function of A_neighbor, for clusters with a total measured energy of 511 keV and a Frisch grid located at 1 mm from the anode, for two different pixel configurations.
Chapter 6 - Performance Evaluation of XEMIS1 for 511 keV γ-rays: Data Acquisition and Data Treatment

Contents
6.1 Experimental setup
6.2 Data Acquisition and Trigger Description
6.3 Data processing
6.4 Noise Analysis and Calibration
6.4.1 Temperature effect on the measured signals
6.4.2 Charge-induced perturbations of the baseline
6.5 Data Analysis
6.5.1 Baseline subtraction
6.5.2 Common noise correction
6.5.3 Gain Correction
6.5.4 Signal selection and Clustering
6.5.5 Off-line Analysis: event selection
6.6 Conclusions Chapter 6
Figure 6.1 - Schematic drawing of the XEMIS1 experimental set-up: a) BaF2 crystal and PMT, b) collimators, c) 22Na source, d) entrance window, e) TPC and f) LXe PMT. The yellow line emulates the emission and detection of the 2 back-to-back 511 keV γ-rays.
Figure 6.2 - Schematic drawing of the XEMIS1 trigger setup.
Figure 6.3 - Example of two typical scintillation signals from the (a) BaF2 crystal and (b) LXe PMT.
Figure 6.3 shows an example of two typical scintillation signals detected by the two PMTs. The PMT coupled to the BaF2 scintillation crystal has a good time resolution, of the order of 200 ps. The BaF2 output is fed into a discriminator, which transforms the analog signal into a digital pulse. The output signal of the LXe PMT is, on the other hand, split by a
Figure 6.4 - Coincidence time distribution at 1 kV/cm.
Figure 6.5 - Example of typical (a) scintillation and (b) ionization waveforms of a 511 keV γ-ray event from the 22Na source.
Figure 6.6 - Evolution of the number of VUV scintillation photons that arrive at the LXe PMT as a function of the interaction point of the 511 keV ionization electrons. Figure taken from [123].
Figure 6.7 - Scintillation light distribution as a function of the time of CFD of 511 keV events. (b) Zoom in the region of interest. The results were obtained for a 100 LPI metallic woven Frisch grid placed 1 mm from the anode and a 12 cm long TPC.
Figure 6.8 - Scintillation light amplitude for a discriminator threshold level of 5 mV.
Figure 6.9 - Scintillation light distribution as a function of the time of CFD of 511 keV events. (b) Zoom in the region of interest. The results were obtained for a 100 LPI metallic woven Frisch grid placed 0.5 mm from the anode and a 6 cm long TPC.
Figure 6.10 - Experimental set-up of XEMIS1 for a 6 cm long TPC.
Figure 6.11 - Noise signal analysis diagram.
Figure 6.13 - Noise distribution of pixels 63 and 19 respectively (see Appendix B). Each pixel belongs to a different IDeF-X LXe ASIC. The solid red line is a Gaussian fit.
Figure 6.14 - Pedestal map obtained from the mean value of the pedestal per pixel.
Figure 6.16 - Relative position of the two IDeF-X LXe chips (black and blue lines) with respect to the pixels in the anode. In the right figure the location of the connectors directly wire bonded to the 32 pixels of each ASIC is illustrated.
Left IDeF-X LXe chip. Right IDeF-X LXe chip.
Figure 6.18 - Noise map distribution per pixel.
Figure 6.19 - Noise distribution per pixel.
Figure 6.20 - Raw noise distribution with no MLI insulation around the front flange of the TPC.
Figure 6.21 - Mean pedestal distribution of pixels 63 and 19 respectively (see Appendix B). The distributions are fitted by a double Gaussian function given by Equation 6.3.
Figure 6.22 - (a) σ_tail/σ_core per pixel and (b) mapping of the fraction of rejected pedestals per pixel obtained with no MLI insulation on the front flange of the TPC.
Figure 6.23 - Noise distribution after pedestal subtraction extracted from one random sample for all the pixels with (left) and without (right) the pixels on the borders of the anode.
Figure 6.24 - Average baseline over the 64 pixels.
Figure 6.26 - Phase diagram and vapor pressure curve of xenon [2]. The left side figure shows a zoom on the range of interest.
Figure 6.27 - Evolution of the median value of the raw signals with the event number for pixel 63 of the anode. Each event was registered over a time window of 15.36 µs. The bottom figure shows a zoom on a region of interest where two baseline perturbations are clearly visible.
Figure 6.29 - Evolution of the median value of the raw signals with the event number for eight pixels of column 7 of the anode. The pixels are represented in ascending order.
Figure 6.30 - Evolution of the median value of the raw signals with the event number for eight pixels of column 7 of the anode over a time interval of the total time window. The fluctuations of the baseline follow an ascending pattern from the bottom to the top part of the anode.
Figure 6.31(a) shows the median baseline evolution as a function of the event number for the same pixel column, when the temperature of the cold finger was set to 163 K. We can observe an important increase in the number of bubbles in the chamber compared to pressurization conditions (Figure 6.31(b)). At low pressure, 25 % of the events were rejected by the baseline rejection cut.
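To make the baseline rejection cut mentioned above concrete, the following sketch (an illustration only; the function name flag_baseline_perturbations and the synthetic numbers are ours, not taken from the XEMIS analysis code) estimates the per-event baseline of one pixel from the median of its raw samples and rejects events whose baseline deviates from the pedestal by more than a chosen number of noise sigmas, which is the kind of cut that removes the bubble-induced perturbations visible in Figures 6.27 to 6.31.

```python
import numpy as np

def flag_baseline_perturbations(raw_waveforms, pedestal, sigma_noise, n_sigma=3.0):
    """Flag events whose baseline (per-event median of the raw samples)
    deviates from the pedestal by more than n_sigma * sigma_noise.

    raw_waveforms : array of shape (n_events, n_samples) for one pixel
    pedestal      : nominal pedestal of that pixel (ADC counts)
    sigma_noise   : RMS noise of that pixel (ADC counts)
    Returns a boolean mask (True = rejected) and the per-event medians.
    """
    baselines = np.median(raw_waveforms, axis=1)          # robust baseline estimate
    rejected = np.abs(baselines - pedestal) > n_sigma * sigma_noise
    return rejected, baselines

# Illustrative usage with purely synthetic data (1000 events, 192 samples):
rng = np.random.default_rng(0)
waveforms = rng.normal(loc=100.0, scale=2.0, size=(1000, 192))
waveforms[500:520] += 15.0                                # emulate a baseline perturbation
mask, base = flag_baseline_perturbations(waveforms, pedestal=100.0, sigma_noise=2.0)
print(f"rejected fraction: {mask.mean():.1%}")
```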
Figure 6.31 - Evolution of the median value of the raw signals with the event number for eight pixels of column 7 of the anode under pressurization and de-pressurization conditions respectively.
Figure 6.33 - Baseline perturbation caused during the gain stage of the IDeF-X LXe. The left side figure is a zoom on the region of interest.
Figure 6.34 - Schematic diagram of the data analysis procedure.
Figure 6.35 - Value of the noise per pixel after pedestal subtraction and common noise rejection.
Figure 6.36 - 511 keV peak position as a function of the channel number.
Figure 6.37 - Pixel signal distribution for 511 keV events as a function of the column number.
Figure 6.38 - Collected charge as a function of the energy threshold level for 511 keV events at 1 kV/cm.
Figure 6.39 - Pixel signal distribution of the segmented anode where a deposit of 511 keV is shared between four adjacent pixels. The waveform registered by the bottom right pixel is the TTT signal.
Figure 6.40 - Example of a 511 keV energy deposit shared by four adjacent pixels of the anode.
Figure 6.41 - Time difference between the pixels of the same cluster, for clusters with a total measured energy of 511 keV.
Figure 6.42 - Time difference distribution between pixels of the same cluster as a function of A_neighbor. Only clusters with a total measured energy of 511 keV are included in the distribution. In the left side figure the mean value of ∆t with respect to A_neighbor is presented.
Figure 6.44 - Time resolution as a function of SNR.
Figure 6.45 - Energy spectrum of single-cluster events for two different cluster time windows.
Figure 6.46 - Distribution of the number of clusters before and after cluster rejection.
Figure 6.47 - (a) Energy spectrum and (b) number of pixels per cluster for those clusters with an energy smaller than 20 keV, with an amplitude threshold level of 3.0σ_noise.
Figure 6.48 - Distribution of the rejected clusters due to the energy exclusion.
Figure 6.49 - Scintillation light amplitude as a function of the time of the CFD of 511 keV events. The red dashed line represents a cut to reject events uncorrelated with the ionization charge as well as noise events. All events outside the cut are excluded from the analysis.
7.1 Measurement of the liquid xenon purity
Figure 7.1 - Scatter plot of the measured charge as a function of the electron drift time for 511 keV γ-ray events at 1 kV/cm after two weeks of re-circulation. The black points represent the collected charge per slice, and the solid red line represents the fit of the collected charge to Equation 7.1.
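Equation 7.1 itself is not reproduced in this listing, but a purity measurement of this kind is commonly performed by fitting the mean collected charge per drift-time slice with a single exponential, Q(t) = Q0 exp(-t/τe), where τe is the electron lifetime. The sketch below is a minimal illustration under that assumption; the slice values and the names attenuation and t_slices are hypothetical, not taken from the XEMIS analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def attenuation(t_drift, q0, tau_e):
    """Simple exponential attenuation of the collected charge with drift time."""
    return q0 * np.exp(-t_drift / tau_e)

# Hypothetical drift-time slice centres (µs) and mean collected charge per slice
# (arbitrary units), of the kind extracted from a scatter plot like Figure 7.1.
t_slices = np.array([5.0, 15.0, 25.0, 35.0, 45.0])
q_slices = np.array([0.98, 0.95, 0.93, 0.90, 0.88])
q_err = np.full_like(q_slices, 0.01)

popt, pcov = curve_fit(attenuation, t_slices, q_slices, p0=(1.0, 300.0), sigma=q_err)
q0_fit, tau_e_fit = popt
print(f"electron lifetime tau_e ≈ {tau_e_fit:.0f} µs")
```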
Figure 7.2 - Drift time distribution of 511 keV single-cluster events at 1 kV/cm. The solid red line represents the exponential fit to the distribution.
Figure 7.3 - Beginning and end of the TPC at 1 kV/cm for single-cluster 511 keV events. The solid blue lines are the Error function fits to the drift time distribution at both edges of the chamber.
Figure 7.4 - Scatter plot of the normalized charge as a function of the z-position obtained by simulation using the results of CASINO (see Section 2.3.1).
Figure 7.5 - Electron drift velocity as a function of the applied electric field at a temperature of 168 K.
Figure 7.6 - X and Y multiplicities as a function of the drift length for single-cluster 511 keV events at 1 kV/cm and a 4σ_noise selection threshold. The pixel size is 3.125 × 3.125 mm².
Figure 7.7 - Schematic drawing of the concept of multiplicity in a segmented anode. In the three illustrated examples the total multiplicity is equal to 2. In a) the multiplicity along the y-axis (Y multiplicity) is 2, while the multiplicity along the x-axis (X multiplicity) is 1. On the other hand, in b) the X multiplicity is equal to 2 and the Y multiplicity is 1.
Figure 7.10 - Ionization yield as a function of the electric field for 511 keV γ-rays. The red line represents the fit to Equation 2.4.
Figure 7.11 - Energy resolution as a function of the electric field for 511 keV γ-rays. The red line represents the fit to Equation 2.7.
Figure 7.12 - Two different views of the experimental set-up used to detect the 1274 keV γ-ray emitted by the 22Na source. The solid red line represents the axis along the center of the TPC. The source (yellow) is centered with respect to the anode (green), while the BaF2 and PMT are laterally shifted with respect to the central axis.
Figure 7.13 - Pulse height spectra for three different γ-ray energies from the lines at 511 keV, 662 keV and 1274 keV of 22Na and 137Cs at 1.5 kV/cm. The red lines represent the Gaussian fit to the photoelectric peak.
Figure 7.15 - Collected charge as a function of the energy for two different applied electric fields. The dashed lines represent the first degree polynomial fit to the data considering the point at (0,0).
Figure 7.16 - Total charge for two-cluster 511 keV events as a function of the collected charge per cluster. A minimum distance cut between clusters of 1 cm was included in order to avoid pile-up events. The solid red line represents the collected charge for single-cluster 511 keV events.
Figure 7.17 - Ionization yield as a function of the drift length at 1 kV/cm. The red solid line represents the value of the ionization yield obtained from the total charge spectrum integrated over all z-values, and the red dashed line shows the 1-σ statistical uncertainty. The error bars represent the statistical uncertainties deduced from the fit and the boxes represent the systematic uncertainties.
Figure 7.18 - Energy resolution (σ/E) as a function of the drift length at 1 kV/cm. The red solid line represents the value obtained from the total charge spectrum integrated over all z-values, and the red dashed line shows the 1-σ statistical uncertainty. The error bars represent the statistical uncertainties deduced from the fit and the boxes represent the systematic uncertainties.
Figure 7.19 - Drift length distribution obtained with experimental and simulated data.
Figure 7.20 shows excellent agreement between the measured and simulated pixel multiplicity as a function of the drift length at 511 keV and 1 kV/cm, for different energy threshold levels. These results were obtained for a transverse diffusion of 230 µm × √(z [cm]), compatible with the published values [3]. The transport of electrons along the TPC was attenuated with an attenuation length of 1750 mm, similar to that measured experimentally. During data treatment the same attenuation correction was applied to both simulated and measured signals. As expected, when the γ-ray interacts close to the Frisch grid, electrons drift a short distance before being collected by the anode and thus the contribution of diffusion to charge sharing is small. The simulation also shows a remarkable coincidence of the fraction of single-pixel clusters as a function of the z-position for different threshold levels (see Figure 7.21). Close to the grid and for a pulse selection threshold of 4σ_noise, the fraction of full energy peak events collected by a single pixel is 54 %, whereas it increases to 60 % for a 10σ_noise threshold. These results indicate that charge sharing between neighboring pixels can be described by a Gaussian spread that varies with the drift distance as √(z [cm]). Other factors such as the range of the primary electrons and the transport of fluorescence X-rays can be neglected in the simulation, at distances at least 5 mm away from the Frisch grid. Close to the Frisch grid, where the lateral diffusion of the carrier electrons becomes small, the primary charge cloud size may no longer be negligible, which means that there is a non-zero probability that the electron cloud will be collected by multiple pixels, increasing the cluster multiplicity. A further simulation, where both effects are included, should be performed in the future.
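As a rough, self-contained illustration of the charge-sharing picture described above (a toy model of our own, not the simulation code behind Figures 7.20 and 7.21), the sketch below spreads a point-like deposit with a Gaussian of width σ_T = 230 µm × √(z [cm]), applies the 1750 mm attenuation length, integrates the charge over a grid of 3.125 mm pixels and counts how many pixels exceed an n·σ_noise threshold. The pixel pitch, diffusion parametrization and attenuation length follow the values quoted in the text, while the deposited charge and noise level in the usage example are purely illustrative.

```python
import numpy as np
from scipy.stats import norm

PITCH = 3.125          # pixel pitch in mm
SIGMA_T0 = 0.230       # transverse diffusion in mm per sqrt(cm) of drift
LAMBDA_ATT = 1750.0    # attenuation length in mm

def pixel_multiplicity(x0, y0, z_mm, q0, sigma_noise, n_sigma=4.0, n_pix=5):
    """Toy model: share a point-like charge deposit among an n_pix x n_pix pixel
    patch using a Gaussian transverse spread that grows as sqrt(z), apply the
    exponential attenuation, and count pixels above n_sigma * sigma_noise."""
    sigma = SIGMA_T0 * np.sqrt(z_mm / 10.0)              # spread in mm, z converted to cm
    q = q0 * np.exp(-z_mm / LAMBDA_ATT)                  # attenuated total charge
    edges = (np.arange(n_pix + 1) - n_pix / 2.0) * PITCH
    # fraction of the Gaussian cloud falling in each pixel along x and y
    fx = np.diff(norm.cdf(edges, loc=x0, scale=sigma))
    fy = np.diff(norm.cdf(edges, loc=y0, scale=sigma))
    charge_map = q * np.outer(fy, fx)                    # charge seen by each pixel
    return int(np.sum(charge_map > n_sigma * sigma_noise))

# Example: deposit at the corner of four pixels, 6 cm from the anode
# (charge and noise values in electrons are illustrative only).
print(pixel_multiplicity(x0=PITCH / 2, y0=PITCH / 2, z_mm=60.0, q0=25000.0, sigma_noise=100.0))
```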
Figure 7.20 - Comparison between the number of triggered pixels as a function of the drift length for four different threshold levels along the y-coordinate obtained with experimental and simulated data.
Figure 7.21 - Comparison between the fraction of single-pixel clusters as a function of the drift length for four different threshold levels obtained with experimental and simulated data.
Figure 7.22 - Average time difference between the pixels of the same cluster as a function of A_neighbor. Distributions were obtained for single-cluster events with a total measured energy of 511 keV.
Figure 7.23 - Time resolution as a function of the collected charge in the neighboring pixel (A_neighbor). Distributions were obtained for single-cluster events with a total measured energy of 511 keV.
Figure 7.24 - Reconstructed position along the y-coordinate using the center of gravity method for 511 keV events at 1 kV/cm and for all drift times.
Figure 7.25 - Center of gravity residuals for four different z-positions, obtained by simulation.
Figure 7.27 - Schematic diagram of the Gaussian correction between two neighboring pixels. p_right and p_left represent the centers of both pixels, n_σ is the number of sigmas of the Gaussian distribution at the inter-pixel position, and x_η is the final reconstructed position. The standard deviation of the Gaussian distribution is given by the lateral diffusion coefficient.
Figure 7.28 - Reconstructed position along the y-coordinate for 511 keV events at 1 kV/cm for experimental and simulated data. Two-pixel clusters along the y-direction are corrected using the Gaussian method.
Figure 7.29 - Residuals as a function of the reconstructed position for a simulated point-like electron cloud of 511 keV at 6 cm from the anode. Two-pixel clusters are corrected with the Gaussian method. (Left) Mean value of the residuals as a function of the reconstructed position. The error bars represent the RMS. The position y = 0 corresponds to the center of the anode, located between the two central pixels of size 3.125 × 3.125 mm².
Figure 7.30 - Mean value of the residuals as a function of the center of gravity position, obtained by simulation. The error bars represent the RMS.
Figure 7.31 - Residuals as a function of the reconstructed position for a simulated point-like electron cloud of 511 keV at 6 cm from the anode. The center of gravity position is corrected with the polynomial correction. (Left) Mean value of the residuals as a function of the reconstructed position. The error bars represent the RMS. The position y = 0 corresponds to the center of the anode, located between the two central pixels of size 3.125 × 3.125 mm².
Figure 7.32 - Residuals as a function of the reconstructed position for a simulated point-like electron cloud of 511 keV for six different initial positions.
Figure 7.34 - Comparison of the spatial resolution estimated from the RMS of the residuals for the center of gravity method and the two correction methods (Gaussian and polynomial) for 511 keV γ-rays, obtained by simulation.
Figure 7.36 - Diagram of the steps used in the event reconstruction algorithm.
Figure 7.37 - Schematic diagram of the Compton cone reconstruction process. Figure taken from [123].
Figure 7.38 - Schematic diagram of the cone-LOR intersection, where E is the emission point, A is the positron annihilation point which differs from E due to the mean free path of the positron inside the LXe, P is the projection of the emission point on the reconstructed LOR and I is the reconstructed intersection point. The distance between P and I represents the resolution along the LOR, ∆L. Figure adapted from [121].
Figure 7.39 - Angular distribution obtained for a preliminary study of the intersection LOR-Compton cone for an electric field of 0.75 kV/cm. Figure adapted from [82].
The results were obtained for a 100 LPI metallic woven Frisch grid placed 1 mm from the anode and a 12 cm long TPC. 6.8 Scintillation light amplitude for a discriminator threshold level of 5 mV. . . . . . 6.9 Scintillation light distribution as a function of the time of CFD of 511 keV events. (b) Zoom in the region of interest. The results were obtained for a 100 LPI metallic woven Frisch grid placed 0.5 mm from the anode and a 6 cm long TPC. 6.10 Experimental set-up of XEMIS1 for a 6 cm long TPC. . . . . . . . . . . . . . . 6.11 Noise signal analysis diagram. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.12 Raw signals distribution as a function of time (a) before and (b) after pulse rejection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.13 Noise distribution of the pixels 63 and 19 respectively (see Appendix B). Each pixel belong to a different IDeF-X LXe ASIC. The solid red line is a Gaussian fit. 6.14 Pedestal map obtained from the mean value of the pedestal per pixel. . . . . . . 6.15 Pedestal value per pixel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.16 Relative position of the two IDeF-X LXe chips (black and blue line) with respect to the pixels in the anode. In the right figure the location of the connectors directly wire bounded to the 32 pixels of each ASIC is illustrated. . . . . . . . . 6.17 Pedestal value per pixel for the two IDeF-X LXe ASICs. Each chip is coupled to 32 pixels of the anode. The ASICs are identified according to the configuration shows in Figure 6.16. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.18 Noise map distribution per pixel. . . . . . . . . . . . . . . . . . . . . . . . . . . 6.19 Noise distribution per pixel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.20 Raw noise distribution with no MLI insulation around the front flange of the TPC.221 6.21 Mean pedestal distribution of the pixels 63 and 19 respectively (see Appendix B).
6. 22
22 (a) σ tail σ core per pixel and (b) Mapping of the fraction of rejected pedestals per pixels obtained with no MLI insulation on the front flange of the TPC. . . . . . . . . . 6.23 Noise distribution after pedestal subtraction extracted from one random sample for all the pixels with (left) and without (right) the pixels on the borders of the anode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.24 Average baseline over the 64 pixels. . . . . . . . . . . . . . . . . . . . . . . . . . 6.25 Pedestal and noise values for the pixel 0 over several months of data-taking. . . 6.26 Phase diagram and vapor pressure curve of xenon [2]. The left side figure shows a zoom on the range of interest. . . . . . . . . . . . . . . . . . . . . . . . . . . 6.27 Evolution of the median value of the raw signals with the event number for the pixel 63 of the anode. Each event was registered over a time window of 15.36 µs. The bottom figure shows a zoom on a region of interest where two baseline perturbations are clearly visible. . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.28 Median distribution of the pixel 63. The black line is the Gaussian fit. The two black dashed lines represents the rejection interval cut obtained from the method presented in Section 6.4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.29 Evolution of the median value of the raw signals with the event number for eight pixels of the column 7 of the anode. The pixels are represented in ascending order.228 6.30 Evolution of the median value of the raw signals with the event number for eight pixels of the column 7 of the anode over a time interval of the total time window. The fluctuations of the baseline follow an ascending pattern from the bottom to the top part of the anode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.31 Evolution of the median value of the raw signals with the event number for eight pixels of the column 7 of the anode under pressurization and de-pressurization conditions respectively. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.32 Front view of the XEMIS1 TPC.The copper structure place around the electronic cards was installed to reduced the temperature of the anode and electronics thanks to a liquid nitrogen circuit. . . . . . . . . . . . . . . . . . . . . . . . . . 6.33 Baseline perturbation caused during the gain stage of the IDeF-X LXe. The left side figure is a zoom on the region of interest. . . . . . . . . . . . . . . . . . . . 6.34 Schematic diagram of the data analysis procedure. . . . . . . . . . . . . . . . . . 6.35 Value of the noise per pixel after pedestal subtraction and common noise rejection.233 6.36 511 keV peak position as a function of the channel number. . . . . . . . . . . . . 6.37 Pixel signal distribution for 511 keV events as a function of the column number. 6.38 Collected charge as a function of the energy threshold level for 511 keV events at 1 kV/cm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.39 Pixel signal distribution of the segmented anode where a deposit of 511 keV is shared between four adjacent pixels. The waveform registered by the right bottom pixels is the TTT signal. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.40 Example of a 511 keV energy deposited shared by four adjacent pixels of the anode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
6.41 Time difference between the pixels of the same cluster, for cluster with a total measured energy of 511 keV. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.42 Time difference distribution between pixels of the same cluster as a function of A neighbor . Only cluster with a total measured energy of 511 keV are included in the distribution. In the left side figure the mean value of ∆t with respect to A neighbor is presented. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.43 Example of the ∆t between pixels of the same cluster for a total charge of 511 keV and a A neighbor of 4σ noise , 10σ noise and 95σ noise respectively. . . . . . . . . . . . . 6.44 Time resolution as a function of SNR. . . . . . . . . . . . . . . . . . . . . . . . 6.45 Energy spectrum of single-cluster events for two different cluster time windows.6.46 Distribution of the number of clusters before and after cluster rejection. . . . . . 6.47 (a) Energy spectrum and (b) number of pixels per cluster for those clusters with an energy smaller than 20 keV with amplitude threshold level of 3.0σ noise . . . . . 6.48 Distribution of the rejected clusters due to the energy exclusion. . . . . . . . . . 6.49 Scintillation light amplitude as a function of the time of the CFD of 511 keV. The red dashed line represents a cut to reject uncorrelated events with the ionization charge as well as noise events. All those events outside the cut are excluded from the analysis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.1 Scatter plot of the measured charge as a function of the electron drift time for 511 keV γ-rays events at 1 kV/cm after two weeks of re-circulation. The black points represents the collected charge per slice, and the solid red line represents the fit of the collected charge to Equation 7.1. . . . . . . . . . . . . . . . . . . 7.2 Drift time distribution of 511 keV single-cluster events at 1 kV/cm. The solid red line represents the exponential fit to the distribution. . . . . . . . . . . . . . . 7.3 Beginning and end of the TPC at 1 kV/cm for single-cluster 511 keV events. The solid blue lines are the Error function fit to the drift time distribution at both edges of the chamber. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.4 Scatter plot of the normalized charge as a function of the z-position obtained by simulation using the results of CASINO (see Section 2.3.1). . . . . . . . . . . . 7.5 Electron drift velocity as a function of the applied electric field at a temperature of 168 K. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.6 X and Y multiplicities as a function of the drift length for single-cluster 511 keV events at 1 kV/cm and a 4σ noise selection threshold. The pixel size is 3.125 × 3.125 mm 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.7 Schematic drawing of the concept of multiplicity in a segmented anode. In the three illustrated examples the total multiplicity is equal to 2. In a) the multiplicity along the y-axis (Y multiplicity) is 2, while the multiplicity along the x-axis or X multiplicity is 1. On the other hand, in b) X multiplicity is equal to 2 and Y multiplicity is 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.8 Fraction of cluster with only one triggered pixel as a function of the drift length in the xy-plane for single-cluster 511 keV events at 1 kV/cm and a 4σ noise selection threshold. 
The pixel size is 3.125 × 3.125 mm 2 . . . . . . . . . . . . . . . . . . . 7.9 Pulse height spectra of 511 keV γ-rays at (a) 1 kV/cm and (b) 2.5 kV/cm. . . . 7.10 Ionization yield as a function of the electric field for 511 keV γ-rays. The red line represents the fit to Equation 2.4. . . . . . . . . . . . . . . . . . . . . . . . . . 7.11 Energy resolution as a function of the electric field for 511 keV γ-rays. The red line represents the fit to Equation 2.7. . . . . . . . . . . . . . . . . . . . . . . . 7.12 Two different views of the experimental set-up used to detect the 1274 keV γ-ray emitted by the 22 Na source. The solid red line represents the axis along the center of the TPC. The source (yellow) is centered with respect to the anode (green), while the BaF 2 and PMT are laterally shifted with respect to the central axis. 7.13 Pulse height spectra for three different γ-ray energies from the lines of 511 keV, 662 keV and 1274 keV of 22 Na and 137 Cs at 1.5 kV/cm. The red lines represent the Gaussian fit to the photoelectric peak. . . . . . . . . . . . . . . . . . . . . . 7.14 Energy resolution as a function of the energy for two different applied electric fields.261 7.15 Collected charge as a function of the energy for two different applied electric fields. The dashed lines represent the first degree polynomial fit to data considering the point at (0,0). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.16 Total charge for two-cluster 511 keV events as a function of the collected charge per clusters. A minimum distance cut between clusters of 1 cm was included in order to avoid pile-up events. The solid red line represents the collected charge for single-clusters 511 keV events. . . . . . . . . . . . . . . . . . . . . . . . . . . 7.17 Ionization yield as a function of the drift length at 1 kV/cm. The red solid line represents the value of the ionization yield obtained from the total charge spectrum integrated over all z-values, and the red dashed line shows the 1-σ statistical uncertainty. The error bars represent the statistical uncertainties deduced from the fit and the boxes represent the systematic uncertainties. . . . 7.18 Energy resolution (σ/E) as a function of the drift length at 1 kV/cm. The red solid line represents the value of the ionization yield obtained from the total charge spectrum integrated over all z-values, and the red dashed line shows the 1-σ statistical uncertainty. The error bars represent the statistical uncertainties deduced from the fit and the boxes represent the systematic uncertainties. . . . 7.19 Drift length distribution obtained with experimental and simulated data. . . . . 7.20 Comparison between the number of triggered pixels as a function of the drift length for four different threshold levels along the y-coordinate obtained with experimental and simulated data. . . . . . . . . . . . . . . . . . . . . . . . . . . 7.21 Comparison between the fraction of single-pixel clusters as a function of the drift length for four different threshold levels obtained with experimental and simulated data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.22 Average time difference between the pixels of the same cluster as a function of A neighbor . Distributions were obtained for single-cluster events with a total measured energy of 511 keV. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.23 Time resolution as a function of the collected charge in the neighboring pixel (A neighbor ). 
Distributions were obtained for single-cluster events with a total measured energy of 511 keV. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.24 Reconstructed position along the y-coordinate using the center of gravity method for 511 keV events at 1 kV/cm and for all drift times. . . . . . . . . . . . . . . . 7.25 Center of gravity residuals for four different z-positions, obtained by simulation. 7.26 Center of gravity residuals as a function of the reconstructed position for two different interaction points, obtained by simulation. . . . . . . . . . . . . . . . 7.27 Schematic diagram of the Gaussian correction between to neighboring pixels. p right and p lef t represent the center of both pixels, n σ is the number of sigmas of the Gaussian distribution at the inter-pixel position, and x η is the final reconstructed position. The standard deviation of the Gaussian distribution is given by the lateral diffusion coefficient. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.28 Reconstructed position along the y-coordinate for 511 keV events at 1 kV/cm for experimental and simulated data. Two-pixel clusters along the y-direction are corrected using the Gaussian method. . . . . . . . . . . . . . . . . . . . . . . . 7.29 Residuals as a function of the reconstructed position for a simulated point-like electron cloud of 511 keV at 6 cm from the anode. Two-pixel clusters are corrected with the Gaussian method. Left) Mean value of the residuals as a function of the reconstructed position. The error bars represent the RMS. The position y = 0 correspond to the center of the anode, located between the two central pixels of size 3.125 × 3.125 mm 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.30 Mean value of the residuals as a function of the center of gravity position, obtained by simulation. The error bars represent the RMS. . . . . . . . . . . . . . . . . . 7.31 Residuals as a function of the reconstructed position for a simulated point-like electron cloud of 511 keV at 6 cm from the anode. The center of gravity position is corrected with the polynomial correction. Left) Mean value of the residuals as a function of the reconstructed position. The error bars represent the RMS. The position y = 0 correspond to the center of the anode, located between the two central pixels of size 3.125 × 3.125 mm 2 . . . . . . . . . . . . . . . . . . . . . . . 7.32 Residuals as a function of the reconstructed position for a simulated point-like electron cloud of 511 keV for six different initial positions. . . . . . . . . . . . . 7.33 Residuals as a function of the reconstructed position for a simulated point-like electron cloud at 3 cm from the anode, and five different energies. . . . . . . .
Table 1.5 - Physical properties of the main radioisotopes used in SPECT.

  Radioisotope   Half-life    Principal emissions (keV)
  123I           13 hours     159
  201Tl          73 hours     170, 135 (gamma); 69-80 (X-rays following e- capture)
  67Ga           3.26 days    93.3, 185.6, 300
  111In          2.81 days    171.3, 245.4
Table 1.6 - Physical properties of positron-emitting radionuclides used in PET. Taken from [107].
Table 1.7 - Physical properties of some 3γ-emitting radionuclides [START_REF] Center | [END_REF].

                   14O        22Na       44gSc       82Rb            94mTc
  Half-life        70.6 s     2.6 years  3.97 hours  1.273 min       52 min
  β+ BR (%)        99.249     90.326     94.27       13.13 / 81.76   67.6
  Emax β+ (keV)    1808.24    545.7      1474.3      2601 / 3378     2439
  Eγ (keV)         2312.6     1274.5     1157        776.52          871.5, 993.19, 1522.1, 1868.7, 2740.1, 3129.1
  γ BR (%)         99.38      99.94      99.9        15.08           94.2, 2.21, 4.5, 5.7, 3.5, 2.21

  BR: Branching Ratio.
Table 1.8 - Main properties of the scandium isotopes of interest for nuclear medicine [START_REF] Duchemin | Etude de voies alternatives pour la production de radionucléides innovants pour les applications médicales[END_REF][START_REF] Walczak | Cyclotron production of 43 Sc for PET imaging[END_REF].

                  43Sc                 44gSc                  44mSc          47Sc
  Half-life       3.89 hours           3.97 hours             2.44 days      3.35 days
  Emitter         β+ (88 %)            β+ (94.27 %)           γ (98.8 %)     β- (100 %)
  Emax β (keV)    1200                 1474                   -              440.9 (68.4 %), 600.3 (31.6 %)
  Eγ (keV)        372.8 (23 %)         1157 (99.9 %)          270            159 (68 %)
  Production      43Ca(p,n),           44Ca(p,n),             44Ca(p,n),     48Ti(p,2p),
                  42Ca(d,n),           44Ca(d,2n),            44Ca(d,2n)     natCa(α,X)
                  natCa(α,Xn)43Ti      44Ti/44Sc generator
  Clinical use    diagnosis            diagnosis              diagnosis      therapy
Table 3.1 lists the different components of the TPC. A more detailed description of the light detection and charge collection systems is presented in the following subsections.

Table 3.1 - Assembly components of the XEMIS1 TPC (columns: Designation, Material, Thickness in mm, Tolerance in mm).
  Designations: Flange, Glue, Anode, Support, Frisch grid, Field rings, Lower board, Column spacer, Cathode, Screening mesh.
  Materials: Stainless steel, STYCAST, Copper, Copper, Macor ceramics, Macor ceramics, Copper, Copper.
  Thicknesses (mm): 0.5, 0.1, 2.6, 1, 0.5, 4, 4.5, 0.005, 0.005.
  Tolerances (mm): ±0.2, ±0.1, ±0.1, ±0.04, ±0.03, ±0.05, ±0.05, ±0.02, ±0.02.
Table 3.2 - Properties of the three electroformed micro-meshes used in XEMIS1 (670 LPI, 500 LPI and 70 LPI).
Figure 3.4 shows a 100 LPI woven wire mesh and a 70 LPI electroformed micro-mesh, both used in the development of XEMIS1.

Table 3.3 - Properties of the metallic woven meshes used in XEMIS1.
The technical specifications of the chip are shown in Figure 4.4.

Table 4.1 - Main properties of the IDeF-X HD-LXe chip.

  Parameter                  Value
  Chip size                  4.34 x 2.00 x 0.16 cm3
  Number of channels         32
  Technology                 0.35 µm CMOS
  Supply voltage             3.3 V
  Cf                         50 fF
  Power consumption          27 mW (800 µW/channel)
  Gain                       50, 100, 150, 200 mV/fC
  Dynamic range              1.3 MeV for LXe at 200 mV/fC
  Peaking time               0.73 to 10.73 µs (16 values)
  ENC (room temperature)     150 e- RMS [187]
  Leakage current tolerance  up to 4 nA

The 32-channel IDeF-X front-end ASIC is bonded on a 4.34 x 2.00 x 0.16 cm3 PCB (Printed Circuit Board) (see left image in Figure
Σ_{n=0}^{N-1} x²[n] = 1 to facilitate our notation and to avoid additional normalization factors.
[Figure: noise amplitude distribution in electrons (about 4.3x10^8 entries) with a Gaussian fit of mean ≈ -0.05 e- and sigma ≈ 85.4 e- (RMS = 85.44 e-).]
Table 4.2 summarizes the average noise counting rates obtained for five different thresholds. The value obtained for the zero-threshold rate is consistent with Equation (4.21) for a peaking time of 1.39 µs. The results presented in the table are compatible with the values obtained directly from Equation (4.20) for a fixed f_n0 = 365 kHz measured experimentally:

Table 4.2 - Experimental noise rate as a function of the discriminator threshold level. The last column shows the expected values from Rice's formula for an f_n0 given by the experimental data.

  n·σ_noise   Threshold   Measured rate (Hz)   Expected rate (Hz)
  1           2.27        220x10^3             222x10^3
  2           4.54        49x10^3              51x10^3
  3           6.81        4263                 4286
  4           9.08        229                  143
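Although Equation (4.20) is not reproduced here, for stationary Gaussian noise Rice's formula predicts a counting rate above a threshold of n·σ_noise of the form f(n) = f_n0 exp(-n²/2). A minimal sketch, assuming this form and taking only the quoted f_n0 = 365 kHz, reproduces the expected rates of Table 4.2 to within a few percent; the residual differences presumably come from the exact form of Equation (4.20).

```python
import math

F_N0 = 365e3   # zero-threshold noise counting rate (Hz), quoted in the text


def rice_rate(n_sigma, f0=F_N0):
    """Expected rate of upward crossings of a threshold set at n_sigma standard
    deviations, for stationary Gaussian noise (Rice's formula)."""
    return f0 * math.exp(-0.5 * n_sigma ** 2)


for n in range(1, 5):
    print(f"{n} sigma: expected rate ~ {rice_rate(n):,.0f} Hz")
# ~221 kHz, ~49 kHz, ~4.1 kHz and ~122 Hz, to be compared with the last column of Table 4.2
```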
The rise time of the signals is shortened as the drift distance decreases, and the
[Figure: average output signals (a.u.) as a function of time (µs), in linear and logarithmic scale, for the 500 LPI, 100 LPI, 70 LPI and 50.3 LPI grids with a 1 mm gap and for the 100 LPI grid with a 0.5 mm gap.]
7.2. Drift time distribution and measurement of the electron drift velocity
[Figure: drift time distribution (drift time in µs) with an exponential fit: χ²/ndf = 340.5/360, Constant = 6.159 ± 0.006, Slope = -0.06413 ± 0.00047 µs⁻¹.]
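The drift velocity itself is usually extracted from the sharp edges of the drift-time distribution, corresponding to interactions right at the anode and at the cathode, each fitted with an error function. The sketch below illustrates such an edge fit with SciPy; the drift length, the synthetic histograms and the initial guesses are placeholders and not values taken from the data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf


def rising_edge(t, t0, sigma, amp):
    """Smeared step modelling the start of the drift-time spectrum (anode side)."""
    return 0.5 * amp * (1.0 + erf((t - t0) / (np.sqrt(2.0) * sigma)))


def falling_edge(t, t0, sigma, amp):
    """Smeared step modelling the end of the drift-time spectrum (cathode side)."""
    return 0.5 * amp * (1.0 - erf((t - t0) / (np.sqrt(2.0) * sigma)))


def fit_edge(model, t, counts, guess):
    popt, _ = curve_fit(model, t, counts, p0=guess)
    return popt[0]  # edge position t0, same units as t


# illustrative numbers only (synthetic histograms, 6 cm drift length assumed)
rng = np.random.default_rng(2)
L_TPC_CM = 6.0

t_lo = np.linspace(-2.0, 2.0, 100)                  # us, around the anode edge
n_lo = rising_edge(t_lo, 0.0, 0.15, 400) + rng.poisson(5, t_lo.size)
t_anode = fit_edge(rising_edge, t_lo, n_lo, (0.0, 0.2, 400))

t_hi = np.linspace(26.0, 32.0, 100)                 # us, around the cathode edge
n_hi = falling_edge(t_hi, 29.0, 0.20, 400) + rng.poisson(5, t_hi.size)
t_cathode = fit_edge(falling_edge, t_hi, n_hi, (29.0, 0.2, 400))

v_drift = 10.0 * L_TPC_CM / (t_cathode - t_anode)   # mm/us
print(f"electron drift velocity ~ {v_drift:.2f} mm/us")
```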
Table 7.1 - Ionization yield and energy resolution of γ-rays in LXe at 1 kV/cm and 1.5 kV/cm.

  Source   Eγ (keV)   Ionization yield (1 kV/cm)   Ionization yield (1.5 kV/cm)   σ/E (1 kV/cm)    σ/E (1.5 kV/cm)
  22Na     511        0.9957 ± 0.001               1.027 ± 0.001                  4.97 ± 0.03 %    4.37 ± 0.05 %
  137Cs    662        1.324 ± 0.001                1.364 ± 0.002                  4.04 ± 0.06 %    3.49 ± 0.09 %
  22Na     1274       2.553 ± 0.005                2.624 ± 0.004                  2.92 ± 0.14 %    2.85 ± 0.10 %
7.5. Monte Carlo simulation of the response of XEMIS1 to 511 keV γ-rays
[Figure: X multiplicity as a function of the drift length (10-60 mm), simulation versus experimental data, for four threshold levels (4σ, 6σ, 8σ and 10σ).]
The rapid evolution of the various medical imaging techniques, particularly in the fields of detection instrumentation and image analysis, has marked the beginning of the 21st century. It is partly due to the important technological progress achieved by society, and to the very close links between the worlds of research and industry. In experimental physics, the investigation of the structure of matter, the origin of the universe and the fundamental laws governing the properties of nature are at the heart of scientific research. New measuring instruments are constantly being devised to tackle new observations where the limits of our knowledge are tested. Thus, many tools currently used in clinical practice have origins directly borrowed from the scientific community, around fundamental physics experiments. However, despite the very good diagnostic images currently obtained in clinical routine, technological progress keeps advancing. The increase in life expectancy and the will to keep progressing in this direction raise new challenges, in particular for the functional imaging techniques practised in nuclear medicine. The reduction of the radiation dose administered to the patient, the reduction of the exposure time of the cameras used in imaging and the need for a more personalised therapeutic follow-up are among the main drivers of future improvements. It is around these objectives that the Subatech laboratory has been proposing since 2004 the development of a new medical imaging technique, called 3-photon imaging. 3-photon imaging is based on two new concepts: the joint use of a new camera technology, a liquid xenon Compton telescope, and of a new radiopharmaceutical based on 44gSc.
Compton camera for 3γ medical imaging

Résumé

The work described in this thesis is centred on the characterisation and optimisation of a single-phase liquid xenon Compton camera for medical applications. The detector was designed to exploit the advantages of an innovative medical imaging technique called 3γ imaging. It aims at obtaining the 3D position of a radioactive source with a very high sensitivity and an important reduction of the dose administered to the patient. 3γ imaging is based on the coincidence detection of 3 gamma-ray photons emitted by a specific (β+, γ) emitter, 44Sc. A first prototype of liquid xenon Compton camera has been developed by the Subatech laboratory through the XEMIS (Xenon Medical Imaging System) project, to demonstrate the feasibility of 3γ imaging. This new detection system includes an advanced cryogenic system and very low noise front-end electronics that operate at liquid xenon temperature. This work has contributed to the characterisation of the detector response and to the optimisation of the measurement of the ionization signal. The influence of the Frisch grid on the measured signal has been studied in particular. The first proofs of Compton reconstruction using a 22Na source (β+, Eγ = 1.274 MeV) are also reported in this thesis and validate the proof of concept of the feasibility of 3γ imaging. The results presented in this thesis have played an essential role in the development of a large-scale liquid xenon Compton camera for small-animal imaging. This new detector, called XEMIS2, is now under construction.
Rate of energy loss per unit length in the medium.
Approximation of the average path length travelled by a charged particle before it slows down to rest [13].
1.2. Next generation of LXe detectors
1.3. 3γ imaging: a new medical imaging technique
Constant Fraction Discriminator
Commissariat à l'énergie atomique et aux énergies alternatives
Complementary Metal Oxide Semiconductor
The peaking time is defined as the time between the 5 % of the amplitude and the maximum.
One frequency channel k equals 10 kHz
One channel corresponds to 80 ns.
One ADC channel corresponds to 0.6 mV
Time-to-amplitude converter
The rise time is defined as the time required by the pulse to rise from 5 % to 100 % of its amplitude
Analog to Digital Converter
Chapter 6. Performance Evaluation of XEMIS1 for 511 keV γ-rays.
One time channel corresponds to 80 ns.
One ADC channel corresponds to 0.6 mV
The specific latent heat of water at 1 bar is around 2258 kJ/kg.
Chapter 7 is devoted to the presentation and discussion of the results obtained during this thesis work with XEMIS1. The results presented in this section aim to provide a complete understanding of the response of XEMIS1 to 511 keV gamma-rays. This chapter shows the detailed study of the energy resolution, the time resolution, the spatial resolution and the angular resolution with a mono-energetic beam of 511 keV gamma-rays emitted by a low-activity 22Na source. The evolution of the ionization charge yield, as well as of the energy resolution, with the applied electric field and the drift length has been analysed. A preliminary calibration of the detector response for different γ-ray energies is presented. An increase of the collected charge with the applied electric field and with the γ-ray energy was observed, as well as an improvement of the energy resolution with the electric field strength and the γ-ray energy. These effects are directly related to the fluctuations of the recombination rate along the track of the primary electrons. For an electric field of 2.5 kV/cm, an energy resolution of 4 % (σ/E) was measured. For an energy of 1274 keV and an electric field of 1.5 kV/cm, an energy resolution of 2.85 % (σ/E) was reached. A non-linear response of the collected charge as a function of energy was observed, which is all the more significant at low field strength. The electron transport properties in liquid xenon, such as the electron drift velocity and diffusion, are discussed. The results obtained on the cluster multiplicity as a function of the drift length, for 511 keV events at 1 kV/cm and for different thresholds, are consistent with the spreading of the electron cloud towards the anode due to transverse diffusion. These results indicate that the charge sharing between neighbouring pixels can be easily described by a Gaussian varying with the drift distance.
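The statement above, namely that charge sharing between neighbouring pixels follows a Gaussian whose width grows with the drift distance, can be turned into a small toy model of the X multiplicity. The sketch below is illustrative only: the transverse diffusion of roughly 100 µm per square root of centimetre of drift, the 4σ_noise threshold of 4 x 85 electrons and the W-value of 15.6 eV are indicative numbers, not fitted quantities.

```python
import numpy as np
from scipy.stats import norm

PITCH = 3.125        # mm, pixel pitch of the segmented anode
SIGMA_1CM = 100.0    # assumed transverse spread in um after 1 cm of drift (illustrative)


def x_multiplicity(x0_mm, drift_cm, q_electrons, threshold_electrons, n_pix=8):
    """Number of pixels above threshold along one coordinate for a point-like
    deposit at transverse position x0, assuming a Gaussian transverse charge
    profile of width sigma_T = SIGMA_1CM * sqrt(drift)."""
    sigma_mm = SIGMA_1CM * np.sqrt(drift_cm) * 1e-3
    edges = (np.arange(n_pix + 1) - n_pix / 2.0) * PITCH
    frac = np.diff(norm.cdf(edges, loc=x0_mm, scale=sigma_mm))
    return int(np.sum(frac * q_electrons > threshold_electrons))


q_511 = 511e3 / 15.6                  # rough number of electron-ion pairs (W ~ 15.6 eV)
rng = np.random.default_rng(1)
for z in (1.0, 3.0, 6.0):             # drift length in cm
    mult = [x_multiplicity(rng.uniform(-PITCH / 2, PITCH / 2), z, q_511, 4 * 85.0)
            for _ in range(2000)]
    print(f"z = {z} cm: mean X multiplicity ~ {np.mean(mult):.2f}")
```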
Acknowledgements
just as I am. Javi, for much more than I could write.
[Figure: average ionization signals as a function of time (µs) for different z-intervals along the drift length, from z = 1 mm up to z = 9 mm from the anode.]

This leaves the intrinsic energy resolution of LXe as the dominant component. The energy resolution was deduced from the ratio between the standard deviation and the mean of the fit. For an electric field of 1 kV/cm we obtained an energy resolution of 4.92 ± 0.03 % (σ/E). The slightly non-symmetrical behaviour at lower amplitude values can be partly attributed to events where the γ-ray was scattered before reaching the LXe. The γ-ray loses part of its energy in the scattering process and therefore produces a smaller signal in the detector. If the energy loss in the interaction is small, some of these events are included in the photoelectric peak, which irretrievably contaminates the low-energy part of the spectrum. The contribution of the scattered photons increases at lower electric fields, due to the broadened distribution, and for interactions that occur close to the anode. To partially reduce the effect of the scattered events and of the non-linear behaviour of LXe with energy (see next section), only the right part of the energy spectrum was fitted for the energy resolution determination. The collected charge increases with the applied electric field. This effect is directly related to the electron-ion recombination quenching produced by the electric field. As the electric field increases, more electrons escape recombination and leave the interaction site. Additionally, the improvement of the energy resolution with increasing electric field is clearly visible in Figure 7.11. An energy resolution of 3.86 ± 0.05 % (σ/E) is obtained at 2.5 kV/cm. The associated errors were obtained by adding in quadrature the statistical errors from the fit of the photoelectric peak, although they are too small to be shown in the figure.
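The procedure just described, fitting only the right-hand side of the photoelectric peak to limit the bias from forward-scattered events, can be sketched as follows. The exact fit window used in the analysis is not reproduced here; the half-sigma cut below, the synthetic spectrum and the seed values are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit


def gauss(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)


def fit_photopeak_right_side(bin_centres, counts, mu_seed, sigma_seed):
    """Fit a Gaussian to the photoelectric peak using only the bins above
    (mu_seed - 0.5 * sigma_seed), i.e. essentially the right-hand side of the
    peak, to reduce the contamination from scattered gamma-rays."""
    sel = bin_centres > (mu_seed - 0.5 * sigma_seed)
    p0 = (counts[sel].max(), mu_seed, sigma_seed)
    (amp, mu, sigma), cov = curve_fit(gauss, bin_centres[sel], counts[sel], p0=p0)
    resolution = abs(sigma) / mu          # sigma/E, as quoted in the text
    return mu, resolution, np.sqrt(np.diag(cov))


# usage sketch on a synthetic spectrum (collected charge in arbitrary units)
x = np.linspace(0.5, 1.5, 300)
y = gauss(x, 1000.0, 1.0, 0.05) + np.random.poisson(20, x.size)
mu, res, errs = fit_photopeak_right_side(x, y, mu_seed=1.0, sigma_seed=0.05)
print(f"peak at {mu:.3f} a.u., sigma/E = {100 * res:.2f} %")
```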
The results reported in this section are consistent with the values measured by other authors [START_REF] Ichige | Measurement of attenuation length of drifting electrons in liquid xenon[END_REF]. The value of the energy resolution for 511 keV γ-rays as a function of the drift length is presented in Figure 7.18.
Compton Imaging and Angular Resolution
To test the Compton tracking performance of XEMIS1, the TPC was irradiated with a non-collimated 22Na source placed at around 13 cm from the anode. Unlike for the 511 keV calibration, we did not use a coincidence trigger between the two back-to-back 511 keV γ-rays alone; instead, a coincidence trigger between the two 511 keV γ-rays and the 1274 keV photon was implemented. The third γ-ray must undergo at least two interactions inside the active zone of the TPC to allow the reconstruction of the source. In this section a brief summary of the 3γ reconstruction algorithm is given, and the experimental results obtained with the 22Na source at an electric field of 0.75 kV/cm are presented.
Compton Sequence Reconstruction
The goal of the reconstruction algorithm is to transform clusters into LORs or cones in order to triangulate the position of the source in 3D. The intersection between a Compton cone and a LOR makes it possible to localize a single decay of the radioactive source. These are the basics of the proposed 3γ imaging technique. Every step of the reconstruction algorithm is illustrated in Figure 7.36. The code summarized in this section was originally developed for simulation purposes, but it has already been successfully tested with experimental data [START_REF] Manzano | XEMIS: A liquid xenon detector for medical imaging[END_REF][START_REF] Hadi | Simulation de l'imagerie à 3γ avec un télescope Compton au xénon liquide[END_REF]. Step 3 is not necessary in XEMIS1 since only one γ-ray interacts in the active zone of the TPC.
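The geometric step at the heart of this triangulation, intersecting the Compton cone built from the first two interactions of the 1274 keV γ-ray with the LOR defined by the two 511 keV photons, can be sketched as below. This is not the thesis code: the example geometry, the deposited energy and the handling of the two cone nappes are illustrative assumptions.

```python
import numpy as np

ME_C2 = 511.0  # electron rest mass energy in keV


def compton_cos_theta(e_incident, e_deposited):
    """Scattering angle of the incident gamma-ray from the energy deposited at
    the first interaction (standard Compton kinematics); no physical-range
    check is performed in this sketch."""
    e_scattered = e_incident - e_deposited
    return 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e_incident)


def cone_lor_intersections(apex, axis, cos_theta, lor_point, lor_dir):
    """Intersect the Compton cone (apex, axis, half-angle theta) with the LOR
    parametrised as lor_point + t * lor_dir. The two nappes of the cone are
    not disambiguated here."""
    a = axis / np.linalg.norm(axis)
    u = lor_dir / np.linalg.norm(lor_dir)
    w = lor_point - apex
    c2 = cos_theta ** 2
    A = np.dot(u, a) ** 2 - c2
    B = 2.0 * (np.dot(w, a) * np.dot(u, a) - c2 * np.dot(w, u))
    C = np.dot(w, a) ** 2 - c2 * np.dot(w, w)
    disc = B * B - 4.0 * A * C
    if disc < 0.0 or abs(A) < 1e-12:
        return []
    ts = [(-B - np.sqrt(disc)) / (2.0 * A), (-B + np.sqrt(disc)) / (2.0 * A)]
    return [lor_point + t * u for t in ts]


# usage sketch with made-up geometry (positions in cm, energies in keV)
apex = np.array([0.0, 0.0, 3.0])            # first interaction of the 1274 keV gamma
axis = np.array([0.2, 0.1, 1.0])            # towards the second interaction
cos_t = compton_cos_theta(1274.5, 350.0)    # 350 keV deposited: hypothetical value
lor_p = np.array([0.0, 0.0, -10.0])         # a point on the 511 keV LOR
lor_d = np.array([1.0, 0.0, 0.0])
for p in cone_lor_intersections(apex, axis, cos_t, lor_p, lor_d):
    print("candidate source position:", p)
```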
Appendix A

Noise counting rate simulation
A Monte Carlo simulation of the noise counting rate has been implemented in order to cross-check the accuracy of the method reported in Section 4.4.1 to simulate the noise from experimental data. In order to take into account the electronic limitations, a second threshold on the trailing edge of the signals was included. Figure A.2 shows the noise rate versus an asymmetric threshold pair, with a leading-edge threshold at nσ_noise and a trailing-edge threshold at (n - 0.4)σ_noise. The consistency with the experimental results is preserved (Table A
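A minimal version of such a Monte Carlo is sketched below. The noise is generated here as white Gaussian samples shaped by a crude moving average, which is only a stand-in for the noise model of Section 4.4.1; the 80 ns sampling period is taken from the footnotes, while the shaping window is an arbitrary assumption.

```python
import numpy as np


def count_crossings(trace, sigma, n, hysteresis=0.4):
    """Count noise 'hits': a hit opens when the trace rises above n*sigma and
    closes only once it falls back below (n - hysteresis)*sigma, mimicking the
    leading/trailing-edge discriminator pair described in the text."""
    high, low = n * sigma, (n - hysteresis) * sigma
    counts, armed = 0, True
    for s in trace:
        if armed and s > high:
            counts += 1
            armed = False
        elif not armed and s < low:
            armed = True
    return counts


rng = np.random.default_rng(0)
dt = 80e-9                                   # sampling period (80 ns per channel)
n_samples = 2_000_000
raw = rng.normal(0.0, 1.0, n_samples)
shaped = np.convolve(raw, np.ones(16) / 16.0, mode="same")  # crude band limitation
sigma = shaped.std()

duration = n_samples * dt
for n in (1, 2, 3, 4):
    rate = count_crossings(shaped, sigma, n) / duration
    print(f"threshold {n} sigma: {rate:,.0f} Hz")
```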
Figure B.1 shows the complete mapping of XEMIS1. The anode is segmented into 64 pixels of 3.125 × 3.125 mm2, which gives a total detection area of 2.5 × 2.5 cm2. The white region depicts the active region. The numbers represent the pixel address, used to access a specific electronic channel. The reference system (0,0) was chosen to be at the center of the anode. The bottom-left pixel is used to register the TTT signal.
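The pixel-address-to-position conversion implied by this mapping can be sketched as follows. Only the 3.125 mm pitch, the 8 x 8 layout and the centred reference system are taken from the text; the row-major ordering of the addresses is an assumption made for illustration.

```python
PITCH = 3.125      # mm, pixel size quoted in the text
N_SIDE = 8         # 8 x 8 pixels -> 2.5 x 2.5 cm2 anode


def pixel_centre(address):
    """Return the (x, y) centre in mm of a pixel, with (0, 0) at the anode
    centre. The address is assumed to run row by row from the bottom-left pixel."""
    col = address % N_SIDE
    row = address // N_SIDE
    x = (col - (N_SIDE - 1) / 2.0) * PITCH
    y = (row - (N_SIDE - 1) / 2.0) * PITCH
    return x, y


# e.g. the four central pixels sit at (+-1.5625, +-1.5625) mm:
for addr in (27, 28, 35, 36):
    print(addr, pixel_centre(addr))
```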
List of Abbreviations | 765,568 | [
"781358"
] | [
"128",
"487064"
] |
01480149 | en | [
"sdu"
] | 2024/03/04 23:41:46 | 2016 | https://theses.hal.science/tel-01480149/file/2016PA066356.pdf | Keywords: Evolution of galaxies, Secular dynamics, Gravitation, Diffusion, Kinetic theory
Understanding the long-term evolution of self-gravitating astrophysical systems, such as for example stellar discs, is now a subject of renewed interest, motivated by the combination of two factors. On the one hand, we now have at our disposal the well established ΛCDM model for the formation of structures. When considered on galactic scales, depending on the nature of the accretion processes, interactions with the circum-galactic environment, may either be constructive (e.g., adiabatic gas accretion) or destructive (e.g., satellite infall). The statistical impacts of these cosmic perturbations on self-gravitating systems are now being quantified in detail. On the other hand, recent theoretical works now provide a precise description of the amplification of external disturbances and discreteness noise as well as their effects on a system's orbital structure over cosmic time, while properly accounting for the effect of self-gravity. These theories offer new physical insights on the dynamical processes at play in these self-gravitating systems on secular timescales.
These two complementary developments now allow us to address the pressing question of the respective roles of nature vs. nurture in the establishment of the observed properties of self-gravitating systems. Numerous dynamical challenges are therefore ready to be re-examined in much greater detail than before. Examples include: the secular evolution of the metallicity dispersion relationship in galactic discs, the mechanisms of disc thickening via giant molecular clouds or spiral waves, the stellar dynamical evolution of galactic centres, etc. Characterising the secular evolution of such self-gravitating systems is a stimulating task, as it requires intricate theoretical models, complex numerical experiments, and an accurate understanding of the involved physical processes.
The purpose of the present thesis is to describe such secular dynamics in contexts where self-gravity is deemed important. Two frameworks of diffusion, either external or internal, will be presented in detail. These approaches will be applied to various astrophysical systems to illustrate the particular relevance and ability of these approaches to describe the long-term evolution of self-gravitating systems. This thesis will first investigate the secular evolution of discrete razor-thin stellar discs and recover the formation of narrow ridges of resonant orbits in agreement with observations and numerical simulations, thanks to the first implementation of the Balescu-Lenard equation. The spontaneous thickening of stellar discs as a result of Poisson shot noise will also be investigated. These various approaches allow in particular for a self-consistent description of stellar migration and disc thickening. Finally, we will illustrate how the same formalisms allow us to describe the dynamics of stars orbiting a central super massive black hole in galactic centres. Other processes of secular orbital restructuration will be discussed in less details.
Résumé
La description de l'évolution à long-terme des systèmes astrophysiques auto-gravitants tels que les disques stellaires, fait aujourd'hui l'objet d'un regain d'intérêt sous l'impulsion de deux développements récents. Cela repose tout d'abord sur le succès de la théorie ΛCDM pour décrire la formation des grandes structures. A l'échelle des galaxies, les interactions avec le milieu circum-galactique peuvent, selon la nature du processus d'accrétion, être constructives (par exemple via l'accrétion adiabatique de gaz) ou destructives (par exemple via l'interaction avec un satellite). Ce nouveau paradigme permet ainsi de quantifier en détail l'impact statistique de ces perturbations cosmiques sur les systèmes autogravitants. En outre, de récents développements théoriques permettent maintenant de décrire précisément l'amplification des perturbations extérieures ou internes (bruit de Poisson) et des effets qu'elles peuvent avoir sur la structure orbitale d'un système sur les temps cosmiques, tout en considérant les effets associés à l'auto-gravité. Ces nouvelles théories offrent de nouvelles clés pour comprendre les processus dynamiques à l'oeuvre dans ces systèmes auto-gravitants sur les temps séculaires.
Ces récents progrès complémentaires nous permettent d'aborder la question lancinante des rôles respectifs de l'inné et de l'acquis sur les propriétés observées des systèmes auto-gravitants. De nombreuses énigmes astrophysiques peuvent maintenant être reconsidérées dans de plus amples détails. Les exemples ne manquent pas : l'évolution séculaire de la dispersion en métallicité dans les disques stellaires, les mécanismes d'épaississement des disques stellaires sous l'effet des nuages moléculaires ou des ondes spirales, la dynamique séculaire des centres galactiques, etc. Caractériser l'évolution séculaire de tels systèmes auto-gravitants est un exercice stimulant qui demande de subtils modèles théoriques, de complexes expériences numériques, mais également une compréhension précise des processus physiques impliqués.
Cette thèse est consacrée à la description de ces dynamiques séculaires, notamment dans les situations pour lesquelles l'auto-gravité joue un rôle important. Deux formalismes de diffusion, externe et interne, seront présentés en détail. Ces deux approches seront appliquées à trois problèmes astrophysiques, pour illustrer leur pertinence et abilité à décrire l'évolution à long-terme de systèmes autogravitants. Dans un premier temps, nous nous pencherons sur le cas des disques stellaires discrets infiniment fins, et retrouverons la formation d'étroites arêtes d'orbites résonantes en accord avec les observations et les simulations numériques, par le biais de la première mise en oeuvre de l'équation de Balescu-Lenard. Nous considérerons ensuite dans ce même cadre le mécanisme d'épaississement spontané des disques stellaires sous l'effet du bruit de Poisson. Ces différentes approches permettent en particulier de décrire de manière cohérente la migration radiale des étoiles et l'épaississement des disques galactiques. Enfin, nous illustrerons comment les mêmes formalismes permettent également de décrire la dynamique des étoiles orbitant un trou noir supermassif dans les centres galactiques. D'autres processus de restructuration orbitale seront discutés plus brièvement.
Mots-clés: Evolution des galaxies, Dynamique séculaire, Gravitation, Diffusion, Théorie cinétique.
Table of contents
Résumé
Abstract Introduction 1.1 Context
The current paradigm for the formation of astrophysical structures is the Lambda Cold Dark Matter (ΛCDM) model (Springel et al., 2006). Initial quantum density fluctuations (Bardeen et al., 1986) in the non-baryonic dark matter appear right after the Big Bang and get stretched by the expansion of the universe. On the other hand, gravity drives a hierarchical clustering which leads to a strong increase of initial overdensities. These densities grow, separate, collapse and give birth to galaxy haloes. These structures form on the large scale a "cosmic web" of nodes, filaments, walls and voids (see, e.g., Frenk & White, 2012, for a review), as can be seen in the top row of figure 1.1.1.

Figure 1.1.1: Snapshot extracted from the Horizon-AGN cosmological hydrodynamical simulation (Dubois et al., 2014). It illustrates the dark matter density (top row) and stellar density (bottom row) centred on a massive halo at redshift z = 1.2 for various scales. One notes the formation of large scale structures (the "cosmic web") via the hierarchical clustering of primordial quantum fluctuations, stretched by the expansion of the universe and increased in contrast by self-gravity.

At a later stage in the evolution
of the universe, "dark energy", which now accounts for approximately 70% of the total energy content of the universe (Planck Collaboration et al., 2014), comes into play to cause the originally slowing universal expansion to reaccelerate (Riess et al., 1998;Perlmutter et al., 1999). This late reacceleration tends to isolate even more the different regions of the clustering hierarchy, reducing the later merging rate of haloes, leading to a more dynamically quiescent period.
Contrary to dark matter, baryonic matter, which only makes up approximately 15% of the matter content of the universe, is not a collisionless fluid and therefore undergoes shocks as large scale structures develop. Indeed, baryons are accreted along the structures formed by the dark matter, cool within the haloes, and form stars (White & Rees, 1978). This essentially leads to the formation of gravitationally bound objects, the galaxies. From such mechanisms, one expects principally two types of galaxies. Spirals form when haloes accrete gas that dissipates and leads to the formation of discs. On the other hand, ellipticals are mainly expected to form as results of collisions and mergers triggering AGN feedback that prevents further gas accretion [START_REF] Toomre | [END_REF]Toomre, 1977a;Barnes & Hernquist, 1992;Dubois et al., 2016). Figure 1.1.2 illustrates two examples of galactic morphologies observed in the local universe. This general paradigm of formation is widely accepted as broadly correct, but some of its pre- dictions still appear as inconsistent with the observations (Silk & Mamon, 2012;[START_REF] Kroupa | [END_REF]. See for example Appendix 4.D for a description of one of these tensions through the so-called cusp-core problem. One important outcome of these developments is that galaxies are not isolated islands distributed randomly in the universe, but rather follow and interact with the intricate cosmic web network (Bond et al., 1996;Pichon et al., 2011).
Galaxies are therefore complex structures at the interface of two scales: the large scale structure (the so-called intergalactic medium) and the small scale of their internal constituents, the interstellar medium and the stars. Galaxies are also at the interface of both destructive and constructive processes. Indeed, gas accretion onto a galaxy can either be smooth and constructive as with cold gas flows, or abrupt, violent and destructive as with mergers or satellite infalls with inconsistent impact parameters. In the former case, the very formation process sets the forming stars in a very specific configuration, with a large reservoir of free energy. Later, galaxies are subject to both stabilising and destabilising influences. The constant resupply of new stars from the quasi-circular gas orbits makes disc galaxies dynamically colder and more responsive, while a wide variety of heating mechanisms tend to increase the velocity dispersion of the stars, making galaxies dynamically hotter and less responsive. Finally, galaxies, because they were formed from cold gas, are originally created in a highly improbable state, i.e. a low entropy state of low velocity dispersion. These (thermodynamically improbable) states are maintained by symmetry, given that their initially axially symmetric distributions do not allow for efficient angular momentum exchange. The dynamics of the system will aim at leaving these metastable states (quasisteady on short timescales) towards more probable states of higher entropy. Understanding the longterm evolution of such self-gravitating systems requires to consider the joint contributions associated with on the one hand external effects, the cosmic environment, and on the other hand associated with internal effects, such as the system's own graininess and internal structure (e.g., bars or giant molecular clouds). Getting a better grasp at the secular dynamics of these systems involves therefore ranking the strengths of each of these sources of evolution, i.e. quantifying the effects of nurture and nature on their evolution, and weighing their respective efficiency.
The general purpose of the present thesis is to describe and understand the secular evolution of
self-gravitating systems. See [START_REF] Kormendy | Secular Evolution in Disk Galaxies[END_REF]; Binney (2013b); Sellwood (2014) for detailed reviews on recents developments in this respect. This thesis also aims at incrementing our theoretical knowledge of the self-interaction of self-gravitating systems, while providing explicit solutions to the recently published corresponding kinetic equations. Before entering the core of the thesis, let us first briefly describe the structure of disc galaxies and their associated dynamical components (section 1.2), as well as the tools and mechanisms from Hamiltonian dynamics essential to characterise the evolution of such systems (section 1.3).
Stellar discs
Figure 1.2.1: Qualitative illustration of the main dynamical components of a spiral galaxy. Most of the galaxies are expected to contain a central super massive black hole (see chapter 6 for a detailed discussion of the secular dynamics of stars in the vicinity of such objects). The central region of the galaxy takes the form of a spherical component, the bulge. At larger radii, the stellar disc is roughly made of two distinct components, namely a thin disc of stars (see chapters 3 and 4 for a discussion of the secular dynamics of razor-thin stellar discs), and a thick disc of stars (see chapter 5 for a discussion of the secular dynamics of thickened stellar discs). Finally, the disc is embedded in a spheroidal dark matter halo (see Appendix 4.D for a brief illustration of how the dynamics of such spherical systems may be described).

Following [START_REF] Binney | Galactic Dynamics: Second Edition[END_REF], let us now review the important orders of magnitude for a spiral galaxy such as the Milky Way. As illustrated in figure 1.2.1, the Milky Way is comprised of various components. First, the stellar component of the Milky Way is made of about 10^11 stars for a total mass of
order 5×10 10 M . Most of the stars belong to a disc of approximate radius 10 kpc. The Sun is located near its midplane at a radius of 8 kpc. Observations have indicated that the stellar disc may be constituted of two components, a thin and thick discs, of respective typical thickness 300 pc and 1 kpc. The thick disc is made of older stars with different chemical compositions and its luminosity is about 7% that of the thin disc. Stellar discs are said to be dynamically cold, as the random velocities of their constituents are much smaller than the mean ordered velocity, i.e. the mean quasi-circular motion. Chapters 3 and 4 will especially consider the secular dynamics of razor-thin stellar discs, while chapter 5 will investigate possible mechanisms of secular thickening of stellar discs. While this is not illustrated in figure 1.2.1, let us also note that stellar discs also contain gas, atomic and molecular hydrogen and helium, forming the interstellar medium (ISM). The ISM only makes up about 10% of the total stellar mass, and is therefore of little importance for the dynamics of the Milky Way. However, the transient giant molecular clouds, dense gas regions, remain important for the dynamics of a galaxy as they are the birth place of new stars. They impose as well the chemistry of the newly formed stars. In the centre of the disc, one finds an amorphous component, the bulge, of approximate mass 0.5×10 10 M . Contrary to the disc, the bulge is a dynamically hot region, where random velocities are larger than the mean velocity. Let us finally note that the Milky Way's bulge, being triaxial, is sometimes called a bar. The secular effects associated with bars will only be briefly discused in chapter 5. At the centre of these regions is located a super massive black hole of approximate mass 4×10 6 M , called Sgr A * . See chapter 6 for a description of the secular dynamics of stars near super massive black holes. Finally, the most massive component of the Milky Way is its surrounding dark matter halo, with an approximate radius of 200 kpc and approximate mass of 10 12 M . In the context of galactic dynamics, the halo mainly only interacts with the stellar component through the joint gravitational potential they define. See Appendix 4.D for a brief illustration of how to describe the secular dynamics of dark matter haloes.
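As a quick consistency check of these orders of magnitude, one may estimate the circular velocity at the solar radius from the mass enclosed within it. The sketch below crudely assumes a spherical mass distribution and an enclosed mass of about 10^11 solar masses (stars plus inner halo), both of which are rough assumptions made only for illustration.

```python
G = 6.674e-11            # m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
KPC = 3.086e19           # m


def circular_velocity(mass_enclosed_msun, radius_kpc):
    """Crude spherical estimate v_c = sqrt(G M(<r) / r), returned in km/s."""
    return (G * mass_enclosed_msun * M_SUN / (radius_kpc * KPC)) ** 0.5 / 1e3


# ~1e11 Msun inside the solar radius of 8 kpc gives v_c of order 230 km/s,
# comparable to the measured rotation speed of the Milky Way disc.
print(f"v_c ~ {circular_velocity(1e11, 8.0):.0f} km/s")
```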
Hamiltonian Dynamics
Let us now present a short introduction to Hamiltonian dynamics, with a particular emphasis on the tools and processes essential for the secular evolution of self-gravitating systems. We refer the reader to [START_REF] Goldstein | Classical mechanics[END_REF]; [START_REF] Arnold | Mathematical methods of classical mechanics[END_REF]; [START_REF] Binney | Galactic Dynamics: Second Edition[END_REF] for thorough presentations of Hamiltonian dynamics.
An n-dimensional dynamical system can be described by its Hamiltonian H expressed as a function of the canonical coordinates (q, p). These coordinates follow Hamilton's equations, which read
dq dt = ∂H ∂p ; dp dt = - ∂H ∂q .
(1.1)
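As a brief aside (not part of the original derivation), the following minimal Python sketch integrates Hamilton's equations (1.1) for a simple pendulum, H(q, p) = p²/2 - cos q, using a symplectic leapfrog scheme; the potential and numerical parameters are arbitrary illustrative choices.

```python
import numpy as np

def leapfrog(q, p, dVdq, dt, n_steps):
    """Integrate dq/dt = p, dp/dt = -dV/dq with a kick-drift-kick scheme."""
    traj = np.empty((n_steps + 1, 2))
    traj[0] = q, p
    for i in range(n_steps):
        p -= 0.5 * dt * dVdq(q)   # half kick
        q += dt * p               # drift
        p -= 0.5 * dt * dVdq(q)   # half kick
        traj[i + 1] = q, p
    return traj

# Pendulum: H(q, p) = p**2 / 2 - cos(q), so dV/dq = sin(q)
traj = leapfrog(q=1.0, p=0.0, dVdq=np.sin, dt=0.01, n_steps=10_000)
energy = 0.5 * traj[:, 1]**2 - np.cos(traj[:, 0])
print("relative energy drift:", np.ptp(energy) / abs(energy[0]))
```

The near-conservation of H over many periods is the numerical counterpart of the exact conservation of the energy for time-independent Hamiltonians discussed below.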
We define the configuration space of a system as the n-dimensional space with coordinates (q 1 , ..., q n ), and the associated momentum space (p 1 , ..., p n ). More importantly, we define as phase space the 2n-dimensional space with coordinates (q 1 , ..., q n , p 1 , ..., p n ) = (q, p) = w.
Let us consider two scalar functions F 1 (w) and F 2 (w) depending on the phase space coordinates. We define their Poisson bracket as
F 1 , F 2 = ∂F 1 ∂q • ∂F 2 ∂p - ∂F 1 ∂p • ∂F 2 ∂q .
(1.2)
Thanks to this notation, Hamilton's equation can be written as dw dt = w, H .
(1.3)
In addition, the phase space coordinates satisfy the canonical commutation relations, namely [w_α , w_β] = J_{αβ} with J = ( 0  I ; -I  0 ) , (1.4)
where we introduced the 2n×2n symplectic matrix J, and where 0 and I are respectively the n×n zero and identity matrix.
Because Hamiltonian dynamics describes the system's dynamics in phase space, it allows for generalised changes of coordinates. Phase space coordinates W = (Q, P) are said to be canonical if they satisfy [W_α , W_β] = J_{αβ} .
(1.5)
The essential property of canonical coordinates is that Hamilton's equations retain the same form in any such coordinates. One has Ẇ = [W , H], where the Hamiltonian is expressed as a function of the new coordinates and the Poisson bracket involves derivatives w.r.t. the new coordinates as well. Let us also note that infinitesimal phase space volumes are conserved by canonical transformations, so that dW = dw. Poisson brackets are also conserved through canonical transformations.
We define an integral of motion I(w) to be any function of the phase space coordinates constant along the orbits. It is said to be isolating if for any value in the image of I, the region of phase space which reaches this value is a smooth manifold of dimension 2n-1. For example, for Hamiltonians independent of time, the energy constitutes an isolating integral of motion. A system is said to be integrable (in the Liouville sense) if it possesses n independent integrals of motion, i.e. integrals whose differentials are linearly independent at all points. For such integrable systems, one may then devise a set of canonical coordinates, the angle-action coordinates (θ, J), such that the actions J are independent isolating integrals of motion. Within these coordinates, the Hamiltonian H becomes independent of the angles θ, so that H = H(J). Hamilton's equations then read dθ/dt = ∂H/∂J = Ω(J) ; dJ/dt = 0 , (1.6)
where we introduced as Ω(J ) = ∂H/∂J the intrinsic frequencies of motion. In these coordinates, the motions are straight lines given by θ = θ 0 +Ω(J ) t ; J = cst.
(1.7)
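For concreteness, the toy sketch below (our own illustration, not drawn from the text) spells out the angle-action mapping for a 1D harmonic oscillator H = p²/2 + ω²q²/2, the example pictured in figure 1.3.1; along the exact motion the action stays constant while the angle advances at the rate Ω = ∂H/∂J = ω.

```python
import numpy as np

omega = 1.3  # intrinsic frequency (arbitrary choice)

def to_angle_action(q, p):
    """Map (q, p) -> (theta, J) for H = p**2/2 + omega**2 q**2 / 2."""
    J = (p**2 + (omega * q)**2) / (2.0 * omega)
    theta = np.arctan2(omega * q, p)
    return theta, J

def from_angle_action(theta, J):
    q = np.sqrt(2.0 * J / omega) * np.sin(theta)
    p = np.sqrt(2.0 * J * omega) * np.cos(theta)
    return q, p

# Along the exact motion, J stays fixed and theta advances linearly: theta(t) = theta0 + omega t
t = np.linspace(0.0, 20.0, 501)
theta0, J0 = to_angle_action(q=0.7, p=-0.2)
q_t, p_t = from_angle_action(theta0 + omega * t, J0)
_, J_t = to_angle_action(q_t, p_t)
print("max |J(t) - J0| =", np.max(np.abs(J_t - J0)))   # ~ machine precision
```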
An additional property here is that the angles θ are assumed to be 2π-periodic, so that the actions J describe an n-dimensional torus in phase space on which the orbit lies. This is the crucial strength of the angle-action coordinates, which formally allows for a simple description of the complex trajectories in the physical phase space (q, p) as straight-line motions in the angle-action space (θ, J). Unfortunately, angle-action coordinates are not always guaranteed to exist. In addition, even for integrable systems, simple analytical expressions for these coordinates are rarely available. In the upcoming chapters, we will illustrate examples of angle-action coordinates for razor-thin and thickened axisymmetric discs, 3D spherical systems, and Keplerian systems. Figure 1.3.1 offers a visualisation of angle-action coordinates for 1D harmonic oscillators. This is an important example when applying the epicyclic approximation in chapters 3 and 5. As emphasised in equation (1.7), once the angle-action coordinates have been constructed, individual motions then take the form of quasiperiodic motions along the tori defined by the actions J. In figure 1.3.2, we illustrate two possible behaviours for the motion along this torus. These are resonant periodic motions or non-resonant quasiperiodic motions, depending on the properties of the intrinsic frequencies Ω.

Figure 1.3.1: Illustration of angle-action coordinates for 1D harmonic oscillators. Left panel: particles' trajectories in the physical phase space (x, v). The trajectories take the form of concentric circles along which particles move. Here, the action J should be seen as a label for the circle, while the angle θ should be seen as the position along the circle. Right panel: Illustration of the trajectories in the angle-action space (θ, J). In these coordinates, the motions are straight lines. The action J is conserved, while the angle θ evolves linearly with time with the frequency Ω = ∂H/∂J.
Let us now assume that our system can be described statistically by a distribution function (DF) F (w). Let us then present the differential equation satisfied by F as a consequence of the individual evolutions imposed by Hamilton's equation (1.3). As the DF evolves, probability must be conserved, so that the DF satisfies a continuity equation in phase space given by
∂F/∂t + ∂/∂w • (F ẇ) = 0 . (1.8)

Figure 1.3.2: Illustration of two integrable trajectories in angle space. An integrable trajectory is fully characterised by its actions J, while the position of the particle along its orbit is described by the angles θ. Along the unperturbed motion, the actions are conserved, while the angles evolve linearly with time with the frequency Ω. Left panel: Illustration of a degenerate trajectory for which there exists n ∈ Z² such that n•Ω = 0, i.e. the frequencies are in a rational ratio. The trajectory is closed, periodic, and does not fill the angle space (see chapter 6 for an illustration of how to study the secular evolution of degenerate systems). Right panel: Illustration of a non-degenerate trajectory, for which the trajectory is quasiperiodic and densely covers the angle domain.
Using Hamilton's equation (1.3), this can equivalently be rewritten as
0 = ∂F ∂t + ẇ• ∂F ∂w = ∂F ∂t + F, H = ∂F ∂t + ∂F ∂q • ∂H ∂p - ∂F ∂p • ∂H ∂q = dF dt , (1.9)
where we introduced as dF/dt the rate of change of the local probability density along the motion. Equation (1.9) has numerous names and depending on the context can be referred to as Liouville's equation (when considering the full N-body DF of a system of N particles), collisionless Boltzmann equation (when restricted to a DF depending on only one particle's coordinates), or Vlasov equation (when accounting for the self-consistency of the system's potential). See Hénon (1982) for a historical account of these various names. Equation (1.9) essentially captures the conservation of the system's probability during its diffusion. Equation (1.9) becomes particularly simple when the system admits angle-action coordinates. It then reads ∂F/∂t + Ω•∂F/∂θ = 0 .
(1.10)
With such a rewriting, one can note that steady states of the collisionless Boltzmann equation are reached by DFs such that F = F(J). This is Jeans theorem (Jeans, 1915). These steady states are of particular importance for self-gravitating systems. Indeed, they are very efficiently reached thanks to two complementary dynamical mechanisms. The first mechanism is phase mixing and is illustrated in figures 1.3.3 and 1.3.4. This mixing mechanism relies on the fact that any dependence of the intrinsic frequencies Ω with the actions J introduces a shearing and dephasing in the angle coordinates. This leads to the appearance of ever finer structures in the system's DF, which, when coarse grained, converges to a steady state F = F(J) independent of the angles. The second mechanism is the one of violent relaxation (Lynden-Bell, 1967) illustrated in figure 1.3.5. This occurs for self-gravitating systems initially far from equilibrium. Such systems undergo a phase of violent and abrupt potential oscillations, during which the energy of individual particles is redistributed. This allows the system to reach very efficiently a steady state in a few dynamical times. These two processes motivate the use of the orbit-averaged approximation in chapter 2. Secular dynamics can then mostly be seen as a slow evolution along quasi-stationary collisionless equilibria given by Jeans theorem.
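A compact numerical illustration of phase mixing (a sketch of our own, with an arbitrary decreasing frequency profile Ω(J) = 1/J) is given below: a population that starts perfectly phase-aligned loses its coherence in the angles as the frequencies shear it apart, so that the coarse-grained DF tends towards a function of the actions only.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000
J = rng.uniform(0.5, 1.5, N)          # spread of actions
Omega = 1.0 / J                        # toy decreasing frequency profile Omega(J)
theta0 = np.zeros(N)                   # initially phase-aligned population

for t in [0.0, 5.0, 20.0, 100.0]:
    theta = (theta0 + Omega * t) % (2.0 * np.pi)
    coherence = np.abs(np.mean(np.exp(1j * theta)))
    print(f"t = {t:6.1f}   |<exp(i theta)>| = {coherence:.3f}")
# The coherence drops towards ~1/sqrt(N): the coarse-grained DF forgets the angles.
```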
Figure 1.3.3: Inspired from figure 4.27 of Binney & Tremaine (Galactic Dynamics: Second Edition). Illustration of phase mixing undergone by a population of anharmonic oscillators (see left panel of figure 1.3.1). Each particle follows a circular trajectory in phase space, but the intrinsic frequency of motion decreases with the size of the circles. Because of this shearing in frequency, the particles dephase (left panel), leading to the appearance of ever finer structures in phase space (right panel). This is phase mixing. When coarse grained, these fine structures are washed out and the system reaches a quasi-stationary mixed state.

Let us now discuss one final important physical process occurring in self-gravitating systems as a result of their ability to amplify and respond to perturbations. This is the mechanism of dynamical friction, first introduced in the seminal work from Chandrasekhar (1943a). This is illustrated in figure 1.3.6. See Nelson & Tremaine (1999) for a review. Let us consider a test mass travelling through a "sea" of background particles assumed to be infinite and homogeneous. Because of their interaction with the test mass, the background particles tend to accumulate behind the test mass, forming a gravitational wake, the polarisation cloud. Because it is located behind the test mass, this wake induces a drag force on the test mass. This is the dynamical friction. In addition, one can also note that along its motion, the test mass appears as dressed by its polarisation cloud. Collective effects, i.e. the fact that the system is self-gravitating, therefore lead to an increase of the effective mass of the test particle. A self-gravitating system can therefore (strongly) amplify perturbations. Collisions between dressed particles can have a qualitatively different outcome than collisions between bare ones (Weinberg, 1998). This dressing is important in particular in cold dynamical systems such as stellar discs, see chapter 4.
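To give orders of magnitude, the sketch below evaluates the classical Chandrasekhar friction formula for a test mass moving through a homogeneous Maxwellian background (standard textbook expression; all numerical values are purely illustrative).

```python
import numpy as np
from scipy.special import erf

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def chandrasekhar_drag(M, v, rho, sigma, lnLambda):
    """Magnitude of the deceleration of a mass M moving at speed v through a
    homogeneous Maxwellian background of density rho and dispersion sigma
    (classical Chandrasekhar dynamical friction formula)."""
    X = v / (np.sqrt(2.0) * sigma)
    return (4.0 * np.pi * G**2 * M * rho * lnLambda / v**2
            * (erf(X) - 2.0 * X / np.sqrt(np.pi) * np.exp(-X**2)))

# Illustrative numbers only: a 1e9 Msun satellite in a halo-like background
drag = chandrasekhar_drag(M=1e9, v=200.0, rho=1e7, sigma=150.0, lnLambda=5.0)
print("deceleration ~", drag, "(km/s)^2 / kpc")
```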
After having briefly laid the required elements of Hamiltonian dynamics needed to address the secular dynamics of self-gravitating systems, chapter 2 will rely on these remarks to present the formalisms appropriate to describe their long-term evolution.
Overview
This thesis discusses approaches to the long-term evolution of self-gravitating systems. It also illustrates applications to various classes of astrophysical systems to recover some of the features they develop on secular timescales. Two main types of secular evolution are considered depending on the sources of perturbations and fluctuations in the system. This dichotomy, around which this thesis is organised, allows for the detailed description of the secular dynamics of large classes of astrophysical systems. This thesis is composed of five main chapters. First, chapter 2 presents the main theoretical tools required for the description of such secular dynamics, and derives the associated diffusion equations. Chapter 3 focuses on the secular dynamics of razor-thin stellar discs, and emphasises how their secular dynamics may be significantly simplified by relying on a tailored WKB approximation. Chapter 4 considers the same razor-thin discs and emphasises how a proper accounting of the disc's self-gravitating amplification allows for a precise description of their diffusion features. Chapter 5 focuses on the dynamics of thickened stellar discs, simplifies their dynamics via a new thickened WKB approximation, and investigates various possible sources of secular thickening. Chapter 6 focuses on quasi-Keplerian systems (such as galactic centres) and details how their intrinsic dynamical degeneracies can be dealt with. Finally, in chapter 7, we present the conclusions of the thesis and outline possible follow-up works. Let us briefly sum up below the content of each chapter.

Figure 1.3.4: Illustration of phase mixing in angle-action space. Here, within the angle-action coordinates, as a result of the conservation of actions, trajectories are simple straight lines. Provided that the intrinsic frequencies Ω = ∂H/∂J change with the actions, particles of different actions dephase. This phase mixing in the angles θ is one of the main justifications for the consideration of orbit-averaged diffusion, i.e. the assumption that the system's mean DF depends only on the actions. This is at the heart of both diffusion equations presented in chapter 2.

Figure 1.3.5 (Binney & Tremaine, Galactic Dynamics: Second Edition): Illustration of the mechanism of violent relaxation, during which an initially out-of-equilibrium self-gravitating system undergoes a phase of strong potential fluctuations allowing the system to rapidly reach a collisionless quasi-stationary state.
Figure 1.3.6: Illustration of the homogeneous dynamical friction, as first introduced in Chandrasekhar (1943a). We consider a test mass (illustrated with the red particle) moving to the right along a straight line, while embedded in a homogeneous "sea" of background particles (illustrated with black dots). Along its motion, the test particle is followed by a gravitational wake, also coined polarisation cloud, constituted of background stars. This polarisation cloud has two main effects. First, being located behind the test mass, it exerts a drag force on the test mass, hence the name dynamical friction. It also illustrates the importance of collective effects in self-gravitating systems. Because of these polarised background stars, the test mass is dressed. Its effective mass is increased, which hastens the secular diffusion. (See chapter 4 for a detailed discussion on the importance of collective effects in cold dynamical systems.) Let us finally note that in real self-gravitating systems, the situation is more intricate if one accounts for the complexity of the trajectories, i.e. the fact that the system is inhomogeneous (Heyvaerts et al., 2016). In particular, there are situations where the polarisation can accelerate rather than drag.
In chapter 2, we present the main formalisms capturing the secular dynamics of self-gravitating systems. We successively consider two types of diffusion: collisionless and collisional. The first type of collisionless diffusion corresponds to cases where the source of fluctuations is induced by an external perturber. We investigate the interplay between the spectral properties of the external perturbations and the internal orbital structure of the system. The second type of collisional diffusion is associated with cases where the source of evolution is due to the system's own intrinsic graininess. Self-gravitating systems being inhomogeneous, we especially emphasise how this approach allows for the description of distant resonant encounters. They are shown to be the drivers of the evolution in systems made of a finite number of particles. Throughout these derivations, we also underline how these diffusion equations account for the system's self-gravity, i.e. its ability to amplify perturbations. This proves essential for cold dynamical systems such as stellar discs.
In chapter 3, we consider a first class of astrophysical systems, razor-thin stellar discs. The main aim of this chapter is to illustrate the use of a tailored WKB approximation (i.e. limited to radially tightly wound perturbations) to explicitly and straightforwardly compute the properties of the diffusion occurring in such systems. When applied to isolated discrete stellar discs, we illustrate how the two diffusion formalisms (collisionless and collisional) allow for the recovery of the shot-noise driven formation of narrow ridges observed in numerical simulations. We also discuss one discrepancy obtained in the applications of these formalisms, namely the mismatch of the diffusion timescales. This is interpreted as being due to the neglect of some contributions to the disc's self-gravity (namely the loosely wound contributions), which are accounted for in chapter 4.
The heart of chapter 4 is to illustrate, in the context of razor-thin discs, how one can fully account for self-gravity through a proper numerical calculation. Relying on the collisional Balescu-Lenard equation, we show how this formalism recovers in detail the diffusion features observed in secular simulations of stable self-gravitating razor-thin discs. We emphasise that collective effects cause cool discs to have a 2-body relaxation time much shorter than naively expected. We also argue that this anomalous relaxation introduces small scale structures in the disc, which destabilise it at the collisionless level. Resorting to our own simulations, we also investigate in detail some generic properties of such systems, such as the scaling of the system's diffusion with the number of particles, as well as the presence of unstable secular dynamical phase transitions.
In chapter 5, we extend the results of chapter 3 to thickened stellar discs. We illustrate how one may devise a thickened WKB approximation offering straightforward estimations of the collisionless and collisional diffusion fluxes. We show how these two formalisms allow for the qualitative recovery of the diffusion features observed in numerical simulations of stable thickened stellar discs, with the caveat of a diffusion timescale discrepancy due to the neglect of the contributions of loosely wound perturbations to the disc's self-gravity. We also investigate some other possible mechanisms of secular thickening such as series of central decaying bars, or the joint evolution of giant molecular clouds. This illustrates how different perturbation mechanisms can lead to different signatures in the disc's diffusion.
Chapter 6 develops this diffusion formalism for quasi-Keplerian systems, such as galactic centres. Because these systems are dominated by one central object, their constituents approximately follow closed Keplerian orbits. These systems are dynamically degenerate. We detail how such degeneracies can be dealt with to derive the associated kinetic equation. We show how this new diffusion equation captures the mechanism of resonant relaxation between Keplerian wires. We also emphasise how this approach sheds new light on some important diffusion properties of these systems. We focus in particular on understanding the Schwarzschild barrier, which strongly damps the rate with which stars can diffuse towards the central black hole.
Chapter 2
Secular diffusion
The work presented in this chapter is based on Fouvry et al. (2015b,d, 2016a,b).
Introduction
The previous chapter described the typical fate of self-gravitating systems, which can be briefly summed up as follows. As a result of both phase mixing (see figure 1.3.4) and violent relaxation (see figure 1.3.5), self-gravitating systems very efficiently reach quasi-stationary states for the collisionless mean field dynamics. The systems are virialised and the mean potentials do not strongly fluctuate anymore. Stars follow the orbits set by the mean field potential and are typically uniformly distributed in phase along each of them. Yet, as gravity is a long-range interaction, self-gravitating systems have the ability to amplify and dress perturbations (see, e.g., figure 1.3.6). These collective effects have two main consequences. They may first lead to the spontaneous growth of dynamical instabilities if ever the system is dynamically unstable. Moreover, even for genuinely stable systems, these effects can also lead to polarisation, i.e. a dressing of perturbations and therefore a boost in amplitude of the fluctuations in the system. This self-gravitating amplification is especially important for cold dynamical systems, i.e. systems within which most of the gravitational support comes from centrifugal forces and for which the velocity dispersion is low. This makes the system strongly responsive. This is for example important for stellar discs, where new stars, born on the cold orbits of the gas, are constantly being supplied to the system.
Once the system has reached a quasi-stationary state through these various mixing processes, the mean collisionless dynamics maintains stationarity and such a quiescent system can now only slowly evolve on long timescales. 1 This is the timescale for secular evolution, which will be our main interest here. At this stage, only additional fluctuations can drive the system's evolution. Such considerations fall within the general framework of the fluctuation-dissipation theorem, for which fluctuations occurring in the system lead to its dissipation and diffusion. Let us now introduce an important dichotomy on which the two upcoming sections rely. There are two main channels to induce fluctuations in a system. Fluctuations of the first type are induced by external stochastic perturbations, whose non-stationary contributions will be felt by the system and will lead therein to slow orbital distortions. As will be discussed in detail in the next section, the efficiency of such secular dynamics is dictated in particular by the match between the temporal frequencies of these perturbations and the system's natural intrinsic frequencies. We call this framework the collisionless framework. Another source of fluctuations is also present in any system made of a finite number N of particles: these are finite-N effects, also called Poisson shot noise. This graininess can not only be triggered by the finite number of constituents in the system, but can also originate from the variety of its components, e.g., the existence of a mass spectrum of components. As a direct consequence of the finite number of particles, the system's self-induced potential is not perfectly smooth, and therefore fluctuates around its mean quasi-stationary value. These unavoidable and non-vanishing fluctuations may then act as the source of a secular irreversible evolution. We call this framework the collisional framework, in the sense that it relies on encounters between the finite number of particles. Let us finally note that whatever the source of the perturbations, these fluctuations are dressed by collective effects. A proper accounting of the importance of the gravitational polarisation is at the heart of the upcoming derivations. This dichotomy is essential for all the upcoming sections. It allows us to distinguish secular evolution induced by the system's environment from secular evolution induced by the system's internal properties. It is therefore a useful tool to disentangle the respective contributions from nurture and nature in driving the evolution of a self-gravitating system. The aim of the present chapter is to detail the relevant formalisms allowing for the description of long-term evolutions induced by (internal or external) potential fluctuations. The following chapters will illustrate applications of this formalism to various astrophysical systems. Let us first focus in section 2.2 on the collisionless framework, where the dynamics is driven by external perturbations. Then, in section 2.3, we will consider the collisional framework of diffusion, sourced by the discreteness of these self-gravitating systems.
Collisionless dynamics
Let us first describe the collisionless diffusion that external potential fluctuations may induce. Such externally driven secular evolution can be addressed via the so-called dressed secular collisionless diffusion equation, where the source of evolution is taken to be potential fluctuations from an external bath. It has already been a theme of active research, as we now briefly review. Binney & Lacey (1988) computed the first-and second-order diffusion coefficients in action space describing the orbital diffusion occurring in a system because of fluctuations in the gravitational potential. This first approach however did not account for collective effects, i.e. the ability of the system to dress and amplify perturbations. Weinberg (1993) emphasised the importance of self-gravity for the non-local and collective relaxation of stellar systems. Weinberg (2001a,b) considered similar secular evolutions while accounting for the self-gravitating amplification of perturbations, and studied the impacts of the properties of the noise processes. Ma & Bertschinger (2004) relied on a quasilinear approach to investigate the diffusion of dark matter induced by cosmological fluctuations. Pichon & Aubert (2006) sketched a time-decoupling approach to solve the collisionless Boltzmann equation in the presence of external perturbations and applied it to a statistical study of the effect of dynamical flows through dark matter haloes on secular timescales. The approach developed therein is close to the one presented in Fouvry et al. (2015d). Chavanis (2012a) considered the evolution of homogeneous collisionless systems when forced by an external perturbation, while Nardini et al. (2012) investigated similarly the effects of stochastic forces on the long-term evolution of long-range interacting systems.
In the upcoming section, let us follow Fouvry et al. (2015d) and present a derivation of the appropriate secular resonant collisionless dressed diffusion equation. This derivation is based on a quasilinear timescale decoupling of the collisionless Boltzmann equation. This yields two evolution equations, one for the fast dynamical evolution and amplification of perturbations within the system, and one for the secular evolution of the system's mean DF.
Evolution equations
Let us consider a collisionless self-gravitating quasi-stationary system undergoing external stochastic perturbations. The mean system being quasi-stationary, we introduce its quasi-stationary Hamiltonian H 0 , associated with the mean potential ψ 0 . We assume that throughout its evolution, the system remains integrable, so that one can always define an angle-action mapping (x, v) → (θ, J ) appropriate for the Hamiltonian H 0 . Thanks to Jeans theorem (Jeans, 1915), the mean DF of the system, F , depends only on the actions, so that F = F (J , t). We suppose that an external source is perturbing the system, and we expand the system's total DF and Hamiltonian as F tot (J , θ, t) = F (J , t)+δF (J , θ, t) , H tot (J , θ, t) = H 0 (J , t)+δψ e (J , θ, t)+δψ s (J , θ, t) .
(2.1)
In the decompositions from equation (2.1), one should pay attention to the presence of two types of potential perturbations. Here, δψ^e corresponds to an external stochastic perturbation, while δψ^s corresponds to the self-response of the system induced by its self-gravity (Weinberg, 2001a). This additional perturbation is crucial to capture the system's gravitational susceptibility, i.e. its ability to amplify perturbations. We place ourselves in the limit of small perturbations, so that δF ≪ F, and δψ^e, δψ^s ≪ ψ_0. Assuming that the system evolves in a collisionless fashion, its dynamics is fully described by the collisionless Boltzmann equation (1.9) reading
∂F tot ∂t + F tot , H tot = 0 , (2.2)
where [ . , . ] stands for the Poisson bracket as defined in equation (1.2). Let us inject the decomposition from equation (2.1) into equation (2.2), to get ∂F/∂t + ∂δF/∂t + [F, H_0] + [F, δψ^e+δψ^s] + [δF, H_0] + [δF, δψ^e+δψ^s] = 0 .
(2.3)
Because we assumed the mean DF to be quasi-stationary, i.e. F = F(J, t), one has [F, H_0] = 0, since H_0 = H_0(J). Let us now take an average of equation (2.3) w.r.t. the angles θ. In equation (2.3), all the terms linear in the perturbations vanish, and we get a secular evolution equation for the mean DF F as
∂F/∂t = ∂/∂J • [ ∫ dθ/(2π)^d δF ∂[δψ^e+δψ^s]/∂θ ] , (2.4)
where d is the dimension of the physical space, e.g., d = 2 for a razor-thin disc. At this stage, let us note that ∂F/∂t can be considered as a second order term as it is the product of two fluctuations. Keeping only first order terms in equation (2.3) (quasilinear approximation), one finally gets a second evolution equation of the form
∂δF ∂t + Ω• ∂δF ∂θ - ∂F ∂J • ∂[δψ e +δψ s ] ∂θ = 0 , (2.5)
where we used the assumptions from equation (2.1) to rewrite the Poisson brackets. We also introduced the mean orbital frequencies Ω = ∂H 0 /∂J . The two evolution equations (2.4) and (2.5) are the two coupled evolution equations from which one can obtain the secular collisionless diffusion equation. Equation (2.5) describes the evolution of the perturbation δF on dynamical timescales, while equation (2.4) describes the long-term evolution of the quasi-stationary DF F . Let us now solve equation (2.5) to describe the dynamical amplification of perturbations. Its solution, when injected in equation (2.4), will then allow for the description of the secular evolution of the system's mean quasi-stationary DF.
As the angles θ are 2π-periodic, let us define the discrete Fourier transform w.r.t. these variables as
X(θ, J) = Σ_{m∈Z^d} X_m(J) e^{im•θ} ; X_m(J) = ∫ dθ/(2π)^d X(θ, J) e^{-im•θ} , (2.6)
so that equation (2.5) immediately becomes
∂δF m ∂t + im•Ω δF m -im• ∂F ∂J δψ e m +δψ s m = 0 .
(2.7)
We now introduce the assumption of timescale decoupling, also coined Bogoliubov's ansatz. Indeed, let us assume that the fluctuations (i.e. δF , δψ e , and δψ s ) evolve rapidly on dynamical timescales, while the mean orbit-averaged quantities (such as F ) only evolve on secular timescales, i.e. over many dynamical times. As a consequence, in equation (2.7), we may push the secular time to infinity, while assuming in the meantime that ∂F/∂J = cst. Forgetting transient terms and bringing the initial time to -∞ to consider only the forced regime of evolution, equation (2.7) can then be solved explicitly as
δF_m(J, t) = ∫_{-∞}^{t} dτ e^{-im•Ω(t-τ)} i m•(∂F/∂J) [δψ^e_m + δψ^s_m](J, τ) .
(2.8)
We define the temporal Fourier transform with the convention
f(ω) = ∫_{-∞}^{+∞} dt f(t) e^{iωt} ; f(t) = 1/(2π) ∫_{-∞}^{+∞} dω f(ω) e^{-iωt} .
(2.9)
Taking the temporal Fourier transform of equation (2.7), we immediately get δF_m(J, ω) = - [m•∂F/∂J / (ω - m•Ω)] [δψ^e_m(J, ω) + δψ^s_m(J, ω)] , (2.10) so that we expressed the DF's perturbations in terms of the potential fluctuations.
Matrix method
The next step of the calculation is to account for the system's self-gravity, i.e. the fact that the perturbing DF δF should be consistent with the self-induced potential perturbation δψ^s and its associated density δρ^s. One has δρ^s(x) = ∫ dv δF(x, v) .
(2.11)
In equation (2.11), the potential and density perturbations are connected through Poisson's equation ∆δψ^s = 4πGδρ^s. The method to deal with this self-consistency constraint is to follow Kalnajs' matrix method (Kalnajs, 1976). Let us introduce a representative biorthogonal basis of potentials and densities ψ^(p) and ρ^(p) satisfying ∆ψ^(p) = 4πGρ^(p) ; ∫ dx ψ^(p)*(x) ρ^(q)(x) = -δ^q_p .
(2.12)
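As a toy illustration of such a biorthogonal pair (our own 1D periodic example, not the basis actually used for stellar discs), one may take plane waves on a box of length L together with the 1D Poisson equation ψ'' = 4πGρ; the normalisation below is chosen so that the biorthogonality condition of equation (2.12) holds.

```python
import numpy as np

G, L = 1.0, 2.0 * np.pi
x = np.linspace(0.0, L, 4096, endpoint=False)
dx = x[1] - x[0]

def psi(n):   # potential basis element, wavenumber k_n = 2 pi n / L
    k = 2.0 * np.pi * n / L
    return np.sqrt(4.0 * np.pi * G / (L * k**2)) * np.exp(1j * k * x)

def rho(n):   # associated density from the 1D Poisson equation psi'' = 4 pi G rho
    k = 2.0 * np.pi * n / L
    return -k**2 / (4.0 * np.pi * G) * psi(n)

# Biorthogonality check: the integral of psi^(p)* rho^(q) dx should equal -delta_pq
for p in (1, 2):
    for q in (1, 2):
        val = np.sum(np.conj(psi(p)) * rho(q)) * dx
        print(p, q, np.round(val.real, 6))
```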
We will then use these basis elements to represent any potential and density disturbances in the system. The potential perturbations δψ^s and δψ^e may therefore be written δψ^s(x, t) = Σ_p a_p(t) ψ^(p)(x) ; δψ^e(x, t) = Σ_p b_p(t) ψ^(p)(x) , (2.13) and we introduce as c_p = a_p+b_p the total potential perturbation. The linearity of Poisson's equation immediately ensures that one also has the decomposition δρ^s(x, t) = Σ_p a_p(t) ρ^(p)(x). Multiplying equation (2.11) by ψ^(p)*(x) and integrating over dx, we get a_p(t) = - Σ_m ∫ dx dv δF_m(J, t) e^{im•θ} ψ^(p)*(x) .
(2.14)
The transformation to angle-action coordinates (x, v) → (θ, J) is canonical so that it conserves infinitesimal volumes, i.e. one has dxdv = dθdJ. Equation (2.14) can then be rewritten as
a_p(t) = -(2π)^d Σ_m ∫ dJ δF_m(J, t) ψ^(p)*_m(J) ,
(2.15) where ψ^(p)_m(J) stands for the Fourier transformed basis elements in angles following equation (2.6). Thanks to equation (2.10) and taking a temporal Fourier transform, we finally obtain
a_p(ω) = (2π)^d Σ_q c_q(ω) Σ_m ∫ dJ [m•∂F/∂J / (ω - m•Ω)] ψ^(p)*_m(J) ψ^(q)_m(J) .
(2.16)
Let us finally introduce the system's response matrix M as
M_{pq}(ω) = (2π)^d Σ_m ∫ dJ [m•∂F/∂J / (ω - m•Ω)] ψ^(p)*_m(J) ψ^(q)_m(J) ,
(2.17) so that equation (2.16) becomes a(ω) = M(ω)• c(ω) .
(2.18)
One should note that the response matrix depends only on the mean state of the system, since ∂F/∂J only evolves on secular timescales, the perturbing and self-gravitating potentials are absent, and the basis elements ψ^(p) from equation (2.12) are chosen once and for all. Assuming that the mean system is linearly stable, so that the eigenvalues of M(ω) are smaller than 1 for all values of ω, one can invert equation (2.18) as
c(ω) = [I - M(ω)]^{-1} • b(ω) , (2.19)
where I stands for the identity matrix. Equation (2.19) is a crucial relation, which allows us to express the total perturbations as a function of the external perturbation only, whose statistical properties may be characterised. Equation (2.19) describes the short timescale (dynamical) response of the system and the associated self-gravitating amplification.
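Equation (2.19) is straightforward to illustrate numerically; the sketch below (with a toy response matrix, not an actual galactic one) shows how the dressed perturbation c = [I-M]^{-1} • b gets strongly amplified as the largest eigenvalue of M approaches unity, i.e. as the system comes close to marginal stability.

```python
import numpy as np

def dressed_response(M, b):
    """Solve c = (I - M)^{-1} b for the total (dressed) perturbation."""
    I = np.eye(M.shape[0])
    return np.linalg.solve(I - M, b)

b = np.array([1.0, 0.0])                      # bare external perturbation
for lam in (0.2, 0.8, 0.95):                  # dominant entry of the toy response matrix
    M = np.array([[lam, 0.05], [0.05, 0.1]])
    c = dressed_response(M, b)
    print(f"lambda ~ {lam:.2f}   amplification |c|/|b| = {np.linalg.norm(c):.2f}")
# As the eigenvalues of M(omega) approach 1 (marginal stability), the dressing diverges.
```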
Diffusion coefficients and statistical average
Let us now describe how these solutions may be used in equation (2.4) to describe the secular evolution of the system. The r.h.s. of equation (2.4) requires us to evaluate an expression of the form
1/(2π)^d ∫ dθ δF(J, θ, t) ∂[δψ^e+δψ^s]/∂θ = - Σ_m δF_m i m [δψ^{e*}_m + δψ^{s*}_m] ,
(2.20) where we used the fact that δψ_{-m} = δψ^*_m. Thanks to the resolution from equation (2.8), we may now rewrite equation (2.4) as
∂F ∂t = ∂ ∂J • m m D m (J , t) m• ∂F ∂J , (2.21)
where the diffusion coefficients D m (J , t) are given by
D_m(J, t) = Σ_{p,q} ψ^(p)_m(J) ψ^(q)*_m(J) c^*_q(t) ∫_{-∞}^{t} dτ e^{-im•Ω(t-τ)} c_p(τ) .
(2.22)
The amplification relation from equation (2.19) allows us to rewrite equation (2.22) as a function of the external perturbation b only, to get
D_m(J, t) = 1/(2π)^2 Σ_{p,q} Σ_{p_1,q_1} ψ^(p)_m(J) ψ^(q)*_m(J) ∫ dω e^{iωt} [I - M(ω)]^{-1*}_{qq_1} b^*_{q_1}(ω) × ∫_{-∞}^{t} dτ e^{-im•Ω(t-τ)} ∫ dω' e^{-iω'τ} [I - M(ω')]^{-1}_{pp_1} b_{p_1}(ω') .
(2.23)
The final step of the derivation is to consider statistical averages over various realisations of the perturbations, i.e. to consider only the mean response of the system. Let us denote as ⟨ · ⟩ the ensemble average operation on such different realisations. When applying this average, we assume that the response matrix M, as well as the DF F and its gradients ∂F/∂J, do not change significantly from one realisation to another. Thanks to these assumptions, equation (2.21) becomes
∂F ∂t = ∂ ∂J • m m D m (J , t) m• ∂F ∂J .
(2.24)
Let us now suppose that the external perturbations are stationary in time, so that one can introduce the corresponding temporal autocorrelation function C as
C_{kl}(t_1 - t_2) = ⟨ b_k(t_1) b^*_l(t_2) ⟩ , (2.25)
where it is assumed that the exterior perturbation is of zero mean. When Fourier transformed, equation (2.25) becomes
⟨ b_k(ω) b^*_l(ω') ⟩ = 2π δ_D(ω - ω') C_{kl}(ω) .
(2.26)
One can now immediately rewrite the averaged diffusion coefficients from equation (2.24) as
D_m(J, t) = 1/(2π) Σ_{p,q} ψ^(p)_m(J) ψ^(q)*_m(J) ∫ dω ∫_{-∞}^{0} dτ e^{-i(ω - m•Ω)τ} [ [I-M]^{-1} • C • [I-M]^{-1} ]_{pq}(ω) , (2.27)
where we relied on the hermiticity of the response matrix, M^* = M^t. One should note that after the ensemble average, the diffusion coefficients become (explicitly) independent of t (while they still depend on the secular timescale via the slow variations of F). To shorten temporarily the notations, let us introduce the notation L = [I-M]^{-1} • C • [I-M]^{-1}. In equation (2.27), one must then evaluate a double integral of the form
1/(2π) ∫_{-∞}^{+∞} dω L(ω) ∫_{-∞}^{0} dτ e^{-i(ω-m•Ω)τ} = i/(2π) ∫_{-∞}^{+∞} dω L(ω)/(ω - m•Ω) = i/(2π) P ∫_{-∞}^{+∞} dω L(ω)/(ω - m•Ω) + (1/2) L(m•Ω) , (2.28)
where to perform the integration over τ, we kept only the boundary term for τ = 0, by adding a small imaginary part to the frequency ω, so that ω = ω + i0^+, which ensures the convergence for τ → -∞. To evaluate the last integral over ω, we also relied on Plemelj formula
1/(x ± i0^+) = P (1/x) ∓ iπ δ_D(x) ,
(2.29) where P stands for Cauchy principal value. The last step of the derivation is to note that the contributions associated with the principal values in equation (2.28) have no impact on the secular diffusion equation. Indeed, equation (2.22) gives us that the diffusion coefficients are such that D_{-m}(J) = D^*_m(J). Since we are summing on all vectors m ∈ Z^d, we may then rewrite equation (2.24) as
∂F/∂t = ∂/∂J • [ Σ_m m Re[D_m(J, t)] m•∂F/∂J ] .
(2.30) Equations (2.17) and (2.25) impose M^* = M^t, and C^* = C^t, so that the matrix L defined above equation (2.28) is also hermitian. We finally recover the dressed secular collisionless diffusion equation as
∂F ∂t = ∂ ∂J • m m D m (J ) m• ∂F ∂J , (2.31)
where the anisotropic diffusion coefficients are given by
D_m(J) = (1/2) Σ_{p,q} ψ^(p)_m(J) ψ^(q)*_m(J) [ [I-M]^{-1} • C • [I-M]^{-1} ]_{pq}(ω = m•Ω) . (2.32)
Let us finally introduce the total diffusion flux F tot as
F_tot = Σ_m m D_m(J) m•∂F/∂J , (2.33)
so that equation (2.31) becomes ∂F ∂t = div(F tot ) .
(2.34)
With this convention, -F tot corresponds to the direction along which individual particles diffuse. Equation (2.31) is the main result of this section.
Let us now briefly discuss the physical content of equation (2.31). First, because it is written as the divergence of a flux, the total number of stars is conserved during the diffusion. One can also note that the diffusion coefficients D_m(J) from equation (2.32) capture the joint and coupled contributions from the external perturbations (via the autocorrelation matrix C) and from the self-gravitating susceptibility of the system (via the response matrix M). The total diffusion coefficients appear therefore as a collaboration between the strength of the external perturbations and the local strength of the system's amplification. As equation (2.31) describes a resonant diffusion, the external perturbing power spectrum and the system's susceptibility have to be evaluated at the local intrinsic frequency ω = m•Ω. In this sense, this diffusion equation is appropriate to capture the nature of a collisionless system, via its natural frequencies and susceptibility, as well as its nurture, via the structure of the power spectrum of the external perturbations.
In addition, one can also note that the diffusion equation (2.31) takes the form of a strongly anisotropic diffusion equation in action space. It is anisotropic not only because the diffusion coefficients D_m(J) depend on the position in action space, but also because the diffusion associated with one resonance vector m corresponds to a diffusion in the preferential direction of the vector m. For a given resonance m, the diffusion is maximum along m and vanishes in the orthogonal directions. A qualitative illustration of the properties of equation (2.31) is given in figure 2.2.1. Finally, note that equation (2.31) is indeed an illustration of the fluctuation-dissipation theorem. The autocorrelation of the fluctuating potential drives the diffusion of the system's orbital structure.
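As a schematic illustration of equations (2.31)-(2.33) (a toy model of our own, with made-up diffusion coefficients and DF), the sketch below assembles the anisotropic flux on a 2D action grid; each resonance vector m contributes a flux component aligned with m, as in figure 2.2.1.

```python
import numpy as np

# 2D action grid (J1, J2) and a toy quasi-stationary DF F(J)
J1, J2 = np.meshgrid(np.linspace(0.1, 2.0, 100), np.linspace(0.1, 2.0, 100), indexing="ij")
F = np.exp(-(J1 + 2.0 * J2))
dF_dJ1, dF_dJ2 = np.gradient(F, J1[:, 0], J2[0, :])

resonances = [np.array([1, -1]), np.array([2, 1])]          # a few resonance vectors m
D = {tuple(m): np.exp(-((J1 - 1.0)**2 + (J2 - 0.5)**2)) for m in resonances}  # toy D_m(J)

# Total flux F_tot(J) = sum_m m D_m(J) (m . dF/dJ), as in equation (2.33)
flux = np.zeros((2,) + J1.shape)
for m in resonances:
    m_dot_gradF = m[0] * dF_dJ1 + m[1] * dF_dJ2
    flux += m[:, None, None] * D[tuple(m)] * m_dot_gradF

print("flux components at the grid centre:", flux[:, 50, 50])
```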
Self-induced collisional dynamics
Figure 2.2.1: Illustration of the strong anisotropy of the diffusion in action space captured by equation (2.31). The background grey domain illustrates the region where the system's DF, F, is present. For a given resonance vector m, one can compute the associated diffusion coefficients D_m(J), whose level contours are represented with dotted colored lines. In the region where D_m(J) is maximum, following equation (2.31), one expects the associated flux to be aligned with the direction of m. As a consequence, depending on which resonance vector locally dominates the diffusion, the DF's diffusion can occur along significantly different directions.

In the previous section, we considered the collisionless case where a secular diffusion is induced by external perturbations. However, a given self-gravitating system, even when isolated, may also undergo a secular evolution as a result of its own intrinsic graininess. This is a collisional evolution sourced by finite-N effects.
The dynamics and thermodynamics of systems with long-range interactions has recently been a subject of active research (see, e.g., Campa et al., Physics of Long-Range Interacting Systems), which led to a much better understanding of the equilibrium properties of these systems, their specificities such as negative specific heats (Antonov, 1962; Lynden-Bell & Wood, 1968; Lynden-Bell, 1999), as well as various kinds of phase transitions and ensemble inequivalences. However, the precise description of their dynamical evolution remains to be improved to offer explicit predictions. We refer the reader to Chavanis (2010, 2013a,b) for a historical account of the development of kinetic theories of plasmas, stellar systems, and other systems with long-range interactions, but let us briefly recall here the main milestones.
The first kinetic theory focusing on the statistical description of the evolution of a large number of particles was considered by Boltzmann in the case of dilute neutral gases (Boltzmann, 1872). For such systems, particles do not interact except during strong local collisions. The gas is assumed to be spatially homogeneous and Boltzmann equation describes the evolution of the system's velocity distribution f (v, t) as a result of strong collisions. This kinetic equation satisfies a H-theorem, associated with an increase of Boltzmann's entropy.
Boltzmann's approach was extended to charged gases (plasmas) by Landau (Landau, 1936). For plasmas, particles interact via long-range Coulombian forces, but because of electroneutrality and Debye shielding (Debye & Hückel, 1923a,b), these interactions are screened on a lengthscale of the order of the Debye length, and collisions become essentially local. Neutral plasmas are spatially homogeneous, so that the kinetic equation describes again the evolution of the velocity DF f (v, t), driven by close electrostatic encounters. Because the encounters are weak, one can expand the Boltzmann equation in the limit of small deflections and perform a linear trajectory approximation. In the weak coupling approximation, this leads to the so-called Landau equation. The Landau equation exhibits two formal divergences: one at small scales due to the neglect of strong collisions and one logarithmic divergence at large scales due to the neglect of collective effects, i.e. the dressing of particles by their polarisation cloud (a particle of a given charge has the tendency to be surrounded by a cloud of particles of opposite charges). Landau regularised these divergences by introducing a lower cut-off at the impact parameter producing a deflection of 90 • (this is the Landau length) as well as an upper cut-off at the Debye length.
Collective effects were later rigorously taken into account in Balescu (1960) and Lenard (1960), leading to the Balescu-Lenard equation for plasmas. The Balescu-Lenard equation is similar to the Landau equation, except that it includes the square of the dielectric function in the denominator of the potential of interaction in Fourier space. This dielectric function first appeared as a probe of the dynamical stability of plasmas based on the linearised Vlasov equation (Vlasov, 1938, 1945). In the Balescu-Lenard equation, the dielectric function accounts for Debye shielding and removes the large scale logarithmic divergence present in the Landau equation. The Landau equation is recovered from the Balescu-Lenard equation by replacing the dressed potential of interaction by its bare expression, i.e. by replacing the dielectric function by unity. In addition, the Balescu-Lenard equation, as given originally by Balescu and Lenard, exhibits a local resonance condition, encapsulated in a Dirac δ_D-function. For such systems, resonant contributions are the drivers of the secular evolution. Integrating over this resonance condition leads to the original form of the kinetic equation given by Landau.
In parallel to the developments of kinetic equations for plasmas, the secular evolution of selfgravitating systems was also investigated. Self-gravitating systems are spatially inhomogeneous, but the first kinetic theories (Jeans, 1929; Chandrasekhar, 1942, 1943a,b) were all based on the assumption that collisions (i.e. close encounters) between stars can be treated with a local approximation, as if the system were infinite and homogeneous. Relying on the idea that a given star undergoes a large number of weak deflections, Chandrasekhar (1949) developed an analogy with Brownian motion. He started from a Fokker-Planck writing of the diffusion equation and computed the diffusion and friction coefficients relying on a binary collision theory. This led to a kinetic equation, often called Fokker-Planck equation in astrophysics, which is the gravitational equivalent of the Landau equation from plasmas. This equation exhibits similarly two divergences: one at small scales due to the mishandling of strong collisions, and one at large scales due to the local approximation, i.e. the assumption that the system is infinite and homogeneous. In the treatment of Chandrasekhar, strong collisions are taken into account without having to introduce a cut-off, so that the small scale divergence is regularised at the gravitational Landau length. The large scale divergence is usually regularised by introducing a cut-off at the Jeans length, which is the gravitational equivalent of the Debye length. This gravitational Landau equation is often considered to be relevant to describe the collisional dynamics of spherical systems such as globular clusters. Let us however note that the associated treatment based on the local approximation remains unsatisfactory, in particular because of the unavoidable appearance of a logarithmic divergence at large scales. In addition, within this framework, one cannot account for collective effects, i.e. the dressing of stars by their polarisation cloud, i.e. the fact that the gravitational force being attractive, a given star has the tendency to be surrounded by a cloud of stars. This increases its effective gravitational mass and reduces the collisional relaxation time.
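To fix orders of magnitude (standard back-of-the-envelope estimates, with purely illustrative numbers), the sketch below evaluates the gravitational analogue of the Landau length, the associated Coulomb logarithm, and the resulting two-body relaxation time t_relax ~ 0.1 N/ln Λ × t_cross for a globular-cluster-like system.

```python
import numpy as np

G = 4.301e-3              # gravitational constant in pc (km/s)^2 / Msun
N, m, R = 1e5, 1.0, 3.0   # illustrative globular-cluster-like numbers (stars, Msun, pc)

sigma = np.sqrt(G * N * m / R)        # crude virial velocity dispersion, km/s
b90 = G * m / sigma**2                # gravitational "Landau length" (90-degree deflection)
lnLambda = np.log(R / b90)            # Coulomb logarithm with b_max ~ system size
t_cross = R / sigma * 0.978           # crossing time in Myr (1 pc / (km/s) ~ 0.978 Myr)
t_relax = 0.1 * N / lnLambda * t_cross

print(f"sigma ~ {sigma:.1f} km/s, ln Lambda ~ {lnLambda:.1f}, t_relax ~ {t_relax:.0f} Myr")
```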
In order to fully account for these properties, the kinetic theory of self-gravitating systems was recently generalised to fully inhomogeneous systems, either when collective effects are neglected (Chavanis, 2010, 2013b) leading to the inhomogeneous Landau equation, or when they are accounted for leading to the inhomogeneous Balescu-Lenard equation (Heyvaerts, 2010; Chavanis, 2012b). These kinetic equations, presented and discussed in detail in the upcoming section, are valid at order 1/N, where N is the number of stars in the system. Having accounted for the finite extension of the system, these equations no longer present divergence at large scales. In order to deal with the system's inhomogeneity, they are written in angle-action coordinates (see section 1.3), which allow for the description of stars' intricate dynamics in spatially inhomogeneous and multi-periodic systems. These equations involve similarly a resonance condition encapsulated in a Dirac δ_D-function (see figure 2.3.2), which generalises the one present in the homogeneous Balescu-Lenard equation. Finally, in order to capture collective effects, the inhomogeneous Balescu-Lenard equation also involves the system's response matrix (see equation (2.17)) expressed in angle-action variables. This generalises the dielectric function appearing in the homogeneous Balescu-Lenard equation for plasmas. This dressing accounts for anti-shielding, i.e. the fact that the gravitational mass of a star is enhanced by its polarisation, leading to a reduction of the relaxation time. The upcoming chapters will emphasise how these powerful and predictive kinetic equations may be used in the astrophysical context to probe complex secular regimes.
There are two standard methods to derive kinetic equations for a N-body system with long-range pairwise interactions. The first approach is based on Liouville's equation for the N-body distribution function of the system. One has to write the first two equations of the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy. The hierarchy is then closed by considering only contributions of order 1/N. One may then solve the second equation of the BBGKY hierarchy to express the 2-body correlation function in terms of the system's 1-body DF. One finally substitutes this expression in the first equation of the BBGKY hierarchy to obtain the closed self-consistent kinetic equation satisfied by the 1-body DF. The same results can also be obtained thanks to projection operator techniques. The second method relies on the Klimontovich equation (Klimontovich, The statistical theory of non-equilibrium processes in a plasma), which describes the dynamics of the system's DF written as a sum of δ_D functions. This exact DF is then decomposed in two parts, a smooth component and fluctuations. One can then write two evolution equations, one for the smooth mean component, and one for the fluctuations. This coupled system is then closed by neglecting non-linear terms in the evolution of the fluctuations (quasilinear approximation). The final step in this approach is to solve the equation for the fluctuations to express their properties as a function of the underlying smooth component. Injecting this result in the first evolution equation for the smooth part, one obtains a self-consistent kinetic equation. These two methods are physically equivalent, while technically different. Finally, we recently presented in Fouvry et al. (2016a,b) a third approach based on a functional rewriting of the evolution equations. This approach starts from the first two equations of the BBGKY hierarchy truncated at order 1/N. Introducing auxiliary fields, the evolution of the two coupled dynamical quantities, 1-body DF and 2-body autocorrelation, can then be rewritten as a traditional functional integral. By functionally integrating over the 2-body autocorrelation, one obtains a new constraint connecting the 1-body DF and the auxiliary fields. When inverted, this constraint finally allows for the derivation of the closed non-linear kinetic equation satisfied by the 1-body DF.
In the upcoming sections, we will follow Chavanis (2012b) and present a derivation of the inhomogeneous Balescu-Lenard equation based on the resolution of the Klimontovich equation. We decided to present this derivation in the main text, in order to emphasise the various similarities it shares with the previous collisionless diffusion equation. In Appendix 2.A, we present the derivation of the BBGKY hierarchy. This allows us to revisit in Appendix 2.B the derivation of the inhomogeneous Balescu-Lenard equation first presented by Heyvaerts (2010) and based on the direct resolution of the BBGKY hierarchy. Finally, in Appendix 2.C, we consider the third approach to the derivation of kinetic equations based on a functional integral rewriting.
Evolution equations
Let us consider an isolated system made of N particles of individual mass µ = M tot /N , where M tot is the total active mass of the system, embedded in a physical space of dimension d. We note as (x i , v i ) the position and velocity of particle i in an inertial frame. The individual dynamics of these particles is entirely described by Hamilton's equations which read
µ dx i dt = ∂H ∂v i ; µ dv i dt = - ∂H ∂x i , (2.35)
where the Hamiltonian of the system contains all the binary interactions between particles as
H = (μ/2) Σ_{i=1}^{N} v_i^2 + μ^2 Σ_{i<j} U(|x_i - x_j|) .
(2.36)
In equation (2.36), we introduced the binary potential of interaction U (|x i -x j |), given by U (|x|) = -G/|x| in the gravitational context. While capturing the exact dynamics of the system, one major drawback of equations (2.35) is that one has to deal with a set of N coupled differential equations. In Appendix 2.A, we show how these equations may be rewritten as an ordered hierarchy of evolution equations, the BBGKY hierarchy. Such a rewriting is at the heart of the derivation of the inhomogeneous Balescu-Lenard equation proposed in Heyvaerts (2010) and revisited in Appendix 2.B. Here, we intend to follow a different route, and rewrite Hamilton's equations (2.35) as a single evolution equation in phase space.
To do so, let us introduce the discrete distribution function F d (x, v, t) as
F_d(x, v, t) = μ Σ_{i=1}^{N} δ_D(x - x_i(t)) δ_D(v - v_i(t)) .
(2.37)
Let us also introduce the associated self-consistent potential ψ d as
ψ_d(x, t) = ∫ dx' dv' U(|x - x'|) F_d(x', v', t) .
(2.38)
One can show that F_d satisfies the Klimontovich equation (Klimontovich, The statistical theory of non-equilibrium processes in a plasma), given by
∂F_d/∂t + [F_d, H_d] = 0 , (2.39)
where we introduced the Hamiltonian H d as
H d (x, v, t) = 1 2 v 2 + ψ d (x, t) .
(2.40)
At this stage, note that the Klimontovich equation (2.39) captures the exact same dynamics as Hamilton's equations (2.35), while being defined on a phase space of dimension 2d. Let us assume that the system's DF and potential may be decomposed as the sum of a smooth component and a fluctuating one, so that
F d = F + δF , ψ d = ψ 0 + δψ . (2.41)
Let us emphasise how similar the decompositions from equations (2.1) and (2.41) are. In addition to this decomposition, we assume that the smooth component F only evolves on secular timescales, while the fluctuating component δF evolves much faster on dynamical timescales. We also assume that the mean potential is integrable, so that there exists angle-action coordinates (θ, J ) appropriate for the smooth quasi-stationary potential ψ 0 . Thanks to Jeans theorem, the system's mean DF being quasi-stationary, it can be written as F (x, v, t) = F (J , t). Performing the same timescale decoupling and quasilinear approximation as in equation ( 2.3), equation (2.39) gives two evolution equations. First a secular evolution equation for F as
∂F/∂t = ∂/∂J • [ ∫ dθ/(2π)^d δF ∂δψ/∂θ ] , (2.42)
and an evolution equation for the perturbation δF as
∂δF ∂t + Ω• ∂δF ∂θ - ∂F ∂J • ∂δψ ∂θ = 0 . (2.43)
These two evolution equations govern the evolution of the smooth DF F and the fluctuations δF at order 1/N. They are the direct counterparts of equations (2.4) and (2.5). Here, the system's potential fluctuations are not due to an external forcing, but to the intrinsic finite-N Poisson shot noise. As was assumed in equation (2.7), we place ourselves within the adiabatic approximation so that the time variations of F may be neglected on the timescales for which the fluctuations δF and δψ evolve. In order to be valid, such an approximation requires N ≫ 1. Finally, as in equation (2.19), we assume that the DF F remains Vlasov stable throughout its evolution, so that its evolution is only governed by correlations and not by dynamical instabilities.
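The finite-N Poisson shot noise invoked above can be checked with a trivial sampling experiment (our own sketch): when N particles are drawn from a smooth profile and binned, the relative density fluctuations scale as 1/√N, so that their power, which sources the collisional diffusion, scales as 1/N.

```python
import numpy as np

rng = np.random.default_rng(0)
for N in (10**3, 10**4, 10**5, 10**6):
    x = rng.exponential(scale=1.0, size=N)          # particles drawn from a smooth profile
    counts, edges = np.histogram(x, bins=20, range=(0.0, 3.0))
    expected = N * (np.exp(-edges[:-1]) - np.exp(-edges[1:]))   # smooth prediction per bin
    rel_fluct = np.std((counts - expected) / expected)
    print(f"N = {N:8d}   relative fluctuation ~ {rel_fluct:.4f}   sqrt(N)*fluct ~ {np.sqrt(N)*rel_fluct:.2f}")
# The product sqrt(N) x fluctuation stays roughly constant: Poisson shot noise at order 1/sqrt(N).
```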
Fast timescale amplification
The first step of our calculation is to study the short timescale evolution equation (2.43), during which perturbations build up. As in equation (2.6), let us perform a Fourier transform w.r.t. the angles θ.
Let us also define the Laplace transform of the fluctuations with the convention
f(ω) = ∫_{0}^{+∞} dt f(t) e^{iωt} ; f(t) = 1/(2π) ∫_B dω f(ω) e^{-iωt} ,
(2.44) where the Bromwich contour B in the complex ω-plane should pass above all the poles of the integrand, i.e. Im[ω] should be large enough. The Fourier-Laplace transform of the DF's fluctuations δF is therefore given by
δ F m1 (J 1 , ω 1 ) = dθ 1 (2π) d +∞ 0 dt e -i(m1•θ1-ω1t) δF (θ 1 , J 1 , t) .
(2.45)
One can perform a similar transformation for the potential fluctuations δψ. Let us define the Fourier transform of the initial value of the DF as
δ F m1 (J 1 , 0) = dθ 1 (2π) d e -i(m1•θ1) δF (θ 1 , J 1 , 0) . (2.46)
Relying on Bogoliubov's ansatz, F = cst., we multiply equation (2.43) by dθ_1/(2π)^d dt e^{-i(m_1•θ_1 - ω_1 t)} to get
δF_{m_1}(J_1, ω_1) = [m_1•∂F/∂J_1 / (m_1•Ω_1 - ω_1)] δψ_{m_1}(J_1, ω_1) + δF_{m_1}(J_1, 0) / [i(m_1•Ω_1 - ω_1)] .
(2.47) Equation (2.47) relates the fluctuations in the potential δψ to the induced response δF in the system's DF. One now has to account for the fact that these perturbations are self-consistently generated by the system itself, i.e. δψ corresponds to potential fluctuations generated by the perturbing density δρ associated with the DF δF. To do so, we follow the matrix method introduced in section 2.2.2. Relying on basis elements (ψ^(p), ρ^(p)) as introduced in equation (2.12), we follow equation (2.13) and decompose the selfinduced potential perturbations δψ as
δψ(θ 1 , J 1 , t) = p a p (t) ψ (p) (θ 1 , J 1 ) ; δ ψ m1 (J 1 , ω 1 ) = p a p (ω 1 ) ψ (p) m1 (J 1 ) , (2.48)
where a p (ω) stands for the Laplace transform of the basis coefficients and ψ (p) m1 (J 1 ) for the Fourier-transformed basis elements as introduced in equation (2.15). In order to capture this self-consistency, we follow the same method as presented in equation (2.16). We start from δ ρ = dv δ F (x, v), multiply this relation by ψ (p) * (x), integrate it w.r.t. x, and rely on the fact that dxdv = dθdJ , as the transformation (x, v) → (θ, J ) is canonical. Equation (2.47) finally gives
a p (ω 1 ) = -(2π) d q I-M(ω 1 ) -1 pq m2 dJ 2 δ F m2 (J 2 , 0) i(m 2 •Ω 2 -ω 1 ) ψ (q) * m2 (J 2 ) . (2.49)
In equation (2.49), we recover the role played by the system's susceptibility through the system's response matrix M introduced in equation (2.17). Note as well that we assumed the system to be stable, so that one could indeed compute the matrix I-M(ω 1 ) -1 . Let us now introduce the system's dressed susceptibility coefficients 1/D m1,m2 as
1 D m1,m2 (J 1 , J 2 , ω) = p,q ψ (p) m1 (J 1 ) I-M(ω) -1 pq ψ (q) * m2 (J 2 ) , (2.50)
so that equation (2.49), when multiplied by ψ (p) m1 (J 1 ) and summed over "p", gives
δ ψ m1 (J 1 , ω 1 ) = -(2π) d m2 dJ 2 1 D m1,m2 (J 1 , J 2 , ω 1 ) δ F m2 (J 2 , 0) i(m 2 •Ω 2 -ω 1 ) . (2.51)
Equation (2.51) gives the Laplace transform of the response potential as a function of the initial conditions in the DF's fluctuations. It describes the dynamical amplification of the perturbations occurring in the system.
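To make the role of the system's susceptibility more tangible, the following Python sketch assembles a dressed susceptibility coefficient 1/D m1,m2 (J 1 , J 2 , ω) following equation (2.50), from a precomputed response matrix M(ω) and Fourier-transformed basis elements. The basis vectors and the matrix used here are random placeholders rather than those of any particular galactic model; in practice M(ω) has to be evaluated from equation (2.17) for the model at hand.

```python
import numpy as np

def dressed_susceptibility(psi_m1_J1, psi_m2_J2, M_omega):
    """Dressed susceptibility 1/D_{m1,m2}(J1, J2, omega), cf. equation (2.50).

    psi_m1_J1 : complex array (n_basis,), Fourier-transformed basis elements psi^(p)_{m1}(J1).
    psi_m2_J2 : complex array (n_basis,), psi^(q)_{m2}(J2).
    M_omega   : complex response matrix M(omega), shape (n_basis, n_basis).
    """
    n = M_omega.shape[0]
    # [I - M(omega)]^{-1}, assuming the system is linearly stable at this frequency
    inv = np.linalg.inv(np.eye(n) - M_omega)
    # sum over the two basis indices p, q
    return psi_m1_J1 @ inv @ np.conj(psi_m2_J2)

# toy example with random (placeholder) response matrix and basis vectors
rng = np.random.default_rng(0)
n_basis = 4
M = 0.1 * (rng.standard_normal((n_basis, n_basis))
           + 1j * rng.standard_normal((n_basis, n_basis)))
psi1 = rng.standard_normal(n_basis) + 1j * rng.standard_normal(n_basis)
psi2 = rng.standard_normal(n_basis) + 1j * rng.standard_normal(n_basis)
print(dressed_susceptibility(psi1, psi2, M))
```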
Estimating the collision operator
Thanks to equation (2.51), one may now proceed to the evaluation of the collision operator in the r.h.s. of equation (2.42). As was argued in equation (2.24), let us emphasise that here we are interested in the system's mean evolution averaged over various realisations. We may then take the ensemble average of the evolution equation (2.42). When taking this average, we assume that the response matrix M as well as the DF F and its gradients ∂F/∂J do not change significantly from one realisation to another. Equation (2.42) becomes
∂F ∂t = ∂ ∂J 1 • F tot (J 1 ) , (2.52)
where we introduced the total diffusion flux F tot as
F tot (J ) = dθ (2π) d δF ∂δψ ∂θ , (2.53)
where • stands for the ensemble average operation. Taking a Fourier transform w.r.t. the angles as well as an inverse Laplace transform, equation (2.53) gives
F tot (J 1 ) = - m1 B1 dω 1 2π B2 dω 2 2π im 1 e -iω1t e -iω2t δ F m1 (J 1 , ω 1 ) δ ψ -m1 (J 1 , ω 2 ) , (2.54)
where B 1 (resp. B 2 ) stands for the Bromwich contour associated with the inverse Laplace transform w.r.t. ω 1 (resp. ω 2 ). Relying on the expression (2.47) of δ F and on equation (2.51) for δ ψ, the total flux naturally splits into two components as
F tot (J 1 ) = F (I) tot (J 1 )+F (II) tot (J 1 ) , (2.55)
where the two components F (I) tot (J 1 ) and F (II) tot (J 1 ) are respectively given by
F (I) tot (J 1 ) = i m1 m 1 B1 dω 1 2π e -iω1t B2 dω 2 2π e -iω2t (2π) 2d m 1 •∂F/∂J 1 m 1 •Ω 1 -ω 1 m2,m3 dJ 2 dJ 3 1 m 2 •Ω 2 -ω 1 × 1 D m1,m2 (J 1 , J 2 , ω 1 ) 1 D -m1,m3 (J 1 , J 3 , ω 2 ) 1 m 3 •Ω 3 -ω 2 δ F m2 (J 2 , 0) δ F m3 (J 3 , 0) , F (II) tot (J 1 ) = -i m1 m 1 B1 dω 1 2π e -iω1t B2 dω 2 2π e -iω2t (2π) d 1 m 1 •Ω 1 -ω 1 × m2 dJ 2 1 D -m1,m2 (J 1 , J 2 , ω 2 ) 1 m 2 •Ω 2 -ω 2 δ F m1 (J 1 , 0) δ F m2 (J 2 , 0) . (2.56)
In order to evaluate the two expressions from equation (2.56), one needs to compute the statistical expectation of the product δ F m1 (J 1 , 0) δ F m2 (J 2 , 0) that we will now evaluate.
Let us recall here that the fluctuations δF introduced in equation (2.41) are given by δF = F d -F , i.e. stand for the difference between the actual discrete DF F d and the smooth mean-field one F . Starting from the expression (2.37) of the discrete distribution function F d and temporarily dropping the time dependence, t = 0, to shorten the notations, one can write
δF (θ 1 , J 1 ) δF (θ 2 , J 2 ) = µ 2 N i,j δ D (θ 1 -θ i ) δ D (J 1 -J i ) δ D (θ 2 -θ j ) δ D (J 2 -J j ) -F (J 1 ) F (J 2 ) . (2.57)
Here we relied on the fact the fluctuations are of zero mean so that δF = 0. Let us now evaluate the first term from equation (2.57) which reads
µ 2 N i,j δ D (θ-θ i ) δ D (J 1 -J i ) δ D (θ 2 -θ j ) δ D (J 2 -J j ) = µ 2 N i δ D (θ 1 -θ i ) δ D (J 1 -J i ) δ D (θ 1 -θ 2 ) δ D (J 1 -J 2 ) + µ 2 N i =j δ D (θ 1 -θ i ) δ D (J 1 -J i ) δ D (θ 2 -θ j ) δ D (J 2 -J j ) = µ F (J 1 ) δ D (θ 1 -θ 2 ) δ D (J 1 -J 2 ) + F (J 1 ) F (J 2 ) , (2.58)
where, to get the last line, we assumed that the particles are initially uncorrelated and used the fact that F d = F . Injecting equation (2.58) into equation (2.57), we get the relation
δF (θ 1 , J 1 ) δF (θ 2 , J 2 ) = µ F (J 1 ) δ D (θ 1 -θ 2 ) δ D (J 1 -J 2 ) .
(2.59)
Finally, taking the Fourier transform of equation (2.59), one gets the needed correlations in the initial conditions as
δ F m1 (J 1 , 0) δ F m2 (J 2 , 0) = µ (2π) d δ -m2 m1 δ D (J 1 -J 2 ) F (J 1 ) . (2.60)
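As a simple sanity check of the Poisson shot-noise level predicted by equation (2.60), the following Python sketch works in a reduced, angle-only toy setting (d = 1, all particles sharing the same action, uniform mean DF F = M tot /(2π)) and compares the measured variance of a Fourier coefficient of δF with the expected value µF/(2π). This toy setting and its normalisations are illustrative assumptions, not part of the derivation above.

```python
import numpy as np

# Toy check of the Poisson shot-noise level, illustrating the scaling of equation (2.60)
# in a 1D angle-only setting: <|dF_m|^2> = mu * F / (2*pi), with F = Mtot / (2*pi).
rng = np.random.default_rng(1)
Mtot, N, m = 1.0, 1000, 3
mu = Mtot / N                      # individual mass mu = Mtot / N
F_mean = Mtot / (2.0 * np.pi)      # smooth (uniform) DF in angle

n_real = 4000                      # number of independent realisations
var = 0.0
for _ in range(n_real):
    theta = rng.uniform(0.0, 2.0 * np.pi, N)                      # uncorrelated particles
    dF_m = mu / (2.0 * np.pi) * np.sum(np.exp(-1j * m * theta))   # Fourier coefficient, m != 0
    var += abs(dF_m) ** 2
var /= n_real

print("measured :", var)
print("predicted:", mu * F_mean / (2.0 * np.pi))
```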
The two components of the diffusion flux from equation (2.56) then become
F (I) tot (J 1 ) = -iµ(2π) d m1 m 1 B1 dω 1 2π e -iω1t B2 dω 2 2π e -iω2t m 1 •∂F/∂J 1 m 1 •Ω 1 -ω 1 × m2 dJ 2 1 m 2 •Ω 2 -ω 1 1 D m1,m2 (J 1 , J 2 , ω 1 ) 1 D -m1,-m2 (J 1 , J 2 , ω 2 ) F (J 2 ) m 2 •Ω 2 +ω 2 , F (II) tot (J 1 ) = iµ m1 m 1 B1 dω 1 2π e -iω1t B2 dω 2 2π e -iω2t 1 m 1 •Ω 1 -ω 1 × 1 D -m1,-m1 (J 1 , J 1 , ω 2 ) F (J 1 ) m 1 •Ω 1 +ω 2 .
(2.61)
Let us now proceed to the successive evaluations of both terms in equation (2.61). Let us first evaluate the term F (I) tot (J 1 ), which corresponds to the diffusion component of the kinetic equation. Here the difficulty is to deal with the resonant poles appearing in equation (2.61). We follow the argument presented in equation (51.17) of Pitaevskii & Lifshitz (2012) and note that considering only contributions that do not decay in time, one can perform the substitution
1 m 2 •Ω 2 -ω 1 1 m 2 •Ω 2 +ω 2 -→ (2π) 2 δ D (ω 1 +ω 2 ) δ D (m 2 •Ω 2 -ω 1 ) . (2.62)
This substitution allows us to perform the integrations w.r.t. ω 1 and ω 2 in equation (2.61), so that
F (I) tot becomes
F (I) tot (J 1 ) = iµ(2π) d m1 m 1 B1 dω 1 m2 dJ 2 δ D (m 2 •Ω 2 -ω 1 ) ω 1 -m 1 •Ω 1 m 1 •∂F/∂J 1 F (J 2 ) |D m1,m2 (J 1 , J 2 , m 1 •Ω 1 -ω 1 )| 2 , (2.63)
where we relied on the relation 1/D -m1,-m2 (J 1 , J 2 , -ω) = 1/D * m1,m2 (J 1 , J 2 , ω) (see note [83] in Chavanis (2012b)). In equation (2.63), we may finally perform the integration w.r.t. ω 1 by lowering the contour B 1 to the lower axis and using the Landau prescription m 1 •Ω 1 → m 1 •Ω 1 -i0 + associated with the fact that the contour B 1 has to pass above the pole. We finally rely on Plemelj formula from equation (2.29). Because F (I) tot is a real quantity, only the Dirac delta remains, and equation (2.63) finally becomes
F (I) tot (J 1 ) = µπ(2π) d m1,m2 m 1 dJ 2 δ D (m 1 •Ω 1 -m 2 •Ω 2 ) |D m1,m2 (J 1 , J 2 , m 1 •Ω 1 )| 2 m 1 • ∂F ∂J 1 F (J 2 ) .
(2.64)
Coming back to equation (2.61), let us now evaluate the second flux component F (II) tot (J 1 ). This term is associated with the drift component of the kinetic equation. To perform the integrations over ω 1 and ω 2 , we distort once again the Bromwich contours B 1 and B 2 towards negative imaginary parts, while still remaining above all the singularities of the integrand. When deformed to large negative imaginary parts, the exponential terms e -iω1t and e -iω2t tend to 0, so that one should only account for the contributions from the poles. For B 1 , we note that there is only one pole, in ω 1 = m 1 •Ω 1 , and that this pole is located along the real axis. One should also pay careful attention to the direction of integration, so that here one has dω 1 f (ω 1 )/(ω 1 -ω 0 ) = -2iπf (ω 0 ). For the integration w.r.t. ω 2 , one first notes an obvious pole along the real axis, in ω 2 = -m 1 •Ω 1 . In addition, because the system is assumed to be stable, all the singularities associated with the susceptibility coefficients ω 2 → 1/D(ω 2 ) are located below the real axis. Such poles are then multiplied by exponentials decaying in time. Considering only contributions which do not decay in time, we restrict ourselves to the real pole in ω 2 = -m 1 •Ω 1 , again paying careful attention to the sign of the residues. Equation (2.61) gives
F (II) tot (J 1 ) = iµ m1 m 1 1 D -m1,-m1 (J 1 , J 1 , -m 1 •Ω 1 +i0 + ) F (J 1 ) = µ m1 m 1 Im 1 D m1,m1 (J 1 , J 1 , m 1 •Ω 1 +i0 + ) F (J 1 ) , (2.65)
where one should pay attention to the small positive imaginary part i0 + , which was added following the Landau prescription ω 2 → ω 2 +i0 + . This emphasises the fact that the contour B 2 has to pass above the pole. In equation (2.65), we also note that the two time dependences introduced by the two inverse Laplace transforms cancelled out, so that F (II) tot does not explicitly depend on time. To get the second relation in equation (2.65), we performed the change m 1 → -m 1 , and relied on the fact that F (II) tot is a real quantity, hence the imaginary part. The calculation of the imaginary part in equation (2.65) is presented in Appendix 2.B in equation (2.155). We refer to this calculation, so that we can finally rewrite equation (2.65) as
F (II) tot (J 1 ) = -µπ(2π) d m1,m2 m 1 dJ 2 δ D (m 1 •Ω 1 -m 2 •Ω 2 ) |D m1,m2 (J 1 , J 2 , m 1 •Ω 1 )| 2 F (J 1 ) m 2 • ∂F ∂J 2 .
(2.66)
We have now evaluated the two components of the diffusion flux F tot from equation (2.52). We have therefore derived a closed kinetic equation, the inhomogeneous Balescu-Lenard equation, that will be presented in detail in the upcoming section.
The Balescu-Lenard equation
Combining equations (2.64) and (2.66), one can estimate the total diffusion flux F tot . Equation (2.52) immediately gives the associated closed diffusion equation. This is the inhomogeneous Balescu-Lenard equation which reads
∂F ∂t = π(2π) d µ ∂ ∂J 1 • m1,m2 m 1 dJ 2 δ D (m 1 •Ω 1 -m 2 •Ω 2 ) D m1,m2 (J 1 , J 2 , m 1 •Ω 1 ) 2 × m 1 • ∂ ∂J 1 -m 2 • ∂ ∂J 2 F (J 1 , t) F (J 2 , t) .
(2.67)
Let us now detail the physical content of this diffusion equation. Let us first note that equation (2.67) is written as the divergence of a flux, so that it conserves the total number of particles. The presence of the prefactor µ = M tot /N illustrates the fact that the Balescu-Lenard equation was obtained thanks to a kinetic development at order 1/N . It captures first-order contributions associated with finite-N effects.
In equation ( 2.67), one should note in particular the presence of a resonance condition encapsulated by the Dirac delta
δ D (m 1 •Ω 1 -m 2 •Ω 2 )
, where m 1 , m 2 ∈ Z d are resonance vectors. This is associated with an integration over the dummy variable J 2 scanning action space looking for locations where the resonance condition is satisfied.
[Figure caption: In angle-action space, the trajectories of the particles are straight lines, with an intrinsic frequency Ω(J ). This frequency depends only on the actions and is illustrated by the left-hand curve. Here the frequency associated with the red particle is twice the one of the blue particle: the particles are in resonance. These resonant encounters in angle-action space are the ones captured by the Balescu-Lenard equation (2.67).]
The associated figure illustrates qualitatively such a non-local resonance condition in the case of a razor-thin disc. One should note that such resonant encounters are non-local in the sense that they do not require the resonating orbits to be close in position nor in action space. Equation (2.67) finally involves the dressed susceptibility coefficients 1/D m1,m2 (J 1 , J 2 , ω) introduced in equation (2.50). They encode the strength of the self-gravitating amplification within the system. Let us finally note that equation (2.67) scales like 1/(N D 2 ), so that increasing N or increasing the heat content of the system have the same effect of slowing down the diffusion. Equation (2.67) can be rewritten as an anisotropic Fokker-Planck diffusion equation by introducing the relevant drift and diffusion coefficients. It becomes
∂F ∂t = ∂ ∂J 1 • m1 m 1 A m1 (J 1 ) F (J 1 )+D m1 (J 1 ) m 1 • ∂F ∂J 1 , (2.68)
where A m1 (J 1 ) and D m1 (J 1 ) are respectively the drift and diffusion coefficients associated with a given resonance vector m 1 . As the Balescu-Lenard equation describes the self-consistent evolution of the DF F , the drift and diffusion coefficients depend secularly on F . This dependence was not written out explicitly to simplify the notations.
[Figure caption: The same two orbits in the rotating frame in which they are in resonance, here through an ILR-COR coupling (see figure 3.7.4). Bottom panel: Fluctuations in action space of the system's DF sourced by finite-N effects, exhibiting overdensities for the blue and red orbits. The dashed lines correspond to 3 contour levels of the intrinsic frequency respectively associated with the resonance vectors m1 (grey lines) and m2 (black lines). The two sets of orbits satisfy the resonance condition m1•Ω1 -m2•Ω2 = 0, and therefore lead to a secular diffusion of the system's orbital structure according to the Balescu-Lenard equation (2.67). Let us emphasise that resonant orbits need not be caught in the same resonance (m1 = m2), be close in position space nor in action space.]
Following equation (2.67), the drift coefficients are given by
A m1 (J 1 ) = -π(2π) d µ m2 dJ 2 δ D (m 1 •Ω 1 -m 2 •Ω 2 ) D m1,m2 (J 1 , J 2 , m 1 •Ω 1 ) 2 m 2 • ∂F ∂J 2 , (2.69)
while the diffusion coefficients are given by
D m1 (J 1 ) = π(2π) d µ m2 dJ 2 δ D (m 1 •Ω 1 -m 2 •Ω 2 ) D m1,m2 (J 1 , J 2 , m 1 •Ω 1 ) 2 F (J 2 ) .
(2.70)
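Evaluating the drift and diffusion coefficients (2.69) and (2.70) in practice requires locating, for each pair of resonance vectors (m 1 , m 2 ), the actions J 2 satisfying the resonance condition m 1 •Ω 1 = m 2 •Ω 2 . The following Python sketch performs this root search on a one-dimensional action grid for an assumed, purely illustrative power-law frequency profile Ω(J); a realistic application would instead use the orbital frequencies of the model under study.

```python
import numpy as np

def Omega(J):
    """Illustrative intrinsic frequency profile (placeholder, not a real model)."""
    return J ** (-1.5)          # Kepler-like scaling, Omega ~ J^{-3/2}

def resonant_actions(J1, m1, m2, J_grid):
    """Return the actions J2 on the grid where m1*Omega(J1) ~ m2*Omega(J2),
    i.e. where the resonance condition of equation (2.67) is satisfied."""
    target = m1 * Omega(J1)
    h = m2 * Omega(J_grid) - target
    roots = []
    # locate sign changes of h along the grid and refine by linear interpolation
    for i in range(len(J_grid) - 1):
        if h[i] == 0.0 or h[i] * h[i + 1] < 0.0:
            t = h[i] / (h[i] - h[i + 1])
            roots.append(J_grid[i] + t * (J_grid[i + 1] - J_grid[i]))
    return roots

J_grid = np.linspace(0.5, 5.0, 2001)
print(resonant_actions(J1=1.0, m1=2, m2=1, J_grid=J_grid))
# For Omega ~ J^{-3/2}, 2*Omega(1) = Omega(J2) gives J2 = 2^(-2/3) ~ 0.63.
```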
Finally, let us introduce the total diffusion flux F tot (J ) as
F tot (J ) = m m A m (J ) F (J )+D m (J ) m• ∂F ∂J , (2.71)
so that the Balescu-Lenard equation becomes
∂F ∂t = div(F tot ) . (2.72)
Here, with this convention, -F tot corresponds to the direction along which individual particles diffuse.
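As an illustration of the rewriting (2.68)-(2.72), the following Python sketch assembles the total flux F tot (J ) on a one-dimensional action grid from given drift and diffusion coefficients, and performs one explicit Euler step of the secular evolution. The coefficients are arbitrary placeholders standing in for the resonance integrals (2.69) and (2.70), so the numbers themselves carry no physical meaning.

```python
import numpy as np

def total_flux(J_grid, F, A, D, m_list):
    """Total diffusion flux F_tot(J) of equation (2.71) on a 1D action grid.

    F    : DF sampled on J_grid.
    A, D : dicts mapping a resonance number m to drift A_m(J) and diffusion D_m(J)
           coefficients sampled on J_grid (placeholders in this example).
    """
    dFdJ = np.gradient(F, J_grid)
    flux = np.zeros_like(F)
    for m in m_list:
        flux += m * (A[m] * F + D[m] * m * dFdJ)
    return flux

# toy setup: Gaussian DF and arbitrary smooth coefficients
J = np.linspace(0.1, 5.0, 200)
F = np.exp(-(J - 2.0) ** 2)
m_list = [1, 2]
A = {m: -0.01 * m * np.ones_like(J) for m in m_list}   # placeholder drift
D = {m: 0.005 / m * np.ones_like(J) for m in m_list}   # placeholder diffusion
F_tot = total_flux(J, F, A, D, m_list)

# secular evolution dF/dt = div(F_tot), equation (2.72): one explicit Euler step
dt = 0.1
F_new = F + dt * np.gradient(F_tot, J)
print(F_new[:5])
```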
The bare case: the Landau equation
When collective effects are neglected, the Balescu-Lenard equation (2.67) becomes the Landau equation (Polyachenko & Shukhman, 1982; Chavanis, 2007, 2010, 2013b), reading
∂F ∂t = π(2π) d µ ∂ ∂J 1 • m1,m2 m 1 dJ 2 δ D (m 1 •Ω 1 -m 2 •Ω 2 ) A m1,m2 (J 1 , J 2 ) 2 × m 1 • ∂ ∂J 1 -m 2 • ∂ ∂J 2 F (J 1 , t) F (J 2 , t) .
(2.73)
In equation (2.73), the dressed susceptibility coefficients 1/|D m1,m2 (J 1 , J 2 , ω)| 2 from equation (2.50) are replaced by the bare ones |A m1,m2 (J 1 , J 2 )| 2 . These are defined as the Fourier transform of the interaction potential U (Lynden-Bell, 1994; Pichon, 1994; Chavanis, 2013b), so that
A m1,m2 (J 1 , J 2 ) = 1 (2π) 2d dθ 1 dθ 2 U (x( θ 1 , J 1 )-x(θ 2 , J 2 ) ) e -i(m1•θ1-m2•θ2) .
(2.74)
In addition, these coefficients satisfy the symmetry relations
A m2,m1 (J 2 , J 1 ) = A -m1,-m2 (J 1 , J 2 ) = A * m1,m2 (J 1 , J 2 ) .
(2.75)
Note that the kinetic equations (2.67) and (2.73) share the same overall structure.
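To illustrate the definition (2.74), the following Python sketch evaluates a bare susceptibility coefficient by brute-force double integration over the angles, for a deliberately simple one-dimensional toy mapping x(θ, J ) and a softened interaction potential, both of which are assumptions made only for this example. It also checks the symmetry relation (2.75) numerically.

```python
import numpy as np

def bare_coefficient(m1, m2, J1, J2, n_theta=256, eps=0.1):
    """Bare susceptibility coefficient A_{m1,m2}(J1, J2) of equation (2.74),
    evaluated by direct double integration over the angles (d = 1).

    Toy ingredients (placeholders): epicyclic-like mapping x = J*(1 + cos(theta))
    and a softened attractive potential U(x) = -1/sqrt(x^2 + eps^2).
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    t1, t2 = np.meshgrid(theta, theta, indexing="ij")
    x1 = J1 * (1.0 + np.cos(t1))
    x2 = J2 * (1.0 + np.cos(t2))
    U = -1.0 / np.sqrt((x1 - x2) ** 2 + eps ** 2)
    phase = np.exp(-1j * (m1 * t1 - m2 * t2))
    dtheta = 2.0 * np.pi / n_theta
    return np.sum(U * phase) * dtheta ** 2 / (2.0 * np.pi) ** 2

A12 = bare_coefficient(m1=2, m2=1, J1=1.0, J2=1.5)
A21 = bare_coefficient(m1=1, m2=2, J1=1.5, J2=1.0)
# symmetry check, equation (2.75): A_{m2,m1}(J2, J1) = A*_{m1,m2}(J1, J2)
print(A12, np.conj(A21))
```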
The multi-component case
A crucial strength of the Balescu-Lenard formalism, already emphasised in Heyvaerts (2010) and Chavanis (2013b), is that it also allows for a self-consistent description of the simultaneous evolution of multiple populations of various masses. Let us now detail the structure of such a multi-component diffusion equation. (See Appendix 6.B for an illustration of how the multi-component Balescu-Lenard equation may be derived in the specific context of quasi-Keplerian systems.) Here, we consider a system made of multiple components, indexed by the letters "a" and "b". The particles of the component "a" have an individual mass µ a and follow the DF F a . Each DF F a is normalised such that dxdv F a = M a tot , where M a tot is the total active mass of component "a". The evolution of each DF is then given by
∂F a ∂t = π(2π) d ∂ ∂J 1 • m1,m2 m 1 dJ 2 δ D (m 1 •Ω 1 -m 2 •Ω 2 ) D m1,m2 (J 1 , J 2 , m 1 •Ω 1 ) 2 × b µ b F b (J 2 ) m 1 • ∂F a ∂J 1 -µ a F a (J 1 ) m 2 • ∂F b ∂J 2 .
(2.76)
In the multi-component case, the susceptibility coefficients are still given by equation (2.50). However, the response matrix now encompasses all the active components of the system, so that
M pq (ω) = (2π) d m dJ m•∂( b F b )/∂J ω-m•Ω ψ (p) * m (J ) ψ (q) m (J ) .
(2.77)
Similarly to equation (2.68), the multi-component Balescu-Lenard equation may also be written as an anisotropic diffusion equation, so that
∂F a ∂t = ∂ ∂J 1 • m1 m 1 b µ a A b m1 (J 1 ) F a (J 1 ) + µ b D b m1 (J 1 ) m 1 • ∂F a ∂J 1 . (2.78)
In equation (2.78), we introduced the multi-component drift and diffusion coefficients A b m1 (J 1 ) and D b m1 (J 1 ). They depend on the location J 1 in action space, the considered resonance m 1 , and the component "b", whose DF is the underlying DF used to estimate them. In analogy with equation (2.69), the multi-component drift coefficients are given by
A b m1 (J 1 ) = -π(2π) d m2 dJ 2 δ D (m 1 •Ω 1 -m 2 •Ω 2 ) D m1,m2 (J 1 , J 2 , m 1 •Ω 1 ) 2 m 2 • ∂F b ∂J 2 , (2.79)
while the diffusion ones, similarly to equation (2.70), read
D b m1 (J 1 ) = π(2π) d m2 dJ 2 δ D (m 1 •Ω 1 -m 2 •Ω 2 ) D m1,m2 (J 1 , J 2 , m 1 •Ω 1 ) 2 F b (J 2 ) .
(2.80)
One should pay attention to the fact that the multi-component drift and diffusion coefficients from equations (2.79) and (2.80) do not have the same dimension as the single component ones. In order to emphasise the process of mass segregation, let us finally rewrite equation (2.78) as
∂F a ∂t = ∂ ∂J 1 • m1 m 1 µ a A tot m1 (J 1 ) F a (J 1 ) + D tot m1 (J 1 ) m 1 • ∂F a ∂J 1 , (2.81)
where we introduced the total drift and diffusion coefficients A tot m1 (J 1 ) and D tot m1 (J 1 ) as
A tot m1 (J 1 ) = b A b m1 (J 1 ) ; D tot m1 (J 1 ) = b µ b D b m1 (J 1 ) . (2.82)
In equation (2.81), let us note that the only difference between the various components is the presence of the mass prefactor µ a in front of the total drift coefficient. This leads to the process of mass segregation when a spectrum of masses is involved.
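The following Python sketch illustrates the mass segregation encoded in equation (2.81): two populations evolve with the same (placeholder, constant) total drift and diffusion coefficients, but the drift term of each component is weighted by its individual mass µ a , so that the heavier population drifts further. This toy integration is only meant to exhibit the role of the µ a prefactor, not to solve the full multi-component equation.

```python
import numpy as np

# Toy illustration of equation (2.81): same A_tot and D_tot for all components,
# but the drift term is multiplied by the individual mass mu_a of each component.
J = np.linspace(0.1, 5.0, 200)
dJ = J[1] - J[0]
A_tot = 0.2 * np.ones_like(J)        # placeholder total drift coefficient
D_tot = 0.01 * np.ones_like(J)       # placeholder total diffusion coefficient
m1 = 1                               # single resonance kept for simplicity

def step(F, mu, dt):
    """One explicit Euler step of dF/dt = d/dJ [ m1*(mu*A_tot*F + D_tot*m1*dF/dJ) ]."""
    flux = m1 * (mu * A_tot * F + D_tot * m1 * np.gradient(F, dJ))
    return F + dt * np.gradient(flux, dJ)

F_light = np.exp(-(J - 2.5) ** 2)    # light population, mu_a = 0.005
F_heavy = np.exp(-(J - 2.5) ** 2)    # heavy population, mu_a = 0.05
for _ in range(2000):
    F_light = step(F_light, mu=0.005, dt=0.01)
    F_heavy = step(F_heavy, mu=0.05, dt=0.01)

# the heavier component drifts further towards small J (mass segregation)
print("mean J light:", np.sum(J * F_light) / np.sum(F_light))
print("mean J heavy:", np.sum(J * F_heavy) / np.sum(F_heavy))
```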
H-theorem
Following closely Heyvaerts (2010), let us define the system's entropy S(t) as
S(t) = -dJ 1 s(F (J 1 )) , with s(x) = x log(x) , (2.83)
where s(x) corresponds to Boltzmann's entropy function. Relying on equation (2.72), the time derivative of the entropy reads
dS dt = -dJ 1 s'(F (J 1 )) ∂F ∂t = -dJ 1 s'(F (J 1 )) ∂ ∂J 1 •F tot (J 1 ) . (2.84)
Let us also follow the definition of the total diffusion flux F tot (J 1 ) from equation (2.71) to rewrite F tot (J 1 ) as
F tot (J 1 ) = m1,m2 m 1 dJ 2 α m1,m2 (J 1 , J 2 ) m 1 • ∂ ∂J 1 -m 2 • ∂ ∂J 2 F (J 1 ) F (J 2 ) , (2.85)
with α m1,m2 (J 1 , J 2 ) given by
α m1,m2 (J 1 , J 2 ) = π(2π) d µ δ D (m 1 •Ω 1 -m 2 •Ω 2 ) |D m1,m2 (J 1 , J 2 , m 1 •Ω 1 )| 2 ≥ 0 . (2.86)
Integrating equation (2.84) by parts and ignoring boundary terms, one gets
dS dt = dJ 1 s''(F (J 1 )) ∂F ∂J 1 •F tot (J 1 ) . (2.87)
Thanks to the rewriting from equation (2.85), equation (2.87) becomes
dS dt = m1,m2 dJ 1 dJ 2 α m1,m2 (J 1 , J 2 ) s'' 1 (m 1 •F' 1 ) F 2 (m 1 •F' 1 )-F 1 (m 2 •F' 2 ) , (2.88)
where we used the shortened notations s'' i = s''(F (J i )), F i = F (J i ), and F' i = ∂F/∂J i . Equation (2.88) can be symmetrised via the substitutions m 1 ↔ m 2 and J 1 ↔ J 2 . As α m2,m1 (J 2 , J 1 ) = α m1,m2 (J 1 , J 2 ), equation (2.88) finally becomes
dS dt = 1 2 m1,m2 dJ 1 dJ 2 α m1,m2 (J 1 , J 2 ) F 2 s'' 1 (m 1 •F' 1 ) 2 -(m 1 •F' 1 )(m 2 •F' 2 )(F 1 s'' 1 +F 2 s'' 2 )+F 1 s'' 2 (m 2 •F' 2 ) 2 . (2.89)
As Boltzmann's entropy function satisfies s''(x) = 1/x, the square bracket in equation (2.89) may immediately be factored as
1 F 1 F 2 F 2 (m 1 •F' 1 )-F 1 (m 2 •F' 2 ) 2 ≥ 0 , (2.90)
so that one finally gets dS/dt ≥ 0. The Balescu-Lenard equation (2.67) therefore satisfies Boltzmann's H-theorem. This entropy increase corresponds to heat generation as the orbital structure of the system secularly rearranges itself, driven by self-induced collisional effects. The previous demonstration naturally extends to the Landau equation (2.73), but also, more interestingly, to the multi-component Balescu-Lenard equation (2.76). Indeed, defining the system's total entropy S tot as
S tot (t) = -dJ 1 a 1 µ a s(F a (J 1 )) , (2.91)
and following the same approach, one can show that for s''(x) = 1/x, one has dS tot /dt ≥ 0. Let us finally note that this does not necessarily imply that the entropy of each individual component increases.
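The factorisation leading from equation (2.89) to equation (2.90) can be checked mechanically; the following short sympy snippet verifies that, with s''(x) = 1/x, the square bracket of equation (2.89) is exactly the perfect square of equation (2.90). The scalars a 1 and a 2 stand for the dot products m 1 •F' 1 and m 2 •F' 2 .

```python
import sympy as sp

# symbols: F_i = F(J_i) > 0, a_i = m_i . F'_i (scalars standing for the dot products)
F1, F2 = sp.symbols("F1 F2", positive=True)
a1, a2 = sp.symbols("a1 a2", real=True)
s1pp, s2pp = 1 / F1, 1 / F2          # Boltzmann entropy function: s''(x) = 1/x

# square bracket of equation (2.89)
bracket = F2 * s1pp * a1**2 - a1 * a2 * (F1 * s1pp + F2 * s2pp) + F1 * s2pp * a2**2
# factored form of equation (2.90)
square = (F2 * a1 - F1 * a2) ** 2 / (F1 * F2)

print(sp.simplify(bracket - square))   # -> 0, i.e. equation (2.90) holds
```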
Conclusion
In this chapter, we presented two important sources of diffusion driving the secular evolution of self-gravitating systems. The first source, presented in section 2.2, considers the case of a collisionless system undergoing external perturbations. The second source, presented in section 2.3, is captured by the Balescu-Lenard equation, which describes the long-term effects of finite-N fluctuations on isolated discrete self-gravitating systems. In our two derivations, we emphasised the strong similarities existing between the two approaches, as can be seen in particular in their similar decoupled evolution equations.
Let us finally underline that both equations (2.31) and (2.67) share the property that they describe strongly anisotropic diffusion in action space (see figure 2.2.1) and account for the system's internal susceptibility (via the response matrix from equation (2.17) and the associated gravitational polarisation, see figure 1.3.6). Because they are sourced by different fluctuations, either external or internal, these two orbital diffusion processes provide the ideal frameworks in which to study the secular evolution of self-gravitating systems.
The rest of the thesis is focused on illustrating, for various astrophysical systems, how these formalisms allow for a detailed description of their secular dynamics. In chapter 3, we will consider the case of razor-thin stellar discs. In order to obtain simple quadratures for the diffusion fluxes, we will develop a razor-thin WKB formalism (i.e. a restriction to radially tightly wound perturbations) providing a straightforward understanding of the regions of maximum amplification within the disc. We will illustrate how the functional form of the diffusion coefficients explains the self-induced formation of resonant ridges in the disc's DF, as observed in numerical simulations. In chapter 4, we will return to the same razor-thin stellar discs, but will devote our efforts to correctly accounting for the disc's self-gravity and the associated strong amplification. This will be shown to significantly hasten the diffusion in the disc. In addition, in Appendix 4.D, we will illustrate how the same method may also be applied to study the long-term dynamics of 3D spherical systems such as dark matter haloes. This framework provides a promising way to investigate the secular transformation of dark matter haloes' cusps into cores. In chapter 5, we will extend our WKB approximation to apply it to thickened stellar discs. We will investigate various possible mechanisms of thickening, such as the disc's internal Poisson shot noise, a series of central decaying bars, or the joint evolution of giant molecular clouds within the disc. Finally, in chapter 6, we will consider the case of quasi-Keplerian systems, such as galactic centres, for which the presence of a dominating central body imposes a degenerate Keplerian dynamics. Once the formalism is tailored for such systems, we will detail in particular how the Balescu-Lenard approach recovers the process of "resonant relaxation" specific to these systems.
Future works
The previous formalisms could be generalised in various ways.
In Appendix 2.C, we presented a new method based on a functional approach to derive the inhomogeneous Landau equation. Because of the simplicity of the required calculations, this throws new light on the complex dynamical processes at play. One could hope to generalise this calculation to account for collective effects and recover the inhomogeneous Balescu-Lenard equation. Such a calculation is expected to be more demanding, as it will involve a Fredholm type equation, such as equation (2.125). Similarly, we showed in Fouvry et al. (2016b) how the same functional approach could also be transposed to the kinetic theory of two-dimensional point vortices (Chavanis, 2012d,c). One should investigate other physical systems for which this approach could also be successful. Finally, it would be of particular interest to apply this method to derive a closed kinetic equation when higher order correlation terms are accounted for. This could for example allow us to describe the dynamics of 1D homogeneous systems, for which the 1/N Balescu-Lenard collision term vanishes by symmetry (Eldridge & Feix, 1963;Kadomtsev & Pogutse, 1970). This is also the case for the Hamiltonian Mean Field model (HMF) (Chavanis et al., 2005;Bouchet & Dauxois, 2005).
Inspired by Pichon & Aubert (2006), the previous approaches could also be extended and developed for open systems, by accounting for possible sources and sinks of particles. Similarly, it could prove interesting to investigate the Balescu-Lenard equation in a context where the system's number of particles is allowed to evolve during the secular evolution, to describe for example the progressive dissolution of overdensities, etc. Similarly, as can be seen in the proposed derivations, all these formalisms rely on the fundamental assumption of integrability, i.e. on the existence of angle-action coordinates. It would be of interest to investigate as well how such approaches could be tailored to deal with chaotic behaviours and their associated diffusion. Finally, one could also investigate within these frameworks the role that gas may play in the dynamical properties of the system. Indeed, one crucial property of gas is that it cannot shell-cross: it shocks. This typically means that the gas component is dynamically much colder than its stellar counterpart, which alters the system's dynamical susceptibility.
µ dx i dt = ∂H ∂v i ; µ dv i dt = - ∂H ∂x i , (2.92)
where (x i , v i ) corresponds to the position and velocity of particle i. The total Hamiltonian H appearing in equation (2.92) encompasses all the binary interactions between particles, so that
H = µ 2 N i=1 v 2 i + µ 2 N i<j U (|x i -x j |) , (2.93)
where U (|x|) stands for the interaction potential, e.g., U (|x|) = -G/|x| in the gravitational context. As will be underlined in chapter 6 when considering quasi-Keplerian systems, one can easily add an external potential to this Hamiltonian, and the associated hierarchy equations are straightforward to deduce. While equation (2.92) captures the individual dynamics of the system's components, we are interested in a statistical description of our system. As a consequence, let us introduce the system's N -body probability distribution function (PDF) P N (x 1 , v 1 , ..., x N , v N , t), which gives the probability of finding at time t particle 1 at position x 1 with velocity v 1 , particle 2 at position x 2 with velocity v 2 , etc. We choose the convention that
dΓ 1 dΓ 2 ...dΓ N P N (Γ 1 , ...Γ N , t) = 1 , (2.94)
where we introduced the phase space coordinates Γ i = (x i , v i ), so that dΓ i = dx i dv i . The evolution of P N is given by Liouville's equation (see equation (1.9)) which reads
∂P N ∂t + N i=1 v i • ∂P N ∂x i +µF tot i • ∂P N ∂v i = 0 , (2.95)
where we introduced the total force, F tot i , exerted on particle i as
F tot i = N j =i F ij = - N j =i ∂U ij ∂x i .
(2.96)
In equation (2.96), F ij stands for the force exerted by particle j on particle i. It satisfies F ij = -∂U ij /∂x i , where we wrote the interaction potential as U ij = U (|x i -x j |). At this stage, let us introduce the reduced probability distribution functions P n , obtained by marginalising P N over the coordinates of the last N -n particles, as
P n (Γ 1 , ..., Γ n , t) = dΓ n+1 ...dΓ N P N (Γ 1 , ..., Γ N , t) . (2.97)
Integrating Liouville's equation (2.95) w.r.t. dΓ n+1 ...dΓ N , and relying on the fact that the particles are identical so that P N is symmetric under their permutation, one obtains the n-th equation of the BBGKY hierarchy as
∂P n ∂t + n i=1 v i • ∂P n ∂x i + n i=1 n k=1,k≠i µF ik • ∂P n ∂v i + (N -n) n i=1 dΓ n+1 µF i,n+1 • ∂P n+1 ∂v i = 0 . (2.98)
One can note that equation (2.98) is defined on the smaller space (Γ 1 , ..., Γ n ). The first three terms only involve the first n particles, while the last collision term involves the reduced PDF P n+1 of higher order, i.e. the BBGKY hierarchy is not closed. In order to simplify the prefactors present in equation (2.98), let us introduce the reduced distribution functions f n as
f n (Γ 1 , ..., Γ n , t) = µ n N ! (N -n)! P n (Γ 1 , ..., Γ n , t) .
(2.99)
The hierarchy from equation (2.98) immediately becomes
∂f n ∂t + n i=1 v i • ∂f n ∂x i + n i=1 n k=1,k =i µF ik • ∂f n ∂v i + n i=1 dΓ n+1 F i,n+1 • ∂f n+1 ∂v i = 0 .
(2.100) Equation (2.100) corresponds to the traditional writing of the BBGKY hierarchy. In order to emphasise the importance of the contributions arising from correlations between particles, let us introduce the cluster representation of the reduced distribution functions. We therefore define the 2-body correlation
g 2 as f 2 (Γ 1 , Γ 2 ) = f 1 (Γ 1 )f 1 (Γ 2 )+g 2 (Γ 1 , Γ 2 ) , (2.101)
where the dependences w.r.t. t were not written out explicitly to simplify the notations. Similarly, we introduce the 3-body autocorrelation g 3 , so that f 3 reads
f 3 (Γ 1 , Γ 2 , Γ 3 ) = f 1 (Γ 1 )f 1 (Γ 2 )f 1 (Γ 3 )+f 1 (Γ 1 )g 2 (Γ 2 , Γ 3 )+f 1 (Γ 2 )g 2 (Γ 1 , Γ 3 )+f 1 (Γ 3 )g 2 (Γ 1 , Γ 2 )+g 3 (Γ 1 , Γ 2 , Γ 3 ).
(2.102) Thanks to the convention from equation (2.94), it is straightforward to check that one has the normalisations
dΓ 1 f 1 (Γ 1 ) = µN ; dΓ 1 dΓ 2 g 2 (Γ 1 , Γ 2 ) = -µ 2 N ; dΓ 1 dΓ 2 dΓ 3 g 3 (Γ 1 , Γ 2 , Γ 3 ) = 2µ 3 N .
(2.103)
As the mass of the individual particles is given by µ = M tot /N , one immediately gets the scalings w.r.t. the number of particles as f 1 ∼ 1, g 2 ∼ 1/N , and g 3 ∼ 1/N 2 . Thanks to these decompositions, the two first equations of the BBGKY hierarchy from equation (2.100) respectively become
∂f 1 ∂t +v 1 • ∂f 1 ∂x 1 + dΓ 2 F 12 f 1 (Γ 2 ) • ∂f 1 ∂v 1 + dΓ 2 F 12 • ∂g 2 (Γ 1 , Γ 2 ) ∂v 1 = 0 , (2.104)
and
1 2 ∂g 2 ∂t +v 1 • ∂g 2 ∂x 1 + dΓ 3 F 13 f 1 (Γ 3 ) • ∂g 2 ∂v 1 +µF 12 • ∂f 1 ∂v 1 f 1 (Γ 2 )+ dΓ 3 F 13 g 2 (Γ 2 , Γ 3 ) • ∂f 1 ∂v 1
+µF 12 • ∂g 2 ∂v 1 + dΓ 3 F 13 • ∂g 3 (Γ 1 , Γ 2 , Γ 3 ) ∂v 1 +(1 ↔ 2) = 0 , (2.105)
where (1 ↔ 2) stands for the permutation of indices 1 and 2, and applies to all preceding terms. When considering the long-term evolution induced by discreteness effects, one may perform a truncation at order 1/N of the two equations (2.104) and (2.105). This requires relying on the scalings from equation (2.103), as well as on the fact that µ ∼ 1/N and F ij ∼ 1. In equation (2.104), all the terms are at least of order 1/N , so that they should all be conserved. In equation (2.105), all the terms on the first line are of order 1/N and have to be conserved, while all the terms on the second line are of order 1/N 2 and may therefore be neglected. In addition to these truncations, and in order to consider quantities of order 1, let us introduce the system's 1-body DF F and the 2-body correlation function C as
F = f 1 ; C = g 2 µ .
(2.106)
It is straightforward to note that F ∼ 1 and C ∼ 1. When truncated at order 1/N , the first two equations (2.104) and (2.105) finally take the form
∂F ∂t +v 1 • ∂F ∂x 1 + dΓ 2 F 12 F (Γ 2 ) • ∂F ∂v 1 +µ dΓ 2 F 12 • ∂C(Γ 1 , Γ 2 ) ∂v 1 = 0 , (2.107)
and
1 2 ∂C ∂t +v 1 • ∂C ∂x 1 + dΓ 3 F 13 F (Γ 3 ) • ∂C ∂v 1 +F 12 • ∂F ∂v 1 F (Γ 2 ) + dΓ 3 F 13 C(Γ 2 , Γ 3 ) • ∂F ∂v 1 +(1 ↔ 2) = 0 . (2.108)
2.B Derivation of the Balescu-Lenard equation via the BBGKY hierarchy
In this Appendix, we revisit the derivation of the Balescu-Lenard equation (2.67) following the method presented in Heyvaerts (2010). This method, based on the direct resolution of the BBGKY hierarchy, is complementary to the second approach subsequently proposed by Chavanis (2012b), based on the Klimontovich equation and already presented in section 2.3. As already shown in Appendix 2.A, at order 1/N , the dynamics of a self-gravitating system made of N identical particles is fully characterised by its 1-body DF F and 2-body autocorrelation C. These two dynamical quantities are coupled by the first two truncated equations of the BBGKY hierarchy, namely equations (2.107) and (2.108). They can be rewritten as
∂F ∂t + v 1 • ∂ ∂x 1 + dΓ 2 F 12 F (Γ 2 ) • ∂ ∂v 1 F = -µ dΓ 2 F 12 • ∂C(Γ 1 , Γ 2 ) ∂v 1 , (2.109)
and
∂C ∂t + v 1 • ∂ ∂x 1 + dΓ 3 F 13 F (Γ 3 ) • ∂ ∂v 1 C + v 2 • ∂ ∂x 2 + dΓ 3 F 23 F (Γ 3 ) • ∂ ∂v 2 C + dΓ 3 F 13 C(Γ 2 , Γ 3 ) • ∂F ∂v 1 + dΓ 3 F 23 C(Γ 1 , Γ 3 ) • ∂F ∂v 2 = -F 12 • ∂ ∂v 1 - ∂ ∂v 2 F (Γ 1 )F (Γ 2 ) . (2.110)
2.B.1 Solving for the autocorrelation
Let us introduce the source term S 2 appearing in the r.h.s. of equation (2.110) as
S 2 (Γ 1 , Γ 2 , t) = -F 12 • ∂ ∂v 1 - ∂ ∂v 2 F (Γ 1 )F (Γ 2 ) . (2.111)
Equation (2.110) can be solved for C(Γ 1 , Γ 2 , t) by working out the Green's function G (2) (Γ 1 , Γ 2 , Γ' 1 , Γ' 2 , τ ) of the linear differential operator in its l.h.s. Indeed, the solution for C(Γ 1 , Γ 2 , t) can be written as
C(Γ 1 , Γ 2 , t) = +∞ 0 dτ dΓ' 1 dΓ' 2 G (2) (Γ 1 , Γ 2 , Γ' 1 , Γ' 2 , τ ) S 2 (Γ' 1 , Γ' 2 , t-τ ) . (2.112)
The Green's function G (2) satisfies the homogeneous equation associated with the l.h.s. of equation (2.110). It reads
∂G (2) ∂τ + v 1 • ∂ ∂x 1 + dΓ 3 F 13 F (Γ 3 ) • ∂ ∂v 1 G (2) + v 2 • ∂ ∂x 2 + dΓ 3 F 23 F (Γ 3 ) • ∂ ∂v 2 G (2) + dΓ 3 F 13 G (2) (Γ 3 , Γ 2 , Γ' 1 , Γ' 2 , τ ) • ∂F ∂v 1 + dΓ 3 F 23 G (2) (Γ 1 , Γ 3 , Γ' 1 , Γ' 2 , τ ) • ∂F ∂v 2 = 0 , (2.113)
where we assumed that the source term S 2 (t) was effectively turned on only for t ≥ 0, so that S 2 (t < 0) = 0. Moreover, the Green's function initially satisfies G (2) (Γ 1 , Γ 2 , Γ' 1 , Γ' 2 , 0) = δ D (Γ 1 -Γ' 1 )δ D (Γ 2 -Γ' 2 ).
Once the autocorrelation has been expressed as a function of F , i.e. C = C[F ], one may finally proceed to the evaluation of the collision operator C [F ] appearing in the r.h.s. of equation (2.109), which reads
C [F ] = -µ dΓ 2 F 12 • ∂C[F ](Γ 1 , Γ 2 ) ∂v 1 . (2.114)
Because the linear operator appearing in equation (2.113) is the sum of two operators acting separately on the coordinates Γ 1 and Γ 2 , and because the initial condition factorises, one may write the 2-body Green's function G (2) as the product of two 1-body Green's functions G (1) , so that
G (2) (Γ 1 , Γ 2 , Γ' 1 , Γ' 2 , τ ) = G (1) (Γ 1 , Γ' 1 , τ ) G (1) (Γ 2 , Γ' 2 , τ ) , (2.115)
where the 1-body Green's function G (1) satisfies the linearised 1-body Vlasov equation, namely
∂G (1) (Γ 1 , Γ 1 , τ ) ∂τ + v 1 • ∂ ∂x 1 + dΓ 2 F 12 F (Γ 2 ) • ∂ ∂v 1 G (1) (Γ 1 , Γ 1 , τ ) + dΓ 2 G (1) (Γ 2 , Γ 1 , τ )F 12 • ∂F ∂v 1 = 0 , (2.116)
with the initial condition
G (1) (Γ 1 , Γ' 1 , 0) = δ D (Γ 1 -Γ' 1 ). Because of the causality requirement, one needs to solve equation (2.116) only for τ ≥ 0. To do so, we rely on Bogoliubov's ansatz, which assumes that the system's 1-body DF F only evolves on a slow secular timescale, while the fluctuations and correlations evolve on a fast dynamical timescale. As a consequence, in equation (2.116), which describes the evolution of fluctuations, one may assume F to be frozen. Because of this decoupling, the correlations at a given time t can be seen as functionals of F evaluated at the very same time. To solve equation (2.116), let us perform a Laplace transform following the conventions from equation (2.44). One gets
-iω G (1) (Γ 1 , Γ 1 , ω)+ v 1 • ∂ ∂x 1 + dΓ 2 F 12 F (Γ 2 ) • ∂ ∂v 1 G (1) (Γ 1 , Γ 1 , ω) + dΓ 2 G (1) (Γ 2 , Γ 1 , ω)F 12 • ∂F ∂v 1 = δ D (Γ 1 -Γ 1 ) .
(2.117)
2.B.2 Application to inhomogeneous systems
Let us now assume that the system's mean potential is integrable, so that the physical phase space coordinates (x, v) may be remapped to angle-action ones (θ, J ). Such a mapping allows for a simple description of the intricate trajectories of individual particles. This change of coordinates is canonical and the infinitesimal volumes are conserved, i.e. dΓ = dxdv = dθdJ . Thanks to the adiabatic approximation (Heyvaerts, 2010; Chavanis, 2012b, 2013b), let us also assume that the system's 1-body DF is a quasi-stationary solution of the collisionless dynamics, so that F (θ, J , t) = F (J , t). The angle-action coordinates satisfy two important additional properties. First, the derivatives along the mean motion take the simple form
v 1 • ∂ ∂x 1 + dΓ 2 F 12 F (Γ 2 ) • ∂ ∂v 1 = Ω 1 • ∂ ∂θ 1 , (2.118)
where Ω 1 are the intrinsic frequencies of motion associated with the mean potential. Secondly, the Poisson brackets are invariant under the change of coordinates (x, v) → (θ, J ), so that for any functions
L 1 (x, v) and L 2 (x, v), one has
∂L 1 ∂x • ∂L 2 ∂v - ∂L 1 ∂v • ∂L 2 ∂x = ∂L 1 ∂θ • ∂L 2 ∂J - ∂L 1 ∂J • ∂L 2 ∂θ . (2.119)
With these transformations, equation (2.117) becomes
-iω G (1) (Γ 1 , Γ' 1 , ω)+Ω 1 • ∂ G (1) (Γ 1 , Γ' 1 , ω) ∂θ 1 -dΓ 2 G (1) (Γ 2 , Γ' 1 , ω) ∂U 12 ∂θ 1 • ∂F ∂J 1 = δ D (Γ 1 -Γ' 1 ) . (2.120)
Following the convention from equation (2.6), let us now perform a Fourier transform of equation (2.120) w.r.t. the angles θ 1 , so that it becomes
-iω G (1) m1 (J 1 , Γ' 1 , ω)+im 1 •Ω 1 G (1) m1 (J 1 , Γ' 1 , ω) -(2π) d im 1 • ∂F ∂J 1 m2 dJ 2 G (1) m2 (J 2 , Γ' 1 , ω) A m1,m2 (J 1 , J 2 ) = e -im1•θ' 1 (2π) d δ D (J 1 -J' 1 ) , (2.121)
where the bare susceptibility coefficients A m1,m2 (J 1 , J 2 ) were introduced in equation (2.74). Equation (2.121) can easily be rewritten as
G (1) m1 (J 1 , Γ' 1 , ω)+(2π) d m 1 •∂F/∂J 1 ω-m 1 •Ω 1 m2 dJ 2 G (1) m2 (J 2 , Γ' 1 , ω) A m1,m2 (J 1 , J 2 ) = i (2π) d e -im1•θ' 1 ω-m 1 •Ω 1 δ D (J 1 -J' 1 ) . (2.122)
At this stage, let us note that equation (2.122) takes the form of a Fredholm equation, as the Green's function appears twice in its l.h.s., in particular once under an integral. The method to solve such an equation is to rely on Kalnajs' matrix method (Kalnajs, 1976). Let us therefore introduce a basis of potentials and densities (ψ (p) , ρ (p) ) as in equation (2.12), thanks to which the potential perturbations may be decomposed. Let us first develop the interaction potential U on these elements. We consider the function x 1 → U (|x 1 -x 2 |) and decompose it on the basis elements ψ (p) (x 1 ). This takes the form U (|x 1 -x 2 |) = p u p (x 2 ) ψ (p) (x 1 ), where the coefficients u p (x 2 ) are given by
u p (x 2 ) = -dx 1 U (|x 1 -x 2 |) ρ (p) * (x 1 ) = -ψ (p) * (x 2 ) .
(2.123)
Because they were defined as the Fourier transform in angles of the interaction potential, the bare susceptibility coefficients from equation (2.74) can immediately be rewritten as
A m1,m2 (J 1 , J 2 ) = - p ψ (p) m1 (J 1 ) ψ (p) * m2 (J 2 ) .
(2.124)
In order to invert the l.h.s. of equation (2.122), let us perform on G (1) m1 the same operations as the ones operating on G (1) m2 . This amounts to multiplying equation (2.122) by (2π) d m1 dJ 1 ψ (q) * m1 (J 1 ), so that it becomes
(2π) d m1 dJ 1 ψ (q) * m1 (J 1 ) G (1) m1 (J 1 , Γ 1 , ω) - p (2π) d m1 dJ 1 m 1 •∂F/∂J 1 ω-m 1 •Ω 1 ψ (p) m1 (J 1 ) ψ (q) * m1 (J 1 ) (2π) d m2 dJ 2 G (1) m2 (J 2 , Γ 1 , ω) ψ (p) * m2 (J 2 ) = m1 i e -im1•θ 1 ω-m 1 •Ω 1 ψ (q) * m1 (J 1 ) .
(2.125)
In order to clarify equation (2.125), let us introduce the notations
K p (Γ 1 , ω) = (2π) d m dJ G (1) m (J , Γ 1 , ω) ψ (p) * m (J ) ; L p (Γ 1 , ω) = m i e -im•θ 1 ω-m•Ω 1 ψ (p) * m (J 1 ) . (2.126)
Recalling also the expression of the response matrix from equation (2.17), we may finally rewrite equation (2.125) under the shortened form
K p (Γ 1 , ω) - q M pq (ω) K q (Γ 1 , ω) = L p (Γ 1 , ω) .
(2.127) Assuming that the system considered is dynamically stable, so that [I-M(ω)] can be inverted, equation (2.127) finally leads to
K p (Γ 1 , ω) = q I-M(ω) -1 pq L q (Γ 1 , ω) .
(2.128)
Thanks to equation (2.128), one can finally rewrite equation (2.122) as
G (1) m1 (J 1 , Γ 1 , ω) = 1 (2π) d i e -im1•θ 1 ω-m 1 •Ω 1 δ D (J 1 -J 1 )+ m 1 •∂F/∂J 1 ω-m 1 •Ω 1 m 1 1 D m1,m 1 (J 1 , J 1 , ω) i e -im 1 •θ 1 ω-m 1 •Ω 1 , (2.129)
where the dressed susceptibility coefficients, 1/D m1,m 1 , have been introduced in equation (2.50).
Thanks to the inverse Fourier transform from equation (2.6), one can finally obtain the expression of G (1) (Γ 1 , Γ 1 , ω) as
G (1) (Γ 1 , Γ 1 , ω) = m1,m 1 i e i(m1•θ1-m 1 •θ 1 ) ω-m 1 •Ω 1 δ m 1 m1 (2π) d δ D (J 1 -J 1 ) + m 1 •∂F/∂J 1 (ω-m 1 •Ω 1 ) D m1,m 1 (J 1 , J 1 , ω) = m1,m 1 G (1) m1,m 1 (J 1 , J 1 , ω) e i(m1•θ1-m 1 •θ 1 ) .
(2.130)
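Once projected onto a finite number of basis elements, equations (2.127)-(2.128) reduce to a small linear system. The following Python sketch solves such a discretised analogue (I-M)K = L with a random placeholder response matrix, simply to illustrate the inversion step of the matrix method; it is not tied to any specific basis or potential.

```python
import numpy as np

# Discretised analogue of equations (2.127)-(2.128): (I - M) K = L.
rng = np.random.default_rng(2)
n = 6
M = 0.15 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
L = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# stability assumed here, so that (I - M) can indeed be inverted
K = np.linalg.solve(np.eye(n) - M, L)

# check that K satisfies K - M K = L, i.e. the Fredholm-like relation
print(np.allclose(K - M @ K, L))
```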
2.B.3 Rewriting the collision operator
Gathering equations (2.111), (2.112), (2.114) and (2.115), and writing the 1-body Green's functions through their inverse Laplace transforms following equation (2.44), the collision operator can be rewritten as
C [F ] = +∞ 0 dτ dΓ 2 dΓ' 1 dΓ' 2 B dω 2π B dω' 2π e -i(ω+ω')τ × µ F 12 • ∂ ∂v 1 G (1) (Γ 1 , Γ' 1 , ω) G (1) (Γ 2 , Γ' 2 , ω') F 1'2' • ∂ ∂v' 1 - ∂ ∂v' 2 F (Γ' 1 ) F (Γ' 2 ) . (2.131)
In equation (2.131), relying on the invariance of the Poisson brackets from equation (2.119), one can write
F 12 • ∂ G (1) (Γ 1 ) ∂v 1 = - ∂U 12 ∂θ 1 • ∂ G (1) (Γ 1 ) ∂J 1 + ∂U 12 ∂J 1 • ∂ G (1) (Γ 1 ) ∂θ 1 = - ∂ ∂J 1 • dθ 1 (2π) d ∂U 12 ∂θ 1 G (1) (Γ 1 ) , (2.132)
where we used the shortened notation
G (1) (Γ 1 ) = G (1) (Γ 1 , Γ 1 , ω).
To obtain the second line of equation (2.132), we relied on Schwarz' theorem. We also relied on the fact that during the secular diffusion, the 1-body DF is of the form F = F (J 1 , t), allowing us to perform an angle average w.r.t. θ 1 . Similarly, one can write
F 1'2' • ∂ ∂v' 1 - ∂ ∂v' 2 F (Γ' 1 )F (Γ' 2 ) = - ∂U 1'2' ∂θ' 1 • ∂F ∂J' 1 F (J' 2 )+ ∂U 2'1' ∂θ' 2 • ∂F ∂J' 2 F (J' 1 ) . (2.133)
Injecting equations (2.132) and (2.133) into equation (2.131), and performing the angle integrations thanks to the Fourier expansions of G (1) and U , equation (2.131) becomes
C [F ] = +∞ 0 dτ dJ 2 dJ 1 dJ 2 B dω 2π B dω 2π e -i(ω+ω )τ µ(2π) 3d × ∂ ∂J 1 • m1,m2 m 1 ,m 2 G (1) m1,m 1 (ω) G (1) m2,m 2 (ω ) m 1 A -m1,m2 × A m 1 ,-m 2 m 1 • ∂F ∂J 1 F (J 2 ) + A m 2 ,-m 1 m 2 • ∂F ∂J 2 F (J 1 ) , (2.134)
where we used the shortened notations G (1) m1,m' 1 (ω) = G (1) m1,m' 1 (J 1 , J' 1 , ω) and A m1,m2 = A m1,m2 (J 1 , J 2 ).
Let us now use the explicit expression of the Fourier coefficients of the 1-body Green's function from equation (2.130). Equation (2.134) becomes
C [F ] = - +∞ 0 dτ dJ 2 dJ 1 dJ 2 B dω 2π B dω 2π e -i(ω+ω )τ µ(2π) d × ∂ ∂J 1 • m1,m2 m 1 ,m 2 1 ω-ω 1 1 ω -ω 2 m 1 A -m1,m2 × δ m 1 m1 δ D (J 1 -J 1 )+(2π) d m 1 •∂F/∂J 1 (ω-ω 1 )D m1,m 1 (ω) δ m 2 m2 δ D (J 2 -J 2 )+(2π) d m 2 •∂F/∂J 2 (ω -ω 2 )D m2,m 2 (ω ) × A m 1 ,-m 2 m 1 • ∂F ∂J 1 F (J 2 ) + A m 2 ,-m 1 m 2 • ∂F ∂J 2 F (J 1 ) , (2.135)
where we used the shortened notations
1/D m1,m' 1 (ω) = 1/D m1,m' 1 (J 1 , J' 1 , ω), as well as ω 1/2 = m 1/2 •Ω 1/2 and ω' 1/2 = m' 1/2 •Ω' 1/2 .
. The next step of the calculation is to deal with the integration and sum w.r.t. J 2 and m 2 . One can write
m2 dJ 2 A -m1,m2 ω -ω 2 δ m 2 m2 δ D (J 2 -J 2 )+(2π) d m 2 •∂F/∂J 2 (ω -ω 2 ) D m2,m 2 (ω ) = - 1 ω -ω 2 1 D -m1,m 2 (ω )
, (2.136)
where we relied on the intrinsic definition of the dressed susceptibility coefficients 1/D m1,m2 given by
1 D m1,m2 (J 1 , J 2 , ω) = -A m1,m2 (J 1 , J 2 ) -(2π) d m3 dJ 3 m 3 •∂F/∂J 3 ω-m 3 •Ω 3 A m1,m3 (J 1 , J 3 ) D m3,m2 (J 3 , J 1 , ω) .
(2.137) Equation (2.137) is straightforward to obtain thanks to the basis decompositions of the susceptibility coefficients from equations (2.50) and (2.124), and the definition of the response matrix from equation (2.17). Equation (2.135) becomes
C [F ] = +∞ 0 dτ dJ 1 dJ 2 B dω 2π B dω 2π e -i(ω+ω )τ µ(2π) d × ∂ ∂J 1 • m1 m 1 ,m 2 1 ω-ω 1 1 ω -ω 2 m 1 1 D -m1,m 2 (ω ) × δ m 1 m1 δ D (J 1 -J 1 )+(2π) d m 1 •∂F/∂J 1 (ω-ω 1 )D m1,m 1 (ω) × A m 1 ,-m 2 m 1 • ∂F ∂J 1 F (J 2 ) + A m 2 ,-m 1 m 2 • ∂F ∂J 2 F (J 1 ) . (2.138)
The next step of the calculation is to perform the integration and the sum w.r.t. J 1 and m 1 appearing in equation (2.138). Two contributions appear. The first one, C 1 [F ], takes the form
C 1 [F ] = m 1 dJ 1 δ m 1 m1 δ D (J 1 -J 1 )+(2π) d m 1 •∂F/∂J 1 (ω-ω 1 )D m1,m 1 (ω) A m 1 ,-m 2 m 1 • ∂F ∂J 1 F (J 2 ) = - 1 D m1,-m 2 (ω) m 1 • ∂F ∂J 1 F (J 2 ) . (2.139)
The second contribution C 2 [F ] takes the form
C 2 [F ] = m 1 dJ 1 δ m 1 m1 δ D (J 1 -J 1 )+(2π) d m 1 •∂F/∂J 1 (ω-ω 1 )D m1,m 1 (ω) A m 2 ,-m 1 m 2 • ∂F ∂J 2 F (J 1 ) = A m 2 ,-m1 m 2 • ∂F ∂J 2 F (J 1 ) + m 1 • ∂F ∂J 1 m 2 • ∂F ∂J 2 (2π) d m 1 dJ 1 F (J 1 ) A m 2 ,-m 1 (ω-ω 1 )D m1,m 1 (ω)
. (2.140)
Let us now rewrite equation (2.138) by relying on the matrix method, i.e. by using the basis elements ψ (p) . The bare and dressed susceptibility coefficients take the form
A m1,m2 (J 1 , J 2 ) = -ψ (α) m1 (J 1 ) ψ (α) * m2 (J 2 ) ; 1 D m1,m2 (J 1 , J 2 , ω) = ψ (α) m1 (J 1 ) ε -1 αβ (ω) ψ (β) * m2 (J 2 ) , (2.141)
where we introduced the matrix ε(ω) = I-M(ω), with M the response matrix from equation (2.17) and I the identity matrix. In equation (2.141) and the following, all the sums over the greek indices are implied. Let us finally define the matrix H(ω) as
H αβ (ω) = (2π) d m dJ F (J ) ω-m•Ω ψ (α) * m (J ) ψ (β) * -m (J ) .
(2.142)
Gathering the two contributions from equations (2.139) and (2.140), and after some straightforward calculations, one can rewrite equation (2.138) as
C [F ] = - +∞ 0 dτ B dω 2π B dω' 2π e -i(ω+ω')τ µ ∂ ∂J 1 • m1 1 ω-ω 1 m 1 × ψ (α) -m1 (J 1 ) ε -1 αβ (ω') H βδ (ω') ε -1 γδ (ω) ψ (γ) m1 (J 1 ) m 1 • ∂F ∂J 1 + ψ (α) -m1 (J 1 ) ε -1 αγ (ω')-δ αγ ψ (γ) * -m1 (J 1 ) F (J 1 ) + ψ (α) -m1 (J 1 ) ε -1 αγ (ω') ε -1 δλ (ω) H λγ (ω) ψ (δ) m1 (J 1 ) m 1 • ∂F ∂J 1 -ψ (α) -m1 (J 1 ) ε -1 δλ (ω) H λα (ω) ψ (δ) m1 (J 1 ) m 1 • ∂F ∂J 1 . (2.143)
The integration over τ is straightforward provided that ω+ω' has a negative imaginary part. Let us collect in g(ω, ω') all the ω'-dependent terms of the integrand of equation (2.143), so that the τ and ω' integrations take the form
+∞ 0 dτ B dω' 2π e -i(ω+ω')τ g(ω, ω') . (2.144)
Introducing p > 0, we perform the substitution ω+ω' → ω+ω'-ip, and evaluate the integration over τ as
(2.144) = lim p→0 B dω' 2π -i ω+ω'-ip g(ω, ω') . (2.145)
As the system is supposed to be stable, the poles of the function ω' → g(ω, ω') are all in the lower-half complex plane, and the Bromwich contour B has to pass above all these singularities. The only pole in ω' which remains is then ω' = -ω+ip, which is in the upper plane. The integration over ω' is then carried out thanks to the residue theorem by closing the contour B in the upper half complex plane; this is possible because the integrand decreases sufficiently fast at infinity. One then gets
C [F ] = lim p→0 - B dω 2π µ ∂ ∂J 1 • m1 1 ω-ω 1 m 1 × ψ (α) -m1 (J 1 ) ε -1 αγ (-ω+ip)-δ αγ ψ (γ) * -m1 (J 1 ) F (J 1 ) +ψ (α) -m1 (J 1 ) ε -1 αβ (-ω+ip) ε -1 γδ (ω) ψ (γ) m1 (J 1 ) H βδ (-ω+ip)+H δβ (ω) m 1 • ∂F ∂J 1 .
(2.147)
Let us now evaluate the term within brackets in the second term of equation (2.147). It reads
H βδ (-ω + ip)+H δβ (ω) = (2π) d m2 dJ 2 ψ (δ) * m2 (J 2 ) ψ (β) * -m2 (J 2 ) F (J 2 ) 1 ω-ω 2 - 1 ω-(ω 2 +ip) , (2.148)
where we used the notation ω 2 = m 2 •Ω(J 2 ). As one takes the limit p → 0, a naive reading of equation (2.148) would indicate that equation (2.148) vanishes. However, one should be careful with the two poles ω = ω 2 and ω = ω 2 +ip, as these two poles are on opposite sides of the prescribed integration contour B. Indeed, when lowering the integration B to the real axis, the pole ω = ω 2 remains below the contour, while the one in ω = ω 2 +ip is above it. Relying on Plemelj formula from equation (2.29), equation (2.148) becomes
H βδ (-ω + ip)+H δβ (ω) = (2π) d m2 dJ 2 ψ (δ) * m2 (J 2 ) ψ (β) * -m2 (J 2 ) F (J 2 ) 1 ω-ω 2 +i0 - 1 ω-ω 2 -i0 = -2πi(2π) d m2 dJ 2 ψ (δ) * m2 (J 2 ) ψ (β) * -m2 (J 2 ) F (J 2 ) δ D (ω-ω 2 ) .
(2.149)
When lowering the contour B to the real axis, one can also compute the integration w.r.t. ω for the first term in equation (2.147). Because the system is stable, the poles of ω → ε -1 αγ (-ω+ip) are all located in the upper half plane, and there only remains one pole on the real axis, in ω = ω 1 . The Bromwich contour B is then closed in the lower half plane and only encloses this latter pole. Paying attention to the direction of integration, the residue gives a factor -2iπ, and equation (2.147) becomes
C [F ] = iµ ∂ ∂J 1 • m1 m 1 ψ (α) -m1 (J 1 ) ε -1 αγ (-ω 1 +i0) -δ αγ ψ (γ) * -m1 (J 1 ) F (J 1 ) + (2π) d m1,m2 m 1 dJ 2 ψ (α) -m1 (J 1 ) ε -1 αβ (-ω 2 ) ψ (β) * -m2 (J 2 ) × ψ (γ) m1 (J 1 ) ε -1 γδ (ω 2 ) ψ (δ) * m2 (J 2 ) m 1 •∂F/∂J 1 F (J 2 ) ω 2 -ω 1 +i0 , (2.150)
where one should pay attention to the small positive imaginary part in the pole 1/(ω 2 -ω 1 +i0) associated with the fact that the contour B passes above the pole ω = ω 1 . Relying on the expression of the susceptibility coefficients from equation (2.141), one can rewrite equation (2.150) as
C [F ] = iµ ∂ ∂J 1 • - m1 m 1 1 D m1,m1 (J 1 , J 1 , ω 1 +i0) +A m1,m1 (J 1 , J 1 ) F (J 1 ) + (2π) d m1,m2 m 1 dJ 2 1 D -m1,-m2 (J 1 , J 2 , -ω 2 ) 1 D m1,m2 (J 1 , J 2 , ω 2 ) m 1 •∂F/∂J 1 F (J 2 ) ω 2 -ω 1 +i0 , (2.151)
where we performed the change m 1 → -m 1 in the first term. Note that A m1,m1 (J 1 , J 1 ) is real, thanks to equation (2.141). Let us now rely on the fact that the collision term C [F ] is real. As a consequence, because of the prefactor "i", in equation (2.150), we may restrict ourselves only to the imaginary part of the terms within brackets. The first term requires us to study
Im 1 D m1,m1 (J 1 , J 1 , ω 1 +i0) = 1 2i ψ (α) m1 (J 1 ) ε -1 αβ (ω 1 +i0)-ε -1 * βα (ω 1 +i0) ψ (β) * m1 (J 1 ) . (2.152)
In order to compute the term within brackets, we rely on the identity (Heyvaerts, 2010)
ε -1 -(ε -1 ) † = ε -1 (ε † -ε) (ε † ) -1 .
(2.153)
The term within parenthesis in equation (2.153) can be evaluated and reads
ε † -ε γδ (ω 1 +i0) = -(2π) d m2 dJ 2 m 2 • ∂F ∂J 2 ψ (γ) * m2 (J 2 ) ψ (δ) m2 (J 2 ) 1 ω 1 -ω 2 +i0 * - 1 ω 1 -ω 2 +i0 = -2πi(2π) d m2 dJ 2 δ D (ω 1 -ω 2 ) m 2 • ∂F ∂J 2 ψ (γ) * m2 (J 2 ) ψ (δ) m2 (J 2 ) .
(2.154)
Combining equations (2.152) and (2.154), one finally gets the relation
Im 1 D m1,m1 (J 1 , J 1 , ω 1 +i0) = -π(2π) d m2 dJ 2 δ D (ω 1 -ω 2 ) |D m1,m2 (J 1 , J 2 , ω 1 )| 2 m 2 • ∂F ∂J 2 . (2.155)
This contribution corresponds to the drift term in the Balescu-Lenard equation. To evaluate the second term in equation (2.151), we rely on the relation 1/D -m1,-m2 (J 1 , J 2 , -ω) = 1/D * m1,m2 (J 1 , J 2 , ω) (see note [83] in Chavanis (2012b)). Thanks to Plemelj formula, it immediately gives the contribution
Im (2π) d m1,m2 m 1 dJ 2 1 D -m1,-m2 (J 1 , J 2 , -ω 2 ) 1 D m1,m2 (J 1 , J 2 , ω 2 ) m 1 •∂F/∂J 1 F (J 2 ) ω 2 -ω 1 +i0 = -π(2π) d m1,m2 m 1 dJ 2 δ D (ω 1 -ω 2 ) |D m1,m2 (J 1 , J 2 , ω 1 )| 2 m 1 • ∂F ∂J 1 F (J 2 ) . (2.156)
This contribution corresponds to the diffusion term in the Balescu-Lenard equation. Gathering the two contributions from equations (2.155) and (2.156), and paying careful attention to the signs of the various terms, one gets the final expression of the collision term C [F ] as
C [F ] = π(2π) d µ ∂ ∂J 1 • m1,m2 m 1 dJ 2 δ D (m 1 •Ω 1 -m 2 •Ω 2 ) |D m1,m2 (J 1 , J 2 , m 1 •Ω 1 )| 2 m 1 • ∂ ∂J 1 -m 2 • ∂ ∂J 2 F (J 1 ) F (J 2 ) . (2.157)
This allows us to recover the inhomogeneous Balescu-Lenard equation (2.67).
2.C Functional approach to the Landau equation
The work presented in this Appendix is based on Fouvry et al. (2016a).
The previous sections presented two complementary derivations of the Balescu-Lenard equation, respectively based on the Klimontovich equation and the BBGKY hierarchy. In this Appendix, let us present an alternative approach based on a functional integral rewriting of the dynamics. In a little-known seven-page paper, Jolicoeur & Le Guillou (1989) presented how the general functional integral framework (Faddeev & Slavnov, 1980) was suited for the study of classical kinetic theory. Using this formalism and starting from Liouville's equation, they recovered the BBGKY hierarchy. More importantly, they illustrated how this approach allows for a simple derivation of the homogeneous Balescu-Lenard equation (Balescu, 1960; Lenard, 1960) of plasma physics. In the context of inhomogeneous systems, we presented in Fouvry et al. (2016a) how this same functional approach may be used to recover the inhomogeneous Landau equation (2.73). Relying on the analogy between self-gravitating systems and 2D systems of point vortices (Chavanis, 2002), a similar derivation in the context of 2D hydrodynamics was also presented in Fouvry et al. (2016b). In order to offer some new insights on the content of the collisional kinetic equations (2.67) and (2.73), we will now present this alternative derivation in the context of inhomogeneous systems.
2.C.1 Functional integral formalism
As previously, let us consider a system made of N identical particles. At order 1/N , its dynamics is fully described by the two first truncated equations of the BBGKY hierarchy (2.107) and (2.108), which involve the system's 1-body DF F , and the 2-body autocorrelation C. The first step of the present derivation is to rewrite these two coupled evolution equations under a functional form. As an illustration of this method, let us consider a dynamical quantity f depending on time t and defined on a phase space Γ. We assume that this quantity follows an evolution equation of the form [∂ t +L]f = 0, where L is a differential operator. Let us now introduce an auxiliary field λ defined on the same space as f , to rewrite the evolution constraint of f as a functional integral of the form (see Jolicoeur & Le Guillou (1989); Fouvry et al. (2016a) for more details)
1 = Df Dλ exp i dtdΓ λ[∂ t +L]f .
(2.158)
In equation (2.158), we define the action S[f, λ] = i dtdΓ λ[∂ t +L]f as the argument of the exponential. It is important to note that the evolution equation satisfied by f corresponds to the quantity by which the auxiliary field λ is multiplied in the action.
When considering the two coupled evolution equations (2.107) and (2.108), one may proceed to a similar transformation. Let us define the phase space coordinates as Γ = (x, v). By introducing two auxiliary fields λ 1 (t, Γ 1 ) and λ 2 (t, Γ 1 , Γ 2 ), respectively associated with F and C, equations (2.107) and (2.108) can be rewritten under the compact functional form
1 = DF DCDλ 1 Dλ 2 exp i dtdΓ 1 λ 1 (A 1 F +B 1 C) + i 2 dtdΓ 1 dΓ 2 λ 2 (A 2 C +D 2 C +S 2 ) .
(2.159)
In equation (2.159), we introduced the operators A 1 , B 1 , A 2 , D 2 , and S 2 as
A 1 F = ∂ ∂t +v 1 • ∂ ∂x 1 + dΓ 2 F 12 F (Γ 2 ) • ∂ ∂v 1 , B 1 C = µ dΓ 2 F 12 • ∂C(Γ 1 , Γ 2 ) ∂v 1 , A 2 C = ∂ ∂t +v 1 • ∂ ∂x 1 +v 2 • ∂ ∂x 2 + dΓ 3 F (Γ 3 ) F 13 • ∂ ∂v 1 +F 23 • ∂ ∂v 2 C(Γ 1 , Γ 2 ) , D 2 C = dΓ 3 F 13 C(Γ 2 , Γ 3 ) • ∂F ∂v 1 + (1 ↔ 2) , S 2 = F (Γ 2 ) F 12 • ∂F ∂v 1 + (1 ↔ 2) .
(2.160)
In equation (2.159), we did not write explicitly the dependence w.r.t. t to simplify the notations. In the expression of B 1 C, let us emphasise the presence of the small factor µ = M tot /N , which illustrates the fact that we consider a kinetic development at order 1/N . Finally, the prefactor 1/2 in equation (2.159) was only added for later convenience and does not play any role for the final expression of the evolution equation, since it was added as a global prefactor. Let us recall here the physical content of the various terms appearing in equation (2.159). Here, A 1 F corresponds to the 1-body Vlasov advection term, and
B 1 C to the 1/N collisional correction sourced by the 2-body correlation. Similarly, A 2 C corresponds to the advection of the correlation along the mean-field trajectories, D 2 C captures the collective dressing of the fluctuations, and S 2 is the source term generating correlations from the smooth DF. Gathering in equation (2.159) all the terms involving the correlation C, one can rewrite it as
1 = DF DCDλ 1 Dλ 2 exp i dtdΓ 1 λ 1 (Γ 1 )A 1 F (Γ 1 ) + i 2 dtdΓ 1 dΓ 2 λ 2 (Γ 1 , Γ 2 ) G(Γ 1 , Γ 2 ) - i 2 dtdΓ 1 dΓ 2 C(Γ 1 , Γ 2 ) E(Γ 1 , Γ 2 ) , (2.161)
where it is crucial to note that all the dependences w.r.t. C were gathered in the prefactor of the second line. In equation (2.161), we introduced the quantity G(Γ 1 , Γ 2 ) as
G(Γ 1 , Γ 2 ) = F 12 • F (Γ 2 ) ∂F ∂v 1 -F (Γ 1 ) ∂F ∂v 2 , (2.162)
for which we used the relation F 21 = -F 12 . In equation (2.161), we also introduced the quantity
E(Γ 1 , Γ 2 ) as E(Γ 1 , Γ 2 ) = A 2 λ 2 (Γ 1 , Γ 2 )+ dΓ 3 F 13 λ 2 (Γ 2 , Γ 3 )+F 23 λ 2 (Γ 1 , Γ 3 ) • ∂F ∂v 3 +µF 12 • ∂λ 1 ∂v 1 - ∂λ 1 ∂v 2 . (2.163)
Equation (2.163) was obtained thanks to an integration by parts. In order to invert the time derivative ∂C/∂t present in the term λ 2 A 2 C from equation (2.159), we assumed t ∈ [0; T ], where T is an arbitrary upper temporal bound, along with the boundary conditions C(t = 0) = 0 (the system is supposed to be initially uncorrelated) and λ 2 (T ) = 0 (we are free to impose a condition on λ 2 ). As presented in Fouvry et al. (2016a), let us now neglect collective effects. This amounts to neglecting the contributions associated with the term D 2 C in equation (2.159), so that equation (2.163) becomes
E(Γ 1 , Γ 2 ) = A 2 λ 2 (Γ 1 , Γ 2 )+µF 12 • ∂λ 1 ∂v 1 - ∂λ 1 ∂v 2 . (2.164)
In equation (2.161), the correlation C only appears linearly, so that the functional integration w.r.t. C enforces the constraint E(Γ 1 , Γ 2 ) = 0. The strategy is then to invert this constraint so as to express the auxiliary field λ 2 as a functional of F and λ 1 , i.e. λ 2 = λ 2 [F, λ 1 ].
One may then substitute this relation in equation (2.161), to obtain a functional equation which only involves F and λ 1 . The final step is then to functionally integrate this equation w.r.t. λ 1 , to obtain a closed kinetic equation involving F only. Let us now show how this alternative approach allows for the derivation of the inhomogeneous Landau equation.
2.C.2 Application to inhomogeneous systems
As in section 2.3, let us assume that the system's mean potential is integrable, so that one may always remap the physical phase space coordinates (x, v) to the angle-action ones (θ, J ). Relying on the adiabatic approximation (Heyvaerts, 2010;Chavanis, 2012bChavanis, , 2013b)), we assume that the 1-body DF is a quasi-stationary solution of the Vlasov equation, so that F (θ, J ) = F (J ), where the dependence w.r.t.
t has not been written out to shorten the notations. Since λ 1 is the auxiliary field associated with F , one also has λ 1 (θ, J ) = λ 1 (J ), while the second auxiliary field λ 2 (θ 1 , J 1 , θ 2 , J 2 ) still fully depends on all angle-action coordinates. Let us note that for homogeneous systems, the system's invariance by translation would impose λ 2 (x 1 , v 1 , x 2 , v 2 ) = λ 2 (x 1 -x 2 , v 1 , v 2 ).
Relying on the angle-action properties from equations (2.118) and (2.119), one may now rewrite the various operators appearing in equation (2.161). Equation (2.160) gives
A 1 F = ∂F ∂t .
(2.165)
Similarly, equation (2.162) can be rewritten as
G(Γ 1 , Γ 2 ) = -F (J 2 ) ∂U 12 ∂θ 1 • ∂F ∂J 1 +F (J 1 ) ∂U 21 ∂θ 2 • ∂F ∂J 2 .
(2.166)
Finally, the constraint E(Γ 1 , Γ 2 ) = 0 from equation (2.164) takes the form
∂λ 2 ∂t +Ω 1 • ∂λ 2 ∂θ 1 +Ω 2 • ∂λ 2 ∂θ 2 -µ ∂U 12 ∂θ 1 • ∂λ 1 ∂J 1 + ∂U 21 ∂θ 2 • ∂λ 1 ∂J 2 = 0 .
(2.167)
2.C.3 Inverting the constraint
In order to invert equation (2.167), we once again rely on Bogoliubov's ansatz by assuming that the fluctuations (such as C and λ 2 ) evolve much faster than the mean dynamical orbit-averaged quantities (such as F and λ 1 ). As a consequence, on the timescale on which λ 2 evolves, one can assume F and λ 1 to be frozen, while on the timescale of secular evolution, one can assume λ 2 to be equal to the asymptotic value associated with the current value of F and λ 1 . As defined in equation (2.6), let us perform a Fourier transform w.r.t. the angles θ. We decompose the interaction potential U 12 as
U 12 = U (x(θ 1 , J 1 )-x(θ 2 , J 2 )) = m1,m2 A m1,m2 (J 1 , J 2 ) e i(m1•θ1-m2•θ2) , (2.168)
where the bare susceptibility coefficients A m1,m2 (J 1 , J 2 ) were already introduced in equation (2.74).
Multiplying equation (2.167) by 1/(2π) 2d e i(m1•θ1-m2•θ2) and integrating it w.r.t. θ 1 and θ 2 , we obtain
∂λ -m1,m2 ∂t -i∆ωλ -m1,m2 = -iµA * m1,m2 m 1 • ∂λ 1 ∂J 1 -m 2 • ∂λ 1 ∂J 2 , (2.169)
where we used the shortening notations λ -m1,m2 = λ -m1,m2 (J 1 , J 2 ), A m1,m2 = A m1,m2 (J 1 , J 2 ), and
∆ω = m 1 •Ω 1 -m 2 •Ω 2 .
Thanks to the boundary condition λ 2 (T ) = 0 introduced in equation (2.163), and relying on the adiabatic approximation that λ 1 is frozen, one can straightforwardly solve the differential equation (2.169) as
λ -m1,m2 (t) = µA * m1,m2 m 1 • ∂λ 1 ∂J 1 -m 2 • ∂λ 1 ∂J 2
1-e i∆ω(t-T ) ∆ω .
(2.170)
In order to consider only the forced regime of evolution, let us now assume that the arbitrary temporal bound T is large compared to the considered time t. Therefore, we place ourselves in the limit
T →+∞ λ -m1,m2 (t) = iπµA * m1,m2 m 1 • ∂λ 1 ∂J 1 -m 2 • ∂λ 1 ∂J 2 δ D (m 1 •Ω 1 -m 2 •Ω 2 ) .
(2.172)
Thanks to Bogoliubov's ansatz, we therefore inverted the constraint E[F, λ 1 , λ 2 ] = 0 from equation (2.167), to obtain
λ 2 = λ 2 [F, λ 1 ].
2.C.4 Recovering the Landau collision operator
Let us now substitute the inverted expression from equation (2.172) into the functional integral from equation (2.161), which then only involves F and λ 1 . The remaining action term S[F, λ 1 ] reads
S[F, λ 1 ] = i dtdΓ 1 λ 1 A 1 F + i 2 dtdΓ 1 dΓ 2 λ 2 [F, λ 1 ] G(Γ 1 , Γ 2 ) . (2.173)
Thanks to the expressions of A 1 and G from equations (2.165) and (2.166), and using a Fourier transform in angles as in equation (2.6), one can rewrite equation (2.173) as
S[F, λ 1 ] = i dtdΓ 1 λ 1 (Γ 1 ) ∂F ∂t + i 2 dtdΓ 1 dΓ 2 m1,m2 Im A m1,m2 λ -m1,m2 m 1 • ∂F ∂J 1 F (J 2 )-m 2 • ∂F ∂J 2 F (J 1 ) . (2.174)
Thanks to the inversion from equation (2.172), one immediately has
Im A m1,m2 λ -m1,m2 = πµδ D (m 1 •Ω 1 -m 2 •Ω 2 ) A m1,m2 2 m 1 • ∂λ 1 ∂J 1 -m 2 • ∂λ 1 ∂J 2 .
(2.175)
Injecting this result in equation (2.174), one gets
S[F, λ 1 ] = i dtdΓ 1 λ 1 ∂F ∂t + i 2 dtdΓ 1 dΓ 2 m1,m2 πµδ D (m 1 •Ω 1 -m 2 •Ω 2 ) A m1,m2 2 × m 1 • ∂λ 1 ∂J 1 -m 2 • ∂λ 1 ∂J 2 m 1 • ∂F ∂J 1 F (J 2 )-m 2 • ∂F ∂J 2 F (J 1 ) . (2.176)
The final step of the calculation is to rewrite the second term of equation (2.176) under the form dtdΓ 1 λ 1 (Γ 1 )... This is a straightforward calculation, which requires to use an integration by parts and to permute accordingly the indices 1 ↔ 2. Equation (2.176) can finally be rewritten as
S[F, λ 1 ] = i dtdΓ 1 λ 1 (Γ 1 ) ∂F ∂t -π(2π) d µ ∂ ∂J 1 • m1,m2 m 1 dJ 2 δ D (m 1 •Ω 1 -m 2 •Ω 2 ) A m1,m2 2 × m 1 • ∂F ∂J 1 F (J 2 )-m 2 • ∂F ∂J 2 F (J 1 ) , (2.177)
where the additional prefactor (2π) d comes from the transformation dΓ 2 f (J 2 ) = (2π) d dJ 2 f (J 2 ). Integrating functionally equation (2.177) w.r.t. λ 1 , one finally obtains a closed form expression for the kinetic equation as
∂F ∂t = π(2π) d µ ∂ ∂J 1 • m1,m2 m 1 dJ 2 δ D (m 1 •Ω 1 -m 2 •Ω 2 ) A m1,m2 (J 1 , J 2 ) 2 × m 1 • ∂ ∂J 1 -m 2 • ∂ ∂J 2 F (J 1 , t)F (J 2 , t) .
(2.178)
As a conclusion, relying on a functional integral formalism, we were able to exactly recover the inhomogeneous Landau equation (2.73). Such a new calculation provides additional insights on the origin of these diffusion equations. A natural next step would be to show how it may be used to account for collective effects and recover the inhomogeneous Balescu-Lenard equation (2.67). Such a derivation is expected to be more involved, as one will have to deal with a self-consistent Fredholm equation associated with the polarisation dressing of the potential fluctuations (similar to the one obtained in equation (2.122)). As illustrated in the two derivations from section 2.3 and Appendix 2.B, this requires to rely on Kalnajs matrix method (Kalnajs, 1976) and to introduce potential-density basis elements. [START_REF] Jolicoeur | [END_REF] managed to develop such a self-consistent calculation in the homogeneous context of plasma physics, where both the resonance condition and the Fredholm equation are simpler. The generalisation of this method to inhomogeneous systems will be the subject of a future work. Finally, because of its alternative point of view, this approach may also turn out fruitful to tackle the question of obtaining closed kinetic equations when higher order correlation terms are taken into account.
Introduction
Most stars, perhaps all, are born in stellar discs. Major mergers destroyed some of these discs quite early in the history of the universe, but some have survived up to the present day, including the Milky Way. Understanding the secular dynamics of stellar discs appears therefore as an essential ingredient of cosmology, as the discs' cosmological environments are now firmly established in the ΛCDM paradigm (Planck Collaboration et al., 2014). Self-gravitating stellar discs are cold responsive dynamical systems in which rotation provides an important reservoir of free energy and where orbital resonances play a key role. The availability of free energy leads to some stimuli being strongly amplified, while resonances tend to localise their dissipation, with the net result that even a very small perturbation can lead to discs evolving to significantly distinct equilibria. Stellar discs are submitted to various sources of gravitational noise, such as Poisson shot noise arising from the finite number of stars in the disc, or from the finite number of giant molecular clouds in the interstellar medium or sub-haloes orbiting around the galaxy. Spiral arms in the gas distribution also provide another source of fluctuations, while the central bar of the disc offers another source of stimulus more systematic than noisy. The history of a stellar disc likely comprises the joint responses to all these various stimuli.
One can find in the solar neighbourhood at least three illustrations of such effects. First, the random velocity of each coveal cohort of stars increases with the cohort's age (Wielen, 1977;Aumer & Binney, 2009). In addition, the velocity distribution around the Sun exhibits several "streams" of stars (Dehnen, 1998). Each of these streams contains stars of various ages and chemistries, which are all responding to some stimulus in a similar fashion [START_REF] Famaey | [END_REF]. Finally, in the two-dimensional action space (J φ , J r ), where J φ stands for the angular momentum and J r for a measure of the star's radial excursion (see section 3.2), the distribution of stars shows elongated features. The density of stars is indeed depressed near J r = 0, i.e. near circular orbits, but enhanced at larger J r , such that the whole disturbed region forms a curve that is consistent with being given by a resonant condition such as 2Ω φ -Ω r = cst. (Sellwood, 2010;McMillan, 2013). Such features are called resonant ridges and will play an important role in our upcoming discussions of the secular dynamics of razor-thin stellar discs, as already argued for example in Sellwood & Carlberg (2014).
Direct numerical simulations of razor-thin stellar discs are very challenging because their twodimensional geometry combined with their responsiveness causes discreteness noise to be important unless a large number of particles is employed. It is only recently that it became possible to simulate a disc with a sufficient number of particles for Poisson shot noise to be dynamically unimportant for many orbital times, such as in the simulations presented in Sellwood (2012). In addition, it is all the more difficult to simulate accurately a stellar disc that is embedded in a cosmological environment and therefore exposed to cosmic noise. Such experiments are essential to understand how the orbital structure of a disc may restructure on secular times. However, the reliability of numerical simulations over numerous dynamical times is an issue which calls for alternative probes, hence the need for analytical frameworks such as the ones presented in chapter 2.
In the present chapter, we attempt to explain the origin of these resonant ridges in razor-thin discs while relying on two competing processes of secular diffusion, either collisionless (section 2.2) for which the source of fluctuations is imposed by an external source, or collisional (section 2.3) for which the source of fluctuations is self-induced and due to the system's own discreteness. Two main difficulties are encountered when implementing these diffusion equations. First, one has to explicitly construct the mapping (x, v) → (θ, J ), as the diffusion occurs in action space. In the context of galactic dynamics, these coordinates are now being increasingly used to construct equilibrium models of stellar systems [START_REF] Binney | Dynamics of secular evolution[END_REF]Piffl et al., 2014) or study the dynamics of stellar streams [START_REF] Helmi | [END_REF]Sellwood, 2010;Eyre & Binney, 2011;McMillan, 2013;Sanders & Binney, 2013). When considering a stellar disc, if one assumes the disc to be sufficiently tepid (i.e. the stars' orbits are not too eccentric), one can rely on the epicyclic approximation to construct such a mapping, as presented in section 3.2.
The second difficulty arises when accounting for the system's self-gravity. Indeed, this requires to compute the system's response matrix M from equation (2.17), which asks for the introduction of potential and density basis elements as in equation (2.12). In order to ease the analytical inversion of [I-M], one may rely on the Wentzel-Kramers-Brillouin (WKB) approximation [START_REF] Liouville | [END_REF]Toomre, 1964;Kalnajs, 1965;[START_REF] Lin | Proc. Natl. Acad. Sci. USA[END_REF][START_REF] Palmer | [END_REF][START_REF] Binney | Galactic Dynamics: Second Edition[END_REF], which amounts to considering only the diffusion sustained by radially tightly wound spirals. This transforms Poisson's equation into a local equation and leads to a diagonal response matrix. Such an application of the WKB formalism in the context of the secular diffusion of razor-thin axisymmetric discs relies on the construction of tailored WKB basis elements presented in section 3.3. As will be noted in section 3.6, this WKB approximation also allows for an explicit calculation of the resonant condition . Based on Fouvry et al. (2015a); Fouvry & Pichon (2015); Fouvry et al. (2015b), section 3.7 finally computes via the WKB approximation the collisionless and collisional diffusion fluxes to investigate radial diffusion in razor-thin axisymmetric discs, as observed in the simulations from Sellwood (2012).
δ D (m 1 •Ω 1 -m 2 •Ω 2 ) appearing in the collisional
Angle-action coordinates and epicyclic approximation
In order to investigate secular evolutions, a first step is to build up an explicit mapping (x, v) → (θ, J ) to angle-action coordinates. To do so, we assume that the disc is sufficiently cold, i.e. that the radial velocity dispersion is sufficiently small, and rely on the epicyclic approximation. Let us introduce the polar coordinates (R, φ), as well as their associated momenta (p R , p φ ). For a razor-thin axisymmetric disc, the stationary Hamiltonian of the system takes the form
H 0 = 1 2 p 2 R + p 2 φ R 2 + ψ 0 (R) , (3.1)
where ψ 0 is the stationary axisymmetric background potential in the disc. The Hamiltonian being independent of φ, p φ is a conserved quantity. This is the azimuthal action of the system, the angular momentum J φ , which reads
J φ = 1 2π dφ p φ = p φ = R 2 φ . (3.2)
For a given value of J φ , the radius R of the particle evolves according to
R = - ∂ψ eff ∂R , (3.3)
where we introduced the effective potential
ψ eff (R) = ψ 0 (R)+J 2 φ /(2R 2
). Since we assume that the radial excursions of the particles are small, we may place ourselves in the vicinity of circular orbits. For a given J φ , we define the guiding radius R g via the implicit relation
0 = ∂ψ eff ∂R Rg = ∂ψ 0 ∂R Rg - J 2 φ R 3 g . (3.4)
Here R g (J φ ) corresponds to the radius of stars with an angular momentum equal to J φ , which are on exactly circular orbits. One should note that the mapping between R g and J φ is bijective and unambiguous (up to the sign of J φ , i.e. whether stars are prograde or retrograde). In addition, this circular orbit is described at the azimuthal frequency Ω φ given by
Ω 2 φ (R g ) = 1 R g ∂ψ 0 ∂R Rg = J 2 φ R 4 g . (3.5)
In the vicinity of circular orbits, the Hamiltonian from equation (3.1) may be approximated as
H 0 = p 2 R 2 +ψ eff (R g , 0)+ κ 2 2 (R-R g ) 2 , (3.6)
where we introduced the radial epicyclic frequency κ as
κ 2 (R g ) = ∂ 2 ψ eff ∂R 2 Rg = ∂ 2 ψ 0 ∂R 2 Rg + 3 J 2 φ R 4 g . (3.7)
In equation (3.6), the radial motion takes the form of a harmonic libration. Up to an initial phase, there exists an amplitude A R such that R = R g +A R cos(κt). The associated radial action J r is then given by
J r = 1 2π dR p R = 1 2 κA 2 R . (3.8)
Here, J r = 0 corresponds to circular orbits, and the larger J r , the more eccentric the orbit. Let us also emphasise that the two intrinsic frequencies of motion only depend on J φ , so that Ω(J ) = (Ω φ (J φ ), κ(J φ )). This will play an important role in the resonance condition appearing in the Balescu-Lenard equation (2.67). The epicyclic approximation is illustrated in figure
3.2.1. One can finally explicitly construct R ψ eff ψ epi E R p R a R g Figure 3.
2.1: Illustration of the epicyclic approximation in a razor-thin axisymmetric disc. The guiding radius Rg corresponds to the location of the minimum of the effective potential ψ eff . The epicylic approximation amounts to approximating ψ eff in the vicinity of its minimum by a harmonic potential ψepi. In this limit, the star then undergoes radial harmonic librations between the pericentre Rp and apocentre Ra of its trajectory.
the mapping between (R, φ, p R , p φ ) and (θ R , θ φ , J r , J φ ) (Lynden-Bell & [START_REF] Lynden-Bell | [END_REF][START_REF] Palmer | Stability of Collisionless Stellar Systems[END_REF][START_REF] Binney | Galactic Dynamics: Second Edition[END_REF]. At first order in the radial amplitude, it reads
R = R g + A R cos(θ R ) ; φ = θ φ - 2Ω φ κ A R R g sin(θ R ) . (3.9)
An illustration of an epicyclic orbit constructed with the mappings from equation (3.9) is given in figure 3.2.2. Let us note that numerous improvements of the epicyclic approximation have been proposed in the literature (Kalnajs, 1979;Dehnen, 1999;Lynden-Bell, 2010).1 Finally, we assume that initially the DF of the system takes the form of a quasi-isothermal DF (Binney & McMillan, 2011) of the form
F (R g , J r ) = Ω φ (R g )Σ(R g ) πκ(R g )σ 2 r (R g ) exp - κ(R g )J r σ 2 r (R g ) , (3.10)
where Σ(R g ) is the surface density of the disc and σ 2 r (R g ) represents the local radial velocity dispersion of the stars at a given radius. Larger values of σ 2 r correspond to hotter discs, which are therefore more stable. Such a DF becomes the Schwarzschild DF in the epicycle limit (see equation (4.153) in [START_REF] Binney | Galactic Dynamics: Second Edition[END_REF]). action mapping from equation (3.9). Such an orbit is the combination of an azimuthal oscillation at the mean frequency Ω φ , and a harmonic libration between the star's pericentre Rp and apocentre Ra at the frequency κ. We highlighted in bold the azimuthal increase ∆φ = 2πΩ φ /κ during one radial oscillation. For degenerate orbits, such as the Keplerian ones (see chapter 6), ∆φ is a multiple of 2π, i.e. the frequencies Ω φ and κ are in a rational ratio, which leads to a closed orbit (see figure 1.3.2).
The razor-thin WKB basis
As we are considering the 2D case of a razor-thin disc, the basis elements introduced in equation (2.12) must be written as ψ (p) (R, φ) in polar coordinates, and are associated with the surface densities
Σ (p) (R, φ).
Here, as we will show, relying on the WKB approximation amounts to building up local basis elements thanks to which the response matrix M from equation (2.17) becomes diagonal. Let us introduce the basis elements
ψ [k φ ,kr,R0] (R, φ) = A e i(k φ φ+krR) B R0 (R) , (3.11)
where the radial window function B R0 (R) reads
B R0 (R) = 1 (πσ 2 ) 1/4 exp - (R-R 0 ) 2 2σ 2 . (3.12)
The basis elements from equation (3.11) depend on three indices: k φ is an azimuthal number which characterises the angular dependence of the basis elements, R 0 is the radius around which the Gaussian window B R0 is centred, while k r gives the radial frequency of the basis elements. One should also note the introduction of a scale-separation parameter σ, which will ensure the biorthogonality of the basis elements as detailed later on. We also introduced an amplitude A which will be tuned later on to correctly normalise the basis elements. The somewhat unsual normalisation of B R0 was chosen for later convenience, to ensure that the amplitude A is independent of σ. kr,R0] . To do so, we extend in the z-direction the WKB potential from equation (3.11) using the ansatz Poisson's equation in vacuum (i.e. Laplace's equation) ∆ψ [k φ ,kr,R0] = 0 immediately leads to
ψ [k φ ,kr,R0] (R, φ, z) = A e i(k φ φ+krR) B R0 (R) Z(z) . (3.13) R ψ R p 0 σ 1/k p r R q 0 σ 1/k q r Figure 3.
Z Z = k 2 r 1- i k r R +2i R-R 0 σ 2 1 k r + R-R 0 R 1 (σk r ) 2 + 1 (σk r ) 2 + k 2 φ (k r R) 2 - R-R 0 σ 2 1 k r 2 . (3.14)
At this stage, let us introduce explicitly our WKB assumption that all perturbations are radially tightly wound. Defining the typical size of the system by R sys , we assume that
k r R 1 ; k r σ R sys σ . (3.15)
For azimuthal wavenumbers k φ of order unity, equation (3.14) then becomes
Z Z = k 2 r . (3.16)
As a conclusion, within the WKB limit, the extended potential from equation (3.13) takes the form (3.17) where we ensured that for z → ±∞, the potential tends to 0. Equation (3.17) introduces a discontinuity for ∂ψ/∂z in z = 0. Gauss' theorem then gives the associated surface densities as
ψ [k φ ,kr,R0] (R, φ, z) = ψ [k φ ,kr,R0] (R, φ) e -kr|z| ,
Σ(R, φ) = 1 4πG lim z→0 + ∂ψ ∂z -lim z→0 - ∂ψ ∂z , (3.18) so that Σ [k φ ,kr,R0] (R, φ) = - |k r | 2πG ψ [k φ ,kr,R0] (R, φ) . (3.19)
The next step of the construction of the WKB basis elements is to ensure that the potentials and densities from equations (3.11) and (3.19) form a biorthogonal basis, i.e. that one has
δ k q φ k p φ δ k q r k p r δ R q 0 R p 0 = -dR R dφ ψ [k p φ ,k p r ,R p 0 ] (R, φ) Σ [k q φ ,k q r ,R q 0 ] (R, φ) * . (3.20)
One can rewrite the r.h.s. of equation (3.20) as
(3.20) = |k q r | 2πG A p A q √ πσ 2 dφ e i(k p φ -k q φ )φ dR R e i(k p r -k q r )R exp - (R-R p 0 ) 2 2σ 2 exp - (R-R q 0 ) 2 2σ 2 . (3.21)
The integration on φ is straightforward and gives 2πδ
k q φ k p φ .
To perform the integration on R, we must now introduce additional assumptions to ensure the biorthogonality of the basis. The peaks of the two Gaussians in equation (3.21) may be assumed as separated if ∆R = R p 0 -R q 0 satisfies the separation condition
∆R σ if R p 0 = R q 0 . (3.22)
The term from equation (3.21) can then be assumed to be non-zero only for R p 0 = R q 0 . Equation (3.21) becomes
δ k q φ k p φ δ R q 0 R p 0 |k q r | G A p A q √ πσ 2 dR R e i(k p r -k q r )R exp - (R-R p 0 ) 2 σ 2 . (3.23)
The remaining integration on R now takes a form similar to the radial Fourier transform of a Gaussian of spread σ at the frequency ∆k r = k p r -k q r , and is therefore proportional to exp[-(∆k r ) 2 /(4/σ 2 )]. As a consequence, let us assume that the frequency spread ∆k r of the WKB basis satisfies
∆k r 1 σ if k p r = k q r . (3.24)
With this additional assumption, equation (3.23) is non-zero only for k p r = k q r . Therefore, as imposed by equations (3.22) and (3.24), in order to have a biorthogonal basis, one has to consider a spread σ, central radii R 0 , and radial frequencies k r such that
∆R 0 σ 1 ∆k r . (3.25)
With these constraints, the r.h.s. of equation (3.20) is non-zero only for k p φ = k q φ , k p r = k q r , and R p 0 = R q 0 . The last step of the calculation is to explicitly estimate the amplitude A in order to correctly normalise the basis elements. Equation (3.20) imposes
|k r | G A 2 √ πσ 2 dR R exp - (R-R 0 ) 2 σ 2 = 1 . (3.26)
Thanks to the WKB assumptions from equation (3.15), this integration is straightforward and gives
A = G |k r |R 0 . (3.27)
Once the basis elements from equation (3.11) have been fully specified, one may compute ψ (p) m (J ), their Fourier transform w.r.t. the angles, as defined in equation (2.6). Thanks to the explicit epicyclic mapping from equation (3.9), this takes the form
ψ [k φ ,kr,R0] m (J ) = Ae ikrRg (2π) 2 dθ φ dθ R e -im φ θ φ e -imrθ R e ik φ θ φ × e i[krA R cos(θ R )-k φ 2Ω φ κ A R Rg sin(θ R )] B R0 (R g +A R cos(θ R )) .
(3.28)
The integration on θ φ is straightforward and gives 2πδ k φ m φ . Regarding the dependence on θ R in the complex exponential, we write
k r A R cos(θ R )-k φ 2Ω φ κ A R R g sin(θ R ) = H k φ (k r ) sin(θ R +θ 0 R ) , (3.29)
where we introduced the amplitude H k φ (k r ) and the phase shift θ 0 R as
H k φ (k r ) = A R |k r | 1+ Ω φ κ 2k φ k r R g 2 ; θ 0 R = tan -1 - κ Ω φ k r R g 2k φ .
(3.30)
For typical galactic discs, one has 1/2 ≤ Ω φ ≤ κ [START_REF] Binney | Galactic Dynamics: Second Edition[END_REF]. Assuming that k φ is of order unity and relying on the WKB assumptions from equation (3.15), one can simplify equation (3.30) as
H k φ (k r ) A R |k r | ; θ 0 R - π 2 . (3.31)
As we assumed the disc to be tepid, the radial oscillations of the stars are small so that A R R g . In equation (3.28), we may then get rid of the dependence on
A R in B R0 (R g +A R cos(θ R ))
and replace it with B R0 (R g ). The only remaining dependence with A R in equation (3.28) is then in the complex exponential, and we are now in a position to explicitly perform the last integration on θ R . To do so, let us recall the sum decomposition formula of the Bessel functions of the first kind J , which reads
e iz sin(θ) = ∈Z J [z] e i θ .
(3.32)
The expression of the Fourier transformed WKB basis elements finally reads
ψ [k φ ,kr,R0] m (J ) = δ k φ m φ e ikrRg e imrθ 0 R A J mr H m φ (k r ) B R0 (R g ) .
(3.33)
WKB razor-thin amplification eigenvalues
After having explicitly constructed the WKB basis from equation (3.11), one may now compute the system's response matrix M from equation (2.17). Thanks to the Fourier transformed WKB basis elements from equation (3.33), one has to evaluate an expression of the form
M [k p φ ,k p r ,R p 0 ],[k q φ ,k q r ,R q 0 ] (ω) = (2π) 2 m dJ m•∂F/∂J ω-m•Ω δ k p φ m φ δ k q φ m φ e i(k q r -k p r )Rg A p A q × J mr 2Jr κ k p r J mr 2Jr κ k q r B R p 0 (R g ) B R q 0 (R g ) . (3.34)
Let us now illustrate how in the WKB limit, the response matrix becomes diagonal. One should first note how equation (3.34) is very similar to equation (3.21) where we discussed the biorthogonality of the WKB basis. In equation (3.34), the azimuthal Kronecker symbols impose k p φ = k q φ . Moreover, thanks to our assumption from equation (3.25) on the step distances of the basis elements, the product of the two Gaussian windows in R g imposes R p 0 = R q 0 to have a non-negligible contribution. In order to shorten temporarily the notations, let us introduce the function h(R g ) defined as (3.35) which encompasses all the additional slow radial dependences from equation (3.34). When estimated for R p 0 = R q 0 , equation (3.34) becomes
h(R g ) = dJ φ dR g m•∂F/∂J ω-m•Ω A p A q J mr 2Jr κ k p r J mr 2Jr κ k q r ,
dR g h(R g ) e iRg(k q r -k p r ) exp - (R g -R p 0 ) 2 σ 2 . (3.36)
This takes the form of a radial Fourier transform F R at the frequency ∆k r = k p r -k q r . When rewritten as the convolution of two radial Fourier transforms, it becomes
(3.36) ∼ dk F R [h](k ) exp - (∆k r -k ) 2 4/σ 2 . (3.37)
Because of the WKB assumption from equation (3.25) and the Gaussian from equation (3.37), one can note that if ∆k r = 0, the contribution from F R [h](k ) will come from the region k ∼ ∆k r 1/σ. We assume that the properties of the disc are slowly varying with the radius, so that the function h has a radial Fourier transform limited to the frequency region |k | 1/σ. Non-negligible contributions to the response matrix can then only be obtained for ∆k r = k p r -k q r = 0. As a conclusion, we have shown that within the WKB framework, the response matrix from equation (2.17) can be assumed to be diagonal. To shorten the notations, let us denote the matrix eigenvalues as
λ [k φ ,kr,R0] (ω) = M [k φ ,kr,R0],[k φ ,kr,R0] (ω) .
(3.38)
The last step of the present computation is to compute the integrals over J φ and J r in equation (3.34) to obtain an explicit expression of the response matrix diagonal coefficients. First, the WKB scale decoupling assumption allows us to replace the Gaussian from equation (3.36) by a Dirac delta δ D (R g -R p 0 ) (one should pay a careful attention to the correct normalisation of the Gaussian). Equation (3.34) then becomes
λ [k φ ,kr,R0] (ω) = (2π) 2 A 2 dJ φ dR g R0 m δ k φ m φ dJ r m•∂F/∂J ω-m•Ω J 2 mr 2Jr κ k r . (3.39)
Here, the azimuthal Kronecker symbol allows us to execute the sum on m φ . In addition, the intrinsic frequencies from equations (3.5) and (3.7) immediately give
dJ φ dR g R0 = R 0 κ 2 2Ω φ . (3.40)
As the disc is assumed to be tepid, we may assume that |∂F/∂J φ | |∂F/∂J r |, so that only the DF's gradient w.r.t. the radial action may be kept in equation (3.39). Using the expression of the quasi-isothermal DF from equation (3.10) and the basis amplitude from equation (3.27), equation (3.39) becomes
λ [k φ ,kr,R0] (ω) = 2πGΣ|k r | κ 2 κ 4 k 2 r σ 4 r mr dJ r -m r exp[-κJ r /σ 2 r ] ω-k φ Ω φ -m r κ J 2 mr 2Jr κ k r . (3.41)
We now rely on the integration formula (see formula (6.615) in [START_REF] Gradshteyn | Table of integrals, series, and products[END_REF])
+∞ 0 dJ r e -αJr J 2 mr β J r = e -β 2 /(2α) α I mr β 2 2α , (3.42)
where α > 0, β > 0, m r ∈ Z, and I mr are modified Bessel functions of the first kind. We apply this formula with α = κ/σ 2 r and β = 2k 2 r /κ. We also introduce the notation
χ r = σ 2 r k 2 r κ 2 , (3.43) so that equation (3.41) becomes λ [k φ ,kr,R0] (ω) = 2πGΣ|k r | κ 2 κ χ r mr -m r e -χr I mr [χ r ] ω-k φ Ω φ -m r κ . (3.44)
Let us finally introduce the dimensionless shifted frequency s as
s = ω-k φ Ω φ κ . (3.45)
Using the property I -mr [χ r ] = I mr [χ r ], we may finally rewrite equation (3.44) introducing the reduction factor (Kalnajs, 1965;[START_REF] Lin | Proc. Natl. Acad. Sci. USA[END_REF])
F(s, χ r ) = 2(1-s 2 ) e -χr χ r +∞ mr=1 I mr [χ r ] 1-[s/m r ] 2 . (3.46)
This allows for a final rewriting of the response matrix eigenvalues in the tightly wound limit as
M [k p φ ,k p r ,R p 0 ],[k q φ ,k q r ,R q 0 ] (ω) = δ k q φ k p φ δ k q r k p r δ R q 0 R p 0 2πGΣ|k r | κ 2 (1-s 2 ) F(s, χ r ) . (3.47)
These eigenvalues are in full agreement with the seminal results from Kalnajs (1965) and [START_REF] Lin | Proc. Natl. Acad. Sci. USA[END_REF], which derived a WKB dispersion relation for razor-thin axisymmetric discs. In order to deal with the singularity of the previous expression when evaluated for s = n ∈ Z, one should add a small imaginary part to ω, so that s = n+iη. As long as η is small compared to the imaginary part of the least damped mode of the disc, adding this complex part makes a negligible contribution to Re(λ). Equation (3.47) is an important result, as it allows us to estimate straightforwardly the strength of the tightly wound self-gravitating amplification in the disc.
In order to illustrate the physical content of equation (3.47), let us now briefly describe how these amplification eigenvalues allow for the recovery of Toomre's stability parameter Q (Toomre, 1964). This parameter characterises the local stability of an axisymmetric razor-thin stellar disc w.r.t. local tightly wound axisymmetric perturbations. We are interested in the stability w.r.t. axisymmetric modes, so that we impose k φ = 0. We place ourselves at the stability limit given by ω = 0, so that equation (3.47) imposes s = 0. We now seek a criterion on the disc's parameters such that there exists no k r > 0 for which λ(k r ) = 1, i.e. so that the disc is stable. Thanks to equation (3.47), one has
λ(k r ) = 2πGΣk r κ 2 F(0, χ r ) = 2πGΣ κσ r K 0 (χ r ) , (3.48)
where we introduced the function K 0 max 0.534 for χ 0 max 0.948. As a consequence, one always has λ(k r ) ≤ (2πGΣK 0 max )/(κσ r ). Noting that 2πK 0 max 3.36, we may finally introduce the local stability parameter Q as
K 0 (χ r ) = √ χ r F(0, χ r ) = (1-e -χr I 0 [χ r ])/ √ χ r . The shape of the func- tion χ r → K 0 (χ r ) is illustrated in figure 3.4.1. From figure 3.4.1, one can note that K 0 reaches a maximum
Q(J φ ) = σ r (J φ )κ(J φ ) 3.36 G Σ(J φ ) . (3.49)
Here Q corresponds to the local razor-thin Toomre's parameter (Toomre, 1964), which for Q > 1 ensures the local stability of a razor-thin stellar disc w.r.t. axisymmetric tightly wound perturbations.
The straightforward derivation of this stability parameter starting from the amplification eigenvalues obtained in equation (3.47), illustrates how the razor-thin WKB basis introduced in equation (3.11) is in full agreement with previous seminal results on the WKB linear theory of razor-thin discs.
WKB limit for the collisionless diffusion
Let us now illustrate how the previous WKB calculations allow for the calculation of the secular collisionless diffusion coefficients introduced in equation (2.31). In order to simplify the notations, the WKB basis elements from equation (3.11) will be noted as
ψ (p) = ψ [k p φ ,k p r ,R p 0 ] . (3.50)
We have shown in equation (3.47) that within the WKB limit, the response matrix is diagonal. Introducing its eigenvalues as λ p , one has M pq = λ p δ q p . The matrix [I-M] -1 is then diagonal and the diffusion coefficients from equation (2.32) take the form
D m (J ) = 1 2 p,q ψ (p) m (J ) ψ (q) * m (J ) 1 1-λ p 1 1-λ q C pq (m•Ω) , (3.51)
where the sums on p and q run over the WKB basis elements. We recall that the basis elements ψ
(p) m
as well as the matrix eigenvalues λ p do not change from one realisation to another, so that using the definition of the perturbation autocorrelation from equation (2.26), we may rewrite equation (3.51) as
D m (J ) = 1 2π dω 1 2 p,q ψ (p) m (J ) ψ (q) * m (J ) 1 1-λ p 1 1-λ q b p (m•Ω) b * q (ω ) . (3.52)
In equation (3.52), note that the amplification eigenvalues λ p , λ q , and the basis coefficient b p are both evaluated at the intrinsic frequency m•Ω, while b * q is evaluated at the dummy frequency ω . In order to shorten the upcoming calculations, the frequencies of evaluation, when obvious, will not be explicitly written out. Let us now rely on the explicit expression of the Fourier transformed WKB basis elements from equation (3.33), so that equation (3.52) becomes
D m (J ) = 1 2π dω k p r ,k q r ,R p 0 ,R q 0 1 2 G R p 0 R q 0 1 |k p r k q r | J mr 2Jr κ k p r J mr 2Jr κ k q r e iRg(k p r -k q r ) × 1 1-λ p 1 1-λ q 1 √ πσ 2 exp - (R g -R p 0 ) 2 2σ 2 exp - (R g -R q 0 ) 2 2σ 2 b p b * q . (3.53)
Note that in equation (3.53), we got rid of the sum on k p φ and k q φ as equation (3.33) imposes m φ = k p φ = k q φ . The next step of the calculation is to rewrite equation (3.53) so as to be independent from the exact choice of the WKB basis, i.e. the precise value of σ. To do so, one should replace the basis coefficients b p by expressions involving only the true external potential perturbation δψ e . Relying on the biorthogonality property of the basis elements imposed in equation (2.12), the basis coefficients b p are immediately given by
b p (ω) = -dx Σ (p) * (x) δ ψ e (x, ω) , (3.54)
where the . corresponds to the temporal Fourier transform as defined in equation (2.9). Thanks to the explicit expression of the WKB surface density elements obtained in equation (3.19), some simple algebra (see Fouvry et al. (2015d) for details) easily leads to the relation
b p (ω) = |k p r |R p 0 G 2π (πσ 2 ) 1/4 e -iR p 0 k p r δ ψ e m φ ,k p r [R p 0 , ω] . (3.55)
In equation (3.55), the exterior potential δ ψ e has been transformed according to two transformations: (i) an azimuthal Fourier transform of indice m φ , (ii) a local radial Fourier transform centred around R p 0 at the frequency k p r . These two transformations are defined as
(i): f m φ = 1 2π dφ f [φ] e -im φ φ , (ii): f kr [R 0 ] = 1 2π dR e -ikr(R-R0) exp - (R-R 0 ) 2 2σ 2 f [R] . (3.56)
Equation (3.55) therefore allowed us to express the basis coefficients b p as a function of the exterior perturbing potential δ ψ e . Using this relation and disentangling the sums on (k p r , R p 0 ) and (k q r , R q 0 ), equation (3.53) can be rewritten as
D m (J ) = 1 2π dω g(m•Ω) g * (ω ) , (3.57)
where we defined the function g(ω) as
g(ω) = 2π k p r ,R p 0 g s (k p r , R p 0 , ω) e i(Rg-R p 0 )k p r G r (R g -R p 0 ) . (3.58) In equation (3.58), G r (R) = 1/ √ 2πσ 2 e -R 2 /(2σ 2
) is a normalised Gaussian of width σ, and g s encompasses all the slow dependences of the diffusion coefficients w.r.t. the radial position so that
g s (k p r , R p 0 , ω) = J mr 2Jr κ k p r 1 1-λ k p r δ ψ e m φ ,k p r [R p 0 , ω] . (3.59)
Next, let us replace the sums on k p r and R p 0 appearing in equation (3.58) by continuous integrals. To do so, we rely on Riemann sum formula f (x)∆x dxf (x). One can note in the discrete sums from equation (3.58) that the basis elements are separated by step distances ∆k r and ∆R 0 . We suppose that generically k p r and R p 0 are given by k p r = n k ∆k r , and R p 0 = R g +n r ∆R 0 , where n k is a strictly positive integer and n r is an integer that can be both positive or negative. In addition, one can note in equation (3.58) the presence of a rapidly evolving complex exponential, which may cancel out the diffusion coefficients if the basis step distances are not chosen carefully. When summed over the basis elements, this complex exponential has the dependence
exp i(R g -(R g +n r ∆R 0 ))n k ∆k r = exp -in r n k ∆R 0 ∆k r .
(3.60)
As a consequence, since n r n k is an integer, in order not to have any contributions from the complex exponential in equation ( 3.58), one should choose the step distances so that
∆R 0 ∆k r = 2π . (3.61)
Such a choice corresponds to a critical sampling condition [START_REF] Gabor | [END_REF][START_REF] Daubechies | Information Theory[END_REF]. As illustrated in equation (3.60), this allows us to leave out the complex exponential from equation (3.58) when performing the change to continuous expressions. This transformation is a subtle stage of the calculation, since the step distances should be simultaneously large to comply with the WKB constraints from equation (3.25) and small to justify the use of Riemann sum formula. In this process, as the radial Gaussian in equation (3.58) is sufficiently peaked and correctly normalised, it may be replaced δ D (R g -R p 0 ). Equation (3.58) finally becomes
g(ω) = dk p r g s (k p r , R g , ω) . (3.62)
Let us now define the autocorrelation C δψ e of the external perturbations as
C δψ e [m φ , ω, R g , k p r , k q r ] = 1 2π dω δ ψ e m φ ,k p r [R g , ω] δ ψ e * m φ ,k q r [R g , ω ] . (3.63)
The expression (3.57) of the diffusion coefficients then takes the form
D m (J ) = dk p r J mr 2Jr κ k p r 1 1-λ k p r dk q r J mr 2Jr κ k q r 1 1-λ k q r C δψ e [m φ , m•Ω, R g , k p r , k q r ] , (3.64)
where the amplification eigenvalues, λ kr , are given by equation (3.47) and read
λ kr [R g , m•Ω] = 2πGΣ|k r | κ 2 (1-s 2 ) F(s, χ) . (3.65)
Let us now further simplify the diffusion coefficients from equation (3.64) by assuming some stationarity properties on the stochasticity of the external perturbations δψ e . We assume that these are spatially quasi-stationary and satisfy
δψ e m φ [R 1 , t 1 ] δψ e * m φ [R 2 , t 2 ] = C[m φ , t 1 -t 2 , (R 1 +R 2 )/2, R 1 -R 2 ] , (3.66)
where the dependence of the autocorrelation function C w.r.t. (R 1 +R 2 )/2 is supposed to be slow. Thanks to some simple algebra (see Appendix C of Fouvry et al. (2015d) for details), one can write
δ ψ e m φ ,k 1 r [R g , ω 1 ] δ ψ e * m φ ,k 2 r [R g , ω 2 ] = 2πδ D (ω 1 -ω 2 ) δ D (k 1 r -k 2 r ) C[m φ , ω 1 , R g , k 1 r ] , (3.67)
where C[...] has been transformed twice, according to a temporal Fourier transform as defined in equation (2.9), and according to a local radial Fourier transform as in equation (3.56) of spread
√ 2σ w.r.t. R 1 -R 2 in the neighbourhood of R 1 -R 2 = 0 and (R 1 +R 2 )/2 = R g .
Here, note that in equation (3.67), the autocorrelation was diagonalised w.r.t. ω and k r , as can be seen from the two Dirac deltas. The diffusion coefficients from equation (3.64) then take the simple form
D m (J ) = dk r J 2 mr 2Jr κ k r 1 1-λ kr 2 C[m φ , m•Ω, R g , k r ] . (3.68)
This explicit expression of the collisionless diffusion coefficients is the main result of this section. Equation (3.68) is indeed a simple quadrature involving the power spectrum of the external fluctuations at the resonant frequencies boosted by the eigenvalues of the gravitational susceptibility squared.
In some situations, one may further simplify equation (3.68), thanks to the so-called approximation of the small denominators, which amounts to focusing on the waves that yield the maximum amplification. Indeed, let us assume that the function k r → λ(k r ) is a sharp function reaching a maximum value ω). One can then introduce the two frequency bounds k inf r and k sup r , such that λ(k inf/sup r ) = λ max /2. The characteristic spread of the region of maximum amplification is then given by ∆k λ (R g , ω) k sup r -k inf r . Focusing only on this region, equation (3.68) can be approximated as
λ max (R g , ω = m•Ω) for k r = k max r (R g ,
D m (J ) = ∆k λ J 2 mr 2Jr κ k max r 1 1-λ max 2 C[m φ , m•Ω, R g , k max r ] .
(3.69)
The previous approximation can also be improved by performing the integration for k r ∈ k inf r ; k sup r . Such an approach is more numerically demanding but does not alter the conclusions drawn in the applications presented in section 3.7. In equation (3.69), one should note that the external perturbation autocorrelation C, which sources the diffusion coefficients, depends on four different parameters: the azimuthal wavenumber m φ , the local intrinsic frequency of the system m•Ω, the location in the disc via R g , and finally the radial frequency k max r of the most amplified tightly wound perturbation at this location. As a conclusion, thanks to the explicit WKB basis introduced in equation (3.11), we obtained in equations (3.68) and (3.69) explicit expressions for the system's externally induced diffusion coefficients, whose evaluations are now straightforward. In section 3.7, we illustrate how this formalism may be applied to recover the important features observed in numerical simulations of the long-term evolution of stable quasi-stationary isolated and self-gravitating stellar discs.
WKB limit for the collisional diffusion
In this section, let us now emphasise how the previous WKB calculations also allow for the calculation of the dressed susceptibility coefficients, as well as the collisional drift and diffusion coefficients, appearing in the inhomogeneous Balescu-Lenard equation (2.67). Here, rather than considering a situation where the disc evolves as a result of external stochastic perturbations, we consider the collisional case, where the source of secular evolution is finite-N fluctuations. In this context, we will especially emphasise how the WKB approximation allows us to deal with the resonance condition present in the Balescu-Lenard equation.
A crucial property of the WKB basis from equation (3.11) is that the response matrix, M, becomes diagonal, as shown in equation (3.47). Using the same shortening notations as in equation (3.50), one can rewrite the Balescu-Lenard susceptibility coefficients from equation (2.50) as
1 D m1,m2 (J 1 , J 2 , ω) = p ψ (p) m1 (J 1 ) 1 1-λ p (ω) ψ (p) * m2 (J 2 ) . (3.70)
Thanks to the expression of the Fourier transformed WKB basis elements from equation (3.33), this becomes
1 D m1,m2 (J 1 , J 2 , ω) = k p φ ,k p r ,R p 0 δ k p φ m φ 1 δ k p φ m φ 2 G k p r R p 0 1 1-λ p J m r 1 2J 1 r κ1 k p r J m r 2 2J 2 r κ2 k p r e ik p r (R1-R2) e iθ 0p R (m r 1 -m r 2 ) × 1 √ πσ 2 exp - (R 1 -R p 0 ) 2 2σ 2 exp - (R 2 -R p 0 ) 2 2σ 2 , (3.71)
where we used the shortening notations κ i = κ(J i ) and R i = R g (J i ). The azimuthal Kronecker symbols immediately impose (3.72) so that the sum on k p φ is limited to only one term. Before proceeding further with the evaluation of the susceptibility coefficients, let us first emphasise an additional consequence of the WKB assumptions, which is the restriction to local resonances.
m φ 1 = m φ 2 = k p φ ,
Note that the Balescu-Lenard drift and diffusion coefficients from equations (2.69) and (2.70) involve an integration over the dummy variable J 2 . For given values of J 1 , m 1 , and m 2 , this should be seen as a scan of the entire action space, searching for resonant regions, where the resonance constraint
m 1 •Ω 1 -m 2 •Ω 2 = 0 is satisfied (see figure 2.3.2).
As we placed ourselves within the epicyclic approximation, the intrinsic frequencies Ω = (Ω φ , κ) from equations (3.5) and (3.7) only depend on the action J φ . This significantly simplifies the resonance condition. For a given value of R 1 = R g (J 1 ), m 1 , and m 2 , one has to find the resonant radii R r 2 such that the resonance condition f (R r 2 ) = 0 is satisfied, where we defined the function f (R r 2 ) as
f (R r 2 ) = m 1 •Ω(R 1 )-m 2 •Ω(R r 2 ) . (3.73)
After having identified the resonance radii R r 2 , one can then rely on the rule for the composition of a Dirac delta and a smooth function, which reads
δ D (f (x)) = y∈Z f δ D (x-y) |f (y)| , (3.74)
where
Z f = {y | f (y) = 0}
is the set of all the poles of f . Equation (3.74) also assumes that all the poles of f are simple (i.e. non-degenerate), which in our context amounts to assuming that
d(m 2 •Ω) dR R r 2 = 0 . (3.75)
As long as the rates of change of the two intrinsic frequencies are not in a rational ratio, resonance poles will be simple. Note that the harmonic case, for which κ = 2Ω φ , and the Keplerian case, for which κ = Ω φ , are in this sense degenerate. Such dynamical degeneracies, which occur for example in the vicinitiy of super massive black holes or for protoplanetary discs, require a more involved evaluation of the Balescu-Lenard collision operator, and will be considered in detail in chapter 6. As noted in equation (3.72), in order to have non-zero susceptibility coefficients, one must have m φ 1 = m φ 2 . As a consequence, the resonance requirement from equation (3.73) takes the form
m φ 1 Ω φ (R 1 ) + m r 1 κ(R 1 ) = m φ 1 Ω φ (R r 2 ) + m r 2 κ(R r 2 ) . (3.76) Note in equation (3.71) the presence of narrow radial Gaussians in R 1 and R 2 . As a consequence, the relevant resonant radii R r 2 must necessarily be close to R 1 , so that |∆R| = |R r 2 -R 1 | (few) σ.
In this limit, one can rewrite equation (3.76) as
m φ 2 dΩ φ dR +m r 2 dκ dR ∆R = m r 1 -m r 2 κ(R 1 ) . (3.77)
On the one hand, in the l.h.s. of equation (3.77), the term within brackets is non-zero, because of our assumption of non-degeneracy from equation (3.75). Moreover, because of the WKB scale decoupling approach, the additional prefactor ∆R is small. On the other hand, the r.h.s. of equation (3.77) is discrete: it is either zero, or at least of the order of κ(R 1 ). To be satisfied, equation (3.77) therefore imposes that its two sides should be equal to 0. As a consequence, within the WKB limit, only local resonances are allowed so that
R r 2 = R 1 ; m r 2 = m r 1 . (3.78)
This is an important consequence of the WKB approximation. This forbids distant orbits to resonate, and allows for an explicit calculation of the collision operator.
As a result of this restriction, let us proceed with the evaluation of the dressed susceptibility coefficients from equation (3.71), by restricting ourselves only to
m 2 = m 1 and R 2 = R 1 . Equation (3.71) becomes 1 D m1,m1 = k p r ,R p 0 G k p r R p 0 1 1-λ p J m r 1 2J 1 r κ1 k p r J m r 1 2J 2 r κ1 k p r 1 √ πσ 2 exp - (R 1 -R p 0 ) 2 σ 2 , (3.79)
where we introduced the shortening notation
1/D m1,m1 = 1/D m1,m1 (R 1 , J 1 r , R 1 , J 2 r , ω).
As in equation (3.62), the next step of our calculation is to replace the discrete sums on the indices k p r and R p 0 in equation (3.79) by continuous integrals. As previously, we rely on Riemann sum formula, and assume that the step distances of the WKB basis ∆R 0 and ∆k r satisfy the critical sampling condition from equation (3.61), i.e. one has ∆R 0 ∆k r = 2π. We also note in equation (3.79) the presence of a narrow radial Gaussian in (R 1 -R p 0 ). As it is correctly normalised, we may replace it with a Dirac delta
δ D (R 1 -R p 0 ). Equation (3.79) becomes 1 D m1,m1 = 1 2π G R 1 +∞ 1/σ k dk r 1 k r 1 1-λ kr (R 1 , ω) J m r 1 2J 1 r κ1 k r J m r 1 2J 2 r κ1 k r , (3.80)
where we introduced a cut-off at 1/σ k for the integration on k r . This bound is justified by the WKB constraint from equation (3.25), which imposes that the radial frequency k r is bounded from below and avoids the divergence associated with the factor 1/k r . It is also important to recall that the susceptibility coefficients should only be evaluated at R 2 = R 1 and m 2 = m 1 , as a result of the restriction to local resonances obtained in equation (3.78). The explicit expression of the susceptibility coefficients from equation (3.80) constitutes the main result of this section. Equation (3.80) also implies that only orbits with similar J 1 r and J 2 r contribute significantly, i.e. the resonances are local. Finally, following equation (3.69), one can further simplify equation (3.80) by relying on the approximation of the small denominators. This amounts to assuming that the biggest contribution to the susceptibility coefficients comes from the tightly wound waves with the largest λ kr . With the same notations than equation (3.69), one can write
1 D m1,m1 = 1 2π G R 1 ∆k λ k max r 1 1-λ max J m r 1 2J 1 r κ1 k max r J m r 1 2J 2 r κ1 k max r .
(3.81)
While still focusing on the most amplified waves, one can improve the approximation of equation (3.81). Indeed, starting from equation (3.80), one can instead perform the k r -integration for k r ∈ [k inf r ; k sup r ], where the frequency bounds are defined by λ(k inf/sup r ) = λ max /2. This approach is numerically more demanding, but allows for a more precise determination of the secular diffusion flux properties.
Once the susceptibility coefficients have been estimated, one may finally evaluate the Balescu-Lenard drift and diffusion coefficients from equations (2.69) and (2.70). Thanks to the restriction to local resonances obtained in equation (3.78), the sum on m 2 in equations (2.69) and (2.70) is limited to the only term m 2 = m 1 . Relying on the formula from equation (3.74), one can immediately perform the integration w.r.t. J 2 φ , which adds a prefactor of the form
1/|∂(m 1 •Ω 1 )/∂J φ |. Let us finally introduce the shortening notation 1 (m 1 •Ω 1 ) = 1 ∂ ∂J φ [m 1 •Ω 1 ] J 1 φ , (3.82)
so that the drift coefficients from equation (2.69) become
A m1 (J 1 ) = - 4π 3 µ (m 1 •Ω 1 ) dJ 2 r m 1 •∂F/∂J (J 1 φ , J 2 r ) |D m1,m1 (J 1 φ , J 1 r , J 1 φ , J 2 r , m 1 •Ω 1 )| 2 , (3.83)
while the diffusion coefficients from equation (2.70) become
D m1 (J 1 ) = 4π 3 µ (m 1 •Ω 1 ) dJ 2 r F (J 1 φ , J 2 r ) |D m1,m1 (J 1 φ , J 1 r , J 1 φ , J 2 r , m 1 •Ω 1 | 2 .
(3.84)
In both equations (3.83) and (3.84), the susceptibility coefficients are given by equation (3.80), or by equation (3.81) within the approximation of the small denominators. These explicit expressions of the drift and diffusion coefficients constitute an important result of this section. Let us emphasise that this WKB Balescu-Lenard formalism is self-contained and does not require any ad hoc fittings of the fluctuations occurring in the system. Except for the explicit calculation of the amplification eigenvalues in equation (3.47), the previous calculations are not limited to the quasi-isothermal DF from equation (3.10). Indeed, these drift and diffusion coefficients are valid for any tepid disc, provided one may rely on the epicyclic angle-action mapping from equation (3.9).
Application to radial diffusion
Let us now apply the previous razor-thin WKB collisionless and collisional diffusion equations to investigate how shot noise may induce radial diffusion in razor-thin axisymmetric stellar discs. In section 3.7.1, we present a model of razor-thin disc model, while in section 3.7.2, we investigate how the previous diffusion fluxes allow us to qualitatively understand the diffusion features observed in direct numerical simulations.
A razor-thin disc model
Recently, Sellwood (2012) (hereafter S12) investigated the secular evolution of a razor-thin disc, via tailored and careful numerical simulations. After letting this disc evolve for hundreds of dynamical times, S12 observed an irreversible diffusion of the disc's DF in action space along narrow resonant ridges (see figure 3.7.5). This evolution was sustained by the spontaneous generation of transient spiral waves in the disc, as we will later detail. The disc considered by S12 is a razor-thin Mestel disc (Mestel, 1963), for which the circular speed v 2 φ = R∂ψ M /∂R = V 2 0 is independent of the radius, where ψ M is initial total potential in the system. One interest of such a simple analytical model is that it reproduces fairly well the observed flat rotation curves of galaxies. The stationary background potential ψ M and its associated surface density Σ M are given by
ψ M (R) = V 2 0 ln R R max ; Σ M (R) = V 2 0 2πGR , (3.85)
where R max is a scale parameter. Because ψ M is scale invariant, the relationship from equation (3.4) between the angular momentum J φ and the guiding radius R g takes the simple form
J φ = R g V 0 .
(3.86)
Within the epicyclic approximation, it is also straightforward to obtain from equation (3.4) that the intrinsic frequencies of motion Ω epi φ and κ epi take the simple form
Ω epi φ (J φ ) = V 2 0 J φ ; κ epi (J φ ) = √ 2 Ω epi φ (J φ ) . (3.87)
Note that the Mestel disc appears as an intermediate non-degenerate disc for which κ epi /Ω epi φ = √ 2, between the Keplerian case (κ/Ω φ = 1) and the harmonic one (κ/Ω φ = 2). Following Toomre (1977b) and [START_REF] Binney | Galactic Dynamics: Second Edition[END_REF], a self-consistent DF for such a Mestel disc is given by
F M (E, J φ ) = C M J q φ exp[-E/σ 2 r ] , (3.88)
where the exponent q and the normalisation prefactor C M are given by
q = V 2 0 σ 2 r -1 ; C M = V 2 0 2 1+q/2 π 3/2 Gσ q+2 r Γ[ 1 2 + q 2 ]R q+1 max . (3.89)
In equations (3.88) and (3.89), we introduced σ r as the constant radial velocity dispersion within the disc. Relying on the epicyclic approximation, the DF from equation (3.88) may be approximated by a quasi-isothermal DF as in equation (3.10), where the intrinsic frequencies are given by equation (3.87), the velocity dispersion σ r is constant throughout the disc, and the surface density is given by Σ star , the active surface density of the disc. In order to deal with the central singularity of the Mestel disc and its infinite extent, one introduces two tapering functions T inner and T outer as
T inner (J φ ) = J νt φ (R i V 0 ) νt +J νt φ ; T outer (J φ ) = 1+ J φ R o V 0 µt -1 , (3.90)
where the two power indices ν t and µ t control the sharpness of the two tapers, while the radii R i and R o are two scale parameters. These tapers intend to mimic the presence of a bulge and the outer truncation of the stellar disc. In addition to these taperings, we also assume that only a fraction ξ (with 0 ≤ ξ ≤ 1) of the disc is indeed active, i.e. self-gravitating, while the missing component will correspond to a static contribution from the dark matter halo. As a consequence, the active DF F star is given by
F star (E, J φ ) = ξ F M (E, J φ ) T inner (J φ ) T outer (J φ ) . (3.91)
One may also rewrite the active surface density Σ star of the disc as
Σ star (J φ ) = ξ Σ M (J φ ) T inner (J φ ) T outer (J φ ) . (3.92)
We place ourselves in the same unit system as S12, so that this disc is scale invariant (except for the presence of the tapering functions from equation (3.90)), the local Toomre's parameter Q (Toomre, 1964), rederived in equation (3.49), becomes almost independent of the radius, especially in the intermediate regions of the disc far from the tapers. As illustrated in figure 3.7.3, one has Q 1.5 between the tapers and it increases strongly in the tapered regions. At this stage, let us emphasise that S12 restricted perturbations to the sole harmonic sector m φ = 2 in order to clarify the dynamical mechanisms at play and avoid any decentring effects prohibitive for its code based on a polar grid. We note that the expressions (2.33) and (2.71) of the collisionless and collisional diffusion flux require us to sum on all the resonance vectors m = (m φ , m r ). Following S12's restriction, we may therefore limit ourselves to the only case m φ = 2. Throughout our calculations, in addition to this azimuthal restriction, we will more drastically restrict the resonance vectors to only three different resonances, namely the inner Lindblad resonance (ILR), m = (2, -1), the corotation resonance (COR), m = (2, 0), and finally the outer Lindbald resonance, m = (2, 1). Figure 3.7.4 illustrates how these three resonances can be interpreted when considering stars' individual orbits. All the calculations presented in the upcoming calculations were also performed while accounting for the contributions from the resonances m r = ±2, which were checked to be subdominant.
V 0 = G = R i =
When simulating the previous razor-thin Mestel disc, S12 invariably observed sequences of transient spirals, even if the disc was specifically tailored to be isolated, stable, and quasi-stationary. On long timescales, this led to an irreversible diffusion in action space of the system's DF as illustrated in figure 3.7.5. Indeed, figure 3.7.5 illustrates the late time formation of a resonant ridge of particles of larger radial actions in the inner regions of disc. This narrow ridge along a very specific resonant direction is a signature of secular evolution occurring in the system. This caused a long-term aperiodic evolution of the disc, during which small resonant and cumulative effects add up in a coherent way. It generically encompasses both processes of churning and blurring (Schönrich & Binney, 2009a). We will explain below in section 3.7.2 the formation of this feature thanks to the previously derived razor-thin WKB diffusion coefficients. In chapter 4, we will revisit the exact same problem while properly accounting for the system's self-gravity which can dress and strongly amplify the fluctuations within the system. Because the simulated disc was isolated, the origin of these small effects, amplified via collective effects, must come from finite-N effects, i.e. be induced by the system's own discreteness.
Let us now illustrate in figure 3.7.6 the dependence of the system's response on the number of particles used to represent the disc. In figure 3.7.6, the disc's evolution is characterised by the peak overdensity δ_max = δΣ_star/Σ_star, which offers an estimation of how much the disc has evolved compared to its initial quasi-stationary state. Important remarks can be made from figure 3.7.6. First, because of the unavoidable Poisson shot noise in the initial conditions, the larger the number of particles, the smaller the initial value of δ_max, with an expected scaling given by δ_max ∝ 1/√N. See the left panel of figure 4.4.2 for a detailed confirmation of this prediction. One can also note an initial systematic steep rise in δ_max in the very first dynamical times. This corresponds to the initial swing amplification (see figure 3.7.14) of the initial Poisson shot noise. The quieter the initial sampling (see Sellwood (1983) for a presentation of the quiet start sampling procedure used in S12), i.e. the closer the system is to equilibrium, the weaker this initial phase. See section 4.4 and especially figure 4.4.2 for a more thorough investigation of the dependence of the system's response w.r.t. the number of particles. Right after this initial amplification, the system undergoes two successive dynamical regimes. The first stage is a stage of slow evolution, during which δ_max slowly increases. Then, at a later stage, for δ_max ≳ 0.02, the growth of δ_max becomes much steeper and eventually saturates. As discussed in detail in section 4.4.4 and the associated figure 4.4.6, the first stage of slow evolution corresponds to a regime of secular collisional dynamics during which the system evolves as a result of dressed finite-N effects. As for the second regime of fast growth, it corresponds to unstable collisionless dynamics.
This can hardly be seen in figure 3.7.6, but one expects the growth rate of δ_max in the first slow phase to decrease as N gets larger, while the growth rate of δ_max in the second faster regime is independent of the values of N used in the simulation. All the various properties inferred from figure 3.7.6 will be discussed and recovered in detail in section 4.4.
Figure 3.7.4: Adapted from Kormendy (2013). Illustration of stellar orbits - within the epicyclic approximation - and some associated resonances, as seen in the rotating frame attached to an m_p = 2 pattern rotating anticlockwise at the pattern frequency Ω_p (see top arrow). In this rotating frame, the pattern remains fixed, while, because of differential shearing (i.e. the fact that the orbital frequencies decay as stars move outwards), stars drift w.r.t. it. Inside the corotation, stars drift forward (anticlockwise) w.r.t. the pattern, i.e. they have an azimuthal frequency larger than that of the pattern. Outside the corotation, stars drift backwards (clockwise) w.r.t. the pattern, as they have a smaller azimuthal frequency. In addition to their azimuthal oscillations at the frequency Ω_φ, stars also undergo a harmonic libration at the frequency κ. At corotation (COR, pink orbit), for which Ω_p = Ω_φ, because of the radial motion, stars move clockwise along a closed ellipse. At the inner Lindblad resonance (ILR, blue orbit), defined as Ω_p = Ω_φ - κ/2, the stellar epicyclic orbit in the rotating frame is a closed ellipse: the star executes two radial oscillations for every forward (anticlockwise) azimuthal revolution around the centre. At the outer Lindblad resonance (OLR, orange orbit), for which Ω_p = Ω_φ + κ/2, the orbit is also closed in the rotating frame: the star executes two radial oscillations for every backwards (clockwise) azimuthal revolution around the centre. For other guiding radii, illustrated with grey orbits, the stellar orbits are not closed: the stars are not at resonance with the pattern.
Figure 3.7.5: Extracted from Sellwood (2012). Illustration of the evolution of the active stellar DF F_star from equation (3.91) in action space (J_φ, J_r). Left panel: initial contours of F_star(J_φ, J_r) for t = 0. Contours are spaced linearly between 95% and 5% of the function maximum. One can note how the inner taper from equation (3.90) suppresses the system's density for low angular momentum. Right panel: same as in the left panel but at a much later stage of the evolution, t = 1400. In the inner regions of the disc, one can note the formation on secular timescales of a narrow ridge of enhanced radial actions J_r. This is a signature of secular evolution.
Figure 3.7.6: Evolution of the peak overdensity δ_max for different numbers of particles N. Because of Poisson shot noise, the initial value of δ_max scales like 1/√N. The initial systematic steep rises in δ_max in the very first dynamical times correspond to the swing amplification of the system's initial Poisson shot noise. Two phases can then be identified in the growth of δ_max. The first slow phase, up to δ_max ≲ 0.02, corresponds to a slow collisional dynamics driven by finite-N effects, which gets slower as the number of particles increases. The second faster phase, for δ_max ≳ 0.02, corresponds to an unstable collisionless evolution whose growth rate is independent of the number of particles used. See section 4.4 for a detailed discussion of these various dependences.
Having detailed the main results from S12's long-term simulations of razor-thin stable discs, let us now investigate in section 3.7.2 how the razor-thin WKB limits of the collisionless and collisional diffusion equations, obtained in sections 3.5 and 3.6, allow us to explain the formation of the narrow ridge of resonant orbits observed in the direct N -body simulations from figure 3.7.5.
Shot noise driven radial diffusion
In order to compute the diffusion fluxes associated with the collisionless and collisional razor-thin WKB diffusion equations, let us first investigate the disc's self-gravitating amplification. This is captured by the razor-thin WKB amplification eigenvalues λ(k_r) obtained in equation (3.47). The behaviour of this amplification is indeed essential to implement the approximation of the small denominators needed to estimate the disc's diffusion properties as in equations (3.69) and (3.81). For a given position J_φ and a given resonance m, figure 3.7.7 illustrates the behaviour of the function k_r → λ(k_r). This figure allows us to determine the wave frequencies that yield locally the maximum amplification. Such waves sustain the system's WKB diffusion. Note that because equation (3.47) only depends on s², the ILR and OLR resonances will always have the same amplification eigenvalues. Thanks to figure 3.7.7, one can determine the frequency of maximum amplification k_r^max such that λ(k_r^max) = λ_max. We also define the domain of maximum amplification k_r ∈ [k_r^inf, k_r^sup], such that λ_max/2 ≤ λ(k_r), over which the integrations on k_r may be performed in equations (3.69) and (3.81). Let us note that because of the scale-invariance of the razor-thin Mestel disc, it is straightforward to show that k_r^max ∝ 1/J_φ, as well as k_r^inf/sup ∝ 1/J_φ. Figure 3.7.8 illustrates the behaviour of the amplification factor 1/(1-λ_max(m, J_φ)) for the different resonances m. We note that the COR resonance is always more amplified than the ILR and OLR resonances (see figure 3.7.10 for a discussion of one consequence of such an ordering), but the overall maximum WKB amplification (∼ 3 for the COR and ∼ 1.5 for the ILR and OLR) remains sufficiently small for the system's diffusion not to be dictated only by the properties of the disc's self-gravity. Having estimated the system's susceptibility, one can now estimate in turn the collisionless diffusion flux (section 3.7.2.1) as well as the collisional one (section 3.7.2.2) to recover the formation of the radial resonant ridge observed in figure 3.7.5.
Figure 3.7.7: Behaviour of the function k_r → λ(k_r) for a given position J_φ and resonance m, together with the associated domain of maximum amplification. This domain corresponds to the regions over which the integration for the approximation of the small denominators will be performed in equations (3.69) and (3.81).
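The search for k_r^max and for the half-maximum interval [k_r^inf, k_r^sup] described above amounts to a simple one-dimensional scan. The following sketch (in Python) illustrates one way of performing it; the amplification curve λ(k_r) is left as a user-supplied callable standing in for equation (3.47), and the toy curve used in the example is purely illustrative.

```python
import numpy as np

def amplification_window(lam_of_kr, kr_min, kr_max, n_grid=2000):
    """Locate the most amplified WKB wavenumber and its half-maximum window.

    lam_of_kr : callable returning the amplification eigenvalue lambda(k_r),
                a stand-in for equation (3.47) to be supplied by the user.
    Returns (kr_max, lam_max, kr_inf, kr_sup), with lambda(k_r) >= lam_max/2
    for k_r in [kr_inf, kr_sup].
    """
    kr = np.linspace(kr_min, kr_max, n_grid)
    lam = np.array([lam_of_kr(k) for k in kr])
    i_peak = int(np.argmax(lam))
    lam_max = lam[i_peak]
    above = lam >= 0.5 * lam_max
    i_inf, i_sup = i_peak, i_peak
    while i_inf > 0 and above[i_inf - 1]:            # walk down towards smaller k_r
        i_inf -= 1
    while i_sup < n_grid - 1 and above[i_sup + 1]:   # walk up towards larger k_r
        i_sup += 1
    return kr[i_peak], lam_max, kr[i_inf], kr[i_sup]

# Purely illustrative bell-shaped amplification curve peaking at k_r = 1.
toy_lambda = lambda k: 0.6 * np.exp(-np.log(k) ** 2)
print(amplification_window(toy_lambda, 0.05, 20.0))
```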
Collisionless forced radial diffusion
The aim of this section is to understand the formation of the radial ridge observed in figure 3.7.5. A first approach is to rely on the razor-thin WKB limit of the collisionless diffusion equation obtained in section 3.5. Let us already emphasise that S12's simulations modelled an isolated stellar disc in the absence of any external perturbations. As a consequence, in order to rely on our collisionless formalism, one has to assume some form for the perturbation power spectrum C[m φ , ω, R g , k r ] appearing in the diffusion coefficients from equation (3.68). Here we will assume that the "exterior" potential felt by the system represents the inevitable source of noise caused by the finite number of stars in the disc. Of course, such perturbations originate from the disc itself, but could also mimic the effects that massive compact gas clouds have on the disc. Because of Poisson shot noise, potential fluctuations scale like δψ e ∝ √ Σ star , so that we may say that, up to a normalisation, the system undergoes perturbations following a power spectrum given by
C[m_φ, ω, R_g, k_r] = δ_{m_φ}^{2} Σ_star(R_g) . (3.93)
Such an approximation is relatively crude, as we only accounted for the dependence of the noise with J_φ, and neglected any dependence w.r.t. ω and k_r. In equation (3.93), we also added an azimuthal Kronecker symbol to account for the fact that the perturbing forces in the system were restricted to the sole harmonic sector m_φ = 2. For a system perturbed by a more realistic exterior source, one expects the spectrum of the external perturbations to be more coloured and to depend on the full statistical properties of the perturbers. We also note that the absence of dependence w.r.t. ω in equation (3.93) implies that the three resonances ILR, COR, and OLR will undergo the same perturbations when considered at the same location in the disc, even if they are associated with different local frequencies m·Ω. Looking at the shape of the active surface density Σ_star in figure 3.7.1, one can note that the inner region of the disc (J_φ ≲ 1.5), in the vicinity of the inner taper, will be the most perturbed region. Let us also emphasise that the disc's self-induced shot noise fluctuations are not external perturbations, so that one should rely on the Balescu-Lenard formalism from section 3.6 to account self-consistently for the system's internal graininess. This will be the focus of section 3.7.2.2. Having estimated in figure 3.7.8 the characteristics of the disc's WKB amplification eigenvalues, and having specified in equation (3.93) our model to describe the spectral properties of the system's internal shot noise fluctuations, we may now compute the system's razor-thin WKB collisionless diffusion coefficients given by equation (3.69), and then the associated collisionless diffusion flux F_tot. Figure 3.7.9 illustrates the initial behaviour of the diffusion flux norm |F_tot|. In figure 3.7.9, the dark contours show the magnitude of the collisionless diffusion flux F_tot generated by the contributions from the two Lindblad resonances and the corotation resonance. The grey arrow shows the direction of particles' individual diffusion at the location of the peak flux (the direction is similar at neighbouring points). In figure 3.7.9, one can note the presence of only one maximum peak of diffusion, located at (J_φ, J_r) ≃ (1, 0.01). Note that the position of the peak of diffusion is slightly offset from the one observed in S12's simulations, illustrated with the background contours of figure 3.7.9. This difference is due both to the crude noise model of equation (3.93), as well as to the intrinsic limitations of the explicit razor-thin WKB formalism, which prevents us from correctly describing the diffusion regime associated with loosely wound transient spirals. See section 4.3.3 for a full justification of why these contributions are indeed essential for the secular formation of the resonant ridge. However, our analytical results remain in good qualitative agreement with the numerical experiments from S12. We also note in figure 3.7.9 that the dominant net flux makes an angle of 111° with the J_φ-axis, while the diffusion associated with the ILR resonance is inclined by 153° w.r.t. the J_φ-axis, which corresponds to the direction associated with the resonance vector m = (-2, 1). These two similar inclinations illustrate the dominant role played by waves at the ILR resonance in the inner region of the disc, where the DF peaks. Finally, we note quite surprisingly that despite having assumed in equation (3.93) that the driving fluctuations are white noise, we recovered in figure 3.7.9 that the norm of the diffusion flux is sharply peaked in action space. This is a consequence of the localisation of the disc's inner taper (which can be seen in figure 3.7.1). Because of this sharp tapering, one expects the DF's gradients ∂F/∂J to be significant in these regions, which naturally enhances the collisionless diffusion flux F_tot from equation (2.33).
Figure 3.7.9: Initial contours of the norm of the collisionless diffusion flux |F_tot| in action space. The grey vector gives the direction of the particle's diffusion vector associated with the norm maximum (arbitrary length). The background thin lines correspond to the diffused distribution from S12, which exhibits a narrow resonant ridge of diffusion.
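As a quick numerical check of the inclination quoted above, the resonance direction m = (-2, 1) indeed makes an angle of about 153° with the J_φ-axis:

```python
import numpy as np

# Inclination of the resonance direction m = (-2, 1) w.r.t. the J_phi-axis.
m_phi, m_r = -2.0, 1.0
angle = np.degrees(np.arctan2(m_r, m_phi))
print(f"{angle:.1f} degrees")  # ~153.4, consistent with the value quoted above
```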
Following our characterisation of the collisionless diffusion sourced by the Poisson shot noise from equation (3.93), let us now explore how the disc's gravitational susceptibility may impact its secular evolution. This is illustrated in figure 3.7.10, where we investigate the effect of changing the disc's active fraction ξ, as introduced in equation (3.91). In figure 3.7.10, one can note that as the disc's active fraction increases, the disc's susceptibility gets larger, so that the diffusion gets hastened because perturbations are more amplified. In addition to this acceleration, one can also note that the contours of the norm of the collisionless diffusion flux change qualitatively in behaviour. Indeed, as ξ increases, one observes a transition between an ILR-dominated heating in the inner region of the disc (J_φ ≲ 1), and a regime of radial migration of quasi-circular orbits in more intermediate regions of the disc (J_φ ≃ 2). Such a transition can be understood from figure 3.7.8, where we note that the COR resonance is always more amplified than the ILR and OLR resonances. As can be seen from equation (3.47), increasing the active fraction ξ immediately leads to an increase of the amplification eigenvalues λ. As a consequence, as ξ increases, both λ_max^ILR and λ_max^COR increase. However, since one has λ_max^ILR < λ_max^COR < 1, and because the effective amplification is given by the factor 1/(1-λ_max), the COR resonance gets much more amplified than the ILR and OLR resonances as λ_max^COR gets close to 1. This is the reason for the transition of diffusion regime observed in figure 3.7.10: as ξ increases, the COR resonance ends up being dominant. With such higher active fractions of the disc, the system's diffusion regime is dictated by the higher intrinsic susceptibility of the disc, whose effect is indeed captured by the WKB collisionless diffusion coefficients from equation (3.69).
Figure 3.7.10: Contours of the norm of the collisionless diffusion flux for increasing values of the active fraction ξ, spaced linearly between 95% and 5% of the norm maximum. As the active fraction ξ increases, the disc's gravitational susceptibility gets stronger. This leads to a transition in the diffusion regime of the disc, from a regime of heating in the inner regions through the ILR resonance for small values of ξ, to a regime of radial migration of quasi-circular orbits through the corotation resonance in more external regions of the disc.
Collisional radial diffusion
In the previous section, we applied the razor-thin WKB collisionless formalism from section 3.5 to understand the spontaneous formation of the radial resonant ridge observed in figure 3.7.5. The main assumption of the previous section was to treat the disc's internal Poisson shot noise as an exterior perturbation and model it according to equation (3.93). The self-induced fluctuations associated with the finite number of particles in the disc, because they are created by the disc itself, should ideally not be treated as imposed by an external perturber, but should be accounted for self-consistently. This is the main purpose of the Balescu-Lenard equation (2.67), whose razor-thin WKB limit was obtained in section 3.6. Given the knowledge of the disc's WKB amplification eigenvalues obtained in figure 3.7.8, one can straightforwardly proceed to the evaluation of the razor-thin WKB susceptibility coefficients derived in equation (3.81). The associated collisional drift and diffusion coefficients from equations (3.83) and (3.84) may then be estimated. This allows us to estimate the system's total collisional diffusion flux F tot from equation (2.71) and its associated divergence div(F tot ). As previously discussed, we restrict here the sums on the resonance vectors m only to the ILR, COR and OLR resonances. Since the Balescu-Lenard formalism self-consistently accounts for the system's internal graininess, these calculations do not require any ad hoc fittings or assumptions on the spectral properties of the system's internal fluctuations. Because the individual mass of the particles scales like µ = M tot /N , let us rather consider the quantity N div(F tot ) which is independent of the number of particles. This allows for a quantitative comparison with the results obtained in figure 3.7.5 via S12's numerical simulations.
Figure 3.7.11 illustrates the initial contours of N div(F_tot) in action space as predicted by the razor-thin WKB limit of the Balescu-Lenard equation. In figure 3.7.11, red contours are associated with regions for which N div(F_tot) < 0, so that following the convention from equation (2.72), they correspond to regions where the razor-thin WKB Balescu-Lenard equation predicts a decrement of the disc's DF. In contrast, blue contours are associated with regions for which N div(F_tot) > 0, i.e. regions where the DF will increase. The overall diffusion obtained in figure 3.7.11 involves two simultaneous diffusions, namely the beginning of the formation of a resonant ridge towards larger radial actions in the vicinity of (J_φ, J_r) ≃ (1.0, 0.1), and the formation of an overdensity along the J_φ-axis around (J_φ, J_r) ≃ (1.8, 0). This first diffusion feature is in fact consistent with S12's early time measurements, as shown in figure 3.7.12, and with the similar late time ones, as shown in figure 3.7.13. The qualitative agreements observed in figures 3.7.12 and 3.7.13 are in fact surprisingly good, given all the approximations needed in the derivation of the WKB theory, and the fact that the collisional diffusion flux was only computed at the initial time t = 0^+. Interestingly, note that the early time measurement reported in figure 3.7.12 also displays the hint of the formation of an overdensity along the J_r = 0 axis, in agreement with the second diffusion process discussed previously. We note that the late time measurement from figure 3.7.13 suggests that this overdensity has split, with the hint of the formation of a second ridge. The disappearance of this overdensity is most likely to be explained by the integration forward in time of the Balescu-Lenard equation, while here we limited ourselves to the sole computation of the diffusion flux at the initial time.
Figure 3.7.11: Initial contours of N div(F_tot) in action space, as predicted by the razor-thin WKB limit of the Balescu-Lenard equation (with the resonance vectors restricted to the ILR, COR and OLR). Red contours, for which N div(F_tot) < 0, correspond to regions from which the orbits will be depleted, while blue contours, for which N div(F_tot) > 0, correspond to regions where the secular diffusion will tend to increase the value of the DF. The net fluxes involve both heating near (J_φ, J_r) ≃ (1, 0.1), and radial migration near (J_φ, J_r) ≃ (1.8, 0).
Figure 3.7.12: Overlay of the WKB predictions for the divergence of the diffusion flux N div(F_tot) and the differences between the initial and the evolved DF in S12's simulation. The opaque contours correspond to the differences in action space for the DF in S12 between the late time t_S12 = 1000 and the initial time t_S12 = 0 (see the upper panel of S12's figure 10). The red opaque contours correspond to negative differences, so that these regions are emptied of their orbits, while blue opaque contours are associated with positive differences, i.e. regions where the value of the DF has increased as a result of secular diffusion. The transparent contours correspond to the predicted values of N div(F_tot) from the WKB limit of the Balescu-Lenard equation, using the same conventions as in figure 3.7.11. One can note the overlap between the predicted transparent contours and the measured solid ones.
Thanks to the explicit computation of N div(F_tot) in figure 3.7.11, let us now study the typical timescale of diffusion associated with the collisional diffusion predicted by the razor-thin WKB Balescu-Lenard equation.
Figure 3.7.13: Contours of the late time DF measured in S12's simulation. These contours are spaced linearly between 95% and 5% and clearly exhibit the appearance of a resonant ridge. The coloured transparent contours correspond to the predicted values of N div(F_tot), within the WKB approximation, using the same conventions as in figure 3.7.11. One can note that the late time developed ridge is consistent with the predicted depletion (red) and enrichment (blue) of orbits.
Diffusion timescale
The previous estimation of the collisional diffusion flux N div(F tot ) allows us to compare explicitly the timescale of appearance of finite-N effects captured by the Balescu-Lenard equation with the duration of S12's simulations. One can note that the Balescu-Lenard equation (2.67) depends on the total number N of particles only through the particles' individual mass µ = M tot /N . As a consequence, let us rewrite the Balescu-Lenard equation (2.67) as
∂F/∂t = (1/N) C_BL[F] , (3.94)
where C_BL[F] = N div(F_tot) is the N-independent Balescu-Lenard collision operator. Introducing the rescaled time τ = t/N, equation (3.94) then takes the N-independent form
∂F/∂τ = C_BL[F] , (3.95)
so that, over a rescaled time ∆τ, the DF typically changes by an amount
∆F ≃ ∆τ |C_BL[F]| = ∆τ |N div(F_tot)| . (3.96)
The ridge observed in figure 3.7.5 was obtained after letting a disc made of N = 50×10^6 particles evolve for a time ∆t_S12 = 1400. Following equation (3.95), the ridge was therefore observed in S12's simulation after a rescaled time ∆τ_S12 = ∆t_S12/N ≃ 3×10^-5. One may then compare this time with the typical time required to form a resonant ridge within the WKB Balescu-Lenard formalism. Starting from the map of N div(F_tot) obtained in figure 3.7.11, one can estimate the typical time needed for such a flux to lead to the features observed in S12's simulations. The contours presented in figure 3.7.5 are separated by a value 0.1×F_0^max, where F_0^max ≃ 0.12 corresponds to the maximum of the initial normalised DF from equation (3.91). In order to observe the resonant ridge, the DF should therefore change by an amount of the order of ∆F_0 ≃ 0.1×F_0^max. In figure 3.7.11, we obtained that the maximum of the norm of the divergence of the collisional diffusion flux is given by |N div(F_tot)|_max ≃ 0.4. Thanks to equation (3.96), one can then write ∆F_0 ≃ ∆τ_WKB |N div(F_tot)|_max, where ∆τ_WKB is the typical time during which the WKB Balescu-Lenard equation should be considered in order to allow for the development of a ridge. With the previous numerical values, we obtain ∆τ_WKB ≃ 3×10^-2. Comparing the timescale ∆τ_S12 measured in N-body simulations and the timescale ∆τ_WKB predicted by the razor-thin WKB Balescu-Lenard equation, one gets
∆τ_S12/∆τ_WKB ≃ 10^-3 . (3.97)
Equation (3.97) shows that the direct application of the razor-thin WKB Balescu-Lenard equation does not allow us to recover the observed timescale of appearance of the diffusion features in the numerical simulations. Indeed, the timescale of collisional diffusion predicted by our present WKB formalism appears to be much larger than the time during which the numerical simulation was effectively performed. This discrepancy is further strengthened by the use of a softening length in the numerical simulations, which induces an effective thickening of the disc and therefore a slowdown of the collisional relaxation.
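For reference, the order-of-magnitude arithmetic behind equations (3.95)-(3.97) can be reproduced in a few lines, using only the numbers quoted above:

```python
# Order-of-magnitude arithmetic behind equation (3.97), with the values quoted in the text.
N = 50e6                  # number of particles in S12's simulation
dt_S12 = 1400.0           # duration of the simulation
dtau_S12 = dt_S12 / N     # rescaled time, following equation (3.95)

F0_max = 0.12             # maximum of the initial normalised DF, equation (3.91)
dF0 = 0.1 * F0_max        # DF change needed to see the ridge
div_max = 0.4             # |N div(F_tot)|_max from the WKB prediction (figure 3.7.11)
dtau_WKB = dF0 / div_max  # rescaled time required by the WKB Balescu-Lenard flux

print(f"dtau_S12 = {dtau_S12:.1e}")              # ~ 3e-5
print(f"dtau_WKB = {dtau_WKB:.1e}")              # ~ 3e-2
print(f"ratio    = {dtau_S12 / dtau_WKB:.1e}")   # ~ 1e-3, i.e. equation (3.97)
```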
Let us now discuss the origin of this discrepancy.
Interpretation
In order to interpret S12's simulation in the light of a collisional diffusion equation such as the Balescu-Lenard equation (2.67), let us first emphasise the undisputed presence of collisional effects in the simulations. This is especially noticeable in figure 3.7.6, where we note that as the number of particles increases, the growth of the density fluctuations is delayed. The larger the number of particles, the later the effects of the secular evolution. Such a dependence indubitably underlines the role played by discreteness as the seed for the appearance of the diffusion features observed in figure 3.7.5. Sellwood & Kahn (1991) have argued that sequences of causally connected transient waves in the disc could occur subject to a (possibly non-local) resonant condition between successive spirals. The Balescu-Lenard equation captures precisely such sequences, in the sense that it integrates over dressed correlated potential fluctuations subject to relative resonant conditions, but does not preserve causality nor resolve correlations on dynamical timescales. As emphasised in the derivation of the Balescu-Lenard equation, the system's exact initial phases are not relevant in this formalism, which focuses on describing the system's mean orbit-averaged secular evolution. This independence of the initial phases was already emphasised in figure 5 of S12, where it is shown that even after redistributing randomly the stars' azimuthal phases at a given time of the simulation, the growth trends observed in figure 3.7.6 remain the same, and the disc still develops the resonant ridge of figure 3.7.5. As a conclusion, the resonant secular features observed in S12's simulation correspond to a process induced by the finite number of particles and independent of the disc's particular initial phases. This corresponds exactly to the grounds on which the Balescu-Lenard equation (2.67) was derived, so that it should be the master equation to understand and capture the features observed in figure 3.7.5.
While we had qualitatively recovered in sections 3.7.2.1 and 3.7.2.2 the formation of the resonant ridge, we noted in equation (3.97) a timescale discrepancy, whose origin remains to be understood. The main assumption in the application of the Balescu-Lenard equation was the use of the razor-thin WKB basis from equation (3.11). We therefore argue that the timescale discrepancy observed in equation (3.97) is caused by the incompleteness of this basis. Indeed, the WKB basis elements, thanks to which the dressed susceptibility coefficients were estimated in equation (3.81), do not form a complete set as they can only represent correctly tightly wound spirals. As emphasised in equation (3.78), they also enforce local resonances, so that they do not allow for remote orbits to resonate, or wave packets to propagate between such non-local resonances. The seminal works from Goldreich & Lynden-Bell (1965); Julian & Toomre (1966) showed that any leading spiral wave during its unwinding to a trailing wave undergoes a significant amplification, coined swing amplification and illustrated in figure 3.7.14. Because it involves open spirals, this linear amplifying mechanism is not captured by the WKB razor-thin formalism. This important additional dressing is expected to increase the susceptibility of the disc and therefore accelerate the system's long-term diffusion, so that the timescale discrepancy from equation (3.97) should become less restrictive. Following the notations from Toomre (1981), the razor-thin tapered disc presented in section 3.7.1 is such that Q ≃ 1.5 and X ≃ 2, so that figure 7 from Toomre (1981) shows that a significant swing amplification, of around a factor 10 or more, is to be expected. We note in the Balescu-Lenard equation (2.67) that the dressed susceptibility coefficients come squared, so that the amplification associated with swing amplification will be even larger, hastening even more the disc's secular evolution.
Figure 3.7.14: From dust to ashes - extracted from figure 8 of Toomre (1981). Illustration of the swing amplification process occurring in a razor-thin Mestel disc. Here, one observes the transient strong amplification of a leading perturbation as it unwinds due to the disc's differential shearing. Such a linear process involves loosely wound perturbations and cannot therefore be captured by the WKB formalism from section 3.3.
We showed in section 3.7.2.2 that the WKB Balescu-Lenard equation captures qualitatively the main features of the disc's diffusion process. In order to reconcile the timescale discrepancy from equation (3.97), one should get rid of the WKB approximation, and evaluate the Balescu-Lenard diffusion flux while fully accounting for the disc's susceptibility, to capture the missing mechanism of swing amplification. This is the purpose of the next chapter, where we show that the Balescu-Lenard equation associated with a complete evaluation of the disc's self-gravity recovers the resonant ridge (section 4.3.1), matches the diffusion timescale (section 4.3.2), and confirms that swing amplification is indeed the main driver of the disc's evolution (section 4.3.3).
Conclusion
In this chapter we implemented the collisionless and collisional diffusion equations in the context of razor-thin stellar discs. In order to seek straightforward estimations of the diffusion fluxes, we relied on the epicyclic approximation to construct angle-action coordinates (section 3.2) and on a tailored WKB basis to deal with the system's self-gravity (sections 3.3 and 3.4). Following this approach, we obtained simple quadratures for both the collisionless diffusion equation (section 3.5) and the collisional one (section 3.6). In particular, these simple WKB expressions yield, to our knowledge, the first non-trivial explicit expressions of the Balescu-Lenard drift and diffusion coefficients in the astrophysical context. They are therefore certainly useful to provide insight into the physical processes at play during the secular diffusion of self-gravitating razor-thin stellar discs.
In section 3.7, we applied these two formalisms to describe the shot noise driven radial diffusion occurring spontaneously in razor-thin stellar discs when considered on secular timescales. We illustrated how the calculation in the WKB limit of the full diffusion flux recovered most of the secular features observed in the direct simulations from Sellwood (2012), especially the hints of the formation of a resonant ridge, i.e. the depletion and enrichment of orbits along a narrow preferred direction in action space. We also noted that as Q → 1, the corotation resonance of waves becomes more important, as self-gravity amplifies perturbations at corotation very strongly. This leads to a transition from an ILR-dominated diffusion to a regime of radial migration in more external regions of the disc (see figure 3.7.10). These various qualitative agreements are impressive given the level of approximation involved in the WKB limit.
The timescale comparison proposed in equation (3.97) highlighted, however, a significant quantitative overestimation of the diffusion timescale w.r.t. the numerical observations. In section 3.7.4, we interpreted this discrepancy as being due to the intrinsic limitations of the WKB formalism, which cannot account for swing amplification, during which unwinding transient spirals get strongly amplified. This additional amplification, which involves non-local wave absorption and emission, appears therefore as the missing contribution required to reconcile quantitatively our predictions with the numerical ones. One avenue is to compute numerically the exact Balescu-Lenard equation (2.67) in action space, without assuming tightly wound spirals or epicyclic orbits. This is the topic of the next chapter.
Future works
In the light of the upcoming Gaia data, our previous description of the secular dynamics of razor-thin stellar discs could be significantly developed by extending, for example, the system's DF with an additional degree of freedom, namely the metallicity Z of the stars. The previous formalisms could then straightforwardly be tailored to describe the diffusion of such extended DFs F(J, Z). In order to account for the disc's stellar history, one would also add a source term in the diffusion equation associated with the time dependent birth of new stars throughout the lifetime of the galaxy, while keeping track of the time and radial evolution of the gas metallicity. Such extended DFs have recently been considered in, e.g., Schönrich & Binney (2009a); Binney (2013b); Sanders & Binney (2015), in the context of Galactic archeology (Binney, 2013a). Let us briefly detail here how one can proceed.
We introduce an extended DF F Z (Z, J , t), so that F Z dZdJ is proportional to the mass of stars with a metallicity in the range [Z, Z +dZ] and an action J in the volume dJ . Let us also introduce the traditional reduced DF, F , as
F(J, t) = ∫ dZ F_Z(Z, J, t) . (3.98)
We assume that at a given time t and position R_g = R_g(J_φ) in the disc, the metallicity of the interstellar medium (ISM) is known and characterised by the function Z_g(J_φ, t). See, e.g., Sanders & Binney (2015) for an example of Z_g for the Milky Way. We also assume that the star formation rate (SFR) is a known function of position and time, written as SFR(J_φ, t). When a new star is formed, it satisfies two conditions. First, stars are like time capsules and preserve the state of the ISM at the remote epoch of their formation. Moreover, stars are born on the cold orbits of the gas, so that they initially have J_r ≃ 0 (and J_z ≃ 0 in the case of thickened discs, see chapter 5). Up to a normalisation, the source term describing the birth of new stars, F_s, then reads
∂F_s(Z, J, t)/∂t = SFR(J_φ, t) δ_D(Z - Z_g(J_φ, t)) δ_D(J_r) . (3.99)
The extended DF then evolves according to
∂F_Z/∂t = Diff[F, F_Z] + ∂F_s/∂t . (3.100)
Figure: Dark matter density in a cosmological simulation (Teyssier, 2002). The two snapshots were taken at the same time and are centred on the same dark matter halo. The left-hand panel corresponds to a cubic region of extension 500 kpc, while the right-hand panel only extends up to 100 kpc. The halo was chosen to be quiet, i.e. did not undergo any recent major mergers. On large scales, one can note the presence of various clumps in the dark matter density, which get much fainter as one gets closer to the centre of the halo. Here, any infalling clump gets rapidly dissolved by dynamical friction (see figure 1.3.6). On the scale of the inner galactic disc (approximately 10 kpc), these clumps are therefore expected to be screened by the dark matter halo, and the disc shielded from them. Such simulations seem to indicate that the perturbations induced by the dark matter halo are weak and will not trigger a strong diffusion in the disc.
Once the statistical properties of such external perturbations have been characterised, the associated collisionless diffusion can be computed, and so can its effects on the disc's orbital structure. Characterising the incoming cosmic flux of perturbations could therefore in turn constrain the ΛCDM scenario on galactic scales.
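To make the bookkeeping of equations (3.98)-(3.100) concrete, here is a minimal sketch of how the source term of equation (3.99) could be deposited onto a discretised extended DF. The grids, the SFR profile and the gas-metallicity profile below are arbitrary placeholders introduced for illustration only, not quantities taken from the references above.

```python
import numpy as np

Z_bins    = np.linspace(-1.0, 0.5, 31)     # metallicity grid (placeholder)
Jphi_bins = np.linspace(0.1, 3.0, 60)      # angular momentum grid (placeholder)
Jr_bins   = np.linspace(0.0, 0.3, 30)      # radial action grid (placeholder)
F_Z = np.zeros((Z_bins.size, Jphi_bins.size, Jr_bins.size))

def SFR(Jphi, t):        # placeholder star formation rate profile
    return np.exp(-Jphi)

def Z_gas(Jphi, t):      # placeholder ISM metallicity profile
    return 0.3 - 0.4 * np.log(1.0 + Jphi) + 0.05 * t

def add_newborn_stars(F_Z, t, dt):
    """One explicit step of the source term of equation (3.99): new stars are deposited
    on (quasi-)circular orbits, J_r ~ 0, with the local gas metallicity Z_g(J_phi, t)."""
    for j, Jphi in enumerate(Jphi_bins):
        iz = int(np.argmin(np.abs(Z_bins - Z_gas(Jphi, t))))  # nearest metallicity bin
        F_Z[iz, j, 0] += SFR(Jphi, t) * dt                    # deposit at J_r = 0
    return F_Z

F_Z = add_newborn_stars(F_Z, t=0.0, dt=0.1)
```

The reduced DF of equation (3.98) is then simply recovered as F = F_Z.sum(axis=0), up to the metallicity bin width.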
Chapter 4
Razor-thin discs and swing amplification
The work presented in this chapter is based on Fouvry et al. (2015c).
Introduction
In chapter 3, we investigated the formation of a narrow ridge of resonant orbits in action space appearing spontaneously on secular timescales in razor-thin stable isolated self-gravitating stellar discs. These ridges are the orbital counterparts of the processes of churning and blurring (Schönrich & Binney, 2009a), when considered in idealised N -body simulations. In order to understand the origin of this feature, we considered two possible approaches, either based on the collisionless diffusion equation (introduced in section 2.2) or the collisional Balescu-Lenard equation (introduced in section 2.3). In addition, in order to obtain simple and tractable quadratures for the associated diffusion fluxes, we relied on the assumptions that the disc's transient response could be described with tightly wound spirals via the WKB and epicyclic approximations. These simple expressions provided insight into the physical processes at work during the secular diffusion of self-gravitating discrete discs. We also reached a qualitative agreement with the results from numerical simulations, recovering the presence of an enhanced diffusion in the inner regions of the disc.
However, the WKB approximation is quantitatively questionable to capture the phase during which transient spirals unwind and undergo a strong amplification. This discrepancy was quantified in equation (3.97) when comparing the timescale of diffusion predicted by the WKB Balescu-Lenard formalism and the timescale inferred from numerical simulations. In section 3.7.4, we blamed it on the incompleteness of the WKB basis, as it can only represent correctly tightly wound perturbations. The WKB approximation also led us to consider only resonances between orbits that are close to one another in radius. This prevented remote orbits from resonating, or wave packets from propagating between such non-local resonances. As illustrated in figure 3.7.14, the seminal works from Goldreich & Lynden-Bell (1965); Julian & Toomre (1966); Toomre (1981) have shown that any leading spiral wave undergoes a significant amplification as it unwinds to become a trailing wave, and this is not captured by the WKB approximation investigated in chapter 3.
In the present chapter, we avoid such approximations by relying on the matrix method (Kalnajs, 1976), in order to estimate the whole self-gravitating amplification of the disc as well as to account for the roles possibly played by non-local resonances. Once the disc's susceptibility is estimated, one can compute numerically the drift and diffusion coefficients appearing in the collisional Balescu-Lenard equation. The associated diffusion predictions can then be compared to crafted sets of numerical simulations, allowing us to estimate ensemble averaged secular responses of a sizeable number of simulations, from which we extract robust predictions on the scalings of the disc's response w.r.t. the number of particles or the disc to halo mass fraction.
In section 4.2, we detail one implementation of the matrix method to compute the Balescu-Lenard diffusion flux in a razor-thin disc. In Appendix 4.D, we specify how this same approach may straightforwardly be generalised to 3D spherical systems, whose secular dynamics can also be probed in the same manner. Section 4.3 computes numerically the collisional drift and diffusion coefficients in action space for a razor-thin truncated Mestel disc, and compares the divergence of the corresponding flux to the results obtained in the direct N -body simulations from Sellwood (2012). It also illustrates how the correct timescales of diffusion are recovered. In the same section, we also emphasise how the strong self-gravitating amplification of loosely wound perturbations is indeed the main driver of the disc's secular evolution. Section 4.4 presents our N -body simulations of the same setting, compares the scalings of the flux w.r.t. the number of particles and the active fraction of the disc, and illustrates the late-time unstable phase transition induced in the disc as a result of the slow collisional evolution.
Calculating the Balescu-Lenard diffusion flux
In order to compute the Balescu-Lenard diffusion flux, three main difficulties have to be overcome. First, one has to construct explicitly the mapping (x, v) → (θ, J), as the collisional drift and diffusion coefficients are associated with a diffusion in action space. Fortunately, for a razor-thin axisymmetric system, the integrability of the potential is imposed by symmetry and angle-action coordinates can be determined as shown in section 4.2.1. The second difficulty arises from the non-locality of Poisson's equation and the associated complexity of the system's response matrix M. As already noted in equation (2.12), this requires the use of a biorthogonal basis of potentials ψ^(p), which must then be integrated over the whole action space along with functions possessing a pole 1/(ω - m·Ω) as in equation (2.17). This cumbersome evaluation has to be performed numerically, along with the inversion of [I - M] required to estimate the collisional dressed susceptibility coefficients from equation (2.50). We will show in sections 4.2.2, 4.2.3 and 4.2.4 how these various numerical evaluations may be performed. Finally, a third difficulty in the Balescu-Lenard equation comes from the resonance condition δ_D(m_1·Ω_1 - m_2·Ω_2), which requires determining how orbits resonate with one another. Once the intrinsic orbital frequencies of the stars have been determined, we will show in section 4.2.5 how these resonances can be dealt with.
Calculating the actions
Razor-thin axisymmetric potentials are guaranteed by symmetry to be integrable. Following Lynden-Bell & Kalnajs (1972); Tremaine & Weinberg (1984), the two natural actions J = (J_1, J_2) of the system are given by
J_1 = J_r = (1/π) ∫_{r_p}^{r_a} dr √(2(E - ψ_0(r)) - L²/r²) ;  J_2 = J_φ = L , (4.1)
where we introduced as r p and r a the pericentre and apocentre of the trajectory, while E and L are the energy and angular momentum of the considered star. Here, the action J r encodes the amount of radial energy of the star, so that J r = 0 corresponds to exactly circular orbits. The two intrinsic frequencies of motion can then be introduced as Ω 1 = κ associated with the radial libration, and Ω 2 = Ω φ , associated with the azimuthal rotation. The radial frequency Ω 1 is given by
2π/Ω_1 = 2 ∫_{r_p}^{r_a} dr / √(2(E - ψ_0(r)) - J_2²/r²) , (4.2)
while the azimuthal frequency Ω 2 satisfies
Ω_2/Ω_1 = (J_2/π) ∫_{r_p}^{r_a} dr / ( r² √(2(E - ψ_0(r)) - J_2²/r²) ) . (4.3)
At this stage, one can note that various coordinates may be used to represent the 2D action space. Indeed, for a given background potential ψ_0, any orbit can equivalently be represented by the pairs (r_p, r_a) ↔ (E, L) ↔ (J_r, J_φ). However, determining the actions associated with one set (r_p, r_a) only requires computing an integral as in equation (4.1), while determining the pericentre and apocentre associated with a set of actions (J_1, J_2) requires the inversion of the same non-trivial implicit relation. In addition, as the peri/apocentres are the two roots of the equation 2(E - ψ_0(r)) - L²/r² = 0, one gets that for a given value of r_p and r_a, the energy E and the angular momentum L are straightforwardly obtained as
E = (r_a² ψ_a - r_p² ψ_p) / (r_a² - r_p²) ;  L = √( 2(ψ_a - ψ_p) / (r_p^{-2} - r_a^{-2}) ) , (4.4)
where we used the shortened notations ψ_{p/a} = ψ_0(r_{p/a}). As a consequence, in the upcoming applications, we use (r_p, r_a) as the representative variables of the 2D action space.
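As an illustration of equations (4.1)-(4.4), the following sketch computes (J_r, J_φ, Ω_1, Ω_2) from a pair (r_p, r_a) by direct quadrature, here for a Mestel-like potential ψ_0(r) = V_0² ln r. The substitution r = (r_p + r_a)/2 + (r_a - r_p)/2 sin u is a standard trick, assumed here rather than taken from the text, to absorb the square-root behaviour at the turning points.

```python
import numpy as np
from scipy.integrate import quad

V0 = 1.0
psi0 = lambda r: V0**2 * np.log(r)    # Mestel potential, as in equation (3.85)

def EL_from_peri_apo(rp, ra):
    """Equation (4.4): energy and angular momentum of the orbit with turning points (rp, ra)."""
    psip, psia = psi0(rp), psi0(ra)
    E = (ra**2 * psia - rp**2 * psip) / (ra**2 - rp**2)
    L = np.sqrt(2.0 * (psia - psip) / (rp**-2 - ra**-2))
    return E, L

def actions_and_frequencies(rp, ra):
    """Equations (4.1)-(4.3) by quadrature, with r = rc + dr*sin(u) absorbing the turning points."""
    E, L = EL_from_peri_apo(rp, ra)
    rc, dr = 0.5 * (rp + ra), 0.5 * (ra - rp)
    r_of = lambda u: rc + dr * np.sin(u)
    vr = lambda u: np.sqrt(np.maximum(2.0 * (E - psi0(r_of(u))) - L**2 / r_of(u)**2, 1e-14))

    # J_r = (1/pi) Int dr sqrt(2(E - psi0) - L^2/r^2)   (equation (4.1))
    Jr = quad(lambda u: dr * np.cos(u) * vr(u), -np.pi/2, np.pi/2)[0] / np.pi
    # 2*pi/Omega_1 = 2 Int dr / sqrt(...)               (equation (4.2))
    I1 = quad(lambda u: dr * np.cos(u) / vr(u), -np.pi/2, np.pi/2)[0]
    Omega1 = np.pi / I1
    # Omega_2/Omega_1 = (L/pi) Int dr / (r^2 sqrt(...)) (equation (4.3))
    I2 = quad(lambda u: dr * np.cos(u) / (r_of(u)**2 * vr(u)), -np.pi/2, np.pi/2)[0]
    Omega2 = Omega1 * L * I2 / np.pi
    return Jr, L, Omega1, Omega2

# Near-circular orbit at R ~ 1: expect Omega_2 ~ V0/R = 1 and Omega_1 ~ sqrt(2)*V0/R ~ 1.41.
print(actions_and_frequencies(0.9, 1.1))
```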
The basis elements
We assume that the considered 2D basis elements depend on two indices spanning the two degrees of freedom of a razor-thin disc. Let us therefore define
ψ^(p)(R, φ) = ψ_n^ℓ(R, φ) = e^{iℓφ} U_n^ℓ(R) , (4.5)
where U_n^ℓ is a real radial function and (R, φ) are the usual polar coordinates. Similarly, the associated surface densities are of the form
Σ^(p)(R, φ) = Σ_n^ℓ(R, φ) = e^{iℓφ} D_n^ℓ(R) , (4.6)
where D_n^ℓ is a real radial function. In equations (4.5) and (4.6), the basis elements depend on two indices, ℓ ≥ 0 and n ≥ 0. In all the upcoming numerical calculations, we use the explicit radial functions from Kalnajs (1976) presented in Appendix 4.A. Once the basis elements have been specified, one may compute their Fourier transform w.r.t. the angles θ = (θ_1, θ_2). Indeed, the expression of the response matrix from equation (2.17) requires the use of ψ_m^(p)(J) computed for the resonance vector m = (m_1, m_2). Following the convention from equation (2.6), one has
ψ_m^(p)(J) = 1/(2π)² ∫ dθ_1 dθ_2 ψ^(p)(R, φ) e^{-i m_1 θ_1} e^{-i m_2 θ_2} . (4.7)
Lynden-Bell & Kalnajs (1972) give us that the angles θ_1 and θ_2 associated with the actions from equation (4.1) read
θ_1 = Ω_1 ∫_{C_1} dr / √(2(E - ψ_0(r)) - J_2²/r²) ;  θ_2 = φ + ∫_{C_1} dr (Ω_2 - J_2/r²) / √(2(E - ψ_0(r)) - J_2²/r²) , (4.8)
where C 1 is a contour starting from the pericentre r p and going up to the current position r = r(θ 1 ) along the radial oscillation. Following the notations from Tremaine & Weinberg (1984), one can straightforwardly show that equation (4.7) becomes
ψ_m^(p)(J) = δ_{m_2}^{ℓ_p} W_{m_2 n_p}^{m_1 ℓ_p}(J) , (4.9)
where W_{m_2 n_p}^{m_1 ℓ_p}(J) is given by
W_{m_2 n_p}^{m_1 ℓ_p}(J) = (1/π) ∫_{r_p}^{r_a} dr (dθ_1/dr) U_{n_p}^{ℓ_p}(r) cos( m_1 θ_1[r] + m_2 (θ_2 - φ)[r] ) . (4.10)
One should note that the integration boundaries in equation (4.10) are given by the peri/apocentre r_{p/a} associated with the actions J. Such a property illustrates once more why (r_p, r_a) appear as natural coordinates to parametrise the 2D action space. Equation (4.10) was written as an integral over r, thanks to the change of variables θ_1 → r, which satisfies
dθ_1/dr = Ω_1 / √(2(E - ψ_0(r)) - J_2²/r²) . (4.11)
Let us finally note that, because of the cosine in equation (4.10), one has the symmetry W_{(-m_2) n_p}^{(-m_1) ℓ_p} = W_{m_2 n_p}^{m_1 ℓ_p}, which offers a significant reduction of the effective number of coefficients to compute.
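A possible numerical implementation of equation (4.10) is sketched below: the angles θ_1 and (θ_2 - φ) of equation (4.8) are accumulated along the orbit and the radial integral is then evaluated with a simple trapezoidal rule. The Mestel-like potential and the toy radial function U are stand-ins for the true mean potential and for the Kalnajs basis functions of Appendix 4.A.

```python
import numpy as np

V0 = 1.0
psi0 = lambda r: V0**2 * np.log(r)    # Mestel-like mean potential (placeholder)

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def W_coefficient(rp, ra, m1, m2, U, n_samp=512):
    """Crude quadrature for equation (4.10), accumulating theta_1 and (theta_2 - phi)
    along the orbit as in equation (4.8). U(r) stands for the radial part of a
    basis element (e.g. one of the Kalnajs functions of Appendix 4.A)."""
    psip, psia = psi0(rp), psi0(ra)
    E = (ra**2 * psia - rp**2 * psip) / (ra**2 - rp**2)
    L = np.sqrt(2.0 * (psia - psip) / (rp**-2 - ra**-2))
    rc, dr = 0.5 * (rp + ra), 0.5 * (ra - rp)

    u = np.linspace(-np.pi/2, np.pi/2, n_samp)[1:-1]   # avoid the exact turning points
    r = rc + dr * np.sin(u)
    vr = np.sqrt(np.maximum(2.0 * (E - psi0(r)) - L**2 / r**2, 1e-14))

    I1 = trapz(dr * np.cos(u) / vr, u)                 # radial period, equation (4.2)
    Omega1 = np.pi / I1
    Omega2 = Omega1 * L * trapz(dr * np.cos(u) / (r**2 * vr), u) / np.pi

    dtheta1_du = Omega1 * dr * np.cos(u) / vr          # d(theta_1)/du along the orbit
    dslow_du = (Omega2 - L / r**2) * dr * np.cos(u) / vr   # d(theta_2 - phi)/du
    theta1 = np.concatenate(([0.0], np.cumsum(0.5 * (dtheta1_du[1:] + dtheta1_du[:-1]) * np.diff(u))))
    slow = np.concatenate(([0.0], np.cumsum(0.5 * (dslow_du[1:] + dslow_du[:-1]) * np.diff(u))))

    integrand = dtheta1_du * U(r) * np.cos(m1 * theta1 + m2 * slow)
    return trapz(integrand, u) / np.pi

# Toy radial function standing in for a Kalnajs U_n^l.
print(W_coefficient(0.9, 1.3, m1=1, m2=2, U=lambda r: np.exp(-r)))
```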
Computing the response matrix
Thanks to the computation of the Fourier transformed basis elements in equation (4.9), one may now evaluate the response matrix from equation (2.17). One should note that its definition involves an integration over the dummy variable J , which as discussed previously, will be performed in the 2D (r p , r a )-space. The first step is to perform the change of variables J = (J 1 , J 2 ) → (E, L), whose Jacobian is given by
|∂(E, L)/∂(J_1, J_2)| = det( ∂E/∂J_1 , ∂E/∂J_2 ; ∂L/∂J_1 , ∂L/∂J_2 ) = det( Ω_1 , Ω_2 ; 0 , 1 ) = Ω_1 , (4.12)
so that one has dJ 1 dJ 2 = dEdL/Ω 1 . Thanks to the expression (4.9), the response matrix may then be rewritten as
M_{pq}(ω) = (2π)² δ_{ℓ_p}^{ℓ_q} Σ_{m_1} ∫ dE dL (1/Ω_1) [ (m_1, ℓ_p)·∂F/∂J ] / [ ω - (m_1, ℓ_p)·Ω ] W_{ℓ_p n_p}^{m_1 ℓ_p}(J) W_{ℓ_p n_q}^{m_1 ℓ_p}(J) , (4.13)
where the sum on m 2 has been executed thanks to the Kronecker delta from equation (4.9). In addition, we also dropped the conjugate on W m1 p p n p as it is real. Performing an additional change of variables (E, L) → (r p , r a ), one can finally rewrite equation (4.13) as
M_{pq}(ω) = δ_{ℓ_p}^{ℓ_q} Σ_{m_1} ∫ dr_p dr_a g_{m_1}^{ℓ_p n_p n_q}(r_p, r_a) / h_{m_1 ℓ_p}^{ω}(r_p, r_a) , (4.14)
where the functions g_{m_1}^{ℓ_p n_p n_q}(r_p, r_a) and h_{m_1 ℓ_p}^{ω}(r_p, r_a) are defined as
g_{m_1}^{ℓ_p n_p n_q}(r_p, r_a) = (2π)² |∂(E, L)/∂(r_p, r_a)| (1/Ω_1) [ (m_1, ℓ_p)·∂F/∂J ] W_{ℓ_p n_p}^{m_1 ℓ_p}(J) W_{ℓ_p n_q}^{m_1 ℓ_p}(J) , (4.15)
and
h_{m_1 ℓ_p}^{ω}(r_p, r_a) = ω - (m_1, ℓ_p)·Ω . (4.16)
The Jacobian ∂(E, L)/∂(r p , r a ) of the transformation (E, L) → (r p , r a ) appearing in equation (4.15) can straightforwardly be computed from the expressions (4.4) of E(r p , r a ) and L(r p , r a ). Finally, if ever the system's DF was not defined as F = F (J ), but rather as F = F (E, L), its gradients are immediately given by
m·∂F/∂J = m_1 Ω_1 (∂F/∂E)|_L + m_2 [ Ω_2 (∂F/∂E)|_L + (∂F/∂L)|_E ] . (4.17)
Sub-region integration
The next step of the calculation is to perform the remaining integrations over (r_p, r_a) in equation (4.14). However, because of the presence of the pole 1/h_{m_1 ℓ_p}^{ω}, such integrations have to be performed carefully. To do so, we divide the integration domain (r_p, r_a) into various subregions indexed by i. The i-th region is centred around the position (r_p^i, r_a^i) and corresponds to the square domain r_p ∈ [r_p^i - ∆r/2; r_p^i + ∆r/2] and r_a ∈ [r_a^i - ∆r/2; r_a^i + ∆r/2], where ∆r characterises the extension of the subregions. Such a truncation is illustrated in figure 4.2.1. The smaller ∆r, the more accurate the estimation of the response matrix. Within the i-th region, one may perform first-order Taylor expansions of the functions g and h from equations (4.15) and (4.16) around the centre (r_p^i, r_a^i), so as to write
g(r_p^i + ∆r_p, r_a^i + ∆r_a) ≃ a_g^i + b_g^i ∆r_p + c_g^i ∆r_a ;  h(r_p^i + ∆r_p, r_a^i + ∆r_a) ≃ a_h^i + b_h^i ∆r_p + c_h^i ∆r_a , (4.18)
where we dropped the index dependences to simplify the notations. The coefficients a_g^i, b_g^i and c_g^i (and similarly for h) are given by
a_g^i = g(r_p^i, r_a^i) ;  b_g^i = ∂g/∂r_p |_{(r_p^i, r_a^i)} ;  c_g^i = ∂g/∂r_a |_{(r_p^i, r_a^i)} . (4.19)
In order to minimise the number of evaluations of g required in the numerical implementation, the coefficients involving partial derivatives are estimated by finite differences, so that one has for instance
b_g^i = [ g(r_p^i + ∆r, r_a^i) - g(r_p^i - ∆r, r_a^i) ] / (2∆r) . (4.20)
One can now perform an approximate integration on each subregion. It takes the form
∫_i dr_p dr_a g(r_p, r_a)/h(r_p, r_a) ≃ ∫_{-∆r/2}^{∆r/2} dx_p ∫_{-∆r/2}^{∆r/2} dx_a (a_g^i + b_g^i x_p + c_g^i x_a) / (a_h^i + b_h^i x_p + c_h^i x_a + iη) = ℵ(a_g^i, b_g^i, c_g^i, a_h^i, b_h^i, c_h^i, η, ∆r) , (4.21)
where ℵ is an analytical function depending only on the coefficients obtained in the limited development of equation (4.18). It is important to note here that in order to have a well-defined integral, we added an imaginary part η > 0 to the temporal frequency ω, so that ω = ω 0 +iη. When investigating unstable modes in discs, this imaginary part η corresponds to the growth rate of the unstable modes (see Appendix 4.C).
Finally, let us note that one always has a_g^i, b_g^i, c_g^i ∈ R, as well as a_h^i, b_h^i, c_h^i ∈ R.
The effective calculation of the function ℵ is briefly presented in Appendix 4.B. Thanks to the approximation from equation (4.21), the response matrix from equation (4.14) finally becomes
M_{pq}(ω) = δ_{ℓ_p}^{ℓ_q} Σ_{m_1} Σ_i ℵ(a_g^i, b_g^i, c_g^i, a_h^i, b_h^i, c_h^i, η, ∆r) , (4.22)
where, in practice, the sum on m_1 is limited to |m_1| ≤ m_1^max. As it requires truncating the action space into various subregions, the calculation of the response matrix remains a daunting task, in particular to ensure appropriate numerical convergence. One natural way to validate this calculation is to recover known unstable modes of razor-thin discs. A seminal example is given by truncated Mestel discs (Zang, 1976; Evans & Read, 1998b; Sellwood & Evans, 2001). An illustration of the validation of the present method of computation of the response matrix, based on such discs, is presented in Appendix 4.C. Once the response matrix M has been estimated, the calculation of the dressed susceptibility coefficients 1/|D|² from equation (2.50) is immediate and only involves summations over the basis elements, whose Fourier transforms in angles have already been computed.
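The assembly of equation (4.22) can be sketched as follows. Since the analytical function ℵ of Appendix 4.B is not reproduced here, it is replaced by a brute-force quadrature of the linearised integrand over each cell; the functions g and h are left as user-supplied callables implementing equations (4.15)-(4.16), and the toy profiles in the usage example are arbitrary.

```python
import numpy as np

def aleph_numeric(ag, bg, cg, ah, bh, ch, eta, dr, n=8):
    """Brute-force stand-in for the analytical function aleph of equation (4.21):
    midpoint quadrature of (ag + bg*x + cg*y)/(ah + bh*x + ch*y + i*eta) over the cell."""
    x = (np.arange(n) + 0.5) / n * dr - dr / 2.0
    X, Y = np.meshgrid(x, x, indexing="ij")
    return np.sum((ag + bg * X + cg * Y) / (ah + bh * X + ch * Y + 1j * eta)) * (dr / n) ** 2

def response_matrix_term(g, h, rp_grid, ra_grid, dr, eta):
    """One (m_1, l_p) contribution to M_pq(omega) in equation (4.22). The callables
    g(rp, ra) and h(rp, ra) implement equations (4.15)-(4.16); only cells with rp < ra
    are physical and therefore visited."""
    M = 0.0 + 0.0j
    for rp in rp_grid:
        for ra in ra_grid:
            if ra <= rp:
                continue
            ag = g(rp, ra)
            bg = (g(rp + dr, ra) - g(rp - dr, ra)) / (2 * dr)   # finite differences, equation (4.20)
            cg = (g(rp, ra + dr) - g(rp, ra - dr)) / (2 * dr)
            ah = h(rp, ra)
            bh = (h(rp + dr, ra) - h(rp - dr, ra)) / (2 * dr)
            ch = (h(rp, ra + dr) - h(rp, ra - dr)) / (2 * dr)
            M += aleph_numeric(ag, bg, cg, ah, bh, ch, eta, dr)
    return M

# Toy usage with arbitrary placeholder profiles for g and h.
g_toy = lambda rp, ra: np.exp(-(rp + ra))
h_toy = lambda rp, ra: 0.5 - (rp + ra) / 4.0
grid = np.arange(0.1, 2.0, 0.05)
print(response_matrix_term(g_toy, h_toy, grid, grid, dr=0.05, eta=1e-4))
```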
Critical resonant lines
Once the response matrix and the dressed susceptibility coefficients have been estimated, one may proceed to the evaluation of the collisional drift and diffusion coefficients from equations (2.69) and (2.70). However, the resonance condition in the Dirac delta δ_D(m_1·Ω_1 - m_2·Ω_2) generates an additional difficulty. Let us recall the definition of the composition of a Dirac delta and a smooth function (see, e.g., Hörmander, The analysis of linear partial differential operators), which in a d-dimensional setup takes the form
∫_{R^d} dx f(x) δ_D(g(x)) = ∫_{g^{-1}(0)} dσ(x) f(x) / |∇g(x)| . (4.23)
In equation (4.23), we introduced g^{-1}(0) = {x | g(x) = 0}, the hyper-surface of (generically) dimension (d-1) defined by the constraint g(x) = 0, along with dσ(x) the surface measure on g^{-1}(0). The Euclidean norm of the gradient of g is also naturally defined as |∇g(x)| = √( |∂g/∂x_1|² + ... + |∂g/∂x_d|² ). Finally, we also assume that the resonance condition associated with the function g(J_2) = m_1·Ω_1 - m_2·Ω_2 is non-degenerate, so that ∀x ∈ g^{-1}(0), |∇g(x)| > 0. This also ensures that the dimension of the set g^{-1}(0) is (d-1).
When one considers a degenerate potential such as the harmonic or Keplerian potentials, the resonant domain fills space. Dealing with such a degeneracy requires a more involved evaluation of the Balescu-Lenard collision operator, and will be the subject of chapter 6, where we investigate in detail the secular evolution of quasi-Keplerian systems. Here, we consider the case of a razor-thin disc, so that d = 2. As a consequence, g^{-1}(0) is of dimension 1 and takes the form of a curve γ, called the critical resonant line. Generically, γ can be represented as a mapping of the form
γ : u → γ(u) = (γ 1 (u), γ 2 (u)) , (4.24)
so that the r.h.s. of equation (4.23) can be rewritten as
∫_γ dσ(x) f(x) / |∇g(x)| = ∫ du [ f(γ(u)) / |∇g(γ(u))| ] |γ'(u)| , (4.25)
where we naturally introduced |γ'(u)| = √( |dγ_1/du|² + |dγ_2/du|² ).
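Equations (4.23)-(4.25) can be checked numerically on a simple two-dimensional example, comparing a broadened Dirac delta integrated over the plane with the corresponding line integral along the critical curve (here the unit circle). The functions f and g below are arbitrary test functions chosen for the check.

```python
import numpy as np

# Numerical sanity check of equations (4.23)-(4.25) in d = 2, with
# f(x, y) = exp(-(x^2 + 2 y^2)) and g(x, y) = x^2 + y^2 - 1,
# whose critical curve g^{-1}(0) is the unit circle, gamma(u) = (cos u, sin u).
f = lambda x, y: np.exp(-(x**2 + 2.0 * y**2))
g = lambda x, y: x**2 + y**2 - 1.0

# Left-hand side: replace delta_D by a narrow Gaussian and integrate over the plane.
eps = 0.05
x = np.linspace(-2.0, 2.0, 801)
X, Y = np.meshgrid(x, x, indexing="ij")
delta_eps = np.exp(-g(X, Y)**2 / (2.0 * eps**2)) / np.sqrt(2.0 * np.pi * eps**2)
lhs = np.sum(f(X, Y) * delta_eps) * (x[1] - x[0])**2

# Right-hand side: line integral along the critical curve, where |grad g| = 2 and |gamma'(u)| = 1.
u = np.linspace(0.0, 2.0 * np.pi, 4001)
rhs = np.sum(f(np.cos(u), np.sin(u)) / 2.0) * (u[1] - u[0])

print(lhs, rhs)   # the two estimates should agree to about a percent
```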
Using once again (r p , r a ) as the representative variables of the action space, one can rewrite the drift and diffusion coefficients from equations (2.69) and (2.70) as
A_{m_1}(J_1) = Σ_{m_2} ∫ dr_p dr_a δ_D(m_1·Ω_1 - m_2·Ω_2) G_{m_1,m_2}^A(r_p, r_a) ;  D_{m_1}(J_1) = Σ_{m_2} ∫ dr_p dr_a δ_D(m_1·Ω_1 - m_2·Ω_2) G_{m_1,m_2}^D(r_p, r_a) . (4.26)
In equation (4.26), we respectively introduced the functions G_{m_1,m_2}^A(r_p, r_a) and G_{m_1,m_2}^D(r_p, r_a) as
G_{m_1,m_2}^A(r_p, r_a) = - (1/Ω_1) |∂(E, L)/∂(r_p, r_a)| 4π³ µ [ m_2·∂F/∂J_2 ] / |D_{m_1,m_2}(J_1, J_2, m_1·Ω_1)|² ;  G_{m_1,m_2}^D(r_p, r_a) = (1/Ω_1) |∂(E, L)/∂(r_p, r_a)| 4π³ µ F(J_2) / |D_{m_1,m_2}(J_1, J_2, m_1·Ω_1)|² . (4.27)
For a given value of J 1 , m 1 , and m 2 , and defining ω 1 = m 1 •Ω 1 , we introduce the critical curve γ m2 (ω 1 ) as
γ_{m_2}(ω_1) = { (r_p, r_a) | m_2·Ω(r_p, r_a) = ω_1 } . (4.28)
Relying on the formula from equation (4.23), equation (4.26) becomes
A_{m_1}(J_1) = Σ_{m_2} ∫_{γ_{m_2}(ω_1)} dσ G_{m_1,m_2}^A / |∇(m_2·Ω_2)| ;  D_{m_1}(J_1) = Σ_{m_2} ∫_{γ_{m_2}(ω_1)} dσ G_{m_1,m_2}^D / |∇(m_2·Ω_2)| , (4.29)
where the resonant contribution |∇(m_2·Ω_2)| is given by
|∇(m_2·Ω_2)| = √( (m_2·∂Ω_2/∂r_p)² + (m_2·∂Ω_2/∂r_a)² ) . (4.30)
In equation (4.30), the derivatives of the intrinsic frequencies w.r.t. r p and r a should be computed using finite differences, as was done in equation (4.20). Once the critical lines of resonance have been determined, the computation of the drift and diffusion coefficients from equation (4.29) is straightforward, and the secular diffusion flux F tot from equation (2.71) follows immediately.
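In practice, the critical line γ_{m_2}(ω_1) can be sampled column by column in r_p by root-finding in r_a, and the line integral of equation (4.29) then accumulated segment by segment, with the gradient of equation (4.30) evaluated by finite differences. The sketch below illustrates this procedure; the frequency map and the kernel G are toy placeholders standing in for the true Ω_1, Ω_2 of equations (4.2)-(4.3) and for the kernels of equation (4.27).

```python
import numpy as np
from scipy.optimize import brentq

def critical_line_integral(Omega, G, m2, omega1, rp_grid, ra_bounds, dr=1e-4):
    """Crude evaluation of one term of equation (4.29): sample the critical curve
    m2 . Omega(rp, ra) = omega1 column by column in rp, then accumulate
    Int dsigma G / |grad(m2 . Omega)| along it. Omega(rp, ra) must return
    (Omega_1, Omega_2); G(rp, ra) stands for one of the kernels of equation (4.27)."""
    res = lambda rp, ra: m2[0] * Omega(rp, ra)[0] + m2[1] * Omega(rp, ra)[1] - omega1

    points = []
    for rp in rp_grid:
        lo, hi = max(rp * (1.0 + 1e-3), ra_bounds[0]), ra_bounds[1]
        if lo >= hi or res(rp, lo) * res(rp, hi) > 0.0:
            continue                                   # no resonant ra for this rp
        points.append((rp, brentq(lambda ra: res(rp, ra), lo, hi)))

    total = 0.0
    for (rp0, ra0), (rp1, ra1) in zip(points[:-1], points[1:]):
        rp, ra = 0.5 * (rp0 + rp1), 0.5 * (ra0 + ra1)   # midpoint of the segment
        dsigma = np.hypot(rp1 - rp0, ra1 - ra0)          # segment length
        grad = np.hypot((res(rp + dr, ra) - res(rp - dr, ra)) / (2 * dr),
                        (res(rp, ra + dr) - res(rp, ra - dr)) / (2 * dr))
        total += dsigma * G(rp, ra) / grad
    return total

# Toy usage: Mestel-like frequencies evaluated at the mean radius (placeholder for the
# exact Omega_1, Omega_2) and a Gaussian kernel G.
Omega_toy = lambda rp, ra: (np.sqrt(2.0) / (0.5 * (rp + ra)), 1.0 / (0.5 * (rp + ra)))
G_toy = lambda rp, ra: np.exp(-((rp - 1.0)**2 + (ra - 1.5)**2))
print(critical_line_integral(Omega_toy, G_toy, m2=(2, -1), omega1=1.0,
                             rp_grid=np.linspace(0.2, 3.0, 200), ra_bounds=(0.2, 5.0)))
```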
Application to self-induced radial diffusion
Let us now illustrate how the previous computations of the response matrix and the Balescu-Lenard drift and diffusion coefficients may be used to interpret the diffusion features observed in the simulation of Sellwood (2012) (hereafter S12), already presented in detail in section 3.7.1. Our aim here is to recover the formation of a narrow resonant ridge, as observed in figure 3.7.5. Our general motivation is churning and blurring (Schönrich & Binney, 2009a), which are the astrophysically relevant underlying processes. Following the WKB results from section 3.7, we also aim to resolve the diffusion timescale discrepancy obtained in equation (3.97). Indeed, when considering the non-WKB Balescu-Lenard equation (2.67), we expect that the use of a non-local basis such as in equation (4.5), as well as the numerical computation of the response matrix will allow us to account for the contributions previously ignored in the WKB approximation.
Let us consider the same razor-thin disc as in S12's simulation, already presented in section 3.7.1. Here, we recall that S12 restricted potential perturbations to the harmonic sector m_φ = 2, so that we consider the same restriction on the azimuthal number m_φ. In the double sum on the resonance vectors m_1 and m_2 present in the Balescu-Lenard equation (2.67), we also assume that m_1 and m_2 belong to the restricted set (m_φ, m_r) ∈ {(2, -1), (2, 0), (2, 1)}. As previously, (2, -1) corresponds to the inner Lindblad resonance (ILR), (2, 0) to the corotation resonance (COR), and (2, 1) to the outer Lindblad resonance (OLR). See figure 3.7.4 for an illustration of these resonances. All the upcoming calculations were also performed while considering the contributions associated with m_r = ±2, which were checked to be subdominant.
Initial diffusion flux
In the previous section, we detailed how one could compute both the system's response matrix as well as the Balescu-Lenard diffusion flux in razor-thin axisymmetric discs. The calculation of the response matrix especially requires building up a grid in the (r_p, r_a)-space. Here, we consider a grid such that r_p^min = 0.08, r_a^max = 4.92, with a grid spacing given by ∆r = 0.05. When computing the response matrix in equation (4.22), the sum on m_1 was reduced to |m_1| ≤ m_1^max = 7. The basis elements were taken following Kalnajs' 2D basis, as presented in Appendix 4.A, with the parameters k_Ka = 7 and a truncation radius given by R_Ka = 5. Let us note that even though the disc considered extends up to R_max = 20, one can still safely consider a basis truncated at such a small radius to efficiently capture the system's diffusion properties. In addition, we restricted the basis elements to 0 ≤ n ≤ 8. As emphasised in equation (4.21), in order to evaluate the response matrix, one has to add a small imaginary part η to the frequency to regularise the resonant denominator. Throughout the upcoming calculations, we considered η = 10^-4 and checked that this choice had no impact on our results.
Here, the total potential ψ_M is known analytically from equation (3.85), so that, following section 4.2.1, the mapping to the angle-action coordinates is straightforward to obtain. As given by equations (4.2) and (4.3), one can compute the disc's intrinsic frequencies Ω_φ and κ on the (r_p, r_a)-grid. Once these frequencies are known, one can determine the system's critical resonant lines, introduced in equation (4.28). It is along these curves that one has to perform the integrations present in the expression (4.29) of the Balescu-Lenard drift and diffusion coefficients. Figure 4.3.1 illustrates these critical resonant lines. In this figure, one can note that by getting rid of the WKB approximation, we allow for non-local resonances between distant orbits.
Following equation (4.29), one can then compute the disc's drift and diffusion coefficients, and finally the collisional diffusion flux F_tot introduced in equation (2.71). As already done in section 3.7.2.2, because the mass of the particles scales like µ = M_tot/N, it is natural to consider the quantity N F_tot. Following the convention from equation (2.72), the direction along which individual orbits diffuse is given by the vector field -N F_tot = -(N F_tot^φ, N F_tot^r) defined over the action space (J_φ, J_r). Figure 4.3.2 illustrates this diffusion flux. In figure 4.3.2, one can already note how the diffusion vector field is concentrated in the inner region of the disc and aligned with a narrow resonant direction. Along this ridge of diffusion, one typically has F_tot^φ = -2 F_tot^r, so that the diffusion is aligned with the direction of the ILR resonance vector, given by m_ILR = (2, -1).
After having determined the collisional diffusion flux N F_tot, one can compute its divergence, to characterise the regions in action space where the disc's DF is expected to change as a result of diffusion. This is illustrated in figure 4.3.3, which represents the initial contours of N div(F_tot). Figure 4.3.3 is the main result of this section. In this figure, we see that the Balescu-Lenard formalism predicts the formation of a narrow resonant ridge in the inner regions of the disc, aligned with the direction of the ILR resonance. One also recovers that the stars which will populate the ridge originate from the base of the ridge and diffuse along the ILR direction. It is most likely that the slight shift in the position of the ridge w.r.t. S12's numerical measurement is due to the fact that the Balescu-Lenard diffusion flux was only estimated for t = 0^+, while S12's measurement was made at t = 1400. Other possible origins for this small difference could be the use of a softening length in the numerical simulations, which modifies the two-body interaction potential. This could as well be due to the difference between the ensemble averaged evolution, predicted by the Balescu-Lenard formalism, and one specific realisation, as probed in S12. Indeed, our own simulations (see section 4.4) suggest some variations in the position of the ridge between different runs. Having determined explicitly the value of N div(F_tot), let us now compare the typical timescale of diffusion predicted by the Balescu-Lenard equation to what was measured numerically in S12's simulation. This is the purpose of the next section. Finally, in order to get a better grasp of the driving mechanisms of this secular diffusion, we will investigate in section 4.3.3 the respective roles of the self-gravitating amplification and of the limitation to the tightly wound basis elements, to emphasise the crucial role played by swing amplification (illustrated in figure 3.7.14).
Figure 4.3.3: Initial contours of N div(F_tot) in action space, where the total flux has been computed with m_1, m_2 ∈ {m_ILR, m_COR, m_OLR}. Red contours, for which N div(F_tot) < 0, correspond to regions from which the orbits will be depleted during the diffusion, while blue contours, for which N div(F_tot) > 0, are associated with regions for which the value of the DF will increase as a result of diffusion. The contours are spaced linearly between the minimum and maximum of N div(F_tot). The maximum value for the positive blue contours is given by N div(F_tot) ≃ 350, while the minimum value for the negative red contours is N div(F_tot) ≃ -250. The contours in both panels are aligned with the ILR direction m_ILR = (2, -1) in the (J_φ, J_r)-plane, as illustrated with the cyan line.
Diffusion timescale
In section 3.7.3, when relying on the WKB approximation, the main disagreement was obtained in equation (3.97) when comparing the timescales of diffusion. We noted that the timescale of collisional diffusion predicted by the WKB Balescu-Lenard equation was about a factor 10^3 too slow compared to what was effectively measured in S12. This is because the WKB approximation cannot account for the swing amplification of loosely wound perturbations, which significantly boosts and hastens the diffusion in cold dynamical systems such as razor-thin stellar discs. Thanks to our explicit and quantitative estimation of the collisional diffusion flux N div(F tot ), let us now perform the same analysis, by comparing the rescaled times of diffusion ∆τ as defined in equation (3.95). Section 3.7.3 showed that the time ∆τ S12 required to observe the numerical ridge in S12's numerical simulation was ∆τ S12 ≃ 3×10^-5 . In figure 3.7.11, obtained with the WKB approximation, we noted that the maximum of the norm of the diffusion flux was given by |N div(F tot )| max ≃ 0.4, which led to a WKB rescaled time of diffusion given by ∆τ WKB ≃ 3×10^-2 . When fully accounting for the system's self-gravity, we obtained in figure 4.3.3 that the maximum of the norm of the divergence of the diffusion flux was given by |N div(F tot )| max ≃ 350, which corresponds to an enhancement of the diffusion flux by a factor of about 10^3 compared to the WKB case. Hence, one can write ∆τ BL ≃ ∆τ WKB /10^3 ≃ 3×10^-5 , where ∆τ BL stands for the time during which the Balescu-Lenard diffusion flux from figure 4.3.3 has to be considered to allow for the formation of the resonant ridge. Comparing the numerically measured rescaled time ∆τ S12 and the time ∆τ BL predicted by the Balescu-Lenard equation, one gets
∆τ S12 / ∆τ BL ≃ 1 .   (4.31)
As a consequence, the projection of the disc's response over an unbiased basis led to a very significant increase of the disc's susceptibility, and therefore to a roughly thousandfold acceleration of the disc's secular evolution. Thanks to this mechanism, we reached a very good agreement between the diffusion timescale observed in numerical simulations and the prediction from the Balescu-Lenard formalism. This quantitative match is rewarding, both from the point of view of the accuracy of the N -body integrator (symplecticity, timestep size, softening, etc.) and from the point of view of the relevance of the Balescu-Lenard formalism and the various approximations on which it relies (timescale decoupling, 1/N truncation of the BBGKY hierarchy, neglect of the collision term, see, e.g., Appendix 2.B for a discussion).
In the next section, let us now show that the main source of secular collisional evolution in S12's simulation is indeed the strong self-gravitating amplification of loosely wound perturbations, i.e. sequences of uncorrelated swing amplified spirals sourced by finite-N effects.
Why swing amplification matters
Let us now briefly show the importance of both collective effects and the completeness of the basis to capture accurately the swing amplification of loosely wound perturbations, and how it is indeed driving the formation of the diffusion features recovered in figure 4.3.3.
Turning off collective effects
In order to assess the importance of self-gravity, one can proceed to the same evaluation of the diffusion flux as in figure 4.3.3, while neglecting collective effects. This amounts to assuming that the system's response matrix becomes M = 0. The Balescu-Lenard equation (2.67) then becomes the Landau equation (2.73), where the dressed susceptibility coefficients 1/|D m1,m2 | 2 from equation (2.50) are replaced by their bare analogs |A m1,m2 | 2 from equation (2.74). In this context, the computation of the diffusion flux does not require the calculation of the response matrix. However, one must still perform the integrations along the resonant lines, as presented in section 4.2.5. We finally rely on the same numerical parameters as the ones detailed in section 4.3.1. Figure 4.3.4 illustrates the initial contours of the bare divergence N div(F tot ) obtained in this Landau limit, thanks to which one may assess the importance and the strength of the system's self-gravitating amplification. As expected for dynamically cold systems such as razor-thin stellar discs, turning off self-gravity significantly reduces the system's susceptibility and slows down its secular evolution by a factor of about 1000. One can also note that while the secular appearance of the narrow resonant ridge was obvious in the dressed diffusion from figure 4.3.3, this is much less clear in the bare case from figure 4.3.4. Let us also note that the overall shape observed in figure 4.3.4 is somewhat similar to what was obtained in figure 3.7.11, when relying on the razor-thin WKB limit of the Balescu-Lenard equation. The amplitudes of the bare divergence contours are also similar to the WKB values obtained in figure 3.7.11. As a conclusion, the comparison of figures 4.3.3 and 4.3.4 strongly emphasises how the self-gravitating amplification of loosely wound perturbations is indeed responsible for the appearance of a narrow ridge, while also drastically hastening the system's diffusion so as to ensure a rapid appearance of the ridge, as seen in the timescale comparison from equation (4.31).
Turning off loosely wound contributions
In order to emphasise once again the role played by loosely wound perturbations, let us try to reproduce the results presented in section 3.7.2.2, which relied on the razor-thin WKB limit of the Balescu-Lenard equation. Indeed, using the generic numerical methods presented in section 4.2, one can mimic these WKB results by carefully choosing the basis elements introduced in equation (4.5). Recall that the basis elements depend on two indices: an azimuthal index ℓ and a radial one n. Because S12's simulation was restricted to the harmonic sector m φ = 2, we only consider basis elements associated with ℓ = 2. One can note that as the radial index n increases, the basis elements get more and more radially wound. One can note from figure 4.3.5 that the larger n, the faster the radial variation of the basis elements, i.e. the more tightly wound the basis elements. As a consequence, in order to get rid of the loosely wound basis elements which can get swing amplified, let us perform a truncation on the radial indices considered. Let us define the diffusion flux N div(F WKB tot ) computed in the same manner as the total dressed flux N div(F tot ), except that here we restrict ourselves to basis elements such that n cut ≤ n ≤ n max , with n cut = 2 and n max = 8. By keeping only the sufficiently wound basis elements, our aim is to consider the same contributions as the ones captured by the razor-thin WKB limit. Figure 4.3.6 illustrates the initial contours of N div(F WKB tot ). One can first note that the amplitudes of the contours in figure 4.3.6 are of the same order as the WKB contours from figure 3.7.11. The presence in figure 4.3.6 of positive blue contours is also in qualitative agreement with a secular heating of the disc leading to an increase in the radial action J r . However, these contours do not exhibit the formation of a narrow resonant ridge as was predicted in figure 4.3.3, when accounting as well for loosely wound contributions.
[Figure 4.3.6: Contours of N div(F WKB tot ), corresponding to the dressed diffusion flux when loosely wound contributions are not accounted for, following the same conventions as in figure 4.3.3. In order to restrict ourselves only to tightly wound contributions, we did not consider the contributions associated with the basis elements of radial index n = 0, 1, as these elements are loosely wound (see figure 4.3.5). The contours are spaced linearly between the minimum and the maximum of N div(F WKB tot ). The maximum value for the positive blue contours is given by N div(F WKB tot ) ≃ 0.7, while the minimum value for the negative red contours reads N div(F WKB tot ) ≃ -4.5. This figure should be compared to figure 3.7.11, which was obtained by relying on the razor-thin WKB limit of the Balescu-Lenard equation.]

As a conclusion, figures 4.3.4 and 4.3.6 illustrate how the strong self-gravitating amplification of loosely wound perturbations (i.e. swing amplification) is indeed responsible for both the appearance of a narrow ridge of resonant orbits and its rapid timescale of appearance. Having emphasised the relevance of the Balescu-Lenard formalism to describe the secular dynamics of razor-thin discs, let us investigate in the next section some additional properties of these long-term evolutions (already advertised in figure 3.7.6), by relying on our own N -body simulations.
Comparisons with N -body simulations
In order to investigate in detail the scalings of the system's evolution w.r.t. the number of particles or the active fraction of the disc, we now resort to our own N -body simulations. We present in section 4.4.1 the characteristics of the N -body code that was used, while sections 4.4.2 and 4.4.3 focus on the respective dependences of the system's response on the number of particles and on the disc's active fraction. Finally, in section 4.4.4, we illustrate how the very late evolution of the system exhibits an out-of-equilibrium phase transition.
An N -body implementation
When simulating the evolution of self-gravitating discs, one should pay particular attention to the sampling of the initial conditions, in order to ensure that the disc is initially in a state of collisionless equilibrium. In the present context, this requires being able to sample the particles' positions and velocities from the DF of equation (3.91). We do not repeat here the sampling strategy that was used, whose details can be found in Appendix E of Fouvry et al. (2015c). One should note that we relied on a random sampling of the DF. This does not correspond to a quiet start sampling (Sellwood, 1983), which would have allowed for a reduction of the initial shot noise within the disc.
Once the particles have been sampled, their positions and velocities are evolved using a straightforward particle-mesh N -body code with a single-timestep leapfrog integrator (see Binney & Tremaine 2008, §3.4.1). As was done in S12, the potential in which the particles evolve is decomposed into two components: (i) an axisymmetric static contribution ψ M (R) from the unperturbed Mestel disc, as introduced in equation (3.85); (ii) a non-axisymmetric contribution δψ(R, φ), which grows as perturbations develop in the disc. Thanks to this splitting, we avoid the difficulties associated with the rigid part of the potential, which is not due to the sampled DF but to the tapering functions and to the disc's active fraction.
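As a minimal sketch of this scheme (not the actual code used here), a kick-drift-kick leapfrog step with the split acceleration could look as follows in Python, where grad_delta_psi is a placeholder for the mesh-interpolated force of the perturbation δψ:

import numpy as np

V0 = 1.0  # placeholder circular speed of the Mestel disc

def acc_mestel(pos):
    """Analytic acceleration -grad(psi_M) from the static axisymmetric Mestel potential; pos has shape (N, 2)."""
    r2 = np.sum(pos**2, axis=1, keepdims=True)
    return -V0**2 * pos / r2

def leapfrog_kdk(pos, vel, dt, grad_delta_psi):
    """One single-timestep kick-drift-kick step under psi_M + delta_psi."""
    acc = acc_mestel(pos) - grad_delta_psi(pos)    # total acceleration = -grad(psi_M + delta_psi)
    vel = vel + 0.5*dt*acc                         # half kick
    pos = pos + dt*vel                             # drift
    acc = acc_mestel(pos) - grad_delta_psi(pos)    # recompute the forces at the new positions
    vel = vel + 0.5*dt*acc                         # half kick
    return pos, vel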
Here we calculate δψ using a cloud-in-cell interpolation (see Binney & Tremaine 2008, §2.9.3) of the particles' masses onto an N mesh ×N mesh mesh of square cells of size ∆x mesh . We then filter the resulting density field ρ(x, y) to isolate the disc's response, as discussed below, before relying on the traditional "doubling-up" procedure to determine the potential δψ at the cell vertices. The contribution of δψ to each particle's acceleration is finally obtained using the same cloud-in-cell interpolation scheme.
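A possible (simplified) transcription of this cloud-in-cell deposit in Python is given below; the periodic index wrap at the mesh edges is only a convenience of the sketch, and the mesh size and extent are placeholder values:

import numpy as np

def cic_deposit(x, y, mass, n_mesh=120, r_max=20.0):
    """Cloud-in-cell assignment of particle masses onto an (n_mesh x n_mesh) square mesh."""
    dx = 2.0*r_max/n_mesh                              # cell size
    fx = (x + r_max)/dx - 0.5                          # positions in units of cells (cell centres)
    fy = (y + r_max)/dx - 0.5
    ix, iy = np.floor(fx).astype(int), np.floor(fy).astype(int)
    tx, ty = fx - ix, fy - iy                          # fractional offsets in [0, 1)
    rho = np.zeros((n_mesh, n_mesh))
    for di, wx in ((0, 1.0 - tx), (1, tx)):
        for dj, wy in ((0, 1.0 - ty), (1, ty)):
            np.add.at(rho, ((ix + di) % n_mesh, (iy + dj) % n_mesh), mass*wx*wy)
    return rho/dx**2                                   # surface density on the mesh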
Similarly to S12's restriction, when computing the density, we add a filtering scheme to account only for the m φ = 2 response. This contribution is obtained by calculating
ρ_2(r) = 1/(2π) ∫ dφ ρ(r cos(φ), r sin(φ)) e^{-2iφ} .   (4.32)
This calculation is performed at each timestep, immediately after the cloud-in-cell assignment of mass to the mesh. We then impose on the mesh the new mass distribution

ρ(x_k , y_k ) = ρ_2(r_k ) e^{2iφ_k} ,   (4.33)

where (r_k , φ_k ) are given by (x_k , y_k ) = (r_k cos(φ_k ), r_k sin(φ_k )). To compute ρ_2(r), we rely on a brute-force evaluation of equation (4.32) on a series of N ring radial rings with a spacing ∆r ring ≪ ∆x mesh , using the trapezium rule with N φ = 720 points in φ for the angular dependence. While being very simple, this N -body code aims at reproducing as closely as possible the details of S12's implementation. One should still underline two important differences: (i) S12 relies on a polar mesh to compute δψ, while we use here a cartesian mesh with an m φ = 2 filtering of the density field; (ii) S12 uses a block timestep scheme, while we use here a simpler single-timestep scheme.
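The filtering of equations (4.32) and (4.33) can be sketched as follows; the nearest-cell sampling of the mesh density on the rings is a simplification of the actual interpolation, and the reconstructed map includes the conjugate m φ = -2 term so that the mesh density remains real:

import numpy as np

def m2_filter(rho_mesh, n_mesh=120, r_max=20.0, n_ring=1000, n_phi=720):
    """Keep only the m_phi = 2 harmonic of the mesh density (equations (4.32)-(4.33))."""
    dx = 2.0*r_max/n_mesh
    xc = -r_max + (np.arange(n_mesh) + 0.5)*dx                  # cell-centre coordinates
    r_rings = np.linspace(dx, r_max, n_ring)
    phi = np.linspace(0.0, 2.0*np.pi, n_phi, endpoint=False)
    # sample rho on each ring (nearest cell, for brevity) and project onto exp(-2 i phi)
    ix = np.clip(((r_rings[:, None]*np.cos(phi) + r_max)/dx - 0.5).round().astype(int), 0, n_mesh - 1)
    iy = np.clip(((r_rings[:, None]*np.sin(phi) + r_max)/dx - 0.5).round().astype(int), 0, n_mesh - 1)
    rho2 = np.sum(rho_mesh[ix, iy]*np.exp(-2j*phi), axis=1)/n_phi   # trapezium rule, periodic integrand
    # rebuild a real, purely m_phi = 2 density field on the mesh
    X, Y = np.meshgrid(xc, xc, indexing="ij")
    R, PHI = np.hypot(X, Y), np.arctan2(Y, X)
    rho2_R = np.interp(R, r_rings, rho2.real) + 1j*np.interp(R, r_rings, rho2.imag)
    return 2.0*np.real(rho2_R*np.exp(2j*PHI))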
The results presented thereafter were obtained with a timestep ∆t = 10^-3 R i /V 0 (where R i and V 0 were introduced in section 3.7.1), with a mesh that extends up to ±R max = 20R i and N mesh = 120 cells, so that ∆x mesh = R i /3. To filter the potential to the m φ = 2 harmonic sector, we used N ring = 1000 radial rings and N φ = 720 points in the azimuthal direction, so that ∆r ring = 2R i /100. The computation of the potential from the density was performed via a Fourier transform on the mesh, with a softening length ε = R i /6, comparable to the Plummer softening ε = R i /8 used in S12. We checked that the results are not significantly changed when the timestep or the mesh size is divided by 2. We detail in Appendix 4.C how this N -body implementation was validated by recovering known unstable modes of truncated Mestel discs (Zang, 1976; Evans & Read, 1998b; Sellwood & Evans, 2001).
Scaling with N
The Balescu-Lenard equation (2.67) predicts the system's mean secular collisional evolution, in the sense of it being averaged over different realisations. Therefore, in order to investigate such dynamics via N -body simulations, we run multiple simulations for the same number of particles, which only differ initially by the sampling of the initial conditions. For a given number of particles, we then perform an ensemble average of the different evolution realisations. Having estimated this mean evolution, we are in a position to compare it with the Balescu-Lenard predictions.
In order to study the scaling w.r.t. the number of particles of these various numerical simulations, one should define a function quantifying the "amount of diffusion" undergone by the system and compare it with the predictions of the Balescu-Lenard formalism. An additional difficulty also comes from the statistical nature of the initial sampling. Indeed, as only N stars are being sampled, the system's initial DF will necessarily fluctuate w.r.t. the smooth underlying DF as a result of the unavoidable Poisson shot noise. Let us insist on the fact that these fluctuations directly originate from the initial sampling and are not as such specific to the collisional diffusion process described by the Balescu-Lenard equation. It is therefore important to disentangle these two effects. To do so, let us define the function h(t, N ) as
h(t, N) = ⟨ h_i(t, N) ⟩ ,   (4.34)

where we introduced the operator ⟨ · ⟩ as the ensemble average over different realisations with the same number of particles, indexed by i. In the upcoming applications, it is approximated as the arithmetic average over p = 32 different realisations, so that ⟨ · ⟩ = (1/p) ∑_i ( · ). In equation (4.34), we introduced the distance function h_i(t, N) as
h_i(t, N) = ∫ dJ [ F_i(t, J, N) - ⟨F(t = 0, J, N)⟩ ]² .   (4.35)
In equation (4.35), we noted as F_i(t, J, N) the normalised DF at time t of the i-th simulation with a number N of particles, while ⟨F(t = 0)⟩ stands for the system's ensemble-averaged DF at the initial time. Defined in such a way, the function h_i quantifies the "distance" between the initial mean DF ⟨F(t = 0)⟩ and the evolved DF F_i(t). When interested in the early-time behaviour of the function h, one can perform a Taylor expansion as
h(t, N) ≃ h_0(N) + h_1(N) t + h_2(N) t²/2 ,   (4.36)
where the coefficients h 0 , h 1 and h 2 only depend on N . Starting from equation (4.35), these coefficients can easily be estimated. One has
h_0(N) = ∫ dJ ⟨ [F - F_0]² ⟩ ,   (4.37)

where we used the shortened notations F_0 = ⟨F(t = 0, J, N)⟩ and F = F(t = 0, J, N). One can note that this coefficient only depends on the properties of the initial sampling of the DF, and not on its dynamics.
As the discrete sampling obeys Poisson statistics, one can immediately rewrite equation (4.37) as

h_0(N) = α_0/N ,   (4.38)
where the constant α 0 is independent of N . One can similarly compute the coefficient h 1 (N ), which reads
h_1(N) = 2 ∫ dJ ⟨ [F - F_0] ∂F/∂t ⟩ ,   (4.39)

where the time derivative ∂F/∂t is evaluated at t = 0. The two terms appearing in the crossed term from equation (4.39) have two different physical origins. The first one, F - F_0, is associated with the initial sampling of the DF, while the second one, ∂F/∂t, is driven by the system's dynamics. Assuming that the sampling and the system's dynamics are uncorrelated, one can write ⟨[F - F_0] ∂F/∂t⟩ = ⟨F - F_0⟩ ⟨∂F/∂t⟩ = 0, i.e. one has h_1(N) = 0. Finally, one can compute the coefficient h_2(N) given by
h_2(N) = 2 ∫ dJ [ ⟨(∂F/∂t)²⟩ + ⟨[F - F_0] ∂²F/∂t²⟩ ] ,   (4.40)
where ∂²F/∂t² is also evaluated at t = 0. Using the same uncorrelation argument as in equation (4.39), one can get rid of the second term in the r.h.s. of equation (4.40). In addition, let us assume that the fluctuations of ∂F/∂t between realisations are small, so that ⟨(∂F/∂t)²⟩ ≃ ⟨∂F/∂t⟩². Equation (4.40) then becomes

h_2(N) = 2 ∫ dJ ⟨∂F/∂t⟩² .   (4.41)
The dependence of ⟨∂F/∂t⟩ on N is directly given by the Balescu-Lenard equation (see equation (3.94)), which gives the scaling
h_2(N) = α_2/N² ,   (4.42)
where the amplitude α 2 is independent of N . Let us insist on the fact that such a scaling is a prediction of the Balescu-Lenard formalism. Should the secular evolution observed in S12 be a Vlasov-only evolution, i.e. a collisionless evolution, one would not have such a scaling.
Let us now compare the scalings from equations (4.38) and (4.42) with the ones measured in N -body simulations. We consider numbers of particles given by N ∈ {8, 12, 16, 24, 32, 48, 64}×10^5 , and for each number of particles, we perform 32 different simulations with different initial conditions, using the N -body implementation presented in section 4.4.1. For each value of N , one may first study the behaviour of the function t → h(t, N), as illustrated in figure 4.4.1. Once these functions are computed, one can fit parabolas to them, following equation (4.36), to determine the behaviours of N → h_0(N), h_2(N). The dependence of these two coefficients on N is illustrated in figure 4.4.2. In the left panel of figure 4.4.2, we recover the scaling h_0(N) ∝ 1/N obtained in equation (4.38). Such a dependence is fully due to the Poisson shot noise in the initial conditions. From the right panel of figure 4.4.2, we obtain the scaling h_2(N) ∝ N^-1.91 . This has to be compared to the prediction h_2(N) ∝ N^-2 from equation (4.42) derived from the Balescu-Lenard equation. Given the small number of realisations considered here and the various uncertainties in the fits, the measurements and the predictions appear to be in good agreement. Such a scaling of h_2(N) with N therefore confirms the relevance of the Balescu-Lenard framework. It demonstrates that the secular evolution of S12's stable Mestel disc is the result of a collisional diffusion seeded by the system's discrete nature and the effects of amplified distant resonant encounters.
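The measurement itself reduces to two simple fits; a Python sketch, in which the snapshot times, the averaged h(t, N) curves and the list of N values are placeholder inputs, could read:

import numpy as np

def fit_h_coefficients(t, h):
    """Fit h(t) ~ h0 + h1 t + h2 t^2/2 on early-time snapshots (equation (4.36))."""
    c2, c1, c0 = np.polyfit(t, h, 2)
    return c0, c1, 2.0*c2                                   # (h0, h1, h2)

def power_law_exponent(N_values, coeff_values):
    """Log-log slope of coeff(N): expected -1 for h0 (eq. (4.38)) and -2 for h2 (eq. (4.42))."""
    slope, _ = np.polyfit(np.log(np.asarray(N_values, dtype=float)), np.log(coeff_values), 1)
    return slope

# Usage sketch (t_snap, h_curves and N_values stand for the measured data):
# h0s, h2s = zip(*[(fit_h_coefficients(t_snap, h)[0], fit_h_coefficients(t_snap, h)[2]) for h in h_curves])
# print(power_law_exponent(N_values, h0s), power_law_exponent(N_values, h2s))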
In order to investigate this collisional scaling in more detail, let us now describe another measurement, which allows us to get rid of the pollution by the Poisson shot noise present in equation (4.38). Indeed, one of the difficulties of the previous measurements was to disentangle the contributions from the Poisson shot noise, as in h_0(N) from equation (4.38), from the ones associated with the collisional diffusion itself.

[Figure 4.4.1: Behaviour of the function t → h(t, N). The function is averaged over 32 different realisations with particle numbers N ∈ {8, 12, 16, 24, 32, 48, 64}×10^5 . To compute h(t, N), the action-space domain (J φ , J r ) ∈ [0; 2.5]×[0; 0.2] was binned into 100×50 regions. The values of h(t, N) have also been uniformly renormalised to clarify this representation. Dots correspond to the snapshots of the simulations for which h(t, N) was effectively computed, while the lines correspond to second-order fits, following equation (4.36). As expected, the larger the number of particles, the less noisy the simulation and the smaller h(t, N).]

To do so, let us define the quantity V(t, N) as
V(t, N) = ∫ dJ χ[ ⟨F(t, J, N)⟩ - ⟨F(t = 0, J, N)⟩ < C_V ] ,   (4.43)
where we introduced a threshold C_V < 0, as well as the characteristic function χ[x < C_V], equal to 1 if x < C_V and to 0 otherwise. The function V(t, N) therefore measures the volume in action space of the regions depleted of particles (as C_V < 0), i.e. the regions for which the mean DF value has changed by more than |C_V|. Provided that C_V is chosen to be sufficiently large in absolute value, such a definition allows us not to be polluted anymore by the Poisson sampling shot noise. The scaling of V(t, N) at initial times is straightforward to obtain (see equation (3.94)) and reads
V(t, N) ≃ (t/N) V_0 ,   (4.44)
where V_0 is a constant independent of N . As a consequence, for a fixed value of N , one expects a linear time dependence of the function t → V(t, N), as illustrated in the left panel of figure 4.4.3. Finally, the dependence of this linear growth on the number of particles allows us to test directly the collisional 1/N scaling predicted by equation (4.44).
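For completeness, a sketch of this estimator, acting on the binned ensemble-averaged DF (the arrays and bin widths being placeholder inputs), is:

import numpy as np

def depleted_volume(F_mean_t, F_mean_0, dJphi, dJr, C_V):
    """V(t, N) from equation (4.43): action-space volume where the mean DF has dropped below C_V < 0."""
    assert C_V < 0.0
    depleted = (F_mean_t - F_mean_0) < C_V     # characteristic function chi[x < C_V]
    return np.count_nonzero(depleted)*dJphi*dJr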
Scaling with ξ
Following the previous study of the scaling of the system's response w.r.t. the number of particles, let us now study the impact of the disc's active fraction ξ on the properties of the diffusion. Indeed, one of the strengths of the Balescu-Lenard formalism is to capture the effect of gravitational polarisation via the response matrix and the dressed susceptibility coefficients. Following S12, the simulations considered in the previous sections were all performed with an active fraction ξ = 0.5: only one half of the total potential was effectively generated self-consistently by the stars, while the rest was associated with the contribution from a static and rigid halo. By increasing the disc's active fraction, one can strengthen the self-gravitating amplification, and therefore hasten the diffusion, while still remaining in a collisional regime of evolution. If one keeps increasing ξ even further, the system eventually becomes dynamically unstable and its dynamics is then driven by the collisionless Vlasov equation. See section 4.4.4 for a detailed discussion of the transition between these two regimes of diffusion: slow collisional and unstable collisionless. Provided that ξ is not too large, the dynamics of the disc is still driven by the Balescu-Lenard equation, and the scaling of h_2 with N obtained in equation (4.42) remains the same. However, because of the increased self-gravity, the prefactor α_2(ξ) from equation (4.42) increases as the system becomes more responsive. Let us now investigate the dependence of α_2 on ξ, which can both be measured via direct N -body simulations, following section 4.4.2, and be predicted by the Balescu-Lenard formalism, following section 4.3.1. Repeating the measurements of section 4.4.2 for an active fraction ξ = 0.6, one recovers in this regime the fact that the function t → h(t, N) initially behaves like a parabola, as obtained in equation (4.36). One also recovers the expected scalings of the functions N → h_0(N), h_2(N), associated respectively with the initial Poisson shot noise and with the collisional scaling of the Balescu-Lenard equation. Thanks to these fits, let us now study the ratio α_2(ξ = 0.6)/α_2(ξ = 0.5) to estimate the amplitude of the associated polarisations. The fits N → h_2(N) from figures 4.4.2 and 4.4.4 allow us to write log(h_2(N)) ≃ 6.40 - 1.91 (log(N) - 3.12) for ξ = 0.5, and log(h_2(N)) ≃ 9.76 - 1.84 (log(N) - 3.12) for ξ = 0.6, where we shifted the intercepts of the fits to correspond to the centre of the considered domain log(N) ∈ [log(8); log(64)]. As a consequence, from the N -body realisations, one obtains the ratio

[ α_2(0.6)/α_2(0.5) ]_NB ≃ exp(9.76 - 6.40) ≃ 29 .   (4.47)
Let us now compare this measurement from numerical simulations to the same measurement estimated via the Balescu-Lenard formalism. Equation (4.41) immediately gives us this ratio as
α_2(ξ_1)/α_2(ξ_2) = ∫ dJ [div(F^ξ1 tot)]² / ∫ dJ [div(F^ξ2 tot)]² ,   (4.48)
where F^ξ tot stands for the initial Balescu-Lenard diffusion flux from equation (2.72) for an active fraction ξ. The value of α_2(ξ = 0.5) can be determined from figure 4.3.3, while we illustrate in figure 4.4.5 the secular diffusion flux predicted for ξ = 0.6. Comparing figures 4.3.3 and 4.4.5, we note that they both exhibit similar diffusion features, but the one associated with the larger value of ξ predicts a faster diffusion.

[Figure 4.4.5: Contours of N div(F tot ) for an active fraction ξ = 0.6. The contours are spaced linearly between the minimum and the maximum of N div(F tot ). The maximum value for the positive blue contours corresponds to N div(F tot ) ≃ 4200, while the minimum value for the negative red contours reads N div(F tot ) ≃ -3200. Increasing the active fraction of the disc increases its susceptibility, so that the norm of N div(F tot ) gets larger and the secular diffusion is hastened.]

Thanks to the contours from both figures 4.3.3 and 4.4.5, we may estimate the ratio of the coefficients α_2. In order to focus on the regions associated with the resonant ridge, the integrals over J in equation (4.48) are performed for J φ ∈ [0.5; 1.2] and J r ∈ [0.06; 0.15]. These Balescu-Lenard predictions lead to the measurement

[ α_2(0.6)/α_2(0.5) ]_BL ≃ 42 .   (4.49)
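Once the divergence of the flux is tabulated on an action-space grid for the two active fractions, this ratio takes a few lines of Python (the gridded maps and the meshed action coordinates are placeholder inputs):

import numpy as np

def alpha2_ratio(divF_a, divF_b, Jphi, Jr, window=((0.5, 1.2), (0.06, 0.15))):
    """Ratio alpha_2(xi_a)/alpha_2(xi_b) from equation (4.48), restricted to the ridge region."""
    (jp_min, jp_max), (jr_min, jr_max) = window
    mask = (Jphi >= jp_min) & (Jphi <= jp_max) & (Jr >= jr_min) & (Jr <= jr_max)
    return np.sum(divF_a[mask]**2)/np.sum(divF_b[mask]**2)   # the common cell area cancels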
Despite the noise associated with considering a much more sensitive disc with ξ = 0.6, the ratios of α_2 measured either via direct N -body simulations, as in equation (4.47), or via the Balescu-Lenard formalism, as in equation (4.49), are of the same order of magnitude. We recover here a crucial strength of the Balescu-Lenard formalism which, because it accounts for collective effects, captures the relative effects of the disc's susceptibility on the characteristics of the secular collisional diffusion. This is essential for dynamically cold systems such as razor-thin discs.
Secular phase transitions
In section 4.3.1, we computed the Balescu-Lenard prediction for the initial diffusion flux F tot (t = 0 + ). The direct N -body simulations presented in section 4.4.2 also allowed us to check the appropriate scaling of the system's diffusion w.r.t. the number of particles and its active fraction. In order to probe the late secular evolution of the system via the Balescu-Lenard formalism, one has to integrate the Balescu-Lenard equation forward in time. Because this diffusion is self-induced, i.e. because the drift and diffusion coefficients are self-consistent with the system's DF so that the Balescu-Lenard equation is an integro-differential equation, this integration forward in time has to be made iteratively, by updating the drift and diffusion coefficients as the system's DF changes. Such difficult iterations are beyond the scope of the present chapter, but we refer to Appendix 6.C for an illustration of how the Balescu-Lenard diffusion equation may be rewritten as a stochastic Langevin equation, for which numerical integrations appear simpler. Now that they have been validated at t = 0 + , the direct N -body simulations may be used to investigate the late-time evolution of the system.
The Balescu-Lenard equation (2.67) describes the long-term evolution of a discrete stable quasistationary self-gravitating inhomogeneous system. As already underlined in the derivation of this kinetic equation (see, e.g., equation (2.128)), for such slow evolutions to occur, it is mandatory for the system to be dynamically stable w.r.t. the collisionless Vlasov dynamics. This also has to remain valid as the system diffuses through a series of quasi-stationary stable equilibria. The Balescu-Lenard equation being associated with a kinetic development at order 1/N , let us note that such an equation is valid for secular timescales of order N t D , where t D is the dynamical time.
When considering long-term evolutions, the Balescu-Lenard dynamics may lead to two distinct outcomes. On the one hand, if the system remains dynamically stable during its entire evolution, the Balescu-Lenard equation will drive the system towards a 1/N -stationary state. After having reached such a stationary state, the 1/N effects vanish, and the system's dynamics is then driven by 1/N² effects, which are not accounted for in the Balescu-Lenard equation. On the other hand, the Balescu-Lenard equation may also lead, on secular timescales, to a dynamical destabilisation of the system. As a result of the long-term resonant effects of the collisional diffusion, the irreversible changes in the system's DF may make the system unstable. After a slow, stable, and quasi-stationary evolution sourced by collisional 1/N effects, the system may at some point become dynamically unstable w.r.t. the collisionless dynamics, which then becomes the main driver of the system's later evolution. This was already suggested in Sellwood (2012), who observed an out-of-equilibrium phase transition between the 1/N Balescu-Lenard collisional evolution and the collisionless Vlasov evolution.
Relying on the N -body simulations presented in section 4.4.2, let us now illustrate in detail this phase transition. In order to capture the change of evolution regime within the disc (collisional vs. collisionless), let us define for a given number N of particles the quantity Σ 2 (t, N ) as
Σ_2(t, N) = ⟨ | ∫_{R inf}^{R sup} dR R ∫ dφ Σ star (t, N, R, φ) e^{-i2φ} | ⟩ = ⟨ | µ ∑_n e^{-i2φ_n} | ⟩ ,   (4.50)
where, similarly to equation (4.34), ⟨ · ⟩ stands for the ensemble average over the 32 different realisations with the same number of particles. The radii considered here are restricted to the range R ∈ [R inf ; R sup ] = [1.2; 5], where the active surface density of the disc is only weakly affected by the inner and outer tapers. Finally, to obtain the second equality in equation (4.50), we replaced the active surface density Σ star by a discrete sum over all the particles of the system. Here, the sum on n is restricted to particles whose radius lies between R inf and R sup , and we noted their azimuthal phase as φ_n. The function Σ_2 aims at quantifying the strength of non-axisymmetric features within the disc, and should therefore be seen as a way to estimate how much the disc has evolved. During the initial Balescu-Lenard collisional evolution of the system, one expects low values of Σ_2, as such an evolution is an orbit-averaged evolution, i.e. we assumed that F = F(J, t), so that the system's mean DF should not depend on the angles θ. During this first slow collisional phase, Σ_2 still remains non-zero, because of both the unavoidable Poisson sampling shot noise and the fact that the disc sustains transient spiral waves driving its secular evolution. On the long term, this collisional evolution leads to a destabilisation of the system. The dynamical drivers of the system's evolution are then no longer discrete distant resonant collisional encounters, but exponentially growing collisionless dynamical instabilities. Because of the appearance of strong non-axisymmetric features, in this collisionless regime one expects much larger values of Σ_2. Figure 4.4.6 illustrates this transition between the two regimes of diffusion, thanks to the behaviour of the function t → √N Σ_2(t, N). This phase transition can also easily be seen by looking directly at the disc's active surface density Σ star during these two regimes. This is illustrated in figure 4.4.7, where one notices that during the late-time collisionless evolution, the disc becomes strongly non-axisymmetric. In order to illustrate this change of dynamical regime, Sellwood (2012) fitted unstable growing modes to the disc in this regime, to effectively recover the presence of a dynamical instability. Right after the instability settles in, S12 noted that the pattern speed of the spiral response is consistent with the ILR frequency associated with the ridge. At this stage, one could also rely on the matrix method from Appendix 4.C to show that a perturbed DF with a sufficiently large ridge is indeed associated with an unstable configuration (De Rijcke & Voulis, 2016). In conclusion, let us emphasise that an isolated stellar disc, fully stable in the mean sense, will, given time, drive itself through two-point resonant correlations towards dynamical instability. This illustrates the extent to which cold quasi-stationary systems such as stellar discs are truly secularly metastable.
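From a single snapshot, Σ_2 is a one-line sum over the particles; a sketch following the discrete form of equation (4.50) (the particle arrays and the individual mass µ being placeholder inputs) is:

import numpy as np

def sigma2(x, y, mu, r_inf=1.2, r_sup=5.0):
    """|Sigma_2| for one realisation, following the discrete sum in equation (4.50)."""
    r, phi = np.hypot(x, y), np.arctan2(y, x)
    sel = (r > r_inf) & (r < r_sup)            # radii weakly affected by the tapers
    return np.abs(mu*np.sum(np.exp(-2j*phi[sel])))

# Sigma_2(t, N) then follows by averaging this estimate over the available realisations.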
Conclusion
Most astrophysical discs were formed through dissipative processes and have typically evolved over many dynamical times. When isolated, long-range gravitational interactions allow their components to interact effectively through resonances, which may secularly drive discs toward more likely equilibria. These processes are captured by recent extensions of the kinetic theory of self-gravitating systems rewritten in angle-action variables, culminating in the inhomogeneous Balescu-Lenard equation (2.67). Solving these equations provides astronomers with a new opportunity to quantify in detail the secular dynamics of these systems. While numerically challenging, the computation of the diffusion fluxes predicted by these kinetic equations is, as demonstrated in this chapter, within reach of an extension of the matrix method (Kalnajs, 1976), allowing for an estimation of the strength of the self-gravitating orbital response.
In this chapter, we estimated the drift and diffusion coefficients of the inhomogeneous Balescu-Lenard equation in the context of razor-thin stellar discs. The details of the disc's self-gravity were taken into account via the matrix method, as detailed in section 4.2.3. This method was validated on unstable Mestel discs in Appendix 4.C. In section 4.3, we computed the divergence of the self-induced diffusion flux in action space, N div(F tot ), and recovered in figure 4.3.3 the diffusion features observed in the direct numerical simulations from Sellwood (2012). We recovered as well in equation (4.31) an agreement of the diffusion timescale between the Balescu-Lenard prediction and the N -body measurements, which, as shown in section 4.3.3, is permitted by the significant diffusion boost offered by swing amplification. Let us emphasise that these computations are the first exact calculations of the Balescu-Lenard drift and diffusion coefficients in the context of inhomogeneous multi-periodic systems. These computations capture the essence of the self-induced evolution (nature), which should compete with environmentally-induced evolution (nurture). They also demonstrate without ambiguity that the Balescu-Lenard equation is the master equation capturing consistently the self-induced churning and blurring (Schönrich & Binney, 2009a). In addition, the multi-component Balescu-Lenard equation (2.76) can also account for the joint evolutions of multiple populations, e.g., stars and giant molecular clouds. The presence of a spectrum of masses can have a significant effect on the system's secular dynamics, as detailed in section 5.7.6 for thickened discs.
In section 4.4, we compared these predictions to idealised numerical simulations of stable razor-thin Mestel discs sampled by pointwise particles and evolved for hundreds of dynamical times. Relying on ensemble averages of these N -body runs, we identified a clear signature of the Balescu-Lenard process in the scaling of the diffusion features with N and ξ, the active fraction of mass within the disc. We also emphasised how, at late times, the collisional diffusion features slowly appearing in the disc's DF eventually lead to a destabilisation of the disc. As originally identified in Goldreich & Lynden-Bell (1965); Julian & Toomre (1966) by studying their linear response, the susceptibility of cold self-gravitating discs plays a crucial role in their secular evolution, as it appears squared in the Balescu-Lenard equation, which significantly boosts the effects induced by the system's discreteness. In these early works, the relevance of the susceptibility was shown via the study of the discs' linear response. Here, we have shown how central this susceptibility is to discs' secular response.
The various illustrations presented in this chapter therefore offer us a qualitative and quantitative understanding of the secular diffusion processes induced by discreteness effects occurring in galactic discs. The agreement in amplitude, position, width, and scaling of the induced orbital signatures suggests that the secular evolution of such razor-thin stellar discs is indeed driven by discrete resonances, as captured by the Balescu-Lenard equation. Let us finally emphasise that such an evolution does not depend on the initial phases of the disc's constituents, since the matching Balescu-Lenard fluxes are phase-averaged. The Balescu-Lenard collisional equation therefore reproduces the initial orbital evolution of self-gravitating discs driven by discrete two-point correlations, beyond the mean field approximation.
Future works
We have seen in this chapter how the inhomogeneous Balescu-Lenard equation was indeed able to accurately capture the diffusion features observed in direct N -body simulations of razor-thin discs, once one accounts correctly for the system's self-gravity. It appears as particularly important in this context, as razor-thin stellar discs are cold dynamical systems, within which the swing amplification of loosely wound fluctuations can be very large.
A first direct follow-up of this work would be to integrate forward in time the Balescu-Lenard equation, in order to estimate the system's diffusion flux at some later time. Because of the self-consistency of the Balescu-Lenard equation, such an integration remains technically difficult. One possible approach would be to rely on its Langevin rewriting (see Appendix 6.C), which describes the diffusion of individual particles rather than of the system's DF as a whole. This integration would especially allow for a more detailed investigation of the process of secular phase transition during which the system becomes at some point unstable for the collisionless dynamics.
As was emphasised in Appendix 4.C, N -body simulations require the use of an additional parameter, the softening length ε, which leads to a modification of the pairwise interaction potential. As can be seen in figure 1 of Sellwood & Evans (2001), this parameter has a strong impact on the characteristics of the unstable modes recovered via N -body simulations. It would be of particular interest to investigate in a systematic manner the various influences of this parameter. This would for example involve looking at the effects of ε on the measurements of unstable modes in numerical simulations, updating the linear theory to account for a modified softened interaction potential, and investigating how such a softening may also impact the Balescu-Lenard predictions for the secular diffusion features. We described in Appendix 4.D a possible follow-up of this work, which would be to consider the secular dynamics of 3D spherical systems, whose dynamics is very similar to that presented in this chapter. Accounting for potential fluctuations induced by supernova feedback would allow us to investigate one possible mechanism for the softening of dark matter haloes' profiles on cosmic times. By characterising from hydrodynamical simulations the typical feedback-induced perturbations from a galactic disc onto its dark matter halo, one should be in a position to quantitatively estimate the amplitude of the subsequent secular diffusion in the halo. Such mechanisms are for now beyond the reach of current simulations, so that such a precise theoretical framework appears as a necessary first step to probe these processes.
Similarly, the secular dynamics of 3D spherical globular clusters should also be re-investigated within the Balescu-Lenard formalism, following the approach presented in Appendix 4.D. Indeed, though it has long been known that for such systems the discreteness is the main driver of their secular evolution, it has up to now only been described in terms of local encounters (see Heggie & Hut (2003) for a review). The Balescu-Lenard equation, complemented with the estimation of the system's self-gravity from linear theory, would allow us to account for non-local resonances, assess the importance of the cluster's self-gravity, and recover the observed dependence of the system's response on the number of particles and the fraction of radial orbits.

4.B Calculation of ℵ

In order to perform this calculation, let us first rewrite ℵ in a dimensionless fashion, so that
ℵ(a_g, b_g, c_g, a_h, b_h, c_h, η, ∆r) = ∫_{-∆r/2}^{∆r/2} ∫_{-∆r/2}^{∆r/2} dx_p dx_a [a_g + b_g x_p + c_g x_a] / [a_h + b_h x_p + c_h x_a + iη]
= (a_g/a_h) (∆r)² ∫_{-1/2}^{1/2} ∫_{-1/2}^{1/2} dx dy [1 + (b_g ∆r/a_g) x + (c_g ∆r/a_g) y] / [1 + (b_h ∆r/a_h) x + (c_h ∆r/a_h) y + iη/a_h]
= (a_g/a_h) (∆r)² ℵ_D(b_g ∆r/a_g, c_g ∆r/a_g, b_h ∆r/a_h, c_h ∆r/a_h, η/a_h) ,   (4.56)
where we assumed that a_g, a_h ≠ 0, and used the change of variables x = x_p/∆r and y = x_a/∆r. Finally, we introduced the dimensionless function ℵ_D as
ℵ_D(b, c, e, f, η) = ∫_{-1/2}^{1/2} ∫_{-1/2}^{1/2} dx dy [1 + bx + cy] / [1 + ex + fy + iη] .   (4.57)
To effectively compute this integral, it only remains to exhibit a function G(x, y) so that
∂²G/∂x∂y = [1 + bx + cy] / [1 + ex + fy + iη] .   (4.58)
One possible choice for G is given by
G(x, y) = 1/(4e²f²) log[e²x² + 2e(fxy + x) + f²y² + 2fy + η² + 1] × [bf(e²x² - (fy + iη + 1)²) + 2ef(ex + iη + 1) - ce(ex + iη + 1)²]
+ i/(2e²f²) [π/2 - tan⁻¹((ex + fy + 1)/η)] × [bf(e²x² - (fy + iη + 1)²) + 2ef(ex + iη + 1) - ce(ex + iη + 1)²]
+ y/(4e²f) [f(-4e + b(2ex + fy + 2iη + 2)) + ce(2ex - fy + 2iη + 2) + 2ef(cy + 2) log[ex + fy + iη + 1]] .   (4.59)
In the previous expression, one should be careful with the presence of a complex logarithm and of a tan⁻¹. Fortunately, because e, f, η ∈ ℝ and η ≠ 0, one can straightforwardly show that the arguments of both of these functions never cross the usual branch-cut of these functions, chosen to be {Im(z) = 0 ; Re(z) ≤ 0}. Equation (4.57) can then be computed as
ℵ_D = G[1/2, 1/2] - G[1/2, -1/2] - G[-1/2, 1/2] + G[-1/2, -1/2] .   (4.60)
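Because the branch-cut bookkeeping in equation (4.59) is error-prone, it is useful to validate the corner formula (4.60) against a brute-force quadrature of the definition (4.57); a minimal Python check (the chosen parameter values are arbitrary placeholders) is:

import numpy as np

def aleph_D_quadrature(b, c, e, f, eta, n=400):
    """Midpoint-rule evaluation of equation (4.57), to be compared with equation (4.60)."""
    x = (np.arange(n) + 0.5)/n - 0.5               # midpoints of [-1/2, 1/2]
    X, Y = np.meshgrid(x, x, indexing="ij")
    integrand = (1.0 + b*X + c*Y)/(1.0 + e*X + f*Y + 1j*eta)
    return np.sum(integrand)/n**2                  # each cell has area 1/n^2

# For small eta the imaginary part captures the (nearly) resonant contribution of the denominator.
print(aleph_D_quadrature(0.3, -0.2, 1.1, 0.7, 0.05))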
4.C Recovering unstable modes
Let us detail how the matrix code presented in section 4.2.3, as well as the N -body code described in section 4.4.1, can be validated by recovering known unstable modes of razor-thin discs. The direct numerical calculation of the modes of a galactic disc is a complex task, which has only been carried out for a small number of disc models (Zang, 1976; Kalnajs, 1977; Vauterin & Dejonghe, 1996; Pichon & Cannon, 1997; Evans & Read, 1998b; Jalali & Hunter, 2005; Polyachenko, 2005; Jalali, 2007, 2010; De Rijcke & Voulis, 2016). Here, we will recover the results of the pioneering work of Zang (1976), extended in Evans & Read (1998a,b), and recovered numerically in Sellwood & Evans (2001). These three works were interested in recovering the precession rate ω_0 = m_p Ω_p and growth rate η = s of the unstable modes of a truncated Mestel disc very similar to the stable one presented in section 3.7.1. The unstable discs considered thereafter are fully active discs, so that ξ = 1, and their radial velocity dispersion is characterised by q = (V_0/σ_r)² - 1 = 6. Finally, we consider different disc models by varying the power index ν_t of the inner taper function defined in equation (3.90). Here, we will look for m_p = 2 modes, and will consider three different truncation indices, ν_t = 4, 6, 8. In section 4.C.1, we first recover the associated unstable modes by computing the system's response matrix, following section 4.2.3, while in section 4.C.2, we recover these modes via direct N -body simulations, using the N -body implementation presented in section 4.4.1.
4.C.1 The response matrix validation
In order to compute the system's response matrix, we follow the method presented in sections 4.2.3 and 4.2.4. In addition, we use the same numerical parameters as the ones used in section 4.3.1. The grid in the (r p , r a )-space is characterised by r min p = 0.08, r max a = 4.92 and ∆r = 0.05. The sum on the resonant index m 1 is limited to |m 1 | ≤ m max 1 = 7. Finally, we consider basis elements given by Appendix 4.A, with the parameters k Ka = 7 and R Ka = 5, with a restriction of the radial basis elements to 0 ≤ n ≤ 8. One should note that despite having a disc that extends up to R max = 20, one can still safely consider a basis truncated at such a small radius R Ka , which allows us to efficiently capture the self-gravitating properties of the disc in the inner regions.
In order to search for unstable modes in a disc, one has to look for complex frequencies ω = ω_0 + iη such that the complex response matrix M(ω_0, η) from equation (4.13) possesses an eigenvalue equal to 1. This complex frequency is then associated with an unstable mode of pattern speed ω_0 = m_p Ω_p and growth rate η. In order to effectively determine the characteristics of the unstable modes, we follow an approach based on Nyquist contours, as presented in Pichon & Cannon (1997). For a fixed value of η, one studies the behaviour of the function ω_0 → det[I - M(ω_0, η)], which takes the form of a continuous curve in the complex plane, called a Nyquist contour. For η → +∞, one has M(ω, η) → 0, so that the contour shrinks around the point (1, 0). As a consequence, for a given value of η, the number of windings of the Nyquist contour around the origin of the complex plane gives a lower bound on the number of unstable modes with a growth rate larger than η. By varying the value of η, one can then determine the largest value of η which admits an unstable mode, and this is the growth rate of the most unstable mode of the disc. Figure 4.C.1 illustrates these Nyquist contours for an unstable Mestel disc with the truncation index ν_t = 6. We gathered in figure 4.C.5 the results of the measurements for the three discs considered.

[Figure 4.C.1: Left panel: Nyquist contours of ω_0 → det[I - M(ω_0, η)] for various fixed values of η, illustrated with different colors. These contours were obtained via the matrix method for a truncated Mestel disc with ν_t = 6, q = 6, and looking for m_p = 2 modes. One can note that for η = 0.20, the contour crosses the origin, which corresponds to the presence of an unstable mode. Right panel: Illustration of the behaviour of the function ω_0 → log|det[I - M(ω_0, η)]|, when considering the same truncated Mestel disc as in the left panel. Each colored curve corresponds to a different fixed value of η. This representation allows us to determine the pattern speed ω_0 = m_p Ω_p ≃ 0.94 of the system's unstable mode.]
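In practice, the number of windings of a finely sampled Nyquist contour around the origin can be obtained from the accumulated phase of det[I - M(ω_0, η)]; a short Python sketch (the sampled determinant values being a placeholder input) is:

import numpy as np

def winding_number(det_values):
    """Windings around the origin of a closed contour sampled by the complex array det_values."""
    phase = np.angle(det_values)
    dphase = np.diff(np.concatenate([phase, phase[:1]]))     # close the contour
    dphase = (dphase + np.pi) % (2.0*np.pi) - np.pi          # wrap the increments to (-pi, pi]
    return int(np.round(np.sum(dphase)/(2.0*np.pi)))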
After having determined the characteristics (ω 0 , η) of the unstable modes, one can then study their shapes in the physical space. To do so, one can compute M(ω 0 , η) and numerically diagonalise this matrix of size n max ×n max , where n max is the number of basis elements considered. One of the matrix eigenvalues is then almost equal to 1, and one can consider its associated eigenvector X mode . The shape of the mode is then given by
Σ mode (R, φ) = Re[ ∑_p X^p mode Σ^(p)(R, φ) ] ,   (4.61)
where we wrote as Σ (p) the considered surface density basis elements. Figure 4.C.2 illustrates the shape of the recovered unstable mode for the truncated ν t = 4 Mestel disc.
4.C.2 The N -body code validation
Let us now investigate the same unstable modes via direct N -body simulations, in order to validate the N -body implementation on which section 4.4 is based. We do not detail here the initial sampling of the particles required to setup the simulations, and details can be found in Appendix E of Fouvry et al. (2015c). In order not to be significantly impacted by the initial Poisson shot noise and the lack of a quiet start sampling (Sellwood, 1983), for each value of the truncation power ν t , the simulations were performed with N = 20×10 6 particles. As can be observed in figure 1 of Sellwood & Evans (2001), in order to recover correctly the characteristics of the disc's unstable modes, an appropriate setting of the N -body code parameters is crucial. Following the description from section 4.4.1, we consider a cartesian grid made of N mesh = 120 grid cells, while using a softening length given by ε = R i /60. We also restrict the perturbing forces only to the harmonic sector m φ = 2, thanks to N ring = 2400 radial rings with N φ = 720 azimuthal points.
In order to extract the characteristics of the unstable modes from N -body realisations, one may proceed as follows. For each snapshot of the simulation, one can estimate the disc's surface density as
Σ star (x, t) = µ ∑_{i=1}^{N} δ_D(x - x_i(t)) = ∑_p b_p(t) Σ^(p)(x) ,   (4.62)
where the sum over i in the first equality runs over all the particles in the simulation, and x_i(t) stands for the position of the i-th particle at time t. In the second equality of equation (4.62), the sum over p runs over all the basis elements considered; here, we consider the same basis elements as the ones used previously in section 4.C.1. The basis coefficients b_p(t) are straightforward to estimate thanks to the biorthogonality property from equation (2.12). They read

b_p(t) = -∫ dx Σ star (x, t) [ψ^(p)(x)]^* = -µ ∑_i [ψ^(p)(x_i(t))]^* .   (4.63)

When a single unstable mode of complex frequency ω = ω_0 + iη dominates the disc's response, both log|b_p(t)| and the (unwrapped) phase of b_p(t) are expected to evolve linearly with time, their slopes giving access to the growth rate η and the precession rate ω_0 (equation (4.64)), provided one pays careful attention to the branch-cut of the complex logarithm. These linear time dependences appear therefore as the appropriate measurements to estimate the growth rate and pattern speed of unstable modes. Let us note that equation (4.64) does not hold anymore if more than one unstable mode of similar strength is present in the disc.

[Figure: ... of the m_p = 2 unstable modes of truncated Mestel discs, with a random radial velocity given by q = (V_0/σ_r)² - 1 = 6, and truncation indices given by ν_t = 6, 8. The basis coefficient plotted corresponds to the indices (ℓ, n) = (2, 0).]
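Assuming perturbations evolve as e^{-iωt} with ω = ω_0 + iη and that a single m_p = 2 mode dominates, the growth rate and pattern speed follow from linear fits to log|b_p(t)| and to the unwrapped phase; a Python sketch (the time series t and bp are placeholder inputs) is:

import numpy as np

def mode_from_bp(t, bp, m_p=2):
    """Growth rate eta and pattern speed Omega_p from a basis-coefficient time series b_p(t)."""
    eta = np.polyfit(t, np.log(np.abs(bp)), 1)[0]     # slope of log|b_p|
    phase = np.unwrap(np.angle(bp))                   # handles the branch-cut of the complex logarithm
    omega0 = -np.polyfit(t, phase, 1)[0]              # b_p ~ exp(eta t) exp(-i omega_0 t)
    return eta, omega0/m_p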
Following the determination of the basis coefficients b p (t), one can consequently study the shape of the recovered unstable mode in the physical space. Indeed, similarly to equation (4.61), the shape of the mode is given by As a conclusion, we gathered in figure 4.C.5 the growth rates and pattern speeds obtained either via the matrix method or via direct N -body simulations. As already noted in Sellwood & Evans (2001) when considering truncated Mestel discs, the recovery of the characteristics of unstable modes from direct N -body simulations remains a difficult task, for which the convergence to the values predicted through linear theory can be delicate.
Σ mode (R, φ, t) = Re[ ∑_p b_p(t) Σ^(p)(R, φ) ] .   (4.65)

As a conclusion, we gathered in figure 4.C.5 the growth rates and pattern speeds obtained either via the matrix method or via direct N -body simulations. As already noted in Sellwood & Evans (2001) when considering truncated Mestel discs, the recovery of the characteristics of unstable modes from direct N -body simulations remains a difficult task, for which the convergence to the values predicted by linear theory can be delicate.
4.D The case of self-gravitating spheres
In this Appendix, let us show how the previous calculations of the system's response matrix and of the associated diffusion flux, presented for razor-thin discs, can straightforwardly be extended to 3D spherical systems. Analytical studies of the linear collective response of spherical self-gravitating systems have been considered by a number of authors (Tremaine & Weinberg, 1984; Weinberg, 1989; Seguin & Dupraz, 1994; Murali & Tremaine, 1998; Murali, 1999; Weinberg, 2001a; Pichon & Aubert, 2006). Such calculations are of interest if one wants to describe the long-term evolution of spherical systems such as dark matter haloes while accounting for self-gravity. In section 4.D.1, we show how the calculations from the main text can straightforwardly be extended to such systems, while in section 4.D.2, we illustrate how such a formalism may be applied to the study of the cusp-core problem in the context of the long-term evolution of dark matter haloes.
4.D.1 The 3D calculation
As in the case of razor-thin axisymmetric potentials, 3D spherically symmetric potentials are also guaranteed by symmetry to be integrable. The three natural actions are given by
J_1 = J_r = 1/π ∫_{r_p}^{r_a} dr √[2(E - ψ_0(r)) - L²/r²] ;   J_2 = L ;   J_3 = L_z ,   (4.66)
where the radial action J r was already introduced in equation (4.1), L > 0 stands for the magnitude of the particle's angular momentum, and L z its projection along the z-axis. Here, as previously, the first action J r encodes the amount of radial energy of the star, L encodes the typical distance of the star to the centre, while finally L z characterises the vertical orientation of the orbital plane to which the particle motion is restricted. The intrinsic frequencies of motion of the associated angles are given by Ω = (Ω 1 , Ω 2 , Ω 3 ).
For spherical systems, the third action J_3 does not appear in the Hamiltonian, so that Ω_3 = 0. Therefore, one has an additional conserved quantity, namely θ_3 = cst., which corresponds to the longitude of the ascending node. As for razor-thin discs, the two remaining frequencies of motion, Ω_1 = κ and Ω_2 = Ω_φ, are given by equations (4.2) and (4.3). In this context, we may also use the pericentre and apocentre (r_p, r_a) to represent the two actions (J_1, J_2).
Let us now introduce the 3D spherical coordinates (R, θ, φ). For 3D systems, the generic basis element can be decomposed as
ψ^(p)(R, θ, φ) = ψ^{ℓm}_n(R, θ, φ) = Y^m_ℓ(θ, φ) U^ℓ_n(R) ,   (4.67)
where the Y^m_ℓ are the usual spherical harmonics and U^ℓ_n is a real radial function. Equation (4.67) is the direct spherical equivalent of the 2D decomposition from equation (4.5), and allows us to separate the angular dependences from the radial one. Similarly, the associated density elements are given by
ρ^(p)(R, θ, φ) = ρ^{ℓm}_n(R, θ, φ) = Y^m_ℓ(θ, φ) D^ℓ_n(R) ,   (4.68)
where D^ℓ_n is a real radial function. Explicit 3D bases of potential and density elements can for example be built from spherical Bessel functions (Fridman & Polyachenko, 1984; Weinberg, 1989; Rahmati & Jalali, 2009), or from ultraspherical polynomials (Hernquist & Ostriker, 1992), such as the spherical basis elements suggested in Weinberg (1989). In 3D, the Fourier transformed basis elements from equation (4.7) become
ψ^(p)_m(J) = 1/(2π)³ ∫ dθ_1 dθ_2 dθ_3 ψ^(p)(x(θ, J)) e^{-i m·θ} ,   (4.69)
while the angle mapping from equation (4.8) still holds. In order to describe the orientation of the orbital plane, let us define the orbit's inclination, β = β(J ), as
J 3 = J 2 cos(β) with 0 ≤ β ≤ π . (4.70)
Following Tremaine & Weinberg (1984); Weinberg (1989), the Fourier transform in angle of the basis element ψ^(p) = ψ^{ℓ_p m_p}_{n_p} w.r.t. the resonance vector m = (m_1, m_2, m_3) takes the form
ψ^(p)_m(J) = δ^{m_3}_{m_p} V^{ℓ_p}_{m_2 m_p}(β) W^{m_1}_{ℓ_p m_2 n_p}(J) .   (4.71)
In the previous equation, one should pay attention to the difference between the index m_p, which is the second index of the basis element from equation (4.67), and m = (m_1, m_2, m_3), corresponding to the three indices of the Fourier transform w.r.t. the angles. In equation (4.71), we introduced the coefficient W^{m_1}_{ℓ_p m_2 n_p}(J), whose expression was already obtained in equation (4.10). We also introduced the coefficient V^{ℓ_p}_{m_2 m_p}(β), specific to the 3D basis, which reads
V^{ℓ_p}_{m_2 m_p}(β) = i^{m_p - m_2} Y^{m_2}_{ℓ_p}(π/2, 0) R^{ℓ_p}_{m_2 m_p}(β) ,   (4.72)
where we introduced the rotation matrix for spherical harmonics, given by
R^ℓ_{m_2 m}(β) = ∑_t (-1)^t √[(ℓ+m_2)! (ℓ-m_2)! (ℓ+m)! (ℓ-m)!] / [(ℓ-m-t)! (ℓ+m_2-t)! t! (t+m-m_2)!] × [cos(β/2)]^{2ℓ+m_2-m-2t} [sin(β/2)]^{2t+m-m_2} .   (4.73)
In equation (4.73), the sum is to be made over all the values of t such that the arguments of the factorials are either zero or positive. This corresponds to t_min ≤ t ≤ t_max, with t_min = Max[0, m_2 - m] and t_max = Min[ℓ - m, ℓ + m_2]. Having computed the Fourier transformed basis elements, one may then proceed to the evaluation of the system's response matrix. As already detailed in section 4.2.3, we perform the estimation of the response matrix by using (r_p, r_a) as our variables. To do so, the first step of the calculation is to go from J = (J_1, J_2, J_3) to (E, L, cos(β)). Similarly to equation (4.12), the Jacobian of this transformation is given by
∂(E, L, cos(β))/∂(J_1, J_2, J_3) = det[ Ω_1  Ω_2  0 ; 0  1  0 ; 0  -L_z/L²  1/L ] = Ω_1/L .   (4.74)
One may now perform the integration w.r.t. the inclination β. To do so, we assume that the system's DF is such that F = F(J_1, J_2) = F(E, L). In addition, we noted previously that the system's intrinsic frequencies Ω = (Ω_1, Ω_2, Ω_3) are independent of J_3, so that in the expression (2.17) of the response matrix, the only remaining dependences w.r.t. β are in the Fourier transformed basis elements from equation (4.71), through the rotation matrix from equation (4.73). Following Edmonds (1957), the rotation matrices satisfy the orthogonality relation
\int_{-1}^{1} d\cos(β)\; R^{ℓ_p}_{m_2 m_3}(β)\, R^{ℓ_q}_{m_2 m_3}(β) = δ^{ℓ_q}_{ℓ_p}\, \frac{2}{2ℓ_p+1} .    (4.75)
Equation (4.72) then gives
\int_{-1}^{1} d\cos(β)\; V^{*}_{ℓ_p m_2 m_3}(β)\, V_{ℓ_q m_2 m_3}(β) = δ^{ℓ_q}_{ℓ_p}\, C^{ℓ_p}_{m_2} ,    (4.76)
where we introduced the coefficient C^{ℓ_p}_{m_2} as
C^{ℓ_p}_{m_2} = \frac{2}{2ℓ_p+1}\, \left[ Y^{m_2}_{ℓ_p}(π/2, 0) \right]^2 .    (4.77)
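As a side check of the rather involved expression (4.73), the rotation matrix and its orthogonality relation (4.75) are easy to evaluate numerically. The following is only a minimal sketch, not the implementation used in this work; in particular, the square-root convention on the factorial prefactor follows the standard Wigner-matrix convention assumed in the reconstruction of equation (4.73) above.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

def rotation_matrix(ell, m2, m, beta):
    """Rotation matrix R^ell_{m2,m}(beta) of equation (4.73)."""
    t_min, t_max = max(0, m2 - m), min(ell - m, ell + m2)
    pref = np.sqrt(factorial(ell + m2) * factorial(ell - m2)
                   * factorial(ell + m) * factorial(ell - m))
    total = 0.0
    for t in range(t_min, t_max + 1):
        den = (factorial(ell - m - t) * factorial(ell + m2 - t)
               * factorial(t) * factorial(t + m - m2))
        total += ((-1)**t / den
                  * np.cos(beta / 2.0)**(2*ell + m2 - m - 2*t)
                  * np.sin(beta / 2.0)**(2*t + m - m2))
    return pref * total

# quick check of the orthogonality relation (4.75) for ell_p = ell_q
ell, m2, m3 = 3, 1, 2
norm, _ = quad(lambda c: rotation_matrix(ell, m2, m3, np.arccos(c))**2, -1.0, 1.0)
print(norm, 2.0 / (2*ell + 1))  # the two numbers should agree
```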
The expression (4.71) of the Fourier transformed basis elements allows us then to rewrite the response matrix from equation (2.17) as
M_{pq}(ω) = (2π)^3\, δ^{ℓ_q}_{ℓ_p}\, δ^{m_q}_{m_p} \sum_{m_1}\; \sum_{\substack{|m_2| ≤ ℓ_p \\ (ℓ_p-m_2)\ even}} C^{ℓ_p}_{m_2} \int dE\, dL\; \frac{L}{Ω_1}\, \frac{m•∂F/∂J}{ω - m•Ω}\; W^{m_1}_{ℓ_p m_2 n_p}(J)\, W^{m_1}_{ℓ_p m_2 n_q}(J) ,    (4.78)
where one may note that the sum on m_3 was dropped thanks to the Kronecker symbol from equation (4.71). In addition, expression (4.77) also imposes the additional constraints |m_2| ≤ ℓ_p and (ℓ_p - m_2) even, so that the sum on m_2 may also be reduced. Finally, we also relied on the fact that the coefficient W^{m_1}_{ℓ_p m_2 n_p}(J) from equation (4.10) is real, so that no conjugates are present in equation (4.78). Let us now perform the change of variables (E, L) → (r_p, r_a) to rewrite equation (4.78) as
M_{pq}(ω) = δ^{ℓ_q}_{ℓ_p}\, δ^{m_q}_{m_p} \sum_{m_1}\; \sum_{\substack{|m_2| ≤ ℓ_p \\ (ℓ_p-m_2)\ even}} \int dr_p\, dr_a\; g^{ℓ_p n_p n_q}_{m_1 m_2}(r_p, r_a)\; h^{ω}_{m_1 m_2}(r_p, r_a) ,    (4.79)
where the functions g^{ℓ_p n_p n_q}_{m_1 m_2}(r_p, r_a) and h^{ω}_{m_1 m_2}(r_p, r_a) are respectively given by
g^{ℓ_p n_p n_q}_{m_1 m_2}(r_p, r_a) = (2π)^3\, C^{ℓ_p}_{m_2}\; \left| \frac{∂(E, L)}{∂(r_p, r_a)} \right|\; \frac{L}{Ω_1}\; m•\frac{∂F}{∂J}\; W^{m_1}_{ℓ_p m_2 n_p}(J)\, W^{m_1}_{ℓ_p m_2 n_q}(J) ,    (4.80)
and
h^{ω}_{m_1 m_2}(r_p, r_a) = \frac{1}{ω - (m_1, m_2)•(Ω_1, Ω_2)} .    (4.81)
In equation (4.80), if the system's DF is such that F = F (E, L), the gradient m•∂F/∂J w.r.t. the actions may be computed following equation (4.17). Let us finally note the very strong analogies that exist between equation (4.79) and equation (4.14) obtained for razor-thin discs. This allows us to apply to equation (4.79) the exact same method as described in section 4.2.4 by truncating the (r p , r a )-space in small regions on which linear approximations may be performed. We do not repeat here these calculations. As the calculation of the response matrix can be a cumbersome numerical calculation, it is important to validate its implementation by recovering known unstable modes for 3D spherical systems, e.g., in Polyachenko & Shukhman (1981); Saha (1991); Weinberg (1991). Following this approach, one is therefore able to compute the response matrix of a 3D spherical system. In addition, the computation of the Fourier transformed basis elements in equation (4.71) allows us to subsequently compute the associated collisionless and collisional fluxes. Let us now illustrate in the upcoming section one possible application of such an approach to describe the secular evolution of dark matter haloes.
4.D.2 An example of application: the cusp-core problem
Dark matter (DM) only simulations favour the formation of a cusp in the inner region of dark matter haloes (Dubinski & Carlberg, 1991; Navarro et al., 1997), following what appears to be a universal profile, the NFW profile. However, observations tend to recover profiles more consistent with a shallower core profile (Moore, 1994; de Blok & McGaugh, 1997; de Blok et al., 2001; de Blok & Bosma, 2002; Kuzio de Naray et al., 2008). This discrepancy between the cuspy profile predicted by DM-only simulations and the core profile inferred from observations is an important current challenge in astrophysics, coined the cusp-core problem.
Various solutions have been proposed to resolve this discrepancy. A first set of solutions relies on modifying the dynamical properties of the collisionless dark matter, preventing it from collapsing into a cuspy profile in the first place. Examples include the possibility of warm dark matter (Kuzio de Naray et al., 2010) or of self-interacting dark matter (Spergel & Steinhardt, 2000; Rocha et al., 2013). Another set of solutions relies on the remark that accounting self-consistently for the baryonic physics and its back-reaction on the DM may also resolve the discrepancy. These mechanisms can be divided into three broad categories. The first one relies on dynamical friction from infalling baryonic clumps and disc instabilities (El-Zant et al., 2001; Weinberg & Katz, 2002; Tonini et al., 2006; Romano-Díaz et al., 2008; Goerdt et al., 2010; Cole et al., 2011; Del Popolo & Pace, 2016). A second possible mechanism is associated with AGN-driven feedback (Peirani et al., 2008; Martizzi et al., 2012; Dubois et al., 2016). Finally, a third possible mechanism relies on the long-term effects associated with supernova-driven feedback (Binney et al., 2001; Gnedin & Zhao, 2002; Read & Gilmore, 2005; Mashchenko et al., 2006, 2008; Governato et al., 2010; Teyssier et al., 2013; Pontzen & Governato, 2012; El-Zant et al., 2016).
The collisionless diffusion equation (2.31), and its associated customisation to 3D systems presented in this Appendix, is the appropriate framework to investigate in detail the role that supernova feedback may have on the secular evolution of DM haloes. Can the presence of an inner stellar disc, because of the potential fluctuations it induces, lead to the secular diffusion of a cuspy DM halo into a cored one?
The first step of such an analysis is to characterise these fluctuations. To do so, we rely on hydrodynamical simulations. In order to decouple the source of perturbations, i.e. the disc, from the perturbed system, i.e. the halo, these simulations are performed while using a static and inert halo. Therefore, during the numerical simulations, feedback, while still present, cannot lead to a secular evolution of the halo profile. Similarly, any back-reaction from the halo onto the disc cannot be accounted for. Such a setup allows us to measure and characterise the statistical properties of the fluctuations induced by the disc directly from simulations. Because the DM halo is analytical, this also prevents any shot noise associated with the use of a finite number of DM particles. Once these fluctuations have been estimated, their effects on the DM halo may then be quantified using the secular collisionless diffusion equation (2.31).
In order to characterise these fluctuations, we consider an analytic NFW halo profile, and embed within it a gaseous and stellar disc, paying careful attention to preparing the system in a quasi-stationary state. In addition, we implement a supernova feedback recipe allowing for the release of energy from the supernova into the interstellar medium. Figure 4.D.2 illustrates two successive snapshots of such a hydrodynamical simulation. In figure 4.D.2, one can note that because of supernova feedback, the gas density undergoes some fluctuations. These fluctuations in the potential due to the gas will be felt by the DM halo and may therefore be the driver of a resonant forced secular diffusion in the DM halo. Because we are interested in the ensemble average autocorrelation of the feedback fluctuations, various realisations are run for the same physical setup.
Once these simulations are performed, we characterise their statistics by computing the autocorrelation of the potential fluctuations, projected onto the basis elements as coefficients b_p(t). The amplitude of these fluctuations varies from one realisation to another. Similarly, the typical frequencies of the fluctuations also depend on the considered basis elements. Once the perturbation history t → b_p(t) is extracted from the numerical simulations, one may follow equation (2.25) to compute their ensemble averaged autocorrelation matrix C_pq(ω). This fully characterises the stochastic external source term, which sources the collisionless diffusion coefficients D_m(J) from equation (2.32). When the characteristics of the initial DM halo (namely its potential and associated self-consistent DF) have been specified, one may follow section 4.D.1 to compute the halo's response matrix M. This finally allows for the calculation of the diffusion coefficients D_m(J), and for the estimation of the collisionless diffusion flux F_tot from equation (2.33). This diffusion flux characterises the initial orbital restructuration undergone by the DM halo's DF.
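To make this pipeline concrete, here is a minimal sketch of how the ensemble-averaged autocorrelation of the measured coefficients b_p(t) could be estimated; it is a simple periodogram-style estimator standing in for the precise definition of equation (2.25), and the array layout is purely a convention of this illustration.

```python
import numpy as np

def autocorrelation_matrix(b, dt):
    """Ensemble-averaged autocorrelation C_pq(omega) of the basis coefficients b_p(t).

    `b` has shape (n_realisations, n_basis, n_times), sampled with time step `dt`.
    """
    n_real, n_basis, n_times = b.shape
    b_hat = np.fft.rfft(b, axis=-1) * dt                 # temporal Fourier transform of each b_p(t)
    omega = 2.0 * np.pi * np.fft.rfftfreq(n_times, d=dt)
    T = n_times * dt
    # periodogram estimate <b_p(omega) b_q*(omega)> / T, averaged over the realisations
    C = np.einsum('rpt,rqt->pqt', b_hat, np.conj(b_hat)) / (T * n_real)
    return omega, C
```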
In this context, a final difficulty arises from the integration forward in time of the collisionless diffusion equation (2.34). Indeed, because the diffusion is self-consistent, the diffusion coefficients are slow functions of the halo's DF. Similarly, the halo's potential (initially cuspy) also secularly depends on the halo's DF. The integration of the diffusion equation therefore requires a self-consistent update of the halo's diffusion coefficients and the halo's potential. One possible approach to solve this self-consistency problem relies on an iterative approach (Prendergast & Tomer, 1970;Weinberg, 2001b) that we do not detail further here.
The method described previously is expected to allow for a detailed description of the resonant collisionless diffusion occurring in a DM halo as a result of stochastic external fluctuations induced by its inner galactic disc. It will be the subject of a future work. The same approach could also allow us to investigate how much this diffusion mechanism depends on the strength of the feedback mechanisms, by for example changing the feedback recipes used in the hydrodynamical simulations. One could determine the typical power spectrum of the perturbations induced by the feedback, or give quantitative bounds on the feedback strengths required to induce a softening of the DM halo's profile. Similarly, the dependence of the diffusion efficiency w.r.t. the disc and halo masses could also be investigated. Finally, the efficiency of AGN feedback to induce a secular diffusion in the DM halo could also be studied within the same framework by characterising the associated potential fluctuations.
Chapter 5
Thickened discs
The work presented in this chapter is based on Fouvry et al. (2016c).
Introduction
The problem of explaining the origin of thick discs in our Galaxy has been around for some time (e.g., Gilmore & Reid, 1983; Freeman, 1987). Interest in this dynamical question has recently been revived in the light of the current APOGEE survey (Eisenstein et al., 2011) and the upcoming data collected by the GAIA mission. Star formation within stellar discs typically occurs on the circular orbits of the gas, so that young stars should form a very thin disc (Wielen, 1977). However, chemokinematic observations of old stars within our Milky Way (Jurić et al., 2008; Ivezić et al., 2008; Bovy et al., 2012), or in other galactic discs (Burstein, 1979; Mould, 2005; Yoachim & Dalcanton, 2006; Comerón et al., 2011), have all shown that thick components are very common. Yet the formation and origin of thickened stellar discs remain a significant puzzle for galactic formation theory.
Various dynamical mechanisms, either internal or external, have been proposed to explain this observed thickening, but their respective impacts and roles remain to be clarified. First, some violent major events could be at the origin of the vertically extended distribution of stars in disc galaxies. These could possibly be due to the accretion of galaxy satellites (Meza et al., 2005; Abadi et al., 2003), major mergers of gas-rich systems (Brook et al., 2004), or even gravitational instabilities in gas-rich turbulent clumpy discs (Noguchi, 1998; Bournaud et al., 2009). While such violent mergers definitely have a strong impact on galactic structure, these extreme events may not be required to form a thickened stellar disc, which could also originate from a slow, secular and continuous heating of a pre-existing thin disc.
Numerous smooth thickening mechanisms have been investigated in detail. Galactic discs could be thickened as a result of galactic infall of cosmic origin leading to multiple minor mergers (Toth & Ostriker, 1992; Quinn et al., 1993; Villalobos & Helmi, 2008; Di Matteo et al., 2011), and evidence for such events has been found in the phase space structure of the Milky Way (e.g., Purcell et al., 2011). Spiral density waves (Sellwood & Carlberg, 1984; Minchev & Quillen, 2006; Monari et al., 2016) are also possible candidates to increase the velocity dispersion within the disc, which can in turn be converted into vertical motion through deflections from giant molecular clouds (GMCs) (Spitzer & Schwarzschild, 1953; Lacey, 1984; Hänninen & Flynn, 2002). In addition, radial migration (Lynden-Bell & Kalnajs, 1972; Sellwood & Binney, 2002), which describes the change of angular momentum of a star with no increase in its radial energy, could also play an important role in the secular evolution of stellar discs. Radial migration could be induced by spiral-bar coupling (Minchev & Famaey, 2010), transient spiral structures (Barbanis & Woltjer, 1967; Carlberg & Sellwood, 1985; Solway et al., 2012), or perturbations induced by minor mergers (Quillen et al., 2009; Bird et al., 2012). An analytical model of radial migration was extensively used in Schönrich & Binney (2009a,b) to investigate in detail its impact on vertical heating, and recovered the main features of the Milky Way thin and thick discs. Recent N-body simulations also investigated the role played by radial migration (Haywood, 2008; Loebman et al., 2011; Minchev et al., 2014), but the efficiency of this mechanism was recently shown to be limited (Minchev et al., 2012). Finally, owing to the increase of computing power, large numerical simulations are now in a position to investigate these processes in a self-consistent cosmological setup (Minchev et al., 2015; Grand et al., 2016). The development of these global approaches is expected to offer new clues on the effective interplay between these various competing thickening mechanisms. As discussed in chapter 1, recall that all these investigations can be broadly characterised as relying on an external (nurture) or internal (nature) source to trigger a vertical orbital restructuration in the disc.
In the present chapter, we intend to write down, in the context of tepid stellar discs of finite thickness, the two equations corresponding to collisionless or collisional diffusion derived in chapter 2. As already discussed, on the one hand, the first formalism, presented in section 2.2, assumes the systems to be collisionless and considers the secular effects induced by external perturbations. The second collisional formalism, presented in section 2.3, focuses on the role played by the system's intrinsic graininess. Here, both diffusion processes should be investigated, as it is not a priori known which will be the most efficient at thickening discs.
Following chapter 3, implementing equations (2.31) and (2.67) raises two main difficulties. These are respectively the explicit construction of the angle-action mapping (x, v) → (θ, J), as well as the computation of the response matrix from equation (2.17), which requires the introduction of a basis of potentials and densities. Both problems are significantly more challenging in the thickened regime. For thickened discs, we will solve the first difficulty by introducing in section 5.2 the thickened epicyclic approximation, which offers explicit angle-action coordinates in the limit of sufficiently cold discs. We will deal with the second difficulty in sections 5.3 and 5.4 by generalising the razor-thin WKB basis (see chapter 3) to the thickened geometry, which will offer an analytical expression of the disc's amplification eigenvalues thanks to which we will account for the disc's gravitational susceptibility. Once these two difficulties are addressed, we will show in section 5.5 (resp. section 5.6) how one can estimate the diffusion fluxes associated with the collisionless diffusion equation (2.31) (resp. the collisional diffusion equation (2.67)). Finally, section 5.7 will be dedicated to applications of both formalisms to investigate the dynamical mechanisms at play in the secular thickening of stellar discs. These applications will be compared in particular to the numerical experiments from Solway et al. (2012).
Angle-action coordinates and epicyclic approximation
A first step towards the secular dynamics of inhomogeneous systems is to construct a set of angle-action coordinates. Let us follow the same method as what was presented in section 3.2 in the razor-thin case. Assuming that the disc is sufficiently cold, one can decouple the vertical motion and treat it as a harmonic libration. Let us introduce the cylindrical coordinates (R, φ, z) along with their associated momenta (p R , p φ , p z ). We also assume that the axisymmetric potential of the disc is symmetric w.r.t. the equatorial plane z = 0. In the vicinity of circular orbits, the stationary Hamiltonian from equation (3.6) becomes here
H_0 = \frac{1}{2}\left( p_R^2 + p_z^2 \right) + ψ_{eff}(R_g, 0) + \frac{κ^2}{2}\,(R-R_g)^2 + \frac{ν^2}{2}\, z^2 ,    (5.1)
where we introduced the vertical epicyclic frequency ν as
ν^2(R_g) = \frac{∂^2 ψ_{eff}}{∂z^2}\Big|_{(R_g, 0)} .    (5.2)
Of course, for a thickened disc, the azimuthal and radial frequencies Ω φ and κ from equations (3.5) and (3.7) should be computed in the equatorial plane z = 0. We also note here that with the epicyclic approximation, ν depends only on J φ . In equation (5.1), the radial and vertical motions have been decoupled, and, up to initial phases, there exists then a vertical amplitude A z such that z = A z cos(νt).
Similarly to equation (3.8), the associated vertical action J z is immediately given by
J_z = \frac{1}{2π} \oint dz\; p_z = \frac{1}{2}\, ν A_z^2 .    (5.3)
In this context, (J r , J z ) = (0, 0) corresponds to circular orbits. Increasing J r (resp. J z ) amounts therefore to increasing the amplitude of the radial (resp. vertical) oscillations of the stars, so that the orbits get hotter. This is illustrated in figure 5.2.1. It is straightforward to complete equation (3.9) to obtain an explicit relation between the physical phase space coordinates and the angle-action ones. Indeed, one has
R = R_g + A_R \cos(θ_R) ;   φ = θ_φ - \frac{2Ω_φ}{κ}\, \frac{A_R}{R_g}\, \sin(θ_R) ;   z = A_z \cos(θ_z) .    (5.4)
In figure 5.2.2, we illustrate epicyclic orbits described by this angle-action mapping. Finally, in the thickened context, the razor-thin quasi-isothermal DF from equation (3.10) becomes
F(R_g, J_r, J_z) = \frac{Ω_φ Σ}{π κ σ_r^2}\, \exp\!\left[ -\frac{κ J_r}{σ_r^2} \right]\; \frac{ν}{2π σ_z^2}\, \exp\!\left[ -\frac{ν J_z}{σ_z^2} \right] ,    (5.5)
where Σ stands for the projected surface density of the disc, so that Σ(R) = ∫ dz ρ(R, z), where ρ is the disc's density. In equation (5.5), σ_z represents the vertical velocity dispersion of the stars at a given radius, and only depends on the position in the disc.
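As a minimal illustration, the quasi-isothermal DF of equation (5.5) is straightforward to evaluate once a disc model provides Σ, σ_r, σ_z and the frequencies at the considered guiding radius; the explicit argument list below is only a convention of this sketch.

```python
import numpy as np

def quasi_isothermal_df(J_r, J_z, Sigma, sigma_r, sigma_z, Omega_phi, kappa, nu):
    """Quasi-isothermal DF of equation (5.5), evaluated at a given guiding radius R_g.

    All disc quantities (surface density, dispersions, frequencies) are those of the
    model at R_g and are passed in explicitly.
    """
    radial = (Omega_phi * Sigma / (np.pi * kappa * sigma_r**2)
              * np.exp(-kappa * J_r / sigma_r**2))
    vertical = nu / (2.0 * np.pi * sigma_z**2) * np.exp(-nu * J_z / sigma_z**2)
    return radial * vertical
```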
The thickened WKB basis
In section 3.3, we presented in detail how one could construct a biorthonormal basis of tightly wound spirals in the context of razor-thin axisymmetric discs. Let us now generalise this construction to thickened stellar discs by specifying the vertical components of these basis elements. As the in-plane dependence of our WKB basis elements will be the same as the one presented in section 3.3, we will here focus on the additional degree of freedom associated with the vertical dimension. The cylindrical coordinates are denoted (R, φ, z), and we introduce the 3D WKB basis elements as
ψ [k φ ,kr,R0,n] (R, φ, z) = A ψ [k φ ,kr,R0] r (R, φ) ψ [kr,n] z (z) , (5.6)
where A is an amplitude, which will be tuned later on to ensure the correct normalisation of the basis elements. We introduced ψ_r^{[k_φ,k_r,R_0]}(R, φ), the same in-plane dependence as the razor-thin WKB basis elements from equation (3.11), so that
ψ [k φ ,kr,R0] r (R, φ) = e i(k φ φ+krR) B R0 (R) ,
(5.7)
where the radial window function B_{R_0} was introduced in equation (3.12). In equation (5.6), one can note that the basis elements depend on 4 indices. Here, (k_φ, k_r, R_0) are the same indices as in the razor-thin case, so that k_φ characterises the angular dependence of the basis elements, k_r is the radial frequency of the elements, and R_0 the position in the disc around which the window B_{R_0} is centred. Finally, we introduced the index n ≥ 1, specific to the thick case, which numbers the considered vertical dependences. We also recall that the window function from equation (3.12) involves a decoupling scale σ, which ensures the biorthogonality of the basis elements. The radial dependence of the basis elements is illustrated in figure 3.3.1, while their dependence in the equatorial plane z = 0 is given by figure 3.3.2. The thickened basis elements from equation (5.6) are therefore constructed by multiplying the in-plane razor-thin WKB basis elements by a vertical component ψ_z^{[k_r,n]}(z), which we now specify. The construction of the basis elements requires us to satisfy Poisson's equation (2.12), which characterises the associated density elements. Relying on the same tight winding assumptions as in equation (3.15), Poisson's equation becomes
-k 2 r A ψ r ψ z +A ψ r d 2 ψ z dz 2 = 4πGρ , (5.8)
where we dropped the superscripts [k φ , k r , R 0 , n] to shorten the notations and introduced the associated density ρ. At this stage, we now assume that the density elements satisfy an ansatz of separability of the form
ρ(R, φ, z) = λ ρ 4πG A ψ r (R, φ) ψ z (z) w(z) ,
(5.9)
where λ_ρ = λ_ρ^{[k_r,n]} is a proportionality constant, while w(z) is a cavity function, which is chosen to be independent of the basis' indices. Equation (5.8) immediately becomes
\frac{d^2 ψ_z}{dz^2} - k_r^2\, ψ_z = λ_ρ\, w(z)\, ψ_z .    (5.10)
Equation (5.10) is a self-consistent relation that the vertical component ψ_z has to satisfy. It takes the form of a Sturm-Liouville equation (Courant & Hilbert, 1953), which requires us to determine the eigenvalues λ_ρ and their associated eigenfunctions ψ_z. Assuming a sufficient regularity for the functions involved, the Sturm-Liouville theory ensures that there exists a discrete spectrum of real eigenvalues λ_1 < λ_2 < ... < λ_n → +∞, with their associated eigenfunctions ψ_z^1, ..., ψ_z^n. In addition, when correctly normalised, these eigenfunctions form a biorthogonal basis so that ∫ dz\, w(z)\, ψ_z^p(z)\, ψ_z^q(z) = δ_p^q. By explicitly specifying the considered cavity function w(z), one can get explicit expressions for these eigenfunctions. We assume that the density elements vanish out of a sharp cavity, so that they are zero for |z| > h. This corresponds to the choice
w(z) = Θ(z/h) ,    (5.11)
where Θ(x) is equal to 1 for |x| ≤ 1 and to 0 otherwise. When the vertical density profile of the disc takes the form of a Spitzer profile, see equation (5.71), h is then given by h(R_0) = 2z_0(R_0). One should therefore consider the cavity scale h from equation (5.11) not as a free parameter of the formalism, but as imposed by the mean density profile of the considered disc.
Once the cavity function from equation (5.11) has been specified, one may explicitly solve Poisson's equation (5.10) to determine the density basis elements. It takes the form of a wave equation; let us therefore assume that ψ_z follows the ansatz
ψ_z(z) = \begin{cases} A\, e^{-k_r z} & if\ z > h , \\ B\, e^{i k_z z} + C\, e^{-i k_z z} & if\ |z| ≤ h , \\ D\, e^{k_r z} & if\ z < -h , \end{cases}    (5.12)
where the frequency k_z remains to be determined. The eigenvalue λ_ρ immediately reads λ_ρ = -(k_r^2+k_z^2). In addition to the ansatz from equation (5.12), one also has to impose that ψ_z and dψ_z/dz be continuous at z = ±h. Let us now restrict ourselves to symmetric perturbations, ψ_z(-z) = ψ_z(z); the very similar antisymmetric case will be presented in Appendix 5.A. Symmetric perturbations immediately lead to A = D and B = C, while the continuity requirements impose
A\, e^{-k_r h} = 2B \cos(k_z h) ,   k_r A\, e^{-k_r h} = 2 k_z B \sin(k_z h) .    (5.13)
To be non-trivial, the vertical frequency k z must therefore satisfy the quantisation relation
\tan(k_z h) = \frac{k_r}{k_z} .    (5.14)
Once k r and h have been specified, equation (5.14) restricts the allowed values of k z . Following the definition of the basis elements from equation (5.6), let us introduce the index n ≥ 1, such that k n z is the n th solution of equation (5.14), i.e. such that
k_z^1 < k_z^2 < ... < k_z^n < ...   and   \tan(k_z^n h) = \frac{k_r}{k_z^n} .    (5.15)
At this stage, we note that for a sufficiently thin disc, such that k_z^1 h, k_r h ≪ 1, the first quantised symmetric frequency k_z^1 may be approximated as
k_z^1 ≃ \sqrt{k_r / h} .    (5.16)
In figure 5.3.2, we illustrate the quantisation relation from equation (5.14), as well as its antisymmetric analog from equation (5.100). Two important properties should already be noted. First, the fundamental symmetric frequency k_z^1 is the only quantised frequency such that k_z h < π/2. Following the approximated expression from equation (5.16), in the limit of a razor-thin disc, for which h → 0, this is the only frequency for which k_z^1 h → 0, while all the other quantised frequencies are such that k_z h > π/2. This already emphasises why the fundamental symmetric frequency k_z^1 will play a crucial role in the razor-thin limit. This will especially become clear in Appendix 5.C, where we recover the razor-thin limits of the two diffusion equations. Moreover, the periodicity of the "tan" function ensures that in the limit of a sufficiently thick disc, for which k_r h ≫ π, one can assume that both symmetric and antisymmetric frequencies satisfy
Δk_z = k_z^{n+1} - k_z^n ≃ \frac{π}{h} .    (5.17)
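The quantisation relation (5.14) has no closed-form solutions, but its roots are easy to obtain numerically. The following is a minimal sketch (the values of k_r and h are purely illustrative): it brackets the n-th symmetric root in the n-th branch of the tangent, then checks the approximation (5.16) for the fundamental frequency and the large-n spacing of equation (5.17).

```python
import numpy as np
from scipy.optimize import brentq

def symmetric_kz(k_r, h, n_max=10):
    """First n_max roots of tan(k_z h) = k_r / k_z (equation 5.14).

    In terms of x = k_z h, the n-th symmetric root lies in the branch ((n-1) pi, (n-1) pi + pi/2).
    """
    f = lambda x: np.tan(x) - k_r * h / x    # equation (5.14) rewritten as f(x) = 0
    eps = 1e-8
    roots = []
    for n in range(n_max):
        a, b = n * np.pi + eps, n * np.pi + np.pi / 2.0 - eps
        roots.append(brentq(f, a, b) / h)
    return np.array(roots)

k_r, h = 1.0, 0.1                    # illustrative values
kz = symmetric_kz(k_r, h)
print(kz[0], np.sqrt(k_r / h))       # fundamental root vs the approximation (5.16)
print(np.diff(kz)[-1], np.pi / h)    # large-n spacing vs equation (5.17)
```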
Straightforward calculations finally lead to the complete expression of the symmetric potential elements, which read
ψ^{[k_φ,k_r,R_0,n]}(R, φ, z) = A\, ψ_r^{[k_φ,k_r,R_0]}(R, φ) × \begin{cases} \cos(k_z^n z) & if\ |z| ≤ h , \\ e^{k_r h} \cos(k_z^n h)\, e^{-k_r |z|} & if\ |z| ≥ h , \end{cases}    (5.18)
while the associated density elements read
ρ^{[k_φ,k_r,R_0,n]}(R, φ, z) = -\frac{k_r^2+(k_z^n)^2}{4πG}\; ψ^{[k_φ,k_r,R_0,n]}(R, φ, z)\; Θ\!\left( \frac{z}{h} \right) .    (5.19)
The equivalent expressions for the antisymmetric basis elements are given in equations (5.101) and (5.102).
The vertical components of these basis elements are illustrated in figure 5.3.3: ψ^s stands for the symmetric elements from equation (5.18), while ψ^a is associated with the antisymmetric ones from equation (5.101). The basis elements can also be ordered thanks to their number of nodes within the cavity, as expected from the Sturm-Liouville theory. As imposed by equation (2.12), the final step of the construction of the thickened WKB basis elements is to ensure that the basis is biorthogonal. We already showed in equation (3.20), when constructing the razor-thin WKB basis elements, that for (k_φ^p, k_r^p, R_0^p) ≠ (k_φ^q, k_r^q, R_0^q) the orthogonality was satisfied, provided that the decoupling assumptions from equation (3.25) were satisfied. Regarding the vertical component, we also underlined in equation (5.10) that the Sturm-Liouville theory enforces the orthogonality w.r.t. the n_p and n_q indices, even when considering both symmetric and antisymmetric basis elements. As a consequence, the basis elements from equations (5.18) and (5.19) form a biorthogonal basis. Our final step is to specify the amplitude A to ensure a correct normalisation. Following the razor-thin calculations from equation (3.27), one straightforwardly gets
A = \sqrt{\frac{G}{R_0\, h\, (k_r^2+(k_z^n)^2)}}\; α_n ,    (5.20)
where we introduced the prefactor 1 ≤ α_n ≤ 1.6 as
α_n = \sqrt{\frac{2}{1+\sin(2 k_z^n h)/(2 k_z^n h)}} .    (5.21)
The epicyclic angle-action mapping from equation (5.4) allows us to compute the Fourier transform of the basis elements w.r.t. the angles, as defined in equation (2.6). Following the razor-thin calculations from equation (3.33), one gets
ψ_m^{[k_φ,k_r,R_0,n]}(J) = δ^{k_φ}_{m_φ}\, δ^{even}_{m_z}\; A\, e^{i k_r R_g}\, i^{m_z-m_r}\, B_{R_0}(R_g)\; \mathcal{J}_{m_r}\!\left[ \sqrt{\tfrac{2J_r}{κ}}\, k_r \right] \mathcal{J}_{m_z}\!\left[ \sqrt{\tfrac{2J_z}{ν}}\, k_z^n \right] ,    (5.22)
where the Bessel functions of the first kind \mathcal{J}_ℓ were introduced in equation (3.32). The antisymmetric analog of equation (5.22) is given in equation (5.105). After having explicitly defined our thickened WKB basis elements, let us now illustrate how one may evaluate the disc's response matrix from equation (2.17).
WKB thick amplification eigenvalues
Following the construction of our thickened WKB basis elements, let us evaluate the system's response matrix, as given by equation (2.17).
WKB response matrix
When considering thin discs, a key result of section 3.4 was to show that in the razor-thin limit, the response matrix computed with WKB basis elements was diagonal. This was an essential result of the derivation which allowed us to obtain in sections 3.5 and 3.6 explicit analytical expressions for the collisionless and collisional diffusion fluxes. As illustrated in the previous section, the thick WKB basis elements have the same in-plane dependence as the razor-thin ones. However, they may in principle interact one with another via their vertical components. In Appendix 5.B, we show that even for a thick disc, with our thick WKB basis elements, one may still assume that the response matrix is diagonal so that
M_{[k_φ^p,k_r^p,R_0^p,n_p],[k_φ^q,k_r^q,R_0^q,n_q]}(ω) = δ^{k_φ^p}_{k_φ^q}\, δ^{k_r^p}_{k_r^q}\, δ^{R_0^p}_{R_0^q}\, δ^{n_p}_{n_q}\; λ_{[k_φ^p,k_r^p,R_0^p,n_p]}(ω) .    (5.23)
Such a property is a crucial step of the present derivation, which allows us to account analytically for the system's self-gravity. Following the calculations presented in section 3.4, one can straightforwardly compute the diagonal elements of the response matrix, using the Fourier transformed WKB basis elements from equation (5.22). The additional integral w.r.t. J_z can also be computed using the integration formula from equation (3.42). As in the razor-thin case, we assume the disc to be sufficiently cold, so that one may neglect the contributions from ∂F/∂J_φ w.r.t. the ones associated with ∂F/∂J_r and ∂F/∂J_z. After some simple algebra, the symmetric amplification eigenvalues read
λ^{sym}_{[k_φ,k_r,R_0,n]}(ω) = \frac{2πGΣ α_n^2}{hκ^2\,(1+(k_z/k_r)^2)} \sum_{ℓ_z\ even} \frac{e^{-χ_z}\, \mathcal{I}_{ℓ_z}[χ_z]}{1-s_{ℓ_z}^2} \left[ \mathcal{F}(s_{ℓ_z}, χ_r) - ℓ_z\, \frac{ν}{σ_z^2}\, \frac{σ_r^2}{κ}\; \mathcal{G}(s_{ℓ_z}, χ_r) \right] .    (5.24)
Similarly to equations (3.43) and (3.45), we introduced the dimensionless quantities χ_r and χ_z as
χ_r = \frac{σ_r^2 k_r^2}{κ^2} ;   χ_z = \frac{σ_z^2 k_z^2}{ν^2} ,    (5.25)
and the shifted dimensionless frequency s_{ℓ_z} as
s_{ℓ_z} = \frac{ω - k_φ Ω_φ - ℓ_z ν}{κ} .    (5.26)
Finally, in equation (5.24), as was already done in equation (3.46), we introduced the reduction functions \mathcal{F} and \mathcal{G} as
\mathcal{F}(s, χ) = 2(1-s^2)\, \frac{e^{-χ}}{χ} \sum_{ℓ=1}^{+∞} \frac{\mathcal{I}_ℓ[χ]}{1-(s/ℓ)^2} ;   \mathcal{G}(s, χ) = 2(1-s^2)\, \frac{e^{-χ}}{χ} \left[ \frac{1}{2}\, \frac{\mathcal{I}_0[χ]}{s} + \frac{1}{s} \sum_{ℓ=1}^{+∞} \frac{\mathcal{I}_ℓ[χ]}{1-(ℓ/s)^2} \right] .    (5.27)
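As an illustration, the reduction functions just defined are simple to evaluate by truncating their infinite Bessel sums. The sketch below is only indicative: the truncation order and the closed-form check at s = 0 are choices of this illustration, and the exponentially scaled Bessel functions are used so that the e^{-χ} prefactor is already included.

```python
import numpy as np
from scipy.special import ive   # ive(l, chi) = exp(-chi) * I_l(chi)

def reduction_F(s, chi, ell_max=200):
    """Reduction function F(s, chi) of equation (5.27), sum truncated at ell_max."""
    ells = np.arange(1, ell_max + 1)
    return 2.0 * (1.0 - s**2) / chi * np.sum(ive(ells, chi) / (1.0 - (s / ells)**2))

def reduction_G(s, chi, ell_max=200):
    """Reduction function G(s, chi) of equation (5.27)."""
    ells = np.arange(1, ell_max + 1)
    series = 0.5 * ive(0, chi) / s + np.sum(ive(ells, chi) / (1.0 - (ells / s)**2)) / s
    return 2.0 * (1.0 - s**2) / chi * series

# consistency check at s = 0, where the sum collapses to F(0, chi) = (1 - e^{-chi} I_0(chi)) / chi
chi = 0.9
print(reduction_F(0.0, chi), (1.0 - ive(0, chi)) / chi)
```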
These two reduction functions are illustrated in figure 5.B.1. Relying on the antisymmetric basis elements from Appendix 5.A, one also obtains the associated antisymmetric amplification eigenvalues as
λ^{anti}_{[k_φ,k_r,R_0,n]}(ω) = \frac{2πGΣ β_n^2}{hκ^2\,(1+(k_z/k_r)^2)} \sum_{ℓ_z\ odd} \frac{e^{-χ_z}\, \mathcal{I}_{ℓ_z}[χ_z]}{1-s_{ℓ_z}^2} \left[ \mathcal{F}(s_{ℓ_z}, χ_r) - ℓ_z\, \frac{ν}{σ_z^2}\, \frac{σ_r^2}{κ}\; \mathcal{G}(s_{ℓ_z}, χ_r) \right] ,    (5.28)
where the prefactor β_n was introduced in equation (5.104). Equations (5.24) and (5.28) are important results as they allow us to easily estimate the strength of the self-gravitating amplification in a thick disc. We will show in section 5.4.2 how these amplification eigenvalues are in full agreement with the razor-thin ones obtained in section 3.4. We will emphasise in particular how these amplification eigenvalues allow us to generalise the razor-thin Toomre's parameter Q from equation (3.49) to the thickened geometry.
When effectively computing the thick amplification eigenvalues from equations (5.24) and (5.28), one has to enforce two additional restrictions. These amount to neglecting the contributions from the vertical action gradients w.r.t. the radial ones, and restricting the sum on resonance vectors only to closed orbits on resonance. Let us now motivate these two assumptions.
The general expression of the response matrix from equation (2.17) involves the gradients, ∂F/∂J , of the system's DF w.r.t. all the actions coordinates. As in the razor-thin case, we may assume the disc to be sufficiently cold, so that one may neglect the contributions from ∂F/∂J φ w.r.t. ∂F/∂J r and ∂F/∂J z . In addition, let us now also neglect the contributions from the vertical gradients w.r.t. the radial ones, the radial ones being the only gradients which remain in the razor-thin limit. In equations (5.24) and (5.28), this amounts to neglecting the reduction function G and conserving only the contributions from the reduction function F.
One should also note that in the diffusion equations, the response matrix eigenvalues always have to be evaluated at resonance. As a consequence, for the collisionless diffusion coefficients D_m(J) from equation (2.32) and the collisional drift and diffusion coefficients A_m(J) and D_m(J) from equations (2.69) and (2.70), the amplification eigenvalues have to be evaluated at the resonant frequency ω = m•Ω. Following equation (5.26), the shifted dimensionless frequency s^m_{ℓ_z} associated with a given resonance m reads
s^m_{ℓ_z} = m_r + (m_z - ℓ_z)\, \frac{ν}{κ} + iη ,    (5.29)
where a small imaginary part η was added. The potential is assumed to be dynamically non-degenerate (see equation (5.60)), so that ν/κ is not a rational number. Consequently, s^m_{ℓ_z}, when evaluated for a resonance m, is an integer only for ℓ_z = m_z. Here, s^m_{ℓ_z} being an integer means that there exists a rotating frame in which the star's orbit is closed, i.e. in which the considered star is exactly on resonance. In the razor-thin case, this was always possible, but this is no longer guaranteed in the thickened case. As illustrated in figure 5.B.1, the reduction function \mathcal{F} diverges in the neighbourhood of integers, but is well defined when evaluated for exactly integer values, as long as one adds a small imaginary part η as in equation (5.29). In order never to probe the diverging branches of this reduction function, one should therefore always evaluate this function for exactly integer values of s. As s^m_{ℓ_z} is an integer only for ℓ_z = m_z, we may restrict the sum on ℓ_z to this single term in the generic expressions (5.24) and (5.28) of the thickened amplification eigenvalues.
When accounting for the two previous approximations, the amplification eigenvalues from equations (5.24) and (5.28), when computed for a resonance m, take the form
λ_m(J_φ, k_r, k_z) = \frac{2πGΣ γ_m^2}{hκ^2\,(1+(k_z/k_r)^2)}\; \frac{e^{-χ_z}\, \mathcal{I}_{m_z}[χ_z]}{1-m_r^2}\; \mathcal{F}(m_r, χ_r) ,    (5.30)
where we introduced the numerical prefactor γ_m as
γ_m(J_φ, k_r, k_z) = \begin{cases} α(J_φ, k_r, k_z) & if\ m_z\ even , \\ β(J_φ, k_r, k_z) & if\ m_z\ odd . \end{cases}    (5.31)
Equation (5.30), because of its generic form, applies to both symmetric and antisymmetric vertical resonances. Let us emphasise that these reduced expressions are fully compatible with the discussions from Appendix 5.B showing that the thickened response matrix is diagonal. These expressions are also consistent with the upcoming section, where we recover the razor-thin WKB amplification eigenvalues and generalise Toomre's Q parameter to thick discs. The applications of the WKB formalisms presented in section 5.7 all rely on the simplified expression of the amplification eigenvalues obtained in equation (5.30).
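For concreteness, the simplified eigenvalue (5.30) can be evaluated as follows for a symmetric resonance (m_z even). This is only a sketch: G is set to 1, the Bessel sum of \mathcal{F} is truncated, a finite η is kept as the Landau prescription of equation (5.29), and all disc quantities at the considered J_φ are passed in by hand; the antisymmetric case only differs through the prefactor β_n of equation (5.104).

```python
import numpy as np
from scipy.special import ive

def alpha_n(k_z, h):
    """Normalisation prefactor alpha_n of equation (5.21)."""
    x = k_z * h
    return np.sqrt(2.0 / (1.0 + np.sin(2.0 * x) / (2.0 * x)))

def lambda_m(m_r, m_z, k_r, k_z, Sigma, kappa, nu, sigma_r, sigma_z, h,
             eta=1e-4, ell_max=200, G=1.0):
    """Amplification eigenvalue of equation (5.30), symmetric case (m_z even)."""
    chi_r = sigma_r**2 * k_r**2 / kappa**2        # equation (5.25)
    chi_z = sigma_z**2 * k_z**2 / nu**2
    s = m_r + 1j * eta                            # resonant frequency, equation (5.29)
    ells = np.arange(1, ell_max + 1)
    # F(s, chi_r)/(1 - s^2), the combination entering equation (5.30)
    F_over_1ms2 = 2.0 / chi_r * np.sum(ive(ells, chi_r) / (1.0 - (s / ells)**2))
    pref = (2.0 * np.pi * G * Sigma * alpha_n(k_z, h)**2
            / (h * kappa**2 * (1.0 + (k_z / k_r)**2)))
    return np.real(pref * ive(m_z, chi_z) * F_over_1ms2)
```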
A thickened Q factor
Following our calculation of the amplification eigenvalues, let us now show how these are in full agreement with the razor-thin results obtained in section 3.4, and also offer a generalisation of Toomre's Q parameter to thick discs. In the razor-thin limit, only resonances with m_z = 0 are allowed, so that only the symmetric basis elements may play a role. Let us note that the symmetric quantisation relation from equation (5.14) is such that, except for the fundamental symmetric frequency k_{z,s}^1, one always has k_{z,s}^n > π/(2h). Because in the razor-thin limit one has h → 0, only the fundamental symmetric mode contributes to the amplification eigenvalue in this limit. In the same infinitely thin limit, one can then get rid of the dependence of λ w.r.t. k_z and evaluate the amplification eigenvalues in (k_r, k_z^1(k_r, h)) = (k_r, \sqrt{k_r/h}), thanks to equation (5.16). Equation (5.30) then becomes
λ(ω, k_φ, k_r, h) = \frac{2πG α_1^2 Σ k_r}{κ^2\,(1+k_r h)}\; \frac{e^{-χ_z}\, \mathcal{I}_0[χ_z]}{1-s^2}\; \mathcal{F}(s, χ_r) ,    (5.32)
where the dimensionless frequency s was introduced in equation (3.45), while the prefactor α_1 was defined in equation (5.21) and is a function of k_{z,s}^1 h = \sqrt{k_r h}. One immediately has lim_thin α_1 = 1. Similarly, χ_z, introduced in equation (5.25), should also be seen as a function of k_r and h, and reads χ_z = (σ_z^2 k_r)/(ν^2 h).
When considering the razor-thin limit, one should keep in mind that the kinematic height of the mean disc σ_z/ν and the size of the sharp WKB cavity h are directly related. Indeed, as detailed in equation (5.74), Jeans equation imposes a relation of the form
\frac{σ_z}{ν} = c_2\, h ,    (5.33)
where c_2 is a dimensionless constant. For a Spitzer profile, as defined in equation (5.71), this constant reads c_2 = 1/\sqrt{2}. One then immediately has χ_z = c_2^2\, k_r h, so that lim_thin χ_z = 0. As a consequence, in the razor-thin limit, equation (5.32) gives
lim_thin λ^{sym} = \frac{2πGΣ|k_r|}{κ^2\,(1-s^2)}\; \mathcal{F}(s, χ_r) ,    (5.34)
as already obtained in equation (3.47). This result underlines how the thick WKB basis elements constructed in section 5.3 are fully consistent with the razor-thin WKB results from section 3.3. Using the thickened disc model considered in the applications of section 5.7, we illustrate in figure 5.4.1 how the thickened WKB amplification eigenvalues tend to the razor-thin ones, as one reduces the thickness of the disc.
Figure 5.4.1: Thickened WKB amplification eigenvalues λ(k_r), following equation (5.30), for various disc thicknesses z_0. For z_0 = 0, i.e. for the razor-thin case, λ_thin(k_r) is computed from equation (3.47). As expected, the thickening of the disc tends to reduce its gravitational susceptibility.
Starting from the asymptotic expression (5.32) of the symmetric amplification eigenvalues in the limit of a thinner disc, let us now study how it impacts the value of the razor-thin Toomre's Q parameter from equation (3.49), as one accounts for the disc's finite thickness, i.e. for a non-zero value of h. We are interested in the system's stability w.r.t. axisymmetric tightly wound perturbations, so that we impose k_φ = 0. Placing ourselves at the stability limit ω = 0 (so that s = 0), we seek a criterion on the disc's parameters such that there exists no k_r > 0 for which λ(k_r, h) = 1, i.e. such that the disc is stable. Let us then rewrite equation (5.32) as
λ(k_r, h) = \frac{2πGΣ k_r}{κ^2}\, \mathcal{F}(0, χ_r)\, \frac{α_1^2}{1+k_r h}\, e^{-χ_z}\, \mathcal{I}_0[χ_z] ≃ \frac{2πGΣ k_r}{κ^2}\, \mathcal{F}(0, χ_r)\, \left[ 1 - \left( \tfrac{2}{3}+c_2^2 \right) k_r h \right] = \frac{2πGΣ}{κ σ_r}\, K(χ_r, γ) ,    (5.35)
where, to obtain the second equality of equation (5.35), we performed a first-order expansion in k_r h of α_1 and χ_z. In the last equality, in order to shorten the notations, we introduced γ = (2/3+c_2^2)\, hκ/σ_r, as well as the function K(χ_r, γ) as
K(χ_r, γ) = \frac{1}{\sqrt{χ_r}}\, \left( 1 - e^{-χ_r}\, \mathcal{I}_0[χ_r] \right) \left( 1 - γ \sqrt{χ_r} \right) .    (5.36)
This function is the direct thickened analog of the razor-thin function K_0(χ_r) introduced in equation (3.48) to derive the razor-thin Q parameter. Figure 5.4.2 illustrates the shape of the function χ_r → K(χ_r, γ). In order to obtain a simple expression for the thickened stability parameter, our next step is to study the behaviour of K_max(γ), the maximum of the function χ_r → K(χ_r, γ), as one varies γ.
As already obtained in equation (3.48), for γ = 0, one has K^0_max ≃ 0.534, reached for χ^0_max ≃ 0.948. A first-order expansion in γ then allows us to write
K_max(γ) ≃ K^0_max\, \left( 1 - γ \sqrt{χ^0_max} \right) ≃ K^0_max\, e^{-γ\sqrt{χ^0_max}} = K^{approx.}_{max}(γ) ,    (5.37)
where, in the second step, we replaced the linear approximation by an exponential, which offers a better fit. The shapes of the functions γ → K_max(γ) and γ → K^{approx.}_{max}(γ) are illustrated in figure 5.4.3. Requiring the disc to be stable w.r.t. all tightly wound axisymmetric perturbations then amounts to the condition Q_thick > 1, where the thickened stability parameter Q_thick reads
Q_thick = Q_thin\, e^{γ\sqrt{χ^0_max}} = Q_thin\, \exp\!\left[ 1.61\, \frac{σ_z/ν}{σ_r/κ} \right] ,    (5.38)
where we used equation (5.33) to rewrite h as a function of σ z /ν, given the value c 2 = 1/ √ 2 for a Spitzer profile. We also wrote Q thin for the razor-thin Toomre's parameter from equation (3.49). One can note that equation (5.38) was obtained through a rather general procedure allowing for the computation of the response matrix eigenvalues using the thickened WKB basis elements. Let us emphasise that this calculation is not specific to the Spitzer profile from equation (5.71). Should one consider a different mean vertical density profile, one would only have to change accordingly the value of the constant c 2 from equation (5.33), which relates the thickness of the mean density profile to the size of the sharp cavity from equation (5.11). A follow-up work of the present derivation will be to investigate via numerical simulations the relevance and quality of this new stability parameter to characterise instabilities in thickened stellar discs.
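The numbers entering equation (5.38) are easy to reproduce numerically. The following minimal sketch (all numerical choices are illustrative) maximises K(χ_r, γ) from equation (5.36), compares it with the exponential approximation of equation (5.37), and recovers the coefficient 1.61 for a Spitzer profile.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import ive

def K(chi, gamma):
    """Function K(chi_r, gamma) of equation (5.36)."""
    return (1.0 - ive(0, chi)) / np.sqrt(chi) * (1.0 - gamma * np.sqrt(chi))

def K_max(gamma):
    """Numerical maximum of chi -> K(chi, gamma)."""
    res = minimize_scalar(lambda chi: -K(chi, gamma), bounds=(1e-3, 10.0), method="bounded")
    return -res.fun

K0_max, chi0_max = K_max(0.0), 0.948   # razor-thin values quoted after equation (3.48)
for gamma in (0.0, 0.1, 0.2, 0.4):
    print(gamma, K_max(gamma), K0_max * np.exp(-gamma * np.sqrt(chi0_max)))   # equation (5.37)

# coefficient of equation (5.38) for a Spitzer profile: c_2 = 1/sqrt(2), h = sqrt(2) sigma_z / nu
c2 = 1.0 / np.sqrt(2.0)
print((2.0 / 3.0 + c2**2) * np.sqrt(chi0_max) * np.sqrt(2.0))   # ~ 1.61
```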
Let us finally discuss how this new Q_thick parameter compares to previous results. Vandervoort (1970) tackled in particular the similar issue of characterising tightly wound density waves in thickened stellar discs. See also Romeo (1992) for another generalisation of Q to the thickened geometry. The approach of Vandervoort (1970) relied on the collisionless Boltzmann equation limited to even perturbations. It also relied on the assumption of the existence of the adiabatic invariant J_z, thanks to which the vertical motion of the stars may be described. With our current notation, equation (77) of Vandervoort (1970) gives amplification eigenvalues of the form
λ_V = \frac{2πGΣ|k_r|}{κ^2\,(1-s^2)}\, \mathcal{F}(s, χ_r)\; Q_V^{-1}(k_r h) ,    (5.39)
where we used equation (5.11) to relate h and z_0. In equation (5.39), Q_V(k_r h) is a non-trivial function, which can be computed via implicit variational principles. Similarly, in our present thickened WKB formalism, the analog of equation (5.39) is given by equation (5.32) and takes the form
λ_F = \frac{2πGΣ|k_r|}{κ^2\,(1-s^2)}\, \mathcal{F}(s, χ_r)\; Q_F^{-1}(k_r h) ,    (5.40)
where the function Q_F(k_r h) is an explicit function reading
Q_F(k_r h) = \frac{1+k_r h}{α_1^2\; e^{-χ_z}\, \mathcal{I}_0[χ_z]} .    (5.41)
Figure 5.4.4: Comparisons of the correction function Q_V from equation (5.39), obtained in Vandervoort (1970), with the explicit function Q_F from equation (5.40), obtained thanks to the thickened WKB approximation. The behaviour of Q_V was obtained from Table 1 in Vandervoort (1970), which provides various approximations of increasing order, Q_V^(0), Q_V^(1), and Q_V^(2). Despite being obtained from significantly different methods, these two approaches lead to similar results.
Thanks to Table 1 in Vandervoort (1970), which offers approximate values for the function x → Q_V(x), we may compare the functions Q_V and Q_F, as illustrated in figure 5.4.4. We note that the behaviours of the two functions are very similar on the considered range 0 ≤ k_r h ≤ 5, even if they were obtained through different approaches.
WKB limit for the collisionless diffusion
Having characterised the WKB self-gravitating amplification in thickened discs, let us now proceed to the evaluation of the diffusion coefficients involved in the collisionless diffusion equation (2.31). We follow an approach similar to section 3.5. Let us first write the thick WKB basis elements from equation (5.6) as
ψ^{(p)} = ψ^{[k_φ^p, k_r^p, R_0^p, n_p]} .    (5.42)
Relying on equation (5.23) to write the response matrix as M_pq = λ_p δ_p^q, the collisionless diffusion coefficients from equation (2.32) become
D_m(J) = \frac{1}{2} \sum_{p,q} ψ^{(p)}_m(J)\, ψ^{(q)*}_m(J)\; \frac{1}{1-λ_p}\, \frac{1}{1-λ_q}\; C_{pq}(m•Ω) .    (5.43)
Because the Fourier transformed basis elements ψ^{(p)}_m and ψ^{(q)}_m are evaluated for the same resonance vector m, the diffusion coefficients do not couple symmetric and antisymmetric basis elements. Therefore, in order to estimate D_m(J), depending on whether m_z is even (resp. odd), one only has to consider the symmetric (resp. antisymmetric) basis elements. As was done in section 5.3, we restrict ourselves to the symmetric case, while the antisymmetric case will be straightforward to obtain by direct analogy. In equation (5.43), C_{pq} is the autocorrelation matrix of the coefficients \widehat{b}_p(ω) of the external perturbations, which read
\widehat{b}_p(ω) = \frac{(k_r^p)^2+(k_z^p)^2}{4πG}\; \frac{A_p\, R_0^p}{(πσ^2)^{1/4}}\; (2π)^2\; \widehat{δψ^e}_{m_φ,k_r^p,k_z^p}[R_0^p, ω] ,    (5.44)
where we used the shortened notation k p z = k np z . Following equation (3.55), here in equation (5.44), δ ψ e has undergone three transformations: (i) an azimuthal Fourier transform of indice m φ , (ii) a local radial Fourier transform centred around R p 0 at the frequency k p r , and (iii) an even restricted vertical Fourier transform on the scale h at the frequency k p z . These three transformations are defined as
(i): f_{m_φ} = \frac{1}{2π} \int dφ\; f[φ]\, e^{-i m_φ φ} ,
(ii): f_{k_r}[R_0] = \frac{1}{2π} \int dR\; e^{-i k_r (R-R_0)}\, \exp\!\left[ -\frac{(R-R_0)^2}{2σ^2} \right] f[R] ,
(iii): f_{k_z} = \int_{-h}^{+h} dz\; \cos(k_z z)\, f[z] .    (5.45)
Following again equation (3.57), one may now disentangle the sums on p and q in equation ( 5.43), so that the collisionless diffusion coefficients become (5.46) where we introduced the function g(ω) as
D sym m (J ) = δ even mz 1 2π dω g(m•Ω) g * (ω ) ,
g(ω) = 2π 2h k p r ,R p 0 ,np g s (k p r , R p 0 , k p z , ω) e ik p r (Rg-R p 0 ) G r (R g -R p 0 ) .
(5.47)
In equation ( 5.47), we executed the sum on k p φ thanks to the azimuthal Kronecker delta from equation (5.22). Here, as in equation (3.58), we also introduced G r (R) = 1/ √ 2πσ 2 e -R 2 /(2σ 2 ) a normalised radial Gaussian, and g s encompasses all the slow dependences of the diffusion coefficients w.r.t. the radial position so that
g s (k p r , R p 0 , k p z , ω) = J mr 2Jr κ k p r J mz 2Jz ν k p z α 2 p 1-λ p δ ψ e m φ ,k p r ,k p z [R p 0 , ω] .
(5.48)
Let us emphasise the strong similarities between equation (5.48) and its razor-thin analog from equation (3.59). Following the same method as in the razor-thin case, we rely on Riemann sum formula to rewrite equation (5.47) with continuous integrals w.r.t. R p 0 and k p r using the critical step distances from equation (3.61). Equation (5.47) becomes
g(ω) = 1 2h np dk p r g s (k p r , R g , k p z , ω) , (5.49)
where one can note that there still remains a discrete sum on the index n p . At this stage, to make further progress in the calculations, two strategies are possible. On the one hand, one may assume that the disc is sufficiently thick so that one can replace the sum on n p in equation ( 5.49) by a continuous integral over k z . On the other hand, in the limit of a thinner disc, as the quantised frequencies k z tend to be further apart (see figure 5.3.2), one should keep the discrete sum from equation (5.49). In the upcoming calculations, we stick to the first approach and aim for continuous expressions. In Appendix 5.C, we will follow the second approach, show that these two approaches are in full agreement and fully consistent with the razor-thin results obtained in section 3.5.
As noted in equation (5.17), for a sufficiently thick disc, one can assume that the distance between two successive quantised k z frequencies is of order ∆k z π/h. Assuming that ∆k z is sufficiently small compared to the scale of variation of the function k z → g s (k z ), let us rely once again on Riemann sum formula to rewrite equation (5.49) as
g(ω) = 1 2π dk p r dk p z g s (k p r , R g , k p z , ω) .
(5.50)
Following equation (3.63), we introduce C δψ e the autocorrelation of the external perturbations as
C δψ e [m φ , ω, R g , k p r , k q r , k p z , k q z ] = 1 2π dω δ ψ e m φ ,k p r ,k p z [R g , ω] δ ψ e * m φ ,k q r ,k q z [R g , ω ] ,
(5.51) so that the diffusion coefficients from equation (5.46) become
D sym m (J ) = δ even mz 1 (2π) 2 dk p r dk p z J mr 2Jr κ k p r J mz 2Jz ν k p z α 2 p 1-λ p × dk q r dk q z J mr 2Jr κ k q r J mz 2Jz ν k q z α 2 q 1-λ q C δψ e [m φ , m•Ω, R g , k p r , k q r , k p z , k q z ] .
(5.52)
The antisymmetric equivalent of equation ( 5.52) is straightforward to obtain via the substitutions δ even mz → δ odd mz and α p/q → β p/q . In addition, one should also pay attention to the fact that the autocorrelation C δψ e should be computed slightly differently in the antisymmetric context. Indeed, as the antisymmetric basis elements from equation (5.105) possess an odd vertical dependence, the even restricted Fourier transform from equation (5.45) should be replaced by an odd restricted vertical Fourier transform defined as (iii):
f kz = +h -h dz sin(k z z) f [z] .
(5.53)
Finally, in equation ( 5.52), notice that the integrations on k p z and k q z should only be performed for k z ≥ k 1 z , i.e. for k z larger than the associated fundamental mode, as illustrated in figure 5.3.2.
Following equation (3.66), let us now further simplify equation ( 5.52) by assuming some additional properties on the stochasticity of the external perturbations. In analogy with equation (3.66), we suppose that the external perturbations, δψ e , are spatially quasi-stationary so that
δψ e m φ [R 1 , z 1 , t 1 ] δψ e * m φ [R 2 , z 2 , t 2 ] = C[m φ , t 1 -t 2 , (R 1 +R 2 )/2, R 1 -R 2 , z 1 +z 2 , z 1 -z 2 ] ,
(5.54)
where the dependences w.r.t. (R 1 +R 2 )/2 and z 1 +z 2 are supposed to be slow. Thanks to some simple algebra (see Appendix G of Fouvry et al. (2016c)), one can show that
δ ψ e m φ ,k 1 r ,k 1 z [R g , ω 1 ] δ ψ e * m φ ,k 2 r ,k 2 z [R g , ω 2 ] = 2π 2 δ D (ω 1 -ω 2 ) δ D (k 1 r -k 2 r ) δ D (k 1 z -k 2 z ) C[m φ , ω 1 , R g , k 1 r , k 1 z ] ,
(5.55) where in analogy with equation (3.67), C[...] has been transformed three times, according to a temporal Fourier transform as defined in equation (2.9), according to a local radial Fourier transform as in equation (5.45) of spread √ 2σ w.r.t. R 1 -R 2 in the neighbourhood of R 1 -R 2 = 0 and (R 1 +R 2 )/2 = R g , and finally according to an even restricted vertical Fourier transform as in equation (5.45) w.r.t. z 1 -z 2 in the neighbourhood of z 1 -z 2 = 0 and z 1 +z 2 = 0. In equation (5.55), the autocorrelation of the external perturbation was therefore diagonalised w.r.t. ω, k r , and k z , so that the diffusion coefficients from equation (5.52) become
D sym m (J ) = δ even mz π (2π) 2 dk p r dk p z J 2 mr 2Jr κ k p r J 2 mz 2Jz ν k p z α 2 p 1-λ p 2 C[m φ , m•Ω, R g , k p r , k p z ] .
(5.56)
The antisymmetric analog of equation (5.56) is straightforward to obtain thanks to the substitutions δ^{even}_{m_z} → δ^{odd}_{m_z} and α_p → β_p. Finally, despite the fact that one is considering antisymmetric diffusion coefficients, it is important to note that C still has to be transformed according to an even-restricted vertical Fourier transform, see Appendix G of Fouvry et al. (2016c) for details. The explicit expression (5.56) of the collisionless diffusion coefficients is the main result of this section. It presents close similarities with equation (3.68), short of an extra integral along vertical k_z modes modulated by an extra Bessel function.
As in equation (3.69), one may further simplify equation (5.56) by relying on the approximation of the small denominators, for which one focuses on the tightly wound waves which yield the maximum self-gravitating amplification. Let us therefore assume that the function (k_r, k_z) → λ(k_r, k_z) reaches in its domain a well-defined maximum λ_max(R_g, ω) for (k_r, k_z) = (k_r^max, k_z^max). Let us then define the domain of maximum amplification V_max = { (k_r, k_z) | λ(k_r, k_z) ≥ λ_max/2 } and its associated area |V_max|. Equation (5.56) can then be approximated as
D^{sym}_m(J) = δ^{even}_{m_z}\; \frac{π |V_{max}|}{(2π)^2}\; \mathcal{J}^2_{m_r}\!\left[ \sqrt{\tfrac{2J_r}{κ}}\, k_r^{max} \right] \mathcal{J}^2_{m_z}\!\left[ \sqrt{\tfrac{2J_z}{ν}}\, k_z^{max} \right] \left[ \frac{α_{max}^2}{1-λ_{max}} \right]^2 C[m_φ, m•Ω, R_g, k_r^{max}, k_z^{max}] ,    (5.57)
while the associated antisymmetric diffusion coefficients are straightforward to obtain by direct analogy. One can also improve the previous approximation by instead performing the integrations from equation (5.56) for (k_r, k_z) ∈ V_max. Such a calculation ensures a better estimation of the diffusion flux, while being more numerically demanding. This does not alter the results obtained in the applications presented in section 5.7.
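A minimal sketch of equation (5.57) is given below. The quantities describing the maximum of the amplification (k_r^max, k_z^max, λ_max, α_max, |V_max|) are assumed to come from a prior scan of the eigenvalue of equation (5.30) over (k_r, k_z), and C_value stands for the measured autocorrelation of the external perturbations at the resonant frequency; both are inputs of this illustration.

```python
import numpy as np
from scipy.special import jv

def D_m_small_denominators(m_r, m_z, J_r, J_z, kappa, nu,
                           k_max_r, k_max_z, lam_max, alpha_max, V_max_area, C_value):
    """Collisionless diffusion coefficient of equation (5.57), symmetric case (m_z even)."""
    pref = np.pi * V_max_area / (2.0 * np.pi)**2
    bessels = (jv(m_r, np.sqrt(2.0 * J_r / kappa) * k_max_r)**2
               * jv(m_z, np.sqrt(2.0 * J_z / nu) * k_max_z)**2)
    gain = (alpha_max**2 / (1.0 - lam_max))**2
    return pref * bessels * gain * C_value
```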
WKB limit for the collisional diffusion
Relying on the thick WKB amplification eigenvalues obtained in equation (5.30) and following the same approach as in the previous section, let us now evaluate the collisional drift and diffusion coefficients of the Balescu-Lenard equation (2.67).
Following the notations from equation (5.42) and separating the contributions from the symmetric and antisymmetric basis elements, the dressed susceptibility coefficients from equation (2.50) take the form
1 D m1,m2 (J 1 , J 2 , ω) = p ψ s,(p) m1 (J 1 ) ψ s,(p) * m2 (J 2 ) 1-λ s p (ω) + ψ a,(p) m1 (J 1 ) ψ a,(p) * m2 (J 2 ) 1-λ a p (ω) , (5.58)
where the superscripts "s" and "a" respectively correspond to the symmetric and antisymmetric basis elements. We showed in equation (5.22) (resp. (5.105)) that the Fourier transformed WKB basis elements involve an azimuthal Kronecker symbol δ k p φ m φ , as well as a δ even mz (resp. δ odd mz ) for the symmetric (resp. antisymmetric) basis elements. As a consequence, in equation (5.58) in order to have non-vanishing susceptibility coefficients, one must necessarily have
m φ 1 = m φ 2 = k φ and (m z 1 -m z 2 ) even . (5.59)
Because m z 1 and m z 2 must be of the same parity, one concludes that the susceptibility coefficients do not mix up symmetric and antisymmetric basis elements. As a consequence, depending on the parity of m z 1 , one can restrict oneself only to the symmetric elements or only to the antisymmetric ones.
Let us now focus on one crucial consequence of the WKB approximation, which is the restriction to local resonances. As already noted in equation (3.73), one technical difficulty of the Balescu-Lenard equation is to deal with the resonance condition m 1 •Ω 1 -m 2 •Ω 2 = 0. For a given value of J 1 , m 1 , and m 2 , one has to identify the resonant radii R r 2 for which the resonance condition is satisfied. In our case, one important simplification comes from the thickened epicyclic approximation, thanks to which the orbital frequencies Ω = (Ω φ , κ, ν) depend only on J φ . As in equation (3.75), we also assume here that the disc's mean potential is dynamically non-degenerate so that
d(m 2 •Ω) dR R r 2 = 0 .
(5.60)
Following the notations from equation (3.76), the resonance condition takes the form
m φ 1 Ω 1 φ + m r 1 κ 1 + m z 1 ν 1 = m φ 1 Ω r φ + m r 2 κ r + m z 2 ν r , (5.61)
where we used the notation Ω_φ^1 = Ω_φ(R_1) and Ω_φ^r = Ω_φ(R_2^r). We also relied on equation (5.59) to impose m_1^φ = m_2^φ. Because the Fourier transformed basis elements from equations (5.22) and (5.105) involve the narrow radial Gaussian B_{R_0}, the resonant radii R_2^r are necessarily close to R_1, so that |ΔR| = |R_2^r - R_1| ≲ (a few) σ.
Similarly to equation (3.77), the resonance condition from equation (5.61) may be rewritten as
\left[ m_2^φ \frac{dΩ_φ}{dR} + m_2^r \frac{dκ}{dR} + m_2^z \frac{dν}{dR} \right] ΔR = \left( m_1^r - m_2^r \right) κ_1 + \left( m_1^z - m_2^z \right) ν_1 .    (5.62)
In the l.h.s. of equation (5.62), the term within brackets is non-zero as a result of our assumption from equation (5.60) that the disc's mean potential is dynamically non-degenerate, while ΔR is small because of the scale decoupling approach used in the construction of the WKB basis elements. The r.h.s. of equation (5.62) should be seen as discrete, in the sense that it is the sum of a multiple of κ and of ν. For a disc not too thick, one expects to have ν ≫ κ. In addition, we showed in equation (5.59) that (m_1^z - m_2^z) has to be an even number. As a consequence, for (m_1^z - m_2^z) ≠ 0, one has
\left| \left( m_1^z - m_2^z \right) ν_1 \right| ≥ 2 ν_1 ≫ \left| m_1^r - m_2^r \right| κ_1 ,    (5.63)
provided that the resonance vectors m_1 and m_2 are of small order. In this situation, the l.h.s. of equation (5.62) is therefore small, while its r.h.s. is of order ν_1. Equation (5.62) therefore imposes m_1^z = m_2^z. Equation (5.62) then takes the exact same form as the razor-thin equation (3.77). We follow the same argument and therefore conclude that the thick WKB basis elements impose that only local resonances are allowed, so that
R_2^r = R_1 ;   m_1^r = m_2^r ;   m_1^z = m_2^z .    (5.64)
This is an essential result of the WKB approximation, which enables us to pursue the analytical evaluation of the dressed susceptibility coefficients.
Restricting ourselves to the cases R 2 = R 1 and m 2 = m 1 , and using the expression of the Fourier transformed basis elements from equation (5.22), the symmetric susceptibility coefficients from equation (5.58) now read
1 D m1,m1 = k p r ,R p 0 ,np G R p 0 h 1 (k p r ) 2 +(k p z ) 2 1 √ πσ 2 exp - (R 1 -R p 0 ) 2 σ 2 α 2 p 1-λ p (ω) × J m r 1 2J 1 r κ1 k p r J m r 1 2J 2 r κ1 k p r J m z 1 2J 1 z ν1 k p z J m z 1 2J 2 z ν1 k p z , (5.65)
where we introduced the shortening notations
1/D m1,m1 = 1/D m1,m1 (R 1 ,J 1 r ,J 1 z ,R 1 ,J 2 r ,J 2 z , ω), as well as κ 1 = κ(R 1 ), ν 1 = ν(R 1 ), and k p z = k np z .
One should also note that the sum on k p φ was already executed thanks to the constraint from equation (5.59). Following the same approach as in the collisionless case, let us replace the sums on k p r and R p 0 by continuous expressions. Equation (5.65) becomes
1 D m1,m1 = G 2πR 1 h np dk r 1 k 2 r +(k p z ) 2 α 2 p 1-λ p (ω) × J m r 1 2J 1 r κ1 k r J m r 1 2J 2 r κ1 k r J m z 1 2J 1 z ν1 k p z J m z 1 2J 2 z ν1 k p z .
(5.66)
In equation (5.66), there still remains a sum on the vertical index n p . As already described in equation (5.49) for the collisionless case, one may follow two possible strategies to complete the evaluation of the susceptibility coefficients. If the disc is sufficiently thick, one can replace the sum on n p by a continuous integral over k z . In the limit of a thin disc, one should however keep the discrete sum in equation (5.66). In the next calculations, let us follow the first continuous approach. The second approach is presented in Appendix 5.C, and we show once again that these two approaches are fully consistent one with another, and that they also allow for the recovery of the razor-thin results from section 3.6. Using the vertical step distance ∆k z π/h from equation (5.17) and assuming that the function in the r.h.s. of equation (5.66) vary on scales larger than ∆k z , one may use once again Riemann sum formula to rewrite equation (5.66) as
$$ \frac{1}{D_{\boldsymbol{m}_1,\boldsymbol{m}_1}} = \frac{G}{2\pi^2 R_1} \int\!\mathrm{d}k_r\,\mathrm{d}k_z\, \frac{1}{k_r^2 + k_z^2}\, \frac{\alpha_{k_r,k_z}^2}{1 - \lambda_{k_r,k_z}(\omega)} \times \mathcal{J}_{m_r^1}\!\left[\sqrt{\frac{2 J_r^1}{\kappa_1}}\, k_r\right] \mathcal{J}_{m_r^1}\!\left[\sqrt{\frac{2 J_r^2}{\kappa_1}}\, k_r\right] \mathcal{J}_{m_z^1}\!\left[\sqrt{\frac{2 J_z^1}{\nu_1}}\, k_z\right] \mathcal{J}_{m_z^1}\!\left[\sqrt{\frac{2 J_z^2}{\nu_1}}\, k_z\right]. \qquad (5.67) $$
Such an explicit expression of the dressed susceptibility coefficients constitutes the main result of this section. Equation (5.67) relates the gravitational susceptibility of the disc to known analytic functions of its actions via simple regular quadratures. Following equation ( 5.57), one may further simplify equation (5.67) by relying on the approximation of the small denominators. It becomes
$$ \frac{1}{D_{\boldsymbol{m}_1,\boldsymbol{m}_1}} = \frac{G}{2\pi^2 R_1}\, \frac{|\mathcal{V}_{\max}|}{(k_r^{\max})^2 + (k_z^{\max})^2}\, \frac{\alpha_{\max}^2}{1 - \lambda_{\max}} \times \mathcal{J}_{m_r^1}\!\left[\sqrt{\frac{2 J_r^1}{\kappa_1}}\, k_r^{\max}\right] \mathcal{J}_{m_r^1}\!\left[\sqrt{\frac{2 J_r^2}{\kappa_1}}\, k_r^{\max}\right] \mathcal{J}_{m_z^1}\!\left[\sqrt{\frac{2 J_z^1}{\nu_1}}\, k_z^{\max}\right] \mathcal{J}_{m_z^1}\!\left[\sqrt{\frac{2 J_z^2}{\nu_1}}\, k_z^{\max}\right]. \qquad (5.68) $$
This approximation can be improved by rather performing the integrations in equation (5.67) for (k r , k z ) ∈ V max . This approach allows for a more precise determination of the diffusion flux but is more numerically demanding. Such an improved approximation does not alter the principal conclusions drawn in the upcoming applications. Finally, for m z 1 odd, the antisymmetric analogs of the previous expressions of the dressed susceptibility coefficients are straightforward to obtain thanks to the substitution α → β.
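For concreteness, the small-denominator estimate of equation (5.68) lends itself to a very simple numerical implementation. The following Python sketch is only illustrative: the properties of the maximally amplified mode ($k_r^{\max}$, $k_z^{\max}$, $\alpha_{\max}$, $\lambda_{\max}$, and the area $|\mathcal{V}_{\max}|$) are assumed to have been extracted beforehand from the amplification eigenvalues $\lambda(k_r,k_z)$, and the function name and signature are ours, not part of the formalism.

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind, J_n

def inv_D_small_denominators(R1, Jr1, Jz1, Jr2, Jz2, mr1, mz1,
                             kr_max, kz_max, alpha_max, lam_max, Vmax_area,
                             kappa1, nu1, G=1.0):
    """Sketch of the susceptibility coefficient 1/D_{m1,m1} of equation (5.68)."""
    prefactor = (G / (2.0 * np.pi**2 * R1)
                 * Vmax_area / (kr_max**2 + kz_max**2)
                 * alpha_max**2 / (1.0 - lam_max))
    bessels = (jv(mr1, np.sqrt(2.0 * Jr1 / kappa1) * kr_max)
               * jv(mr1, np.sqrt(2.0 * Jr2 / kappa1) * kr_max)
               * jv(mz1, np.sqrt(2.0 * Jz1 / nu1) * kz_max)
               * jv(mz1, np.sqrt(2.0 * Jz2 / nu1) * kz_max))
    return prefactor * bessels
```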
As a final step, let us now compute the Balescu-Lenard drift and diffusion coefficients from equations (2.69) and (2.70). Restricting oneself only to local resonances and using the shortened notation from equation (3.82), one can write the collisional drift coefficients as
$$ A_{\boldsymbol{m}_1}(\boldsymbol{J}_1) = -\,\frac{8\pi^4 \mu}{(\boldsymbol{m}_1\cdot\boldsymbol{\Omega}_1)} \int\!\mathrm{d}J_r^2\,\mathrm{d}J_z^2\; \frac{\boldsymbol{m}_1\cdot\partial F/\partial\boldsymbol{J}\,(J_\phi^1, J_r^2, J_z^2)}{\big| D_{\boldsymbol{m}_1,\boldsymbol{m}_1}(\boldsymbol{J}_1, \boldsymbol{J}_2, \boldsymbol{m}_1\cdot\boldsymbol{\Omega}_1) \big|^2}\,, \qquad (5.69) $$
while the diffusion coefficients are given by
$$ D_{\boldsymbol{m}_1}(\boldsymbol{J}_1) = \frac{8\pi^4 \mu}{(\boldsymbol{m}_1\cdot\boldsymbol{\Omega}_1)} \int\!\mathrm{d}J_r^2\,\mathrm{d}J_z^2\; \frac{F(J_\phi^1, J_r^2, J_z^2)}{\big| D_{\boldsymbol{m}_1,\boldsymbol{m}_1}(\boldsymbol{J}_1, \boldsymbol{J}_2, \boldsymbol{m}_1\cdot\boldsymbol{\Omega}_1) \big|^2}\,. \qquad (5.70) $$
In equations (5.69) and (5.70), the susceptibility coefficients are given by equation (5.67), or even equation (5.68) when using the approximation of the small denominators. In particular, because of the restriction to local resonances, they always have to be evaluated for J 2 φ = J 1 φ . Note that in the case where the DF is a quasi-isothermal DF as in equation (5.5), and where the susceptibility coefficients are computed via the approximation of the small denominators from equation (5.68), the integrations w.r.t. J 2 r and J 2 z in equations (5.69) and (5.70) may be computed explicitly (see Appendix C of Fouvry et al. (2015b) for an illustration in the razor-thin limit). The simple and tractable expressions of the collisional drift and diffusion coefficients obtained in equations (5.69) and (5.70) constitute an important result of this section. Let us finally insist on the fact that the application of the thick WKB approximation to the Balescu-Lenard equation is self-contained and that no ad hoc fittings were required. Except for the explicit calculation of the thickened amplification eigenvalues in equation (5.30), the previous calculations are not limited to the quasi-isothermal DF from equation (5.5). The collisional drift and diffusion coefficients from equations (5.69) and (5.70) are valid for any tepid disc's DF, provided that one can rely on the epicyclic angle-action mapping from equation (5.4).
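Because the integrands of equations (5.69) and (5.70) are explicit, the remaining double quadratures over $(J_r^2, J_z^2)$ can be evaluated with elementary numerical tools. The sketch below assumes the integrands have been tabulated on a regular grid at the local resonance $J_\phi^2 = J_\phi^1$; the overall prefactor (including $\mu$ and the frequency factor) is passed in as a single number, and all names are illustrative.

```python
import numpy as np

def drift_and_diffusion(Jr2, Jz2, mdFdJ_loc, F_loc, inv_D_sq, prefactor):
    """Crude Riemann-sum evaluation of the quadratures of equations (5.69)-(5.70).

    mdFdJ_loc : m_1 . dF/dJ (J_phi^1, J_r^2, J_z^2) on the (Jr2, Jz2) grid.
    F_loc     : F (J_phi^1, J_r^2, J_z^2) on the same grid.
    inv_D_sq  : |1/D_{m1,m1}|^2 on the same grid (squared susceptibility).
    """
    dJr, dJz = Jr2[1] - Jr2[0], Jz2[1] - Jz2[0]
    A_m1 = -prefactor * np.sum(mdFdJ_loc * inv_D_sq) * dJr * dJz   # drift, eq. (5.69)
    D_m1 = prefactor * np.sum(F_loc * inv_D_sq) * dJr * dJz        # diffusion, eq. (5.70)
    return A_m1, D_m1
```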
Application to disc thickening
Let us now implement the previous thick WKB collisionless and collisional diffusion equations in order to investigate the various resonant processes at play during the secular evolution of a thick stellar disc. In section 5.7.1, we present the considered thick disc model. In section 5.7.2, we show how our formalism allows us to recover qualitatively the secular formation of vertical resonant ridges observed in the numerical experiments of Solway et al. (2012). Sections 5.7.3 and 5.7.4 respectively consider the associated diffusion timescales as well as the secular in-plane diffusion. In section 5.7.5, we consider the mechanism of disc thickening via the resonant diffusion induced by central decaying bars. Finally, in section 5.7.6, we show how one can account for the joint evolution of giant molecular clouds (GMCs) and how they hasten the secular diffusion. Let us first describe the considered disc model.
A thickened disc model
In order to set up a model of thickened stellar disc, we follow the model recently considered in Solway et al. (2012) (hereafter So12). We follow specifically the numerical parameters from their simulation UCB, keeping only the most massive of its two components. This simulation is particularly relevant in the context of secular diffusion, as it dealt with an unperturbed isolated stable thick stellar disc. On secular timescales, this thick disc developed spontaneously sequences of transient spirals and only on the very long-term a central bar. This disc should be seen as a thickened version of the razor-thin Mestel disc presented in section 3.7.1. Let us start from the razor-thin surface density of the Mestel disc $\Sigma_M$ introduced in equation (3.85). Then, assuming a given vertical profile shape, one can thicken $\Sigma_M$ to construct a density $\rho_M$. Indeed, let us define the 3D density $\rho_M(R, z)$ as
$$ \rho_M(R, z) = \Sigma_M(R)\, \frac{1}{4 z_0(R)}\, \mathrm{sech}^2\!\left[\frac{z}{2 z_0(R)}\right]. \qquad (5.71) $$
Equation (5.71) corresponds to a Spitzer profile (Spitzer, 1942), where we introduced the local thickness $z_0$ of the disc. It satisfies $\int\!\mathrm{d}z\, \rho_M(R, z) = \Sigma_M(R)$. This profile corresponds to an isothermal vertical distribution, i.e. a vertical statistical and thermodynamical equilibrium. Let us note that at this stage, one could have used alternative vertical profiles, e.g., exponential. The results presented thereafter can straightforwardly be applied to other vertical profiles, provided one adapts accordingly the relations between $h$, $z_0$, and $\sigma_z/\nu$ obtained in equations (5.11) and (5.74). After having constructed the system's total density, one can numerically determine the associated thickened potential $\psi_M$ given by $\psi_M(\boldsymbol{x}) = -\int\!\mathrm{d}\boldsymbol{x}_1\, G\,\rho_M(\boldsymbol{x}_1)/|\boldsymbol{x}-\boldsymbol{x}_1|$.
Thanks to the disc's axisymmetry, this can be rewritten as
$$ \psi_M(R, z) = \int\!\mathrm{d}R_1\,\mathrm{d}z_1\; \frac{-4\, G\, R_1\, \rho_M(R_1, z_1)}{\sqrt{(R-R_1)^2 + (z-z_1)^2}}\; F_{\mathrm{ell}}\!\left[\frac{\pi}{2},\, -\frac{4 R R_1}{(R-R_1)^2 + (z-z_1)^2}\right], \qquad (5.72) $$
where $F_{\mathrm{ell}}[\phi, m] = \int_0^\phi \mathrm{d}\phi'\, [1 - m \sin^2(\phi')]^{-1/2}$ is the elliptic integral of the first kind. Thanks to the numerical calculation of the thickened total potential $\psi_M$, one may then rely on the epicyclic approximation from section 5.2 to construct the mapping $R_g \to J_\phi$, as well as the three intrinsic frequencies $\Omega_\phi$, $\kappa$ and $\nu$. Once these elements are determined, the angle-action mapping from equation (5.4) is fully characterised. For a sufficiently thin disc, one expects these mappings to be close to the ones obtained in equations (3.86) and (3.87) for the razor-thin case.
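In practice, the quadrature of equation (5.72) can be carried out either via the elliptic integral or, equivalently, by performing the azimuthal integral it encodes by brute force. The following sketch takes the second route on a deliberately crude mesh; the grids, the Mestel-like surface density and the scale height are illustrative stand-ins for the actual model of section 5.7.1.

```python
import numpy as np

def psi_M(R, z, R1_grid, z1_grid, rho_M, G=1.0, n_phi=256):
    """Sketch of the axisymmetric potential of equation (5.72) by direct quadrature.

    The azimuthal integral encoded by the elliptic integral F_ell is here performed
    numerically; rho_M is tabulated on the (R1_grid, z1_grid) mesh.
    """
    dR1 = R1_grid[1] - R1_grid[0]
    dz1 = z1_grid[1] - z1_grid[0]
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    psi = 0.0
    for i, R1 in enumerate(R1_grid):
        for j, z1 in enumerate(z1_grid):
            d2 = R**2 + R1**2 - 2.0 * R * R1 * np.cos(phi) + (z - z1)**2
            ring = 2.0 * np.pi * np.mean(1.0 / np.sqrt(d2))   # azimuthal integral of 1/|x - x_1|
            psi += -G * R1 * rho_M[i, j] * ring * dR1 * dz1
    return psi

# Crude usage example with a Mestel-like Spitzer profile (illustrative parameters)
R1 = np.linspace(0.1, 15.0, 40)
z1 = np.linspace(-2.0, 2.0, 21)
z0 = 0.26
rho = (1.0 / R1[:, None]) / (4.0 * z0) / np.cosh(z1[None, :] / (2.0 * z0))**2
print(psi_M(8.0, 0.0, R1, z1, rho))
```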
Let us emphasise that the equilibrium value of the vertical velocity dispersion $\sigma_z$ is directly constrained by the thickened mean density profile $\rho_M$. Indeed, the one-dimensional vertical Jeans equation (see, e.g., equation (4.271) in Binney & Tremaine, 2008) imposes
$$ \frac{\partial (\rho_M \sigma_z^2)}{\partial z} = -\rho_M\, \frac{\partial \psi_M}{\partial z}\,, \qquad (5.73) $$
where we assume that σ z only depends on R. Differentiating once this relation w.r.t. z and evaluating it at z = 0, one immediately gets for a Spitzer profile the relation
$$ \frac{\sigma_z(R)}{\nu(R)} = \sqrt{2}\, z_0(R)\,. \qquad (5.74) $$
As a consequence, once the scale height of the disc z 0 and the vertical frequency ν have been determined, the value of the velocity dispersion σ z immediately follows from the constraint of vertical equilibrium.
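The relation (5.74) can also be checked symbolically. The short sketch below only uses the harmonic expansion of the vertical potential near the mid-plane, $\psi_M \simeq \nu^2 z^2/2$, which is all that enters the derivative of the Jeans equation evaluated at $z = 0$.

```python
import sympy as sp

z, z0, sigz, nu = sp.symbols('z z_0 sigma_z nu', positive=True)

rho = sp.sech(z / (2 * z0))**2        # Spitzer profile, up to its normalisation
psi = nu**2 * z**2 / 2                # harmonic expansion of the vertical potential near z = 0

# Vertical Jeans equation (5.73): d(rho sigma_z^2)/dz + rho dpsi/dz = 0.
# Differentiate once more and evaluate at z = 0, as in the text.
residual = sp.diff(sp.diff(rho * sigz**2, z) + rho * sp.diff(psi, z), z).subs(z, 0)
print(sp.solve(sp.Eq(residual, 0), sigz))   # -> [sqrt(2)*nu*z_0], i.e. equation (5.74)
```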
The previous determinations of the system's intrinsic frequencies required the use of the system's total potential $\psi_M$. However, here we are interested in the dynamics of the active component of the disc, the stars, whose surface density $\Sigma_{\mathrm{star}}$ is only one component of the total surface density $\Sigma_M$. As was done in equations (3.90) and (3.91) in the razor-thin case, we introduce two taper functions $T_{\mathrm{inner}}$ and $T_{\mathrm{outer}}$ to deal with the inner singularity and the infinite extent of the system, as well as an active fraction $\xi$, so that $\Sigma_{\mathrm{star}}$ is given by equation (3.92), and is illustrated in figure 5.7.1. Using the same units as So12's UCB simulation, our numerical parameters are given by $V_0 = G = R_i = 1$, $R_o = 15$, $\nu_t = 4$, $\mu_t = 6$, $R_{\max} = 25$, $\sigma_r = 0.227$, and $\xi = 0.4$. Finally, to mimic So12's vertical profile, we use for the Spitzer profile from equation (5.71) a constant scale height given by $z_0 = 0.26$. One can also straightforwardly estimate the total active mass of the system as $M_{\mathrm{tot}} = 5.8$. Using these numerical parameters, the shape of the quasi-isothermal DF $F_{\mathrm{star}}$ from equation (5.5) is illustrated in figure 5.7.2.
It is important to note that So12's simulation was limited to the harmonic sector $0 \leq m_\phi \leq 8$, except $m_\phi = 1$ to avoid decentring effects. In our analysis, in order to clarify and simplify the dynamical mechanisms at play, we impose an even more drastic limitation to the potential perturbations, and we restrict ourselves only to $m_\phi = 2$. In addition to this azimuthal restriction, all our analyses are also limited to only 9 different resonance vectors $\boldsymbol{m} = (m_\phi, m_r, m_z)$. Indeed, we assume $m_\phi = 2$, $m_r \in \{-1, 0, 1\}$ and $m_z \in \{-1, 0, 1\}$. Among these resonances, one can identify the corotation resonance (COR) as $\boldsymbol{m} = (2, 0, 0)$, the radial (resp. vertical) inner Lindblad resonance (rILR) (resp. vILR) as $\boldsymbol{m} = (2, -1, 0)$ (resp. $\boldsymbol{m} = (2, 0, -1)$), and the radial (resp. vertical) outer Lindblad resonance (rOLR) (resp. vOLR) as $\boldsymbol{m} = (2, 1, 0)$ (resp. $\boldsymbol{m} = (2, 0, 1)$). Having computed the intrinsic frequencies $\boldsymbol{\Omega}$ and specified the considered resonance vectors $\boldsymbol{m}$, one can study the behaviour of the resonance frequencies $\omega = \boldsymbol{m}\cdot\boldsymbol{\Omega}$ as a function of the position within the disc, as illustrated in figure 5.7.3. These frequencies correspond to the frequencies for which the amplification eigenvalues and the perturbation autocorrelation from equation (5.56) have to be evaluated.
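The bookkeeping of these nine resonance vectors and of their associated resonance frequencies $\omega = \boldsymbol{m}\cdot\boldsymbol{\Omega}$ is straightforward, as the following sketch illustrates. The frequency profiles used here are illustrative placeholders (a flat rotation curve with an assumed vertical frequency), not the profiles computed from the thickened potential $\psi_M$.

```python
import numpy as np

# The nine resonance vectors m = (m_phi, m_r, m_z) retained in the analysis
resonances = [(2, mr, mz) for mr in (-1, 0, 1) for mz in (-1, 0, 1)]
labels = {(2, 0, 0): 'COR', (2, -1, 0): 'rILR', (2, 1, 0): 'rOLR',
          (2, 0, -1): 'vILR', (2, 0, 1): 'vOLR'}

def resonance_frequency(m, Omega_phi, kappa, nu):
    """omega = m . Omega for a resonance vector m = (m_phi, m_r, m_z)."""
    m_phi, m_r, m_z = m
    return m_phi * Omega_phi + m_r * kappa + m_z * nu

# Illustrative frequency profiles (flat rotation curve, V_0 = 1; nu is a placeholder)
Rg = np.linspace(1.0, 10.0, 200)
Omega_phi = 1.0 / Rg
kappa = np.sqrt(2.0) * Omega_phi      # epicyclic frequency of a Mestel-like disc
nu = 4.0 * Omega_phi                  # assumed vertical frequency profile

for m in resonances:
    omega = resonance_frequency(m, Omega_phi, kappa, nu)
    print(labels.get(m, str(m)), float(omega[0]), float(omega[-1]))
```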
When simulating the previous quasi-stationary and stable thick disc on secular timescales, So12 (private communication) observed sequences of transient spirals within the disc, which on the long-term led to an irreversible diffusion in action space of the system's DF, and especially to a thickening of the stellar disc. In order to probe such diffusion features, one can consider the marginal distribution of vertical action J z as a function of the guiding radius R g within the disc. To do so, let us define the function
$F_Z(R_g, J_z, t)$ as
$$ F_Z(R_g, J_z, t) = \int\!\mathrm{d}\boldsymbol{\theta}'\,\mathrm{d}\boldsymbol{J}'\; \delta_D(R_g - R_g')\, \delta_D(J_z - J_z')\, F(\boldsymbol{J}', t) = (2\pi)^3\, \frac{\mathrm{d}J_\phi}{\mathrm{d}R_g} \!\int\!\mathrm{d}J_r\; F(R_g, J_r, J_z, t)\,. \qquad (5.75) $$
Following equations (2.34) and (2.72) and rewriting both collisionless and collisional diffusion equations as ∂F/∂t = div(F tot ), one can straightforwardly estimate the time variations of F Z as
$$ \frac{\partial F_Z}{\partial t} = (2\pi)^3\, \frac{\mathrm{d}J_\phi}{\mathrm{d}R_g} \!\int\!\mathrm{d}J_r\; \mathrm{div}(\boldsymbol{\mathcal{F}}_{\mathrm{tot}})(R_g, J_r, J_z, t)\,. \qquad (5.76) $$
Equation (5.76) can be rewritten as the divergence of a flux $\boldsymbol{\mathcal{F}}_Z = (\mathcal{F}_Z^\phi, \mathcal{F}_Z^z)$ defined in the $(J_\phi, J_z)$-plane, so that
$$ \frac{\partial F_Z(J_\phi, J_z)}{\partial t} = \left( \frac{\partial}{\partial J_\phi}, \frac{\partial}{\partial J_z} \right) \!\cdot\! \boldsymbol{\mathcal{F}}_Z = \frac{\partial \mathcal{F}_Z^\phi}{\partial J_\phi} + \frac{\partial \mathcal{F}_Z^z}{\partial J_z}\,, \qquad (5.77) $$
where we introduced the flux components (F φ Z , F z Z ) as
$$ \mathcal{F}_Z^\phi = (2\pi)^3 \!\int\!\mathrm{d}J_r\; \mathcal{F}_{\mathrm{tot}}^\phi(J_\phi, J_r, J_z) \;;\quad \mathcal{F}_Z^z = (2\pi)^3 \!\int\!\mathrm{d}J_r\; \mathcal{F}_{\mathrm{tot}}^z(J_\phi, J_r, J_z)\,. \qquad (5.78) $$
In equation (5.78), we introduced the components of the total diffusion flux $\boldsymbol{\mathcal{F}}_{\mathrm{tot}} = (\mathcal{F}_{\mathrm{tot}}^\phi, \mathcal{F}_{\mathrm{tot}}^r, \mathcal{F}_{\mathrm{tot}}^z)$ in the $(J_\phi, J_r, J_z)$ space.
Figure 5.7.4: Contours are spaced linearly between 95% and 5% of the function maximum. The red curve gives the mean value of $J_z$ for a given $R_g$. Right panel: same as in the left panel but at a much later stage of the evolution, $t = 3500$. In the inner regions of the disc, one clearly notes the formation on secular timescales of a narrow ridge of enhanced vertical actions $J_z$.
Comparing the two panels of figure 5.7.4, one can indubitably note the spontaneous formation on secular times of a narrow ridge of enhanced vertical actions in the inner regions of the disc, characterised by an increase of the mean value of the vertical actions in these regions. Such a feature is the direct vertical equivalent of what was observed in the radial direction in the razor-thin simulations presented in figure 3.7.5. This ridge is a signature of the spontaneous thickening of the disc sourced by its intrinsic shot noise. Let us now investigate in section 5.7.2 how the thickened WKB limits of the collisionless and collisional diffusion equations obtained in sections 5.5 and 5.6 allow us to explain such a feature.
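Numerically, the marginalisation of equations (5.75)-(5.78) is a single sum over the $J_r$ grid. The sketch below assumes the flux divergence has been tabulated on a regular $(R_g, J_r, J_z)$ mesh and follows the conventions written above; the array shapes and names are ours.

```python
import numpy as np

def dFZ_dt(div_F_tot, dJphi_dRg, Jr):
    """Sketch of equation (5.76): marginalise div(F_tot) over the radial action.

    div_F_tot  : array of shape (n_Rg, n_Jr, n_Jz), flux divergence on the grid.
    dJphi_dRg  : array of shape (n_Rg,), Jacobian of the mapping R_g -> J_phi.
    Jr         : 1D radial-action grid (assumed regular).
    """
    dJr = Jr[1] - Jr[0]
    return (2.0 * np.pi)**3 * dJphi_dRg[:, None] * np.sum(div_F_tot, axis=1) * dJr
```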
Shot noise driven resonant disc thickening
In order to compute the diffusion fluxes associated with the collisionless and collisional diffusion equations, the first step is to study the properties of the system's self-gravity. To do so, let us consider the amplification eigenvalues $\lambda(k_r, k_z)$ from equation (5.30), thanks to which one may perform the approximation of the small denominators. For a given position $J_\phi$ and a given resonance vector $\boldsymbol{m}$, we illustrate in figure 5.7.5 the behaviour of the function $(k_r, k_z) \to \lambda(k_r, k_z)$. Such a behaviour allows us to identify a region $\mathcal{V}_{\max}(\boldsymbol{m}, J_\phi)$ of maximum amplification over which the integrations on $k_r$ and $k_z$ may be performed in equations (5.56) and (5.67). Figure 5.7.6 illustrates the importance of the system's self-gravity by representing the behaviour of the function $J_\phi \to 1/(1-\lambda_{\max}(\boldsymbol{m}, J_\phi))$ for different resonance vectors $\boldsymbol{m}$. Following our characterisation of the disc's amplification, let us now compute in turn the induced collisionless diffusion (section 5.7.2.1) as well as the collisional one (section 5.7.2.2), and investigate whether such approaches are able to recover the secular formation of vertical resonant ridges observed in figure 5.7.4 via direct $N$-body simulations.
Figure 5.7.5: Illustration of the behaviour of the amplification function $(k_r, k_z) \to \lambda(k_r, k_z)$ as obtained in equation (5.30), for $\boldsymbol{m} = \boldsymbol{m}_{\mathrm{COR}}$ and $J_\phi = 1.5$. We recall that the diffusion coefficients generically require to compute the amplification eigenvalues at the local intrinsic frequency $\omega = \boldsymbol{m}\cdot\boldsymbol{\Omega}$. Contours are spaced linearly between 90% and 10% of the function maximum $\lambda_{\max}$. The grey domain corresponds to the region $\mathcal{V}_{\max} = \{(k_r, k_z)\,|\,\lambda(k_r, k_z) \geq \lambda_{\max}/2\}$. This is the region on which the integrations for the approximation of the small denominators will be performed as in equations (5.57) and (5.68). One can finally note that here the maximum of amplification lies along the line $k_z = k_z^1(k_r)$, i.e. along the line of the minimum quantised frequency $k_z$, see figure 5.3.2.
Figure 5.7.6: Illustration of the dependence of the amplification factor 1/(1-λmax(m, J φ )), as given by equation (5.30), for various resonances m as a function of the position within the disc given by J φ . One can note that the amplification associated with the corotation (COR) is always larger than the ones associated with the other resonances. As expected from the taper functions of equation (3.90), self-gravity is turned off in the inner and outer regions of the disc.
Collisionless forced thickening
As a first approach to understanding the formation of the vertical ridge observed in figure 5.7.4, let us rely on the WKB limit of the collisionless diffusion equation obtained in section 5.5. So12 considered an isolated disc, so that in order to use our collisionless formalism, one should assume some form for the perturbation power spectrum $C[m_\phi, \omega, R_g, k_r, k_z]$ that sources equation (5.56). Following the same approximation as the one considered in the razor-thin equation (3.93), let us assume that the source of perturbation comes from the system's internal Poisson shot noise due to the finite number of stars. In the galactic context, such perturbations could also mimic the perturbations by compact gas clouds within the disc (see section 5.7.6). With such a Poisson shot noise, the intrinsic potential fluctuations vary like $\delta\psi^{\mathrm{e}} \propto \sqrt{\Sigma_{\mathrm{star}}}$. For simplicity, we only keep the dependence w.r.t. $R_g$ and neglect any dependence w.r.t. $\omega$, $k_r$ and $k_z$ in the autocorrelation $C$ from equation (5.56). Up to a normalisation, let us therefore assume that the autocorrelation of the external perturbations takes the simple form
$$ C[m_\phi, \omega, R_g, k_r, k_z] = \delta_{m_\phi}^{2}\, \Sigma_{\mathrm{star}}(R_g)\,. \qquad (5.79) $$
As discussed in the end of section 5.7.1, we restrict potential perturbations to the sole harmonic sector m φ = 2, and the same restriction applies to C, hence the Kronecker symbol δ 2 m φ . Of course, one should keep in mind that Poisson shot noise is not per se an external perturbation, as it is induced by the disc's constituents themselves. In order to account in a more rigourous and self-consistent way for these intrinsic finite-N effects, one has to rely on the inhomogeneous Balescu-Lenard equation. This will be the focus of section 5.7.2.2. In equation (5.79), having no dependence w.r.t. ω implies in particular that for a given location in the disc, all resonances undergo the same perturbations, even if they are not associated with the same local resonant frequencies ω = m•Ω.
Relying on the previous estimation of the disc's amplification eigenvalues and on our assumption for the perturbation power spectrum, one can compute the collisionless diffusion coefficients from equation (5.57) and the associated collisionless diffusion flux F tot from equation (2.34). The initial time variations of F Z from equation (5.76) can then be estimated. The initial contours of ∂F Z /∂t t=0 are illustrated in figure 5.7.7. In this figure, one qualitatively recovers the formation of a resonant ridge of increased vertical actions in the inner region of the disc, as observed in So12. This illustrates how the Poisson shot noise induced by the finite number of particles and approximated by equation (5.79), can indeed be the source of a secular disc thickening. This qualitative agreement between the numerical measurements from figure 5.7.4 and the collisionless WKB predictions from figure 5.7.7 is impressive considering the various approximations required to obtain the thickened WKB limit of the collisionless diffusion equation.
Figure 5.7.7: Red contours, for which $\partial F_Z/\partial t|_{t=0} < 0$, are associated with regions from which the particles will be depleted and are spaced linearly between 90% and 10% of the function minimum. Blue contours, for which $\partial F_Z/\partial t|_{t=0} > 0$, are associated with regions where the number of orbits will increase during the diffusion and are spaced linearly between 90% and 10% of the function maximum. The background contours illustrate the initial contours of $F_Z(t=0)$, spaced linearly between 95% and 5% of the function maximum and computed for the quasi-isothermal DF from equation (5.5).
Relying on the same collisionless approach, let us briefly investigate how the disc's gravitational susceptibility may impact its secular dynamics. To do so, let us consider the effect of varying the fraction of mass in the disc, by changing the value of the active fraction $\xi$ (see equation (3.91)). The dependence of the system's collisionless response with $\xi$ is illustrated in figure 5.7.8. As expected, as one increases the disc's self-gravity, the dressing of the perturbations gets stronger, which subsequently hastens the orbital diffusion.
Collisional thickening
In the previous section, we investigated how the WKB collisionless diffusion equation could explain the vertical ridge observed in figure 5.7.4. This essentially relied on treating the intrinsic Poisson shot noise as an external perturbation, via equation (5.79). In order to account in a self-consistent manner for these internal and self-induced perturbations, one should rely on the WKB Balescu-Lenard equation derived in section 5.6. Thanks to the previous estimation of the disc's amplification eigenvalues, one can straightforwardly compute the disc's dressed susceptibility coefficients given by equation (5.68). One may then compute the collisional drift and diffusion coefficients from equations (5.69) and (5.70) and the associated total collisional diffusion flux F tot . As the particles' mass scales like µ = M tot /N , let us rather consider the quantity N F tot which is independent of N . As defined in equation ( 5.77), one can then compute the collisional diffusion flux |N F Z | in the (J φ , J z )-plane. We illustrate in figure 5.7.9 the initial contours of |N F Z |(t = 0). In figure 5.7.9, one can note that the diffusion flux N F Z is maximum in the disc's inner region. Let us note that both figures 5.7.7 and 5.7.9, which were obtained respectively in a collisionless or collisional approach, are in qualitative agreement and both predict an increase of the vertical actions in the inner regions as was observed in direct numerical simulations. The crude assumption for the Poisson shot noise in equation (5.79) used with the collisionless diffusion equation allowed us to mimic the results from the collisional Balescu-Lenard formalism, for which the spectral properties of the internal Poisson shot noise are self-consistently accounted for.
Figure 5.7.8: Illustration of the dependence of the system's collisionless secular response to Poisson shot noise, as one varies the disc's active fraction ξ. The units for the vertical axis were rescaled to clarify the presentation. The blue line corresponds to the maximum value of div(F Z), while the red line corresponds to the minimum value of div(F Z). The larger the disc's active fraction ξ, the stronger the disc's gravitational susceptibility and therefore the faster the diffusion. For ξ 0.8, the disc becomes dynamically unstable. See figure 2 in Weinberg (1993) for a similar illustration of the crucial role of collective effects in accelerating orbital diffusion.
Figure 5.7.9: Illustration of the norm of the collisional diffusion flux |N F Z|(t = 0) in the (J φ , Jz)-plane, as predicted by the thickened WKB limit of the Balescu-Lenard equation. The blue contours are spaced linearly between 90% and 10% of the maximum norm. The background contours correspond to the initial contours of FZ(t = 0) spaced linearly between 95% and 5% of the function maximum and computed for the quasi-isothermal DF from equation (5.5). One can clearly note the presence of an enhanced diffusion flux in the inner regions of the disc, compatible with the localised increase in vertical actions observed in figure 5.7.4.
Vertical kinetic heating
In order to better assess the secular increase in vertical actions induced by finite-$N$ effects, let us now consider the associated increase in the disc's vertical velocity dispersion. Indeed, from observations, disc thickening is best probed by considering the evolution of the vertical velocity dispersion $\varsigma_z^2(R_g, t) = \langle v_z^2 \rangle(R_g, t)$, which can be computed as
$$ \varsigma_z^2(R_g, t) = \frac{\int\!\mathrm{d}\boldsymbol{\theta}'\,\mathrm{d}\boldsymbol{J}'\; \delta_D(R_g - R_g')\, F(\boldsymbol{J}', t)\, (v_z')^2}{\int\!\mathrm{d}\boldsymbol{\theta}'\,\mathrm{d}\boldsymbol{J}'\; \delta_D(R_g - R_g')\, F(\boldsymbol{J}', t)} = \nu(R_g)\, \frac{\int\!\mathrm{d}J_r\,\mathrm{d}J_z\; F(R_g, J_r, J_z, t)\, J_z}{\int\!\mathrm{d}J_r\,\mathrm{d}J_z\; F(R_g, J_r, J_z, t)}\,, \qquad (5.80) $$
where to obtain the second equality, we relied on the epicyclic approximation from equation (5.4), which gives $v_z^2 = 2 J_z \nu \sin^2(\theta_z)$. For $t = 0$, the system's DF is given by the quasi-isothermal DF from equation (5.5) and one recovers $\varsigma_z^2(R_g, t=0) = \sigma_z^2(R_g)$. The initial time derivative of $\varsigma_z^2$ can also be computed. It reads
$$ \frac{\partial \varsigma_z^2}{\partial t}\bigg|_{t=0} = \nu\; \frac{\displaystyle \int\!\mathrm{d}J_r\,\mathrm{d}J_z\; J_z\, \frac{\partial F}{\partial t}\bigg|_{t=0} - \frac{\sigma_z^2}{\nu} \int\!\mathrm{d}J_r\,\mathrm{d}J_z\; \frac{\partial F}{\partial t}\bigg|_{t=0}}{\displaystyle \int\!\mathrm{d}J_r\,\mathrm{d}J_z\; F(t=0)}\,, \qquad (5.81) $$
where $\partial F/\partial t = \mathrm{div}(\boldsymbol{\mathcal{F}}_{\mathrm{tot}})$ is given by the diffusion equations (either collisionless or collisional). Finally, as $\partial \varsigma_z^2/\partial t = 2\varsigma_z\, \partial\varsigma_z/\partial t$, one can compute the expected increase in the vertical velocity dispersion $\varsigma_z$ resulting from the disc's intrinsic Poisson shot noise. This is illustrated in figure 5.7.10, where we represent $\varsigma_z(R_g, t) \simeq \sigma_z(R_g) + t\, \partial\varsigma_z/\partial t|_{t=0}$ as predicted by both collisionless and collisional formalisms.
Figure 5.7.10: Illustration of the increase in the vertical velocity dispersion $\varsigma_z$ induced by the intrinsic Poisson shot noise. Left panel: prediction for the collisionless WKB diffusion equation from section 5.5, when approximating Poisson shot noise with equation (5.79). For $t=0$, one has $\varsigma_z(R_g, t=0) = \sigma_z(R_g)$, while for later times (here $\Delta T$ is an arbitrary timestep), we relied on the approximation $\varsigma_z(R_g, t) \simeq \sigma_z(R_g) + t\, \partial\varsigma_z/\partial t|_{t=0}$ and on equation (5.81). Right panel: same as the left panel but for the thickened WKB limit of the collisional Balescu-Lenard equation derived in section 5.6. Here $\Delta\tau_{\mathrm{WKB}}$ is a timescale introduced in section 5.7.3.
As was observed in figures 5.7.7 and 5.7.9, the vertical velocity dispersion also shows that the most significant increase occurs in the inner regions of the disc. This illustrates once again how the self-induced Poisson shot noise can indeed be the source of a disc thickening on secular timescales. Such a mechanism is qualitatively captured by both collisionless and collisional WKB diffusion equations. These qualitative agreements are all the more impressive in view of the various assumptions introduced throughout the derivations to obtain analytical and explicit expressions for both collisionless and collisional diffusion fluxes. Recall finally that a crucial strength of the Balescu-Lenard formalism is that it is self-contained and does not involve any ad hoc fittings of the system's perturbations. Following the calculation of the induced collisional increase in $\varsigma_z$ presented in the right panel of figure 5.7.10, one may now compare the typical timescale of collisional diffusion predicted by the thick WKB Balescu-Lenard equation with the one observed in numerical simulations. This is the purpose of the next section.
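The moments of equations (5.80) and (5.81) reduce to ratios of quadratures over the $(J_r, J_z)$ grid, so the predicted early-time growth of $\varsigma_z$ can be sketched as follows. The grids and the finite-difference estimate of $\partial\varsigma_z^2/\partial t$ (equivalent to equation (5.81) at first order) are illustrative choices.

```python
import numpy as np

def varsigma_z_squared(F, Jr, Jz, nu_Rg):
    """Equation (5.80): varsigma_z^2(R_g) = nu(R_g) <J_z>, with F of shape (n_Rg, n_Jr, n_Jz)."""
    dJr, dJz = Jr[1] - Jr[0], Jz[1] - Jz[0]
    num = np.sum(F * Jz[None, None, :], axis=(1, 2)) * dJr * dJz
    den = np.sum(F, axis=(1, 2)) * dJr * dJz
    return nu_Rg * num / den

def varsigma_z_of_t(F0, dFdt0, Jr, Jz, nu_Rg, t):
    """varsigma_z(R_g, t) ~ sigma_z(R_g) + t d(varsigma_z)/dt|_0, cf. equations (5.80)-(5.81)."""
    var0 = varsigma_z_squared(F0, Jr, Jz, nu_Rg)                    # sigma_z^2 at t = 0
    var_eps = varsigma_z_squared(F0 + 1e-3 * dFdt0, Jr, Jz, nu_Rg)  # small first-order update of F
    dvar_dt = (var_eps - var0) / 1e-3                               # finite-difference version of (5.81)
    return np.sqrt(var0) + t * dvar_dt / (2.0 * np.sqrt(var0))
```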
Diffusion timescale
Our previous estimations of the collisional diffusion flux N F Z now allow us to compare the timescale of appearance of the ridge predicted by the thickened WKB Balescu-Lenard equation with the time during which So12's simulation was performed. Following section 3.7.3, let us therefore compare the rescaled times of diffusion ∆τ , as defined in equation (3.95).
The right panel of figure 5.7.4 was obtained in So12's simulation after a time $\Delta t_{\mathrm{So12}} = 3500$ in a simulation with $N = 2\times10^{5}$ particles. As a consequence, the vertical ridge was observed in So12 after a rescaled time $\Delta\tau_{\mathrm{So12}} = \Delta t_{\mathrm{So12}}/N \simeq 2\times10^{-2}$. When looking at the mean evolution of $J_z$ in figure 5.7.4, one can note that during the rescaled time $\Delta\tau_{\mathrm{So12}}$, the mean vertical action in the inner region of the disc was approximately doubled. This time may then be compared with the typical time necessary to lead to a similar increase via the Balescu-Lenard equation. The epicyclic approximation from section 5.2 immediately gives $v_z^2 = 2\nu J_z \sin^2(\theta_z)$, so that $\varsigma_z^2 = \nu \langle J_z \rangle$. Hence, doubling the mean vertical action $\langle J_z \rangle$ amounts to multiplying the vertical velocity dispersion $\varsigma_z$ by $\sqrt{2}$. The right panel of figure 5.7.10 gives us that such an increase of $\varsigma_z$ is reached after a rescaled time $\Delta\tau_{\mathrm{WKB}} \simeq 10^{3}$. Comparing the numerically measured rescaled time $\Delta\tau_{\mathrm{So12}}$ to the thick WKB Balescu-Lenard predictions, one therefore gets
$$ \frac{\Delta\tau_{\mathrm{So12}}}{\Delta\tau_{\mathrm{WKB}}} \simeq 2\times10^{-5}\,. \qquad (5.82) $$
Note that the disagreement obtained here between the measured and the predicted timescales is even larger than what was obtained in equation (3.97) in the razor-thin case, when considering radial diffusion in razor-thin stellar discs. The initial timescale discrepancy from equation (3.97) was resolved in equation ( 4.31) by resorting to a global evaluation of the Balescu-Lenard diffusion flux. We subsequently showed in section 4.3.3 that this discrepancy was caused by the incompleteness of the WKB basis, which cannot correctly capture swing amplification (illustrated in figure 3.7.14), the strong amplification of unwinding perturbations. The present thickened WKB formalism suffers from the same flaws, as illustrated in the timescale mismatch from equation (5.82). Even if the lack of any loosely wound contributions to the disc's susceptibility leads to such a significant mismatch, the diffusion features recovered in figures 5.7.9 and 5.7.10 illustrate however how the thickened WKB limit of the Balescu-Lenard equation still allows for an explicit qualitative description of the long-term evolution of discrete self-gravitating thick discs induced by their intrinsic Poisson shot noise.
Radial migration
Let us now detail how the previous results are also in agreement with what was presented in section 3.7.2 in the context of razor-thin discs. In order to study the diffusion in the (R g , J r )-plane, similarly to equation (5.75), let us define the function F R (R g , J r , t) as
$$ F_R(R_g, J_r, t) = \int\!\mathrm{d}\boldsymbol{\theta}'\,\mathrm{d}\boldsymbol{J}'\; \delta_D(R_g - R_g')\, \delta_D(J_r - J_r')\, F(\boldsymbol{J}', t) = (2\pi)^3\, \frac{\mathrm{d}J_\phi}{\mathrm{d}R_g} \!\int\!\mathrm{d}J_z\; F(R_g, J_r, J_z, t)\,. \qquad (5.83) $$
As in equation (5.76), the time derivative of F R reads
$$ \frac{\partial F_R}{\partial t} = (2\pi)^3\, \frac{\mathrm{d}J_\phi}{\mathrm{d}R_g} \!\int\!\mathrm{d}J_z\; \mathrm{div}(\boldsymbol{\mathcal{F}}_{\mathrm{tot}})(R_g, J_r, J_z, t)\,. \qquad (5.84) $$
Similarly to equation (5.77), the associated diffusion in the $(J_\phi, J_r)$-plane is straightforwardly captured by the flux $\boldsymbol{\mathcal{F}}_R = (\mathcal{F}_R^\phi, \mathcal{F}_R^r)$, with
$$ \frac{\partial F_R(J_\phi, J_r)}{\partial t} = \left( \frac{\partial}{\partial J_\phi}, \frac{\partial}{\partial J_r} \right) \!\cdot\! \boldsymbol{\mathcal{F}}_R = \frac{\partial \mathcal{F}_R^\phi}{\partial J_\phi} + \frac{\partial \mathcal{F}_R^r}{\partial J_r}\,, \qquad (5.85) $$
where the flux components $(\mathcal{F}_R^\phi, \mathcal{F}_R^r)$ read
$$ \mathcal{F}_R^\phi = (2\pi)^3 \!\int\!\mathrm{d}J_z\; \mathcal{F}_{\mathrm{tot}}^\phi(J_\phi, J_r, J_z) \;;\quad \mathcal{F}_R^r = (2\pi)^3 \!\int\!\mathrm{d}J_z\; \mathcal{F}_{\mathrm{tot}}^r(J_\phi, J_r, J_z)\,. \qquad (5.86) $$
As in equation (5.78), we introduced here the total diffusion flux $\boldsymbol{\mathcal{F}}_{\mathrm{tot}} = (\mathcal{F}_{\mathrm{tot}}^\phi, \mathcal{F}_{\mathrm{tot}}^r, \mathcal{F}_{\mathrm{tot}}^z)$ in the $(J_\phi, J_r, J_z)$ space.
Because it is marginalised over J z , the function F R allows us to get rid of the vertical dependence of the diffusion. It mimics the razor-thin measurements presented in section 3.7.2.
Relying on the shot noise perturbation from equation (5.79), figure 5.7.11 illustrates the initial contours of $\partial F_R/\partial t|_{t=0}$ predicted by the thickened WKB limit of the collisionless diffusion equation. In figure 5.7.11, one predicts the formation in the $(R_g, J_r)$-plane of a narrow ridge of resonant orbits in the inner regions of the disc along the direction of the rILR. One therefore recovers the same feature as observed in the razor-thin figure 3.7.9.
Figure 5.7.11: Illustration of the initial contours of $\partial F_R/\partial t|_{t=0}$ from equation (5.84), computed via the WKB collisionless diffusion equation from section 5.5, when considering a secular forcing sourced by Poisson shot noise approximated with equation (5.79). We use the same conventions as in figure 5.7.7. The background contours illustrate the initial contours of $F_R(t=0)$. They are spaced linearly between 95% and 5% of the function maximum and are computed for the quasi-isothermal DF from equation (5.5). This figure should be compared to figure 3.7.9 corresponding to the razor-thin case, for which we also recovered the formation of a narrow ridge of increased radial actions in the inner region of the disc along the direction of the rILR.
Similarly, one can perform the same predictions by relying on the thickened WKB limit of the collisional Balescu-Lenard equation. This is illustrated in figure 5.7.12, where we represent the initial contours of $|N \boldsymbol{\mathcal{F}}_R|(t=0)$. Even with the collisional approach, one also recovers the formation of an inner narrow ridge of radial diffusion aligned with the rILR resonance. This is in qualitative agreement with what was observed in the razor-thin figure 3.7.13. These results illustrate once again how the razor-thin and thickened WKB formalisms are indeed in agreement, as emphasised in Appendix 5.C.
Thickening induced by bars
In order to investigate other possible mechanisms of secular thickening, let us now consider a different source of perturbations driving the WKB collisionless diffusion coefficients from equation (5.57). Rather than focusing on the effect of Poisson shot noise, let us now study the secular effect of a stochastic series of central bars on the disc thickness. We therefore assume that the autocorrelation C of the external perturbations takes the form
$$ C[m_\phi, \omega, R_g, k_r, k_z] = \delta_{m_\phi}^{m_p}\, A_b(R_g)\, \exp\!\left[ -\frac{(\omega - m_p \Omega_p)^2}{2\sigma_p^2} \right], \qquad (5.87) $$
where $m_p = 2$ is the bar's pattern number, $\Omega_p$ its typical pattern speed, and $\sigma_p \sim 1/T_b \sim (1/\Omega_p)(\partial\Omega_p/\partial t)$, with $T_b$ the typical bar's lifetime, characterises the typical decay time of the bar frequency. The slower $\Omega_p$ evolves, the smaller $\sigma_p$, and therefore the narrower the frequency window in equation (5.87). In equation (5.87), we also introduced an amplitude factor $A_b(R_g)$ varying with the position in the disc, which aims at describing the radial profile and extension of the bar. Let us underline that equation (5.87) is a rather crude description, as we neglected any dependence w.r.t. the frequencies $k_r$ and $k_z$. We study perturbations imposed by various series of bars characterised by $\Omega_p \in \{0.4, 0.25\}$ and $\sigma_p \in \{0.03, 0.06\}$.
Figure 5.7.12: Illustration of the norm of the collisional diffusion flux |N F R|(t = 0) in the (J φ , Jr)-plane, as predicted by the thickened WKB limit of the Balescu-Lenard equation derived in section 5.6. The blue contours are spaced linearly between 90% and 10% of the maximum norm. The background contours correspond to the initial contours of FR(t = 0). They are spaced linearly between 95% and 5% of the function maximum and are computed for the quasi-isothermal DF from equation (5.5). One clearly notes the presence of an enhanced diffusion flux in the inner region of the disc towards larger radial actions. This figure should be compared to figure 3.7.13 corresponding to the razor-thin case, which also predicted the formation of a narrow ridge of enhanced radial action in the inner regions of the disc along the direction of the rILR.
Finally, in order to focus our interest on the intermediate regions of the disc, belonging neither to the bulge nor to the bar, we consider $A_b(R_g) = H[R_g - R_{\mathrm{cut}}]$, with $H[x]$ a Heaviside function such that $H[x] = 1$ for $x \geq 0$ and $0$ otherwise, and $R_{\mathrm{cut}} = 2.5$ a truncation radius below which the bar is present. The initial contours of $\partial F_Z/\partial t|_{t=0}$ for these various choices of bar perturbations are illustrated in figure 5.7.13. The various panels presented in figure 5.7.13 first allow us to note how the additional dependence on $\omega$ present in equation (5.87) tends to localise the ridge of enhanced thickness. Let us also emphasise one important property of the collisionless diffusion coefficients from equation (5.57), namely the fact that the orbital diffusion is strongly affected by the dynamical properties of the perturbing bars. Comparing the left-hand panels of figure 5.7.13 with the right-hand ones, one immediately recovers that the slower the bars, the further out the diffusion. As $\Omega_p$ decreases, the ridges move outwards, i.e. particles resonating with slower bars are located further out in the disc. Comparing the top panels of figure 5.7.13 with the bottom ones, one recovers that the more long-lived the bars, the narrower the diffusion features. As $\sigma_p$ decreases, the different ridges get sharper and do not overlay anymore. If the pattern speeds of the bars decrease rapidly, the associated perturbations sweep a broader temporal frequency range, and therefore perturb a larger number of particles, hence the wider ridges. Finally, the position of the various ridges observed in figure 5.7.13 can be straightforwardly predicted thanks to figure 5.7.3, which illustrates the dependence of the various resonance frequencies $\omega = \boldsymbol{m}\cdot\boldsymbol{\Omega}$ as a function of the position in the disc. In order to allow for a resonant diffusion, one should match the frequency of the bars' perturbation, $m_p \Omega_p$, with the local orbital frequency $\boldsymbol{m}\cdot\boldsymbol{\Omega}$. Different resonances, i.e. different resonance vectors $\boldsymbol{m}$, are then associated with different locations in the disc, as can be seen in figure 5.7.3. Because the previous shot noise perturbations from equation (5.79) and the bar ones from equation (5.87) do not have the same spectral structure, the diffusion features predicted in figures 5.7.7 and 5.7.13 are significantly different. This underlines the critical role played by the perturbations' spectral characteristics in shaping the collisionless diffusion coefficients. As seen in figure 5.7.13, the process of secular thickening induced by bar-like perturbations can have a very clear chemo-dynamical signature in the radial distribution of stars at a given age and velocity dispersion. The structure of a disc's stellar DF will be mainly shaped by two competing mechanisms: gas inflow will continuously regenerate a cold component of stars within a razor-thin disc, while potential fluctuations within the disc will trigger both radial and vertical migrations in regions which resonate with the perturbations. As a consequence, the distributions of stellar ages, metallicities, radial and vertical velocities will reflect the net effect of all these simultaneous processes. See the end of chapter 3 for a brief discussion on how chemistry can be incorporated in these formalisms. These will be affected by the disc's underlying orbital structure, the spectral properties of the perturbations, the rate of star formation, the gas infall within the disc, etc.
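The bar forcing of equation (5.87) is simple enough to be written down directly. The following sketch uses the parameter values quoted above ($m_p = 2$, $\Omega_p \in \{0.4, 0.25\}$, $\sigma_p \in \{0.03, 0.06\}$, $R_{\mathrm{cut}} = 2.5$); the flat-rotation-curve estimate of the corotation frequency in the usage example is only illustrative.

```python
import numpy as np

def bar_autocorrelation(m_phi, omega, Rg, m_p=2, Omega_p=0.4, sigma_p=0.03, R_cut=2.5):
    """Sketch of the bar forcing spectrum C[m_phi, omega, R_g, ...] of equation (5.87).

    The Kronecker delta keeps only the bar harmonic m_p, the Gaussian window is
    centred on the pattern frequency m_p * Omega_p with a width sigma_p set by the
    decay of the bar, and the Heaviside amplitude A_b removes the bar region R_g < R_cut.
    """
    if m_phi != m_p:
        return np.zeros(np.broadcast(omega, Rg).shape)
    A_b = np.heaviside(Rg - R_cut, 1.0)
    return A_b * np.exp(-(omega - m_p * Omega_p)**2 / (2.0 * sigma_p**2))

# Example: forcing felt at the local corotation frequency for a slow, long-lived bar
Rg = np.linspace(1.0, 15.0, 8)
omega_COR = 2.0 / Rg   # illustrative flat-rotation-curve estimate of m_phi * Omega_phi
print(bar_autocorrelation(2, omega_COR, Rg, Omega_p=0.25, sigma_p=0.03))
```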
GMC-triggered thickening
In a realistic thick galactic disc, one does not expect the self-induced diffusion of stars alone to drive the disc's thickening within a Hubble time, the number of stars being too large to lead to an efficient collisional heating. However, if one accounts for the joint evolution of the disc's giant molecular clouds (GMCs), one has to update the previous predictions for the collisional timescale of diffusion. Indeed, this second population of less numerous but more massive particles can significantly hasten the secular diffusion. So12 gives a possible scaling to physical units as
$$ R_i = 0.75\,\mathrm{kpc} \;;\quad \tau_0 = \frac{R_i}{V_0} = 3.0\,\mathrm{Myr}\,. \qquad (5.88) $$
For a typical Milky Way like galaxy, the number of stars scales like $N_{\mathrm{MW}} \simeq 10^{11}$. As a consequence, the rescaled time of collisional thickening $\Delta\tau_{\mathrm{So12}} \simeq 2\times10^{-2}$ measured in So12's simulation becomes for a Milky Way like galaxy
$$ \Delta t_{\mathrm{MW}} \simeq 6\times10^{6}\,\mathrm{Gyr} \simeq 6\times10^{5}\, t_{\mathrm{Hub.}}\,, \qquad (5.89) $$
where we introduced the Hubble time $t_{\mathrm{Hub.}} \simeq 10\,\mathrm{Gyr}$. This estimate shows that the mechanism of self-induced collisional thickening discussed in section 5.7.2.2 is not sufficiently efficient to be relevant per se for a Milky Way like galaxy. However, it has long been speculated (e.g., Spitzer & Schwarzschild, 1953) that in stellar discs, the joint evolution of the stars and a population of forming and dissolving GMCs could be responsible for the disc's thickening as a result of local deflections. As already emphasised in equation (2.76), an important strength of the Balescu-Lenard formalism is that it also allows for the simultaneous description of the dynamics of multiple components. This multi-component equation accounts at the same time for transient spiral structures and non-local resonant encounters between these various components. Let us now briefly discuss how the joint evolution of stars and GMCs could enable a thickening of stellar discs on a much shorter timescale.
Let us follow the notations from section 2.3.6 when we presented the multi-component Balescu-Lenard equation. Let us assume that the disc contains a total mass M tot of N stars of individual mass µ described by the DF F . In addition, we assume that the disc also contains a total mass M G tot of N G GMCs of individual mass µ G described by the DF F G . In order to simplify our presentation, we will assume that the stars and the GMCs are distributed according to a similar distribution (in reality, the GMCs are typically dynamically colder). Because of their respective normalisations, the DFs then satisfy
$$ F^{G} = \frac{M^{G}_{\mathrm{tot}}}{M_{\mathrm{tot}}}\, F\,. \qquad (5.90) $$
The total collisional drift and diffusion coefficients A tot m1 and D tot m1 from equation (2.82) may then be estimated as
$$ A^{\mathrm{tot}}_{\boldsymbol{m}_1} = (1+\alpha_A)\, A_{\boldsymbol{m}_1} \;;\quad D^{\mathrm{tot}}_{\boldsymbol{m}_1} = (1+\alpha_D)\, \mu\, D_{\boldsymbol{m}_1}\,, \qquad (5.91) $$
where $A_{\boldsymbol{m}_1}$ and $D_{\boldsymbol{m}_1}$ denote the drift and diffusion coefficients of the stars' population when considered alone. In equation (5.91), we also introduced the dimensionless quantities $\alpha_A$ and $\alpha_D$ as
$$ \alpha_A = \frac{M^{G}_{\mathrm{tot}}}{M_{\mathrm{tot}}} = \frac{\mu_G}{\mu}\, \frac{N_G}{N} \;;\quad \alpha_D = \frac{\mu_G\, M^{G}_{\mathrm{tot}}}{\mu\, M_{\mathrm{tot}}} = \left( \frac{\mu_G}{\mu} \right)^{\!2} \frac{N_G}{N}\,. \qquad (5.92) $$
When accounting simultaneously for the presence of stars and of GMCs, the multi-component Balescu-Lenard equation (2.81) gives us the evolution of the stars' DF F as
$$ \frac{\partial F}{\partial t} = \frac{\partial}{\partial \boldsymbol{J}_1} \cdot \left[ \sum_{\boldsymbol{m}_1} \boldsymbol{m}_1 \left( \mu\, (1+\alpha_A)\, A_{\boldsymbol{m}_1}\, F + (1+\alpha_D)\, D_{\boldsymbol{m}_1}\, \boldsymbol{m}_1 \cdot \frac{\partial F}{\partial \boldsymbol{J}_1} \right) \right], \qquad (5.93) $$
where we did not write the dependence w.r.t. $\boldsymbol{J}_1$ to simplify the notations. In equation (5.93), the case without GMCs can be recovered by assuming $\alpha_A = \alpha_D = 0$. Murray (2011) gives the typical current properties of the Milky Way's GMCs as
$$ \mu_G \simeq 10^{5}\, M_\odot \;;\quad N_G \simeq 10^{4} \;;\quad M^{G}_{\mathrm{tot}} \simeq 10^{9}\, M_\odot\,. \qquad (5.94) $$
A more involved modelling of the GMC population should also account for the expected secular variability of this population, due to the exponential decay of the disc's star formation and the rapid disappearance of GMCs. For a Milky Way like galaxy with $N \simeq 10^{11}$ and $\mu \simeq 1\, M_\odot$, equation (5.92) gives us
$$ \alpha_A \simeq 10^{-2} \;;\quad \alpha_D \simeq 10^{3}\,, \qquad (5.95) $$
so that, relying on the fact that $\alpha_A \ll 1$ and $\alpha_D \gg 1$, equation (5.93) becomes
$$ \frac{\partial F}{\partial t} = \frac{\partial}{\partial \boldsymbol{J}_1} \cdot \left[ \sum_{\boldsymbol{m}_1} \boldsymbol{m}_1 \left( \mu\, A_{\boldsymbol{m}_1}\, F + \alpha_D\, D_{\boldsymbol{m}_1}\, \boldsymbol{m}_1 \cdot \frac{\partial F}{\partial \boldsymbol{J}_1} \right) \right]. \qquad (5.96) $$
As a consequence, the joint evolution of the GMCs tends to boost the diffusion coefficients both in absolute terms as well as w.r.t. the drift ones. Because $\alpha_D \gg 1$, the GMCs act as a catalyst and can significantly hasten the diffusion of the stars and therefore the thickening of the stellar disc. Indeed, the multi-component Balescu-Lenard equation captures the effects of multiple resonant deflections of stars by GMCs, leading to a diffusion of the lighter stellar population towards larger $J_z$, while the GMCs sink in. Let us assume that this selective boost of the diffusion component w.r.t. the drift directly translates to the timescale of thickening. We therefore write
$$ \Delta t_{G+} = \frac{\Delta t}{\alpha_D}\,, \qquad (5.97) $$
where $\Delta t$ corresponds to the timescale of spontaneous thickening when only stars are considered, while $\Delta t_{G+}$ corresponds to the case where the joint evolution of the GMCs is also accounted for. Applied to equation (5.89), the diffusion boost from equation (5.97) leads to
$$ \Delta t_{\mathrm{MW+G}} \simeq 6\times10^{2}\, t_{\mathrm{Hub.}}\,, \qquad (5.98) $$
where $\Delta t_{\mathrm{MW+G}}$ corresponds to the timescale of thickening of a Milky Way like galaxy when the joint evolution of the GMCs is also accounted for. In equation (5.98), let us underline how the joint presence of the GMCs tends to significantly hasten the thickening of stellar discs induced by discrete resonant encounters. However, we note that despite this boost, such a self-induced thickening remains too slow to be significant during the lifetime of a Milky Way like galaxy. This analysis would therefore tend to show that the self-induced mechanism of secular collisional thickening induced by finite-$N$ fluctuations, captured by the Balescu-Lenard equation and studied numerically in So12, even when boosted by the presence of the more massive and less numerous GMCs, is not sufficiently rapid to lead to a significant thickening of a Milky Way like galaxy on a Hubble time. Aumer et al. (2016) recently reached a similar conclusion by studying the quiescent growth of isolated discs in numerical simulations.
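The orders of magnitude quoted in equations (5.92)-(5.98) can be verified with a few lines of arithmetic, as sketched below.

```python
# Back-of-the-envelope check of the GMC catalysis factors, equations (5.92)-(5.98),
# using the Milky Way numbers quoted in the text (masses in solar masses).
mu, N = 1.0, 1e11            # stars
mu_G, N_G = 1e5, 1e4         # giant molecular clouds

alpha_A = (mu_G / mu) * (N_G / N)        # ~ 1e-2
alpha_D = (mu_G / mu)**2 * (N_G / N)     # ~ 1e3

t_Hub = 10.0                              # Gyr
dt_MW = 6e6                               # Gyr, self-induced thickening time, equation (5.89)
dt_MW_G = dt_MW / alpha_D                 # boosted by the GMCs, equations (5.97)-(5.98)

print(alpha_A, alpha_D, dt_MW_G / t_Hub)  # -> 0.01, 1000.0, ~6e2 Hubble times
```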
Inspired by this consideration on the role played by GMCs, let us now perform the same calculations in the case of razor-thin discs and update the timescale of collisional radial diffusion presented in section 3.7.3. Following the results from Sellwood (2012), we showed that the ridge in the $(J_\phi, J_r)$-plane observed in figure 3.7.5 appeared after a time $\Delta t^{\mathrm{radial}}_{\mathrm{S12}} = 1500$ for $N = 5\times10^{7}$ particles. The associated rescaled time of diffusion is then given by $\Delta\tau^{\mathrm{radial}}_{\mathrm{S12}} = 3\times10^{-5}$. Thanks to the physical units from equation (5.88), for a Milky Way like galaxy and accounting only for the stellar component, the radial ridge would appear after a time $\Delta t^{\mathrm{radial}}_{\mathrm{MW}} \simeq 10^{3}\, t_{\mathrm{Hub.}}$. Accounting for the GMC diffusion acceleration obtained in equation (5.97) would hasten the radial diffusion, so that in a Milky Way like galaxy, the radial ridge would appear on a timescale of the order of $\Delta t^{\mathrm{radial}}_{\mathrm{MW+G}} \simeq \Delta t^{\mathrm{radial}}_{\mathrm{MW}}/10^{3} \simeq t_{\mathrm{Hub.}}$. As a conclusion, while we showed in equation (5.98) that the simultaneous presence of the GMCs was still not sufficient to allow for the appearance of a vertical ridge over the typical lifetime of a Milky Way like galaxy, such an accelerated self-induced mechanism appears fast enough to induce a radial ridge in a Milky Way like galaxy's DF on a Hubble time.
Conclusion
In this chapter, we presented applications of the two diffusion formalisms (collisionless and collisional) in the context of thickened stellar discs. Relying on the epicyclic approximation (section 5.2) and the construction of a thickened WKB basis (sections 5.3 and 5.4), we derived the thick WKB limit of these two equations (sections 5.5 and 5.6), by assuming that only radially tightly wound transient spiral perturbations are sustained by the disc. We introduced in particular an ad hoc vertical cavity in order to solve Poisson's equation in a closed form. This yielded simple double quadratures for the collisionless diffusion coefficients in equation (5.56), as well as for the collisional drift and diffusion coefficients in equations (5.69) and (5.70). These simple expressions provided us with a straightforward tool to estimate the locations of maximum diffusion within a thick stellar disc. The use of an improved thick WKB approximation also allowed us to derive in equation (5.38) a new scale-height-dependent thickened Toomre's parameter.
We applied in section 5.7 these two formalisms to a shot-noise-perturbed tepid stable thick disc. The estimated diffusion fluxes predict the formation in the inner region of the disc of a vertical ridge of resonant orbits towards larger actions, in qualitative agreement with the ridges observed in the direct $N$-body simulations from Solway et al. (2012). Let us note that these diffusion frameworks extend the findings of Binney & Lacey (1988) to the self-gravitating case, as here we treat in a coherent and self-consistent manner the collective dressing of the perturbations, the associated spiral response and the induced thickening. This is the appropriate approach to account self-consistently and simultaneously for churning, blurring (Schönrich & Binney, 2009a), and thickening. We noted a discrepancy in the diffusion timescale predicted by this formalism (equation (5.82)), which was interpreted as being due to the WKB approximation that does not account for loosely wound perturbations and their associated strong swing amplification.
These applications illustrated that potential fluctuations within the disc induce a vertical bending of a subset of resonant orbits, leading to an increase in the vertical velocity dispersion. This generically offers a mechanism allowing for stellar discs to thicken on secular timescales, driven by their own intrinsic Poisson shot noise, or by a set of dynamically dragged central bars, or catalysed by the joint evolution of GMCs. When considering the effects of GMCs (section 5.7.6), we showed that such a self-induced thickening mechanism still remains too slow to lead to a significant secular thickening on cosmic times of a Milky Way like galaxy (see D'Onghia et al. (2013) and references therein for a discussion on the effects of GMCs on spiral activity). Determining which of these processes are the dominant ones in the secular thickening of stellar discs depends directly on the relative amplitudes of the various external and internal potential fluctuations which can source the diffusion. For example, the statistical properties of external perturbations can be quantified beforehand in numerical simulations. All these mechanisms should have clear signatures in the vertical metallicity gradients to be observed in detail by GAIA. This offers a promising way of weighing the relative importance of these mechanisms.
Finally, we relied on various approximations, which we now recall. We enforced the epicyclic approximation as well as the plane parallel Schwarzschild approximation to build an integrable model of thickened stellar disc. To solve Poisson's equation vertically, the vertical edge of the disc was approximated with a sharp edge. The radial components were described within the WKB approximation, i.e. assumed to be radially tightly wound. When computing the disc's self-gravity, we neglected the vertical action gradients of the DF w.r.t. the radial ones, and also assumed that the orbits were closed on resonance. Finally, when implementing the dressed collisionless diffusion, we assumed some partially ad hoc external source of perturbations to describe the disc's internal shot noise or sequences of central decaying bars.
Future works
Having exhibited in detail how one could compute the characteristics of the secular diffusion in thickened axisymmetric discs, one could now extend these approaches in various ways. One first side product of the thickened WKB approximation is the derivation in equation (5.38) of a new generalised thickened $Q$ parameter. In order to assess the quality of this stability parameter, it would be of interest to investigate via numerical simulations how accurately such a parameter can predict the presence of local axisymmetric instabilities in thickened stellar discs. One difficulty with such a numerical investigation is the preparation of the disc's initial conditions, thanks to which one aims at setting up a disc initially as close as possible to an equilibrium.
A possible improvement of the present WKB approach would be to implement anharmonic corrections in the vertical oscillations to better account for the stiffness of the vertical potential. This would require improving the thickened epicyclic approximation from section 5.2. As was emphasised by the timescale comparison from equation (5.82), in order to correctly account for the system's self-gravity, one should eventually get rid of the WKB approximation, to capture the contributions associated with strongly amplified loosely wound perturbations. This was already a challenge in the case of razor-thin discs (see chapter 4), and its implementation for thick discs would be all the more difficult, as one does not have explicit angle-action coordinates for thick discs beyond the epicyclic approximation. In order to construct such coordinates, one can rely on the torus machine to build perturbatively a mapping of action space from an integrable model to a non-integrable one via fits of generating functions (Kaasalainen & Binney, 1994a,b; Binney & McMillan, 2016). Once these coordinates are constructed, one would then have to solve the exact field equations, construct an appropriate basis of potentials, and deal with the full response matrix. Should chaos become important in such systems, one could finally resort to the dual stochastic Langevin rewriting (see Appendix 6.C) to account for the associated chaotic diffusion.
the response matrix coefficients from equation (2.17) are equal to zero as soon as the two considered basis elements do not share the same vertical symmetry. We may therefore treat separately the symmetric and antisymmetric cases.
The thickened WKB basis elements introduced in equation (5.6) depend on four indices [k φ , k r , R 0 , n]. Following the same argument as in the razor-thin section 3.4, we may assume that the response matrix is diagonal w.r.t. the indices [k φ , k r , R 0 ]. As a consequence, it then only remains to check whether or not for a given set [k φ , k r , R 0 ], the response matrix is diagonal w.r.t. the index k n z . The expression (5.24) of the symmetric diagonal basis elements is straightforward to generalise to the non-diagonal ones and gives
$$ \widehat{\mathbf{M}}_{pq} = \frac{2\pi G \Sigma\, \alpha_p \alpha_q}{h \kappa^2\, (1+(k_z^p/k_r)^2)(1+(k_z^q/k_r)^2)} \sum_{\ell_z\ \mathrm{even}} \exp\!\left[ -\frac{(k_z^p)^2 + (k_z^q)^2}{2\nu^2/\sigma_z^2} \right] \mathcal{I}_{\ell_z}\!\left[ \frac{k_z^p k_z^q}{\nu^2/\sigma_z^2} \right] \times \frac{1}{(1-s_{\ell_z}^2)} \left[ \mathcal{F}(s_{\ell_z}, \chi_r) - \ell_z\, \frac{\nu\, \sigma_z^2}{\sigma_r^2\, \kappa}\, \mathcal{G}(s_{\ell_z}, \chi_r) \right]. \qquad (5.106) $$
As already underlined in equation (5.28), starting from equation (5.106), it is straightforward to obtain the expression of the associated antisymmetric non-diagonal coefficients thanks to the substitution $\alpha \to \beta$ and the restriction of the sum on $\ell_z$ to odd values. Because it is a symmetric matrix, showing that the response matrix is diagonal amounts to proving that for $p \neq q$, one has $\mathrm{M}_{pq} \ll \mathrm{M}_{pp}$. In order to perform such a comparison, let us focus in equation (5.106) on the quantities which depend on $k_z^p$ and $k_z^q$. We introduce the quantity $K^{(\ell_z)}_{pq}$ as
$$ K^{(\ell_z)}_{pq} = \frac{1}{(1+(k_z^p/k_r)^2)(1+(k_z^q/k_r)^2)}\, \exp\!\left[ -\frac{(k_z^p)^2 + (k_z^q)^2}{2\nu^2/\sigma_z^2} \right] \mathcal{I}_{\ell_z}\!\left[ \frac{k_z^p k_z^q}{\nu^2/\sigma_z^2} \right]. \qquad (5.107) $$
One can note that the definition from equation (5.107) does not involve the prefactors $\alpha_p$ and $\alpha_q$, as they are always of order unity. In addition, equation (5.107) does not involve the terms $\mathcal{F}(s_{\ell_z}, \chi_r)$, $\mathcal{G}(s_{\ell_z}, \chi_r)$, and $1/(1-s_{\ell_z}^2)$ from equation (5.106), as they do not depend on the choices of $k_z^p$ and $k_z^q$. Figure 5.B.1 illustrates the behaviours of the reduction functions $s_{\ell_z} \to \mathcal{F}(s_{\ell_z}, \chi_r), \mathcal{G}(s_{\ell_z}, \chi_r)$ defined in equation (5.27). We note in figure 5.B.1 that these functions are ill-defined when computed for integer values of $s_{\ell_z}$. In order to regularise these diverging behaviours, a small imaginary part is added to $s_{\ell_z}$. While this procedure works for exactly integer values, it does not however prevent the divergences of $\mathcal{F}$ and $\mathcal{G}$ in the neighbourhood of integers. As illustrated in figure 5.B.1, in order to avoid these divergences, let us assume that the functions $\mathcal{F}$ and $\mathcal{G}$ can be approximated by the smooth functions
$$ \mathcal{F}(s_{\ell_z}, \chi_r) \simeq f_r \;;\quad \mathcal{G}(s_{\ell_z}, \chi_r) \simeq -g_r\, s_{\ell_z}\,, \qquad (5.108) $$
where $f_r$ and $g_r$ do not depend on $s_{\ell_z}$. As already underlined in equation (5.29), when computing the collisionless diffusion coefficients from equation (2.32) or the dressed susceptibility coefficients from equation (2.50), the frequency $\omega$ should be considered at resonance, so that $\omega = \boldsymbol{m}\cdot\boldsymbol{\Omega}$. Following equation (5.29), the value of $s_{\ell_z}$ is either an integer (for $\ell_z = m_z$) or far from one provided that $\nu/\kappa$ is of high rational order. This distance from the exact resonance justifies the approximations from equation (5.108).
Figure 5.B.1: Behaviour of the reduction functions $s \to \mathcal{F}(s, \chi)$ (left panel) and $s \to \mathcal{G}(s, \chi)$ (right panel), given by the black curves, along with their approximations from equation (5.108) given by the grey lines. One should note the divergences of these functions in the neighbourhood of integers. However, these functions are well defined when evaluated for integer values of $s$, provided one considers $\lim_{\eta\to 0} \mathrm{Re}[\mathcal{F}(n+\mathrm{i}\eta, \chi)]$ (similarly for $\mathcal{G}$), as illustrated with the black dots.
Thanks to these approximations, the sum on $\ell_z$ in equation (5.106) may then be cut out according to the resulting powers of $\ell_z$. In order to prove that for $p \neq q$, one has $\mathrm{M}_{pq} \ll \mathrm{M}_{pp}$, one should therefore prove that
$$ S_\gamma(p, q) = \sum_{\ell_z} \frac{\ell_z^{\gamma}\, K^{(\ell_z)}_{pq}}{1-s_{\ell_z}^2} \;\ll\; S_\gamma(p, p)\,, \qquad (5.109) $$
where the power index $\gamma$ is such that $\gamma \in \{0, 1, 2\}$.
In order to further dedimensionalise the problem, let us introduce the typical dynamical height of the disc, $d = \sigma_z/\nu$, as well as the dimensionless quantities
$$ \epsilon_p = k_z^p\, d \;;\quad \epsilon_q = k_z^q\, d \;;\quad \epsilon_r = k_r\, d\,, \qquad (5.110) $$
which allow us to rewrite equation (5.107) as
$$ K^{(\ell_z)}_{pq} = \frac{\mathcal{I}_{\ell_z}[\epsilon_p \epsilon_q]\; e^{-(\epsilon_p^2 + \epsilon_q^2)/2}}{(1+(\epsilon_p/\epsilon_r)^2)(1+(\epsilon_q/\epsilon_r)^2)}\,. \qquad (5.111) $$
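The dimensionless kernel of equation (5.111) is also easy to evaluate numerically, which is convenient for checking the diagonal-dominance argument of this Appendix on specific values of $(\epsilon_p, \epsilon_q, \epsilon_r)$. The values chosen below are hand-picked illustrations consistent with the quantisation of equation (5.112), not results used in the text.

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind, I_n

def K_pq(ell_z, eps_p, eps_q, eps_r):
    """Dimensionless kernel K_pq^(ell_z) of equation (5.111)."""
    damping = np.exp(-(eps_p**2 + eps_q**2) / 2.0)
    return (iv(ell_z, eps_p * eps_q) * damping
            / ((1.0 + (eps_p / eps_r)**2) * (1.0 + (eps_q / eps_r)**2)))

# Compare the diagonal and off-diagonal kernels for a moderately thick disc
eps_r = 2.0                                  # = k_r d = k_r h / sqrt(2)
eps_0 = 0.8                                  # fundamental mode, 0 < eps_0 < pi/(2 sqrt(2))
eps_1 = np.pi / np.sqrt(2.0)                 # first excited mode, near p = 1
for ell_z in (0, 2, 4):
    print(ell_z, K_pq(ell_z, eps_0, eps_0, eps_r), K_pq(ell_z, eps_0, eps_1, eps_r))
```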
As was already illustrated in figure 5.3.2, let us recall that the fundamental symmetric frequency is significantly different from the other quantised frequencies (both symmetric and antisymmetric), as it is the only frequency inferior to π/(2h). In order to emphasise this very specific property, in this Appendix only, let us renumber the vertical indices p, such that p = 0 corresponds to the quantised fundamental symmetric mode, while p ≥ 1 corresponds to the rest of the quantised frequencies, all superior to π/(2h).
With such a choice, the numbering of the antisymmetric elements only starts at p = 1. Following figure 5.3.2, one has the inequalities
$$ 0 < \epsilon_0 < \frac{\pi}{2\sqrt{2}} \;;\quad \left( p - \tfrac{1}{2} \right) \frac{\pi}{\sqrt{2}} < \epsilon_p < \left( p + \tfrac{1}{2} \right) \frac{\pi}{\sqrt{2}} \quad (\text{for } p \geq 1)\,, \qquad (5.112) $$
where, following equation (5.74) for the Spitzer profile, we relied on the relation $h = \sqrt{2}\, d$, with $h$ the height of the WKB sharp cavity (see figure 5.3.1). Similarly, one has the relation $\epsilon_r = (k_r h)/\sqrt{2}$. Let us note that the expression (5.111) of $K^{(n)}_{pq}$ involves a modified Bessel function $\mathcal{I}_n[\epsilon_p \epsilon_q]$ that needs as well to be approximated carefully. Equivalents in $0$ and $+\infty$ of these Bessel functions are immediately given by
$$ \mathcal{I}_n(x) \underset{0}{\sim} \frac{1}{n!} \left( \frac{x}{2} \right)^{\!n} \;;\quad \mathcal{I}_n(x) \underset{+\infty}{\sim} \frac{e^{x}}{\sqrt{2\pi x}}\,. \qquad (5.113) $$
As illustrated in figure 5.B.2, for a given value of $n$ and $x$, one has to determine which approximation (polynomial or exponential) is relevant for $\mathcal{I}_n(x)$. Let us therefore define for each $n \geq 0$ the quantity $x_n$ such that for $x \leq x_n$ (resp. $x \geq x_n$), one uses the asymptotic development from equation (5.113) in $0$ (resp. $+\infty$). Because in the expression (5.111) the Bessel functions are only evaluated in $\epsilon_p \epsilon_q$, for $p$ and $q$ given, there exists an integer $n_{pq}$ such that
$$ \forall\, \ell_z < n_{pq}\,,\;\; \mathcal{I}_{\ell_z}[\epsilon_p \epsilon_q] \simeq \frac{e^{\epsilon_p \epsilon_q}}{\sqrt{2\pi\, \epsilon_p \epsilon_q}} \;;\quad \forall\, \ell_z \geq n_{pq}\,,\;\; \mathcal{I}_{\ell_z}[\epsilon_p \epsilon_q] \simeq \frac{1}{\ell_z!} \left( \frac{\epsilon_p \epsilon_q}{2} \right)^{\!\ell_z}. \qquad (5.114) $$
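The crossover $x_n$ between the two regimes of equation (5.113) can be located numerically, for instance by comparing the relative errors of the two asymptotic forms against the exact $\mathcal{I}_n$; the brute-force search below is only meant as an illustration of this bookkeeping.

```python
import numpy as np
from scipy.special import iv, factorial

def I_small(n, x):
    """Small-argument behaviour of I_n from equation (5.113)."""
    return (x / 2.0)**n / factorial(n)

def I_large(x):
    """Large-argument behaviour of I_n from equation (5.113) (n-independent at leading order)."""
    return np.exp(x) / np.sqrt(2.0 * np.pi * x)

x = np.linspace(0.05, 30.0, 20000)
for n in range(4):
    err_small = np.abs(I_small(n, x) - iv(n, x)) / iv(n, x)
    err_large = np.abs(I_large(x) - iv(n, x)) / iv(n, x)
    x_n = x[np.argmin(np.abs(err_small - err_large))]   # where both regimes do equally well
    print(n, round(float(x_n), 2))
```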
In figure 5.B.2, let us finally note that, except for $\ell_z = 0$, the exponential approximation of the Bessel function is significantly bigger than the actual value of $\mathcal{I}_{\ell_z}$. This does not impact the upcoming discussion, as, when proving $\mathrm{M}_{pq} \ll \mathrm{M}_{pp}$, the exponential approximation is applied for $\mathrm{M}_{pq}$ alone, or for $\mathrm{M}_{pq}$ and $\mathrm{M}_{pp}$ simultaneously with similar errors, so that the comparisons between the approximations also hold for the exact values. Following equation (5.109), a naive approach to compare the terms $S_\gamma(p, q)$ and $S_\gamma(p, p)$ would be to compare the sums on $\ell_z$ term by term, i.e. to prove that $K^{(\ell_z)}_{pq} \ll K^{(\ell_z)}_{pp}$ for all $\ell_z$. However, this is not sufficient and one should therefore be more cautious. In equation (5.109), one cuts out the sum on $\ell_z$ appearing in $S_\gamma(p, q)$ between three different contributions, for which one can straightforwardly show:
• For the first terms, with |ℓ_z| < n_pp and |ℓ_z| < n_pq:

K^{(ℓ_z)}_{pq} ≪ K^{(1)}_{pp} .

[Figure 5.B.2: The first modified Bessel functions and their approximations from equation (5.113). The full lines are the four first Bessel functions, along with their polynomial approximations in zero (dashed curves). The black dashed curve is their common exponential approximation. The transition between the two approximations is given by the quantity x_n.]
• For the intermediate terms, with n_pp ≤ |ℓ_z| < n_pq:

\sum_{n_{pp} ≤ |ℓ_z| < n_{pq}} \frac{ℓ_z^γ K^{(ℓ_z)}_{pq}}{1-s_z^2} ≪ K^{(1)}_{pp} .
• For the last terms, with |ℓ_z| ≥ n_pq:

\sum_{|ℓ_z| ≥ n_{pq}} \frac{ℓ_z^γ K^{(ℓ_z)}_{pq}}{1-s_z^2} ≪ K^{(1)}_{pp} .
This last relation holds whenever k_r h ≳ 0.03, but gets violated for q = 0 in the limit of a razor-thin disc.
The previous comparisons are straightforward to obtain thanks to the step distances between consecutive basis elements from equation (5.112) and the use of the approximations of the Bessel functions from equation (5.113). The combination of these relations shows that for k_r h ≳ 0.03, for all p and q with p ≠ q, one has M_pq ≪ M_pp. The same conclusion also holds for k_r h ≲ 0.03, but only for q ≠ 0. We therefore reached the following conclusions:
• The antisymmetric response matrix can always be assumed to be diagonal.
• For k_r h ≳ 0.03, the symmetric response matrix can be assumed to be diagonal.
• For k_r h ≲ 0.03, i.e. in the limit of a razor-thin disc, the symmetric response matrix takes the form of an arrowhead matrix.
As a last step of this Appendix, let us finally justify why, for a sufficiently thin disc for which the symmetric response matrix takes the form of an arrowhead matrix, the response matrix can still be assumed to be diagonal. In this limit, the symmetric response matrix takes the form

M = \begin{pmatrix} α & z_1 & \cdots & z_n \\ z_1 & d_1 & & \\ \vdots & & \ddots & \\ z_n & & & d_n \end{pmatrix} ,    (5.115)

where, thanks to the previous calculations, one has the comparison relations α ≫ z_i and z_i ≪ d_i. Let us assume that ∀i, z_i ≠ 0 and that ∀i ≠ j, d_i ≠ d_j. Following O'Leary & Stewart (1990), it can be shown that the eigenvalues (λ_i)_{0≤i≤n} of the arrowhead matrix from equation (5.115) are the (n+1) solutions of the equation
f_M(λ) = α - λ - \sum_{i=1}^{n} \frac{z_i^2}{d_i - λ} = 0 .    (5.116)
In addition, provided that the d i are in descending order, these eigenvalues are interlaced so that
λ_0 > d_1 > λ_1 > ... > d_n > λ_n .    (5.117)

Finally, the eigenvectors x_i associated with the eigenvalue λ_i are proportional to

x_i = \Big( 1 \,;\; \frac{z_1}{λ_i - d_1} \,;\; ... \,;\; \frac{z_j}{λ_i - d_j} \,;\; ... \,;\; \frac{z_n}{λ_i - d_n} \Big) .    (5.118)
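As a quick sanity check of the interlacing property (5.117) and of the approximations derived below, one can build a small arrowhead matrix with z_i ≪ α, d_i and compare its exact spectrum with the diagonal guess (α, d_1, ..., d_n). The numbers below are arbitrary illustrative choices, not values taken from the response-matrix calculation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
alpha = 5.0
d = np.sort(rng.uniform(0.5, 2.0, n))[::-1]      # d_1 > ... > d_n
z = 1e-2 * rng.uniform(0.5, 1.0, n)              # z_i << alpha, d_i

# Arrowhead matrix of equation (5.115)
M = np.diag(np.concatenate(([alpha], d)))
M[0, 1:] = z
M[1:, 0] = z

eigval = np.linalg.eigvalsh(M)[::-1]             # eigenvalues in descending order
print("exact eigenvalues :", np.round(eigval, 6))
print("diagonal guess    :", np.round(np.concatenate(([alpha], d)), 6))
print("interlacing holds :",
      np.all(eigval[:-1] > d) and np.all(d > eigval[1:]))
```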
Accounting for the comparison relations α ≫ z_i and z_i ≪ d_i, figure 5.B.3 illustrates the behaviour of the function λ → f_M(λ) from equation (5.116). In order to justify why the arrowhead response matrix from equation (5.115) may be considered as diagonal, one has to justify that, despite its first line and column, the matrix eigenvalues remain close to the matrix diagonal coefficients, so that

λ_0 ≃ α \quad \text{and} \quad λ_i ≃ d_i \;\; (\text{for } i ≥ 1) .    (5.119)
In addition, one must also ensure that the associated eigenvectors x_i remain close to the natural basis elements, so that

x_i ≃ (0 \,;\; ... \,;\; 1 \,;\; 0 \,;\; ...) ,    (5.120)
where the non-zero index is at the i-th position. As illustrated in figure 5.B.3, the determination of the eigenvalues λ_i requires solving equation (5.116), which may be rewritten as

1 - \frac{λ_i}{α} - \sum_{i=1}^{n} \frac{(z_i/α)^2}{(d_i/α) - (λ_i/α)} = 0 .    (5.121)

Because we have (z_i/α) ≪ 1, in order for equation (5.121) to be satisfied, one must necessarily have either λ_i/α ≃ 1 or ((d_i/α) - (λ_i/α)) ≪ 1. It then follows immediately that λ_0 ≃ α and λ_i ≃ d_i. Equation (5.119) therefore holds, and the matrix eigenvalues λ_i remain close to the matrix diagonal coefficients (α, d_1, ..., d_n). The eigenvectors x_i from equation (5.118) may then be rewritten as
x_i = \Big( 1 \,;\; \frac{(z_1/α)^2}{(λ_i/α) - (d_1/α)} \frac{1}{(z_1/α)} \,;\; ... \,;\; \frac{(z_j/α)^2}{(λ_i/α) - (d_j/α)} \frac{1}{(z_j/α)} \,;\; ... \Big) .    (5.122)
Let us consider the first eigenvector, associated with i = 0. Following equation (5.119), one has λ_0 ≃ α, so that, because d_j ≪ α, the generic term from equation (5.122) becomes

\frac{(z_j/α)^2}{(λ_0/α) - (d_j/α)} \frac{1}{(z_j/α)} ≃ \frac{(z_j/α)}{1} ≪ 1 ,    (5.123)

where we relied on the fact that z_j ≪ α. As a consequence, for i = 0 in equation (5.122), all the terms except the first one are negligible in front of 1, and one gets x_0 ≃ (1; 0; ...; 0). Similarly, in equation (5.122), one can consider the case i ≠ 0, for which the i-th term of equation (5.122) takes the form

\frac{(z_i/α)^2}{(λ_i/α) - (d_i/α)} \frac{1}{(z_i/α)} ≃ \frac{1}{(z_i/α)} ≫ 1 ,    (5.124)

where we relied on the same argument as in equation (5.121). It states that for i ≠ 0, there is only one dominant term in the sum from equation (5.121), given by (z_i/α)^2/((d_i/α)-(λ_i/α)) ≃ 1. As a consequence, for i ≠ 0, the eigenvector x_i from equation (5.122) is dominated by its i-th coefficient, and the eigenvector may therefore be assumed to be proportional to (0; ...; 1; 0; ...), where the non-zero index is at the i-th position. We may therefore assume that the response matrix eigenvectors remain close to the natural basis elements. As a conclusion, even in the limit of a razor-thin disc, the symmetric arrowhead response matrix from equation (5.115) may still be assumed to be diagonal. We therefore justified why one may limit oneself to the diagonal coefficients of the response matrix, as in equation (5.23). The thickened WKB basis elements therefore allowed us to diagonalise the disc's response matrix. This is a crucial step in the explicit calculations of the collisionless and collisional diffusion fluxes, as shown in sections 5.5 and 5.6.
5.C From thick to thin
In this Appendix, let us detail how one can, starting from the thickened WKB basis, recover all the razor-thin expressions obtained in chapter 3.
5.C.1 The collisionless case
Let us first consider the case of the collisionless diffusion presented in section 5.5, and show how one may compute the collisionless diffusion coefficients when the disc is too thin to rely on the continuous expression from equation (5.50). We will show that this second approach is fully consistent with the one used in equation (5.50). We will also show how one can recover the razor-thin expressions previously obtained in section 3.5.
We noted in equation (5.49) that in order to use the Riemann sum formula w.r.t. the index k_z^p, one should ensure that the typical step distance Δk_z ≃ π/h from equation (5.17) remains sufficiently small compared to the scale on which the function k_z → g_s(k_z) varies. In the limit of a thinner disc, one has h → 0, so that Δk_z → +∞. As a consequence, the continuous approximation cannot be used anymore, and one should keep the discrete sum over the quantised k_z^p in equation (5.49). Of course, it is also within this limit of a thinner disc that one can recover the razor-thin results from section 3.5.
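The competition between the step distance Δk_z ≃ π/h and the scale of variation of the integrand can be illustrated numerically. The sketch below compares the discrete sum over quantised k_z with the corresponding integral for an arbitrary smooth test function, and shows how the Riemann-sum approximation degrades as h decreases; it is not an evaluation of the diffusion coefficients themselves.

```python
import numpy as np
from scipy.integrate import quad

def g(kz, kz0=2.0, width=1.5):
    """Arbitrary smooth test function standing in for the k_z-dependence."""
    return np.exp(-((kz - kz0) / width) ** 2)

exact, _ = quad(g, 0.0, 40.0)

for h in (5.0, 1.0, 0.2):
    dkz = np.pi / h                        # step distance between quantised k_z
    kz_n = dkz * np.arange(1, int(40.0 / dkz) + 1)
    riemann = dkz * np.sum(g(kz_n))
    print(f"h = {h:4.1f}:  discrete sum = {riemann:.4f}   integral = {exact:.4f}")
```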
Starting from equation (5.49), the expression (5.52) of the symmetric collisionless diffusion coefficients becomes

D^{sym}_{m}(J) = δ^{even}_{m_z} \frac{1}{(2π)^2} \sum_{n_p,n_q} \int dk_r^p \, J_{m_r}\!\Big[\sqrt{\tfrac{2J_r}{κ}}\,k_r^p\Big] J_{m_z}\!\Big[\sqrt{\tfrac{2J_z}{ν}}\,k_z^{n_p}(k_r^p)\Big] \frac{α_p^2}{1-λ_p}
  \; × \int dk_r^q \, J_{m_r}\!\Big[\sqrt{\tfrac{2J_r}{κ}}\,k_r^q\Big] J_{m_z}\!\Big[\sqrt{\tfrac{2J_z}{ν}}\,k_z^{n_q}(k_r^q)\Big] \frac{α_q^2}{1-λ_q} \, C_{δψ^e}[m_φ, m{\cdot}Ω, R_g, k_r^p, k_r^q, k_z^{n_p}(k_r^p), k_z^{n_q}(k_r^q)] ,    (5.125)

where the perturbation autocorrelation, C_{δψ^e}, was introduced in equation (5.51). Let us recall that the antisymmetric analog of equation (5.125) is straightforward to obtain thanks to the substitutions α_p → β_p and δ^{even}_{m_z} → δ^{odd}_{m_z}. For the antisymmetric case, as already noted in equation (5.53), one should pay attention to the fact that the perturbation autocorrelation involves the odd-restricted vertical Fourier transform of the potential perturbations. As in equation (5.55), the next step of the calculation is to diagonalise the perturbation autocorrelation, where one should pay attention to the fact that k_z = k_z(k_r, n) is no longer a free variable but should be seen as a function of the considered k_r and n. Following Appendices F and G in Fouvry et al. (2016c), equation (5.55) becomes here

⟨δψ^e_{m_φ,k_r^1,k_z^{n_1}}[R_g, ω_1] \, δψ^{e*}_{m_φ,k_r^2,k_z^{n_2}}[R_g, ω_2]⟩ = 2πh \, δ_D(ω_1-ω_2) \, δ_D(k_r^1-k_r^2) \, δ^{n_2}_{n_1} \, C[m_φ, ω_1, R_g, k_r^1, k_z^{n_1}] ,    (5.126)

where the diagonalisation w.r.t. the vertical dependence is captured by the Kronecker symbol δ^{n_2}_{n_1}. This diagonalised autocorrelation allows us to rewrite the diffusion coefficients from equation (5.125) as
D^{sym}_{m}(J) = δ^{even}_{m_z} \frac{1}{4h} \sum_{n_p} \int dk_r^p \, J^2_{m_r}\!\Big[\sqrt{\tfrac{2J_r}{κ}}\,k_r^p\Big] J^2_{m_z}\!\Big[\sqrt{\tfrac{2J_z}{ν}}\,k_z^{n_p}(k_r^p)\Big] \left[\frac{α_p^2}{1-λ_p}\right]^2 C[m_φ, m{\cdot}Ω, R_g, k_r^p, k_z^{n_p}(k_r^p)] .    (5.127)

Equation (5.127) is the direct discrete equivalent of equation (5.56), and both expressions are in full agreement. Indeed, starting from equation (5.127), the continuous expression w.r.t. k_z^p can immediately be recovered by using the Riemann sum formula with the step distance Δk_z ≃ π/h from equation (5.17). Similarly to equation (5.57), one can also simplify equation (5.127) thanks to the approximation of the small denominators, which gives here
D^{sym}_{m}(J) = δ^{even}_{m_z} \frac{1}{4h} \sum_{n_p} Δk_r^{n_p} \, J^2_{m_r}\!\Big[\sqrt{\tfrac{2J_r}{κ}}\,k^{max}_{r,n_p}\Big] J^2_{m_z}\!\Big[\sqrt{\tfrac{2J_z}{ν}}\,k^{max}_{z,n_p}\Big] \left[\frac{(α^{max}_{n_p})^2}{1-λ^{max}_{n_p}}\right]^2 C[m_φ, m{\cdot}Ω, R_g, k^{max}_{r,n_p}, k^{max}_{z,n_p}] .    (5.128)

In equation (5.128), for a given value of the index n_p, we consider the behaviour of the function k_r^p → λ(k_r^p, k_z^{n_p}(k_r^p)), and assume that it reaches a maximum value λ^{max}_{n_p} for k_r = k^{max}_{r,n_p} on a domain of typical extension Δk_r^{n_p}. In equation (5.128), we also used the shortened notation k^{max}_{z,n_p} = k_z^{n_p}(k^{max}_{r,n_p}). The antisymmetric analogs of equations (5.127) and (5.128) are straightforward to obtain by considering the antisymmetric quantised frequencies k_z from equation (5.100) and performing the substitution α_p → β_p. As already emphasised in equation (5.56), one should pay attention to the fact that in these antisymmetric analogs, C involves an even-restricted vertical Fourier transform of the autocorrelation, despite the fact that one is interested in antisymmetric diffusion coefficients.
Starting from the discrete expression of the diffusion coefficients obtained in equation (5.127), let us now illustrate how one can recover the razor-thin WKB diffusion coefficients from section 3.5 by considering the limit of a thinner disc. As already noted in figure 5.3.2, let us recall that except for the fundamental symmetric frequency k 1 z,s , one always has k n z > π/(2h). As a consequence, in the infinitely thin limit, for which h → 0, one has k n z → +∞, except for k 1 z,s . Let us also recall that in equation (5.127), the dependence of C[k p z ] takes the form
C[k_z^p] = \int_{-2h}^{2h} dv \, C[v] \cos[k_z^p v] .    (5.129)
One therefore gets the bound |C[k_z^p]| ≤ 4h\,C_{max}, which, in the razor-thin limit, cancels out the prefactor 1/(4h) present in equation (5.127). Because ∀n ≥ 0, lim_{x→+∞} J_n(x) = 0, it immediately follows from equation (5.127) that

lim_{thin} D^{anti}_{m}(J) = 0 .    (5.130)
In addition, equation (5.127) also implies that for the symmetric diffusion coefficients, the sum on n_p may be limited to the fundamental term n_p = 1 only. Equation (5.16) gives us that in the razor-thin limit, one has k^1_{z,s} ≃ \sqrt{k_r/h}. Equation (5.127) therefore also implies that for m_z ≠ 0, one has lim_{thin} D^{sym}_{m} = 0. Therefore, in the infinitely thin limit, only the symmetric diffusion coefficients with m_z = 0 do not vanish. In addition, from equation (5.127), it is also straightforward to obtain that in order to have a non-vanishing symmetric diffusion coefficient, one should also restrict oneself to J_z = 0. In the razor-thin limit, for m_z = 0 and J_z = 0, one can therefore write
lim_{thin} D^{sym}_{m}(J) = lim_{thin} \frac{1}{4h} \int dk_r^p \, J^2_{m_r}\!\Big[\sqrt{\tfrac{2J_r}{κ}}\,k_r^p\Big] \left[\frac{α_1^2}{1-λ_p}\right]^2 C[m_φ, m{\cdot}Ω, R_g, k_r^p, k^1_{z,s}] .    (5.131)

The definition of the prefactor α_p in equation (5.21) immediately gives us lim_{thin} α_1 = 1. In addition, we also obtained in equation (5.34) that lim_{thin} λ_p = λ^{thin}_p. The last step of the present calculation is to study, in the razor-thin limit, the behaviour of the term C[k^1_{z,s}] from equation (5.129). Equation (5.129) takes the form of an integral of length 4h of a function oscillating at the frequency k^1_{z,s} ≃ \sqrt{k_r/h}. In this interval, the number of oscillations of the fluctuating term is of order k^1_{z,s} h ∼ \sqrt{k_r h}, so that in the razor-thin limit the number of oscillations of the function v → \cos[k^1_{z,s} v] tends to 0. The cosine factor may then be replaced by 1, so that C[k^1_{z,s}] ≃ 4h\,C_{thin}[m_φ, m{\cdot}Ω, R_g, k_r^p], which cancels the prefactor 1/(4h). In the razor-thin limit, equation (5.131) therefore becomes

lim_{thin} D^{sym}_{m}(J) = \int dk_r^p \, J^2_{m_r}\!\Big[\sqrt{\tfrac{2J_r}{κ}}\,k_r^p\Big] \left[\frac{1}{1-λ^{thin}}\right]^2 C_{thin}[m_φ, m{\cdot}Ω, R_g, k_r^p] ,    (5.132)

where C_{thin}[m_φ, m{\cdot}Ω, R_g, k_r^p] stands for the local razor-thin power spectrum of the external perturbations in the equatorial plane, as defined in equation (3.67) in the razor-thin case. In equation (5.132), we fully recovered the razor-thin result previously obtained in equation (3.68).
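The key step above, namely that C[k^1_{z,s}]/(4h) tends to the mid-plane value of the autocorrelation when k^1_{z,s} h ∼ √(k_r h) → 0, is easy to check numerically. The sketch below uses an arbitrary Gaussian profile for C[v] purely for illustration.

```python
import numpy as np
from scipy.integrate import quad

def C_of_v(v, sigma=3.0):
    """Arbitrary slowly varying vertical autocorrelation profile (illustration only)."""
    return np.exp(-(v / sigma) ** 2)

k_r = 1.0
for h in (1.0, 0.3, 0.1, 0.03):
    k1zs = np.sqrt(k_r / h)                       # fundamental symmetric frequency
    val, _ = quad(lambda v: C_of_v(v) * np.cos(k1zs * v), -2 * h, 2 * h)
    print(f"h = {h:5.2f}:  C[k1_zs]/(4h) = {val/(4*h):.4f}   C(0) = {C_of_v(0.0):.4f}")
```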
6.1 Introduction
The previous chapters focused on the dynamics of stellar discs, either razor-thin or thickened. For these systems, we explored two regimes of secular diffusion, either collisionless or collisional, depending on whether the fluctuations are external or internal. In this chapter, we focus on another family of self-gravitating systems, in which a large set of particles orbits a dominant massive object. This corresponds for example to stars bound to a central super massive black hole in galactic nuclei, or to protoplanetary debris discs encircling a central star. As will be emphasised in the upcoming discussions, such systems, because they are dominated by one central object, have the peculiarity of being dynamically degenerate. This requires some adjustments to tailor the previous diffusion formalisms. Let us first discuss the main properties of such systems.
Stars in a stellar cluster surrounding a dominant super massive black hole (BH) evolve in a quasi-Keplerian potential. As a consequence, their orbits take the form of ellipses, which conserve their spatial orientation for many orbital periods, as illustrated in figure 6.1.1. This is a signature of the dynamical degeneracy of the Keplerian potential. The stellar cluster may then be represented as a system of massive Keplerian wires, for which the mass of each star is smeared out along the elliptic path followed by its quasi-Keplerian orbit. Such ideas were first developed in Rauch & Tremaine (1996), which introduced the concept of "resonant relaxation" by noting that wire-wire interactions greatly enhance the relaxation of the stars' angular momenta w.r.t. conventional estimates which do not account for the coherence of stars' orbits over many dynamical times and consider only uncorrelated two-body encounters. See Alexander (2005) for a review of the various stellar processes occurring in the vicinity of super massive black holes.
A detailed understanding of the relaxation processes occurring in galactic nuclei is important in order to predict the rates of tidal disruptions of stars by BHs (e.g., Rauch & Tremaine, 1996;Rauch & Ingalls, 1998), the merging rates of binary super massive BHs (e.g., Yu, 2002), or the rate of gravitational wave emissions from star-BH interactions (e.g., Hopman & Alexander, 2006;[START_REF] Merritt | [END_REF]. Resonant relaxation also appears as the appropriate framework to understand some of the features of young stellar populations found in the centre of our own Galaxy (e.g., [START_REF] Kocsis | [END_REF].
As for stellar discs, a first way to study the secular dynamics of quasi-Keplerian stellar clusters is to rely on direct N -body simulations. However, in this context, gaining physical insights from these simulations is challenging, as various complex dynamical processes are intimately entangled there. In addition, because of the significative breadth of timescales in these systems between the fast Keplerian motion and cosmic times, the computational costs of these simulations are such that one can typically only run a few realisations, limited to a relatively small number N of particles. Moreover, this cannot be scaled up easily to astrophysical systems, as different dynamical mechanisms scale differently with N [START_REF] Heggie | The Gravitational Million-Body Problem[END_REF]. When focusing on resonant relaxation, one can improve these simulations by using N -wires code (e.g., [START_REF] Kocsis | [END_REF], in which stars are replaced by orbit-averaged Keplerian wires.
A complementary approach to understand and describe the dynamics of such systems is to rely on tools from kinetic theory. In particular, in order to account for effects induced by the system's finite number of particles, the Balescu-Lenard formalism presented in section 2.3 appears perfectly well suited. However, in the context of quasi-Keplerian systems, the application of the Balescu-Lenard formalism in its original form raises two additional difficulties, which require particular attention. The first difficulty comes from the fact that one has to describe the dynamics of the system within a possibly non-inertial set of coordinates. This requires paying careful attention to canonical changes of coordinates, as will be emphasised in section 6.2. The second difficulty arises from the intrinsic dynamical degeneracies of the Keplerian problem, i.e. the fact that the Keplerian frequencies Ω_Kep satisfy commensurability conditions of the form n•Ω_Kep = 0, for some vectors of integers n = (n_1, n_2, n_3), as will be discussed in section 6.3. Indeed, the Balescu-Lenard formalism in its original form assumes that resonances are localised in action space and are not degenerate. As a consequence, it must be re-examined before it can be applied to the degeneracies inherent to quasi-Keplerian systems.
In the upcoming sections, we will show how one can account for these degeneracies in the case of a cluster of N particles orbiting a massive, possibly relativistic, central body. This will require to first average the equations of motion over the fast Keplerian angle associated with the orbital motion of stars around the BH. Once such an averaging is carried out, we will emphasise how the generic Balescu-Lenard formalism applies straightforwardly and yields the associated degenerate secular collisional equation. As will be detailed in the upcoming sections, this equation captures the drift and diffusion of particles' actions induced by their mutual resonant interaction at the frequency shifts present in addition to the mean Keplerian dynamics, e.g., possibly induced by the cluster's self-gravity or relativistic effects. This new equation will be shown to be ideally suited to describe the secular evolution of a large set of particles orbiting a massive central object, by capturing the secular effects of sequences of polarised wire-wire interactions (associated with scalar or vector resonant relaxation) on the underlying cluster's orbital structure.
This chapter is organised as follows. Section 6.2 specifies the BBGKY hierarchy to systems with a finite number of particles orbiting a central massive body, by using canonical coordinates to account adequately for the motion due to the central body. Section 6.3 describes the angle-action coordinates appropriate for such quasi-Keplerian systems and discusses how the dynamical degeneracies should be dealt with. Section 6.4 averages the corresponding dynamical equations over the fast Keplerian angles and discusses the newly obtained set of coupled evolution equations. Section 6.5 presents in detail the derivation of the associated degenerate Balescu-Lenard equation.
Let us now specify the first two equations of the BBGKY hierarchy to the present quasi-Keplerian context. The first equation (2.104) becomes here
\frac{∂f_1}{∂t} + \Big[ v_1 + \frac{ε}{N} v_1 \Big]{\cdot}\frac{∂f_1}{∂x_1} + \Big[ M_• F_{10} + M_⋆ F_{1r} \Big]{\cdot}\frac{∂f_1}{∂v_1} + \Big[ \int dΓ_2 \, F_{12} f_1(Γ_2) \Big]{\cdot}\frac{∂f_1}{∂v_1} + \int dΓ_2 \, F_{12}{\cdot}\frac{∂g_2(Γ_1,Γ_2)}{∂v_1}
 + \frac{1}{M_•} \frac{∂f_1}{∂x_1}{\cdot}\int dΓ_2 \, v_2 f_1(Γ_2) + \frac{1}{M_•} \int dΓ_2 \, v_2{\cdot}\frac{∂g_2(Γ_1,Γ_2)}{∂x_1} = 0 .    (6.10)
Similarly, the second equation (2.105) becomes

\frac{1}{2}\frac{∂g_2}{∂t} + \Big[ v_1 + \frac{ε}{N}(v_1+v_2) \Big]{\cdot}\frac{∂g_2}{∂x_1} + \frac{ε}{N} v_2{\cdot}\frac{∂f_1}{∂x_1} f_1(Γ_2) + \Big[ M_• F_{10} + M_⋆ F_{1r} \Big]{\cdot}\frac{∂g_2}{∂v_1} + \Big[ \int dΓ_3 \, F_{13} f_1(Γ_3) \Big]{\cdot}\frac{∂g_2}{∂v_1}
 + µ F_{12}{\cdot}\frac{∂f_1}{∂v_1} f_1(Γ_2) + \Big[ \int dΓ_3 \, F_{13} g_2(Γ_2,Γ_3) \Big]{\cdot}\frac{∂f_1}{∂v_1} + \frac{1}{M_•} \frac{∂f_1}{∂x_1}{\cdot}\int dΓ_3 \, v_3 g_2(Γ_2,Γ_3) + \frac{1}{M_•} \frac{∂g_2}{∂x_1}{\cdot}\int dΓ_3 \, v_3 f_1(Γ_3)
 + µ F_{12}{\cdot}\frac{∂g_2}{∂v_1} + \int dΓ_3 \, F_{13}{\cdot}\frac{∂g_3(Γ_1,Γ_2,Γ_3)}{∂v_1} + \frac{1}{M_•} \int dΓ_3 \, v_3{\cdot}\frac{∂g_3(Γ_1,Γ_2,Γ_3)}{∂x_1} + (1 ↔ 2) = 0 ,    (6.11)

where (1 ↔ 2) stands for the permutation of indices 1 and 2 and applies to all preceding terms.
As we are interested in first-order collisional effects, let us proceed as in equations (2.107) and (2.108), and truncate equations (6.10) and (6.11) at order 1/N. At this stage, let us emphasise that such quasi-Keplerian systems involve two small parameters, namely 1/N, associated with the system's discreteness, and ε = M_⋆/M_•, capturing the dominance of the BH on the stars' individual dynamics. As the upcoming calculations will emphasise, we will perform kinetic developments where we keep only small terms of order ε and ε/N, while higher-order corrections will be neglected. In equation (6.10), we note that all the terms are at least of order 1/N and should therefore all be kept. In equation (6.11), the first two lines are of order 1/N (except for the correction (ε/N)(v_1+v_2)•∂g_2/∂x_1, which may be neglected) and should be kept, while all the terms from the third line are of order 1/N^2 and may therefore be neglected. As already noted in equation (2.105), we note that the first term of the third line of equation (6.11), while being of order 1/N^2, can still get arbitrarily large as particles 1 and 2 get closer. This term captures strong collisions and is not accounted for in the present formalism. In addition to these truncations, and in order to consider terms of order 1, let us finally define the system's 1-body DF F and its 2-body autocorrelation C as
F = \frac{f_1}{M_⋆} \,; \quad C = \frac{g_2}{µ M_⋆} .    (6.12)
One should pay attention to the fact that these normalisations differ from the generic ones introduced in equation (2.106). Finally, in order to highlight the different orders of magnitude of the various components present in the problem, let us rescale as well some of the quantities appearing in equations (6.10) and (6.11). We first rescale the binary interaction potential, U , by using the mass of the central BH, so that
F_{ij} = -\frac{∂U_{ij}}{∂x_i} \,; \quad U_{ij} = -\frac{G M_•}{|x_i - x_j|} .    (6.13)
In addition, the potential Φ r = Φ rel associated with the relativistic corrections is rescaled so that
F_{ir} = -\frac{∂Φ_r}{∂x_i} \,; \quad Φ_r → Φ_r \, M_• \,; \quad F_{ir} → F_{ir} \, M_• .    (6.14)
As a result of these various truncations and renormalisations, the first BBGKY equation (6.10) becomes
\frac{∂F}{∂t} + \Big[ v_1 + \frac{ε}{N} v_1 \Big]{\cdot}\frac{∂F}{∂x_1} + F_{10}{\cdot}\frac{∂F}{∂v_1} + ε \Big[ \int dΓ_2 \, F_{12} F(Γ_2) \Big]{\cdot}\frac{∂F}{∂v_1} + ε F_{1r}{\cdot}\frac{∂F}{∂v_1} + \frac{ε}{N} \int dΓ_2 \, F_{12}{\cdot}\frac{∂C(Γ_1,Γ_2)}{∂v_1}
 + ε \frac{∂F}{∂x_1}{\cdot}\int dΓ_2 \, v_2 F(Γ_2) + \frac{ε}{N} \int dΓ_2 \, v_2{\cdot}\frac{∂C(Γ_1,Γ_2)}{∂x_1} = 0 .    (6.15)
Similarly, the second BBGKY equation (6.11) becomes
\frac{1}{2}\frac{∂C}{∂t} + v_1{\cdot}\frac{∂C}{∂x_1} + F_{10}{\cdot}\frac{∂C}{∂v_1} + ε \Big[ \int dΓ_3 \, F_{13} F(Γ_3) \Big]{\cdot}\frac{∂C}{∂v_1} + ε F_{1r}{\cdot}\frac{∂C}{∂v_1}
 + ε F_{12}{\cdot}\frac{∂F}{∂v_1} F(Γ_2) + ε \Big[ \int dΓ_3 \, F_{13} C(Γ_2,Γ_3) \Big]{\cdot}\frac{∂F}{∂v_1}
 + ε v_2{\cdot}\frac{∂F}{∂x_1} F(Γ_2) + ε \frac{∂F}{∂x_1}{\cdot}\int dΓ_3 \, v_3 C(Γ_2,Γ_3) + ε \frac{∂C}{∂x_1}{\cdot}\int dΓ_3 \, v_3 F(Γ_3) + (1 ↔ 2) = 0 .    (6.16)
One should of course note how equations (6.15) and (6.16) are similar to the associated generic ones obtained in equations (2.107) and (2.108). Differences arise from the contributions from the central BH and the relativistic corrections as well as from the additional kinetic terms present in the Hamiltonian from equation (6.7). As will be shown in section 6.4, once averaged over the BH-induced Keplerian motion, these kinetic corrections will not come into play at the order considered in the kinetic developments. The next step of the calculations is now to rewrite equations (6.15) and (6.16) within appropriate angle-action coordinates to capture in a simple manner the dominant mean Keplerian motion induced by the central BH. One subtlety with such Keplerian dynamics comes from the dynamical degeneracies present in the associated Keplerian orbital frequencies. These degeneracies have to handled with care as we will now detail.
6.3 Degenerate angle-action coordinates
In equations (6.15) and (6.16), one notes the presence of a dominant advection term v 1 •∂/∂x 1 +F 10 •∂/∂v 1 associated with the Keplerian motion induced by the central BH. The next step of our derivation is now to introduce the appropriate angle-action coordinates to capture this integrable Keplerian motion. Following section 1.3, let us remap the physical coordinates (x, v) to the Keplerian angle-action ones (θ, J ). The Keplerian orbital frequencies associated with these coordinates are then given by
\dot{θ} = Ω_{Kep}(J) = \frac{∂H_{Kep}}{∂J} ,    (6.17)
where H Kep stands for the Hamiltonian associated with the Keplerian motion due to the BH. Of course, various choices of angle-action coordinates are possible. For 3D spherical potential, the usual angleaction coordinates [START_REF] Binney | Galactic Dynamics: Second Edition[END_REF] are given by
(J , θ) = (J 1 , J 2 , J 3 , θ 1 , θ 2 , θ 3 ) = (J r , L, L z , θ 1 , θ 2 , θ 3 ) , (6.18)
where J r and L are respectively the radial action and the magnitude of the angular momentum, while L z is its projection along the z-axis (see Appendix 4.D). The Keplerian Hamiltonian then becomes H Kep = H Kep (J r +L). Another choice of 3D angle-action coordinates is given by the Delaunay variables (Sridhar & Touma, 1999;[START_REF] Binney | Galactic Dynamics: Second Edition[END_REF] reading (J , θ) = (I, L, L z , w, g, h) . (6.19)
In equation (6.19), we introduced as (I = J r +L, L, L z ) the three actions of the system, while (w, g, h) are the associated angles. Here, the angles have straightforward interpretation in terms of the orbital elements of the Keplerian ellipses: w stands for the orbital phase or mean anomaly, g is the angle from the ascending node to the periapse, while h is the longitude of the ascending node. Within these variables, the Keplerian Hamiltonian becomes H Kep = H Kep (I), so that the angles g and h become integrals of motion, while the angle w advances at the frequency ẇ = Ω Kep = ∂H Kep /∂I. Because of the existence of these additional conserved quantities, the Keplerian potential is considered to be dynamically degenerate. This can have some crucial consequences on the long-term behaviour of the system, as we will now detail.
To clarify the upcoming discussions, let us denote by d the dimension of the considered physical space, e.g., d = 2 for a razor-thin disc. Within this space, we consider an integrable potential ψ_0 and one associated angle-action mapping (x, v) → (θ, J). A potential is said to be degenerate if there exists n ∈ Z^d such that

∀J , \quad n{\cdot}Ω(J) = 0 ,    (6.20)

where it is important for the vector n to be independent of J, for the degeneracy to be global. See figure 1.3.2 for an illustration of resonant orbits. Of course, a given potential may have more than one such degeneracy, and we denote by k the degree of degeneracy of the potential, i.e. the number of linearly independent vectors n satisfying equation (6.20). Let us consider for example the 3D angle-action coordinates from equation (6.18). The associated frequencies and degeneracy vectors are given by
Ω_{3D} = (Ω_{Kep}, Ω_{Kep}, 0) \;⇒\; n_1 = (1, -1, 0) \;\text{and}\; n_2 = (0, 0, 1) ,    (6.21)

so that k = 2. Similarly, for the 3D Delaunay angle-action variables from equation (6.19) one can write

Ω_{Del} = (Ω_{Kep}, 0, 0) \;⇒\; n_1 = (0, 1, 0) \;\text{and}\; n_2 = (0, 0, 1) ,    (6.22)

so that one gets as well k = 2, i.e. the degree of degeneracy of the potential is independent of the chosen angle-action coordinates. Because of their simpler degeneracy vectors n_1 and n_2, the Delaunay variables from equation (6.19) appear as a more appropriate choice than the usual ones from equation (6.18). As a final remark, let us emphasise that for a given degenerate potential, one can always remap the system's angle-action coordinates to get simpler dynamical degeneracies. As an illustration, let us assume that in our initial choice of angle-action coordinates (θ, J), the system's degeneracies are captured by the k degeneracy vectors n_1, ..., n_k. Thanks to a linear change of coordinates (θ, J) → (θ', J'), one can always construct new angle-action coordinates within which the k degeneracy vectors take the simple form n'_i = e_i, where the e_i are the natural basis elements of Z^d. Indeed, following [START_REF] Morbidelli | Modern celestial mechanics (Taylor & Francis) Mould[END_REF], because the vectors n_i are by definition linearly independent, we may complete this family with d-k vectors n_{k+1}, ..., n_d ∈ Z^d to construct a basis over Q^d. Defining the transformation matrix A of determinant 1 as

A = (n_1, ..., n_d)^t / |(n_1, ..., n_d)| ,    (6.23)

let us introduce new angle-action coordinates (θ', J') as
θ' = A{\cdot}θ \,; \quad J' = (A^t)^{-1}{\cdot}J .    (6.24)
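The remapping of equations (6.23) and (6.24) can be made concrete on the Delaunay example of equation (6.22). The sketch below is a toy illustration: it completes the two degeneracy vectors with an arbitrarily chosen third row so that det A = 1, and checks that the transformed frequencies Ω' = A·Ω vanish for the slow components.

```python
import numpy as np

Omega_Kep = 2.0 * np.pi                      # arbitrary Keplerian frequency
Omega_Del = np.array([Omega_Kep, 0.0, 0.0])  # Delaunay frequencies, equation (6.22)

# Degeneracy vectors n_1, n_2 of equation (6.22), completed by an arbitrary
# third row chosen so that det(A) = 1 (one possible choice among many).
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
assert np.isclose(np.linalg.det(A), 1.0)

Omega_new = A @ Omega_Del                    # theta' = A.theta  =>  Omega' = A.Omega
print("Omega' =", Omega_new)                 # first k = 2 components vanish (slow angles)

# Actions transform with (A^t)^{-1}, equation (6.24), preserving the symplectic structure.
J = np.array([1.0, 0.7, 0.3])                # arbitrary (I, L, L_z)
print("J'     =", np.linalg.inv(A.T) @ J)
```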
It is straightforward to check that (θ', J') are indeed new angle-action coordinates, for which the J' are conserved and θ' ∈ [0, 2π]. In addition, within these new coordinates, the system's degeneracies are immediately characterised by the k vectors n'_i = e_i. The new intrinsic frequencies then satisfy Ω'_i = 0 for 1 ≤ i ≤ k, so that the degeneracies of the potential have become simpler. In all the upcoming calculations, we will always consider such simpler angle-action coordinates, for which the additional conserved quantities are straightforward to obtain. Let us finally introduce the notations

θ^s = (θ_1, ..., θ_k) \,; \; θ^f = (θ_{k+1}, ..., θ_d) \,; \; J^s = (J_1, ..., J_k) \,; \; J^f = (J_{k+1}, ..., J_d) \,; \; E = (J, θ^s) .    (6.25)
In equation (6.25), θ s and J s correspond to the slow angles and actions, while θ f and J f correspond to the fast angles and actions. Finally, we introduced as E the vector of all the conserved quantities for the underlying dynamics. In the case of a Keplerian potential, this corresponds to a Keplerian elliptical wire. For a degenerate potential, the slow angles are the ones for which the intrinsic frequencies are equal to 0, while these frequencies are non-zero for the fast angles. Let us finally define the degenerate angle-average w.r.t. the fast angles as
\overline{F}(J, θ^s) = \int \frac{dθ^f}{(2π)^{d-k}} \, F(J, θ^s, θ^f) .    (6.26)
Let us now use these angle-action coordinates to rewrite the two evolution equations (6.15) and (6.16). Because they were tailored for the Keplerian dynamics, these coordinates allow us to rewrite the Keplerian advection term as
v_1{\cdot}\frac{∂}{∂x_1} + F_{10}{\cdot}\frac{∂}{∂v_1} = Ω_{Kep}{\cdot}\frac{∂}{∂θ} .    (6.27)
In addition, let us emphasise that the angle average from equation (6.26) is such that the advection term from equation (6.27) immediately vanishes when averaged, so that
\overline{Ω_{Kep}{\cdot}\frac{∂F}{∂θ}} = \int \frac{dθ_{k+1}}{2π} \cdots \int \frac{dθ_d}{2π} \sum_{i=k+1}^{d} Ω^i_{Kep}(J) \, \frac{∂F}{∂θ_i} = 0 .    (6.28)
Finally, the coordinate mapping (x, v) → (θ, J) being canonical, infinitesimal volumes are conserved, so that dΓ = dx dv = dθ dJ. Poisson brackets are also preserved, so that for any functions G_1(x, v) and G_2(x, v) one has

\big[G_1, G_2\big] = \frac{∂G_1}{∂x}{\cdot}\frac{∂G_2}{∂v} - \frac{∂G_1}{∂v}{\cdot}\frac{∂G_2}{∂x} = \frac{∂G_1}{∂θ}{\cdot}\frac{∂G_2}{∂J} - \frac{∂G_1}{∂J}{\cdot}\frac{∂G_2}{∂θ} .    (6.29)

In order to shorten the notations, let us finally introduce the rescaled self-consistent potential of the stars Φ as

Φ(x_1) = \int dΓ_2 \, U_{12} F(Γ_2) \,; \quad -\frac{∂Φ}{∂x_1} = \int dΓ_2 \, F_{12} F(Γ_2) .    (6.30)
One may now rewrite equation (6.15) within these new coordinates as

\frac{∂F}{∂t} + Ω^1_{Kep}{\cdot}\frac{∂F}{∂θ_1} + ε \big[F, Φ+Φ_r\big] + \frac{ε}{N} \int dΓ_2 \, \big[C(Γ_1,Γ_2), U_{12}\big]_{(1)}
 + \frac{ε}{N} \Big[F, \frac{v_1^2}{2}\Big] + ε \Big[F, v_1{\cdot}\int dΓ_2 \, v_2 F(Γ_2)\Big] + \frac{ε}{N} \int dΓ_2 \, \big[C(Γ_1,Γ_2), v_1{\cdot}v_2\big]_{(1)} = 0 ,    (6.31)

where we wrote Ω^1_{Kep} = Ω_{Kep}(J_1), and introduced the notation

\big[G_1(Γ_1,Γ_2), G_2(Γ_1,Γ_2)\big]_{(1)} = \frac{∂G_1}{∂θ_1}{\cdot}\frac{∂G_2}{∂J_1} - \frac{∂G_1}{∂J_1}{\cdot}\frac{∂G_2}{∂θ_1} ,    (6.32)

corresponding to the Poisson bracket w.r.t. the variables 1. In equation (6.31), we gathered on the second line all the terms associated with the additional kinetic terms present in the Hamiltonian from equation (6.7). As shown in section 6.4, these terms, once averaged over the fast Keplerian motion, will be negligible at the order considered here. Equation (6.16) can also straightforwardly be rewritten as

\frac{1}{2}\frac{∂C}{∂t} + Ω^1_{Kep}{\cdot}\frac{∂C}{∂θ_1} + ε \big[C(Γ_1,Γ_2), Φ+Φ_r\big]_{(1)} + ε \big[F(Γ_1)F(Γ_2), U_{12}\big]_{(1)} + ε \int dΓ_3 \, C(Γ_2,Γ_3) \big[F(Γ_1), U_{13}\big]_{(1)}
 + ε \big[F(Γ_1), v_1{\cdot}v_2\big]_{(1)} F(Γ_2) + ε \Big[F(Γ_1), v_1{\cdot}\int dΓ_3 \, v_3 C(Γ_2,Γ_3)\Big]_{(1)} + ε \Big[C(Γ_1,Γ_2), v_1{\cdot}\int dΓ_3 \, v_3 F(Γ_3)\Big]_{(1)}
 + (1 ↔ 2) = 0 ,    (6.33)
where the terms present in the second line are the ones associated with the additional kinetic terms from equation (6.7). The rewriting from equation (6.31) allows us to easily identify the various timescales of the problem. These are: (i) the Keplerian dynamical timescale T Kep = 1/Ω Kep associated with the dominant BHinduced Keplerian dynamics and captured by the advection term Ω 1 Kep •∂F/∂θ 1 , (ii) the secular collisionless timescale of evolution T sec = ε -1 T Kep associated with the potential term ε Φ+Φ r due to the stars' self-consistent potential as well as the relativistic corrections, and finally (iii) the collisional timescale of relaxation T relax = N T sec associated with the last term of the first line of equation (6.31). Having obtained equations (6.31) and (6.33) which describe the joint evolution of the system's 1-body DF and its 2-body autocorrelation, we will show in the next section how one may get rid of the BH-induced Keplerian dynamics via an appropriate degenerate angle-average.
6.4 Averaging the evolution equations
As the Keplerian dynamics due to the BH is much faster than the one associated with all the other potential contributions, rather than considering the stars as point particles, let us describe them as massive elliptical wires, for which the mass of the star is smeared out along the elliptic path of its Keplerian orbit. This is the exact purpose of the degenerate angle-average from equation (6.26). As noted in equation (6.28), such an average naturally cancels out any contribution associated with the BH Keplerian advection term. Let us start from equation (6.31) and multiply it by dθ^f_1/(2π)^{d-k}. In order to estimate the average of the various terms that occur in equation (6.31), let us finally assume that the system's DF, F, can be decomposed as

F = \overline{F} + ε' f \quad \text{with} \quad f ∼ O(1) \quad \text{and} \quad \overline{f} = 0 ,    (6.34)

where ε' ≪ 1 is an additional small parameter of order 1/N. The ansatz from equation (6.34) is the crucial assumption of the present derivation. Indeed, contrary to what was generically discussed in figures 1.3.4 and 1.3.5 w.r.t. the mechanisms of phase mixing or violent relaxation, the BH's domination on the dynamics strongly limits the efficiency of such mechanisms to allow for a rapid dissolution of any θ^f-dependence. Here, we therefore assume that the ansatz from equation (6.34) is satisfied because in its initial state the system was already phase mixed.
Relying on this ansatz, let us now discuss in turn how the various terms appearing in equation (6.31) can be dealt with once averaged over the fast Keplerian angle. In the first Poisson bracket of equation (6.31), let us recall that the self-consistent potential Φ introduced in equation (6.30) should be seen as a functional of the system's DF F . As a consequence, this term becomes
\overline{ε \big[F, Φ(F)+Φ_r\big]} = \overline{ε \big[\overline{F}+ε'f, \, Φ(\overline{F}+ε'f)+Φ_r\big]} = \overline{ε \big[\overline{F}, Φ(\overline{F})+Φ_r\big]} + O(ε ε') = (2π)^{d-k} \, ε \big[\overline{F}, \overline{Φ}(\overline{F})+\overline{Φ}_r\big] + O(ε ε') .    (6.35)
In equation (6.35), we introduced the system's averaged self-consistent potential \overline{Φ} as

\overline{Φ}(E_1) = \int dE_2 \, \overline{F}(E_2) \, \overline{U}_{12}(E_1, E_2) ,    (6.36)

where, for clarity, the notation for the self-consistent potential was shortened as \overline{Φ} = \overline{Φ}(\overline{F}). In equation (6.36), we also introduced the (doubly) averaged interaction potential \overline{U}_{12} as

\overline{U}_{12}(E_1, E_2) = \int \frac{dθ^f_1}{(2π)^{d-k}} \int \frac{dθ^f_2}{(2π)^{d-k}} \, U_{12}(Γ_1, Γ_2) .    (6.37)

This potential describes the pairwise interaction between two Keplerian wires of coordinates E_1 and E_2. Finally, we also defined the averaged potential \overline{Φ}_r as

\overline{Φ}_r(E) = \frac{1}{(2π)^{d-k}} \int \frac{dθ^f}{(2π)^{d-k}} \, Φ_r(Γ) ,    (6.38)

where we introduced the prefactor 1/(2π)^{d-k} for convenience. See Appendix 6.A for the expression of the relativistic precession frequencies. In equation (6.35), let us note that at first order in ε and zeroth order in ε', the self-consistent potential of the system has to be computed while only considering the averaged system's DF \overline{F}. In order to deal with the second Poisson bracket in equation (6.31), we perform on the 2-body autocorrelation C the same double average as the one introduced in equation (6.37).
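In practice, the double average of equation (6.37) is a double quadrature over the two fast angles. The sketch below illustrates it on a toy planar configuration: two hypothetical Keplerian ellipses are sampled uniformly in mean anomaly, positions are obtained by iterating Kepler's equation, and a softened pair potential is averaged over both orbits. The softening length and the orbital elements are arbitrary illustrative choices, and GM_• is set to 1.

```python
import numpy as np

def kepler_position(a, e, g, w):
    """Planar position of a Keplerian orbit for mean anomaly w (pericentre angle g)."""
    eta = w.copy()
    for _ in range(50):                        # fixed-point iteration for w = eta - e sin(eta)
        eta = w + e * np.sin(eta)
    R = a * (1.0 - e * np.cos(eta))
    f = np.arctan2(np.sqrt(1 - e**2) * np.sin(eta), np.cos(eta) - e)
    phi = g + f
    return R * np.cos(phi), R * np.sin(phi)

def wire_wire_potential(el1, el2, n_w=128, soft=1e-2):
    """Double fast-angle average of a softened pair potential, in the spirit of eq. (6.37)."""
    w = 2.0 * np.pi * (np.arange(n_w) + 0.5) / n_w
    x1, y1 = kepler_position(*el1, w)
    x2, y2 = kepler_position(*el2, w)
    dx = x1[:, None] - x2[None, :]
    dy = y1[:, None] - y2[None, :]
    U = -1.0 / np.sqrt(dx**2 + dy**2 + soft**2)   # rescaled potential with G M_bullet = 1
    return U.mean()

print(wire_wire_potential((1.0, 0.3, 0.0), (1.5, 0.5, 1.2)))
```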
Similarly to equation (6.34), let us assume that the 2-body autocorrelation can be developed as
C = \overline{C} + ε' c \quad \text{with} \quad c ∼ O(1) \quad \text{and} \quad \overline{c} = 0 .    (6.39)
At first order in ε and zeroth order in ε', the third term from equation (6.31) can then be rewritten as

\overline{\frac{ε}{N} \int dΓ_2 \, \big[C(Γ_1,Γ_2), U_{12}\big]_{(1)}} = \frac{ε (2π)^{d-k}}{N} \int dE_2 \, \big[\overline{C}(E_1,E_2), \overline{U}_{12}\big]_{(1)} .    (6.40)
Finally, let us deal with all the additional kinetic terms present in the second line of equation (6.31).
Once averaged over the fast Keplerian angle, and considering only terms at first order in ε and zeroth order in ε', these various terms involve the quantities

\int dθ^f_1 \, v_1 = 0 \,; \quad \int dθ^f_1 \, \frac{v_1^2}{2} ∝ H_{Kep}(J^f_1) .    (6.41)
In equation (6.41), the first identity comes from the fact that Keplerian orbits are closed, so that the mean displacement over one orbit is zero, while the second identity is a direct consequence of the virial theorem. Because these terms either vanish or do not depend on the slow coordinates θ^s and J^s, at the order considered here, they will not contribute to the dynamics once averaged over the fast Keplerian angle. As a conclusion, keeping only terms of order ε and ε/N, equation (6.31) becomes
\frac{∂F}{∂t} + ε(2π)^{d-k} \big[F, Φ+Φ_r\big] + \frac{ε(2π)^{d-k}}{N} \int dE_2 \, \big[C(E_1,E_2), U_{12}\big]_{(1)} = 0 .    (6.42)
Because equation (6.42) was obtained via an average over the fast angles, one can note in this equation that all the functions appearing in the Poisson brackets only depend on E_1 = (J_1, θ^s_1). As a consequence, the Poisson bracket from equation (6.29) takes here the shortened form

\big[G_1(E), G_2(E)\big] = \frac{∂G_1}{∂θ^s}{\cdot}\frac{∂G_2}{∂J^s} - \frac{∂G_1}{∂J^s}{\cdot}\frac{∂G_2}{∂θ^s} .    (6.43)

Let us follow a similar angle-averaging procedure to deal with the second equation (6.33) of the BBGKY hierarchy. Let us therefore multiply equation (6.33) by dθ^f_1 dθ^f_2/(2π)^{2(d-k)} and rely on the assumptions from equations (6.34) and (6.39). Keeping only terms of order ε, equation (6.33) finally becomes

\frac{1}{2}\frac{∂C}{∂τ} + \big[C(E_1,E_2), Φ(E_1)+Φ_r(E_1)\big]_{(1)} + \frac{\big[F(E_1)F(E_2), U_{12}\big]_{(1)}}{(2π)^{d-k}} + \int dE_3 \, C(E_2,E_3) \big[F(E_1), U_{13}\big]_{(1)} + (1 ↔ 2) = 0 ,    (6.46)

where once again, one can note that all the additional kinetic terms present in the second line of equation (6.33) vanish at the order considered here. Equations (6.45) and (6.46) are the main results of this section. They describe the coupled evolution of the system's averaged DF F and 2-body correlation C driven by finite-N effects. Let us already underline the strong analogies between these two equations and the non-degenerate equations (2.107) and (2.108). A rewriting of the degenerate equations (6.45) and (6.46) was also recently obtained in Sridhar & Touma (2016a,b), following Gilbert's method (Gilbert, 1968). Starting from equation (6.45), one can now investigate at least four different dynamical regimes of evolution, as we now detail:
I Considering equation (6.45), the system of Keplerian wires could be initially far from a quasi-stationary equilibrium, so that [F, Φ+Φ_r] ≠ 0. It is then expected that the system will undergo a first collisionless phase of violent relaxation (Lynden-Bell, 1967), allowing it to rapidly reach a quasi-stationary equilibrium. See figure 1.3.5 and the associated discussion for an illustration of the classical violent relaxation in self-gravitating systems. We do not investigate this process further here. However, we assume that this collisionless relaxation of Keplerian wires is sufficiently efficient, so that the system rapidly reaches a quasi-stationary stable state. Following this initial violent phase, the system's dynamics is then driven by a much slower secular evolution, either collisionless (item III) or collisional (item IV).
II For a given stationary DF of Keplerian wires, equation (6.45) also captures the system's gravitational susceptibility, so that one could also investigate the possible existence of collisionless dynamical instabilities through the equation ∂F /∂τ +[F , Φ+Φ r ] = 0. See Appendix 4.C for an illustration of dynamical instabilities in non-degenerate stellar discs. We do not investigate such instabilities further here. However, similarly to what was assumed for non-degenerate systems, we suppose that, throughout its evolution, the system, while still being able to amplify and dress perturbations, always remains dynamically stable w.r.t. the collisionless dynamics. See, e.g., Tremaine (2005); Polyachenko et al. (2007); Jalali & Tremaine (2012) for examples of stability investigations in the quasi-Keplerian context. III After the system has reached a quasi-stationary stable state, one may now study the system's secular evolution along quasi-stationary stable equilibria. As was presented in detail in section 2.2, a first way to induce a long-term evolution is via the presence of external stochastic fluctuations. In order to describe such externally induced secular collisionless evolution, one would start from equation (6.45), neglect the contributions associated with the collisional term in 1/N in equation (6.45), and look for the long-term effects of external perturbations. This would correspond to the specification to degenerate quasi-Keplerian systems of the secular collisionless stochastic forcing considered in section 2.2. In the case of quasi-Keplerian systems, one additional difficulty comes from the canonical change of coordinates we had to perform in equation (6.4) to emphasise the properties of the dominant BH-induced Keplerian dynamics. Adding external perturbations may offset the system and introduce non-trivial inertial forces, which should be dealt with carefully. We do not present thereafter the specification of such externally forced secular dynamics to the case of quasi-Keplerian systems.
IV During its secular evolution along quasi-stationary equilibria, the dynamics of an isolated system of Keplerian wires may also be driven by finite-N fluctuations. Such a self-induced collisional dynamics, in the context of non-degenerate inhomogeneous systems, was presented in detail in section 2.3. In the present quasi-Keplerian context, this amounts to neglecting any effects associated with external stochastic perturbations and consider the contributions coming from the 1/N collisional term in equation (6.45). In order to characterise this collisional term, this requires to consider as well equation (6.46), which describes the dynamics of the system's fluctuations. In section 6.5, we present in detail how this approach may be pursued for quasi-Keplerian systems.
We then derive the analog of the Balescu-Lenard equation (2.67) in the context of degenerate dynamical systems, such as galactic nuclei. As discussed in the next section, such a diffusion equation sourced by finite-N fluctuations captures the known mechanism of resonant relaxation (Rauch & Tremaine, 1996) of particular relevance for galactic nuclei. See Bar-Or & Alexander (2014) for a similar study of the effect of finite-N stochastic internal forcings via the so-called η-formalism.
Let us finally note that one could also investigate the secular dynamics of a quasi-stationary non-axisymmetric set of eccentric orbits orbiting a central black hole, as an unperturbed collisionless equilibrium. This would correspond for example to the expected lopsided configuration of M31's galactic centre (Tremaine, 1995). In order to derive the Balescu-Lenard equation associated with such a configuration, one would first have to find new angle-action variables for which item II would be satisfied, and then proceed further with the formalism. We also emphasised in item II the role played by the system's self-gravity, whose importance varies with the context. Indeed, depending on the mass of the stellar cluster, one expects that there exists a regime for which the system's self-induced orbital precession is significant, but the wires' self-gravity remains too weak to induce a collisionless instability. In such a regime, accounting for the self-induced polarisation of the wires becomes important in items III and IV. This motivates the upcoming derivation of the degenerate Balescu-Lenard equation.
6.5 The degenerate Balescu-Lenard equation
Assuming that the evolution of the system is driven by finite-N effects, let us now illustrate how one may derive the inhomogeneous degenerate Balescu-Lenard equation. This equation captures the long-term effects of the 1/N collisional contribution in equation (6.45). It is assumed that the system is isolated and undergoes no external perturbations. In order to ease the derivation of the closed kinetic equation satisfied by F, we rely on the strong analogies between the present quasi-Keplerian case and the non-degenerate equations considered in Appendix 2.B. As already underlined in section 2.3.1 when deriving the Balescu-Lenard equation, we rely on the adiabatic approximation that the system secularly relaxes through a series of collisionless equilibria. In the present quasi-Keplerian context, these collisionless equilibria correspond to stationary (and stable) steady states of the collisionless advection term [F, Φ+Φ_r] from equation (6.45). Let us therefore assume that, throughout its evolution, the system's DF satisfies

∀τ , \quad \big[F(τ), Φ(τ)+Φ_r(τ)\big] = 0 .    (6.47)
As already underlined in item I of the previous section, it is expected that such collisionless equilibria are rapidly reached by the system (on a few T sec ) through an efficient out-of-equilibrium violent relaxation.
In addition, we also assume that the symmetry of the system is such that collisionless equilibria are of the form

F(J, θ^s, τ) = F(J, τ) ,    (6.48)

so that throughout its evolution, the system's averaged DF F does not depend on the slow angles θ^s. At this stage, let us note that in the present quasi-Keplerian context, for which additional conserved quantities other than the actions J are available (namely the slow angles θ^s), the assumption from equation (6.48) limits the breadth of collisionless equilibria which can be considered. For example, lopsided collisionless equilibria, such as the one expected in M31, cannot be considered. Despite the assumption from equation (6.48), let us emphasise however that the system's averaged autocorrelation C, which evolves according to equation (6.46), still depends on the two slow angles θ^s_1 and θ^s_2. Finally, let us assume that the system's symmetry also guarantees that

F = F(J) \;⇒\; Φ = Φ(J) \;\text{and}\; Φ_r = Φ_r(J) .    (6.49)
One should note that the previous assumptions, while being restrictive, are still satisfied, among others, for two important cases, namely razor-thin axisymmetric discs (see section 6.6.1) and 3D spherical systems (see section 6.6.2). When assuming equations (6.48) and (6.49), the equilibrium condition from equation (6.47) is then immediately satisfied. Finally, let us introduce the total precession frequencies Ω^s as

Ω^s(J) = \frac{∂[Φ+Φ_r]}{∂J^s} .    (6.50)
These frequencies capture the precession of the slow angles θ s , i.e. the precession of the Keplerian wires induced by the joint contributions from the system's self-consistent potential Φ and the relativistic corrections Φ r . Let us note that these frequencies do not involve the Keplerian frequencies from equation (6.17), and are therefore a priori non-degenerate. The two evolution equations (6.45) and (6.46) can then be rewritten as
\frac{∂F}{∂τ} + \frac{1}{N} \int dE_2 \, \big[C(E_1,E_2), U_{12}\big]_{(1)} = 0 ,    (6.51)

and

\frac{1}{2}\frac{∂C}{∂τ} + Ω^s_1{\cdot}\frac{∂C(E_1,E_2)}{∂θ^s_1} - \frac{1}{(2π)^{d-k}} \frac{∂F}{∂J^s_1}{\cdot}\frac{∂U_{12}}{∂θ^s_1} - \int dE_3 \, C(E_2,E_3) \, \frac{∂F}{∂J^s_1}{\cdot}\frac{∂U_{13}}{∂θ^s_1} + (1 ↔ 2) = 0 .    (6.52)
At this stage, let us emphasise how the two coupled evolution equations (6.51) and (6.52) are similar to the non-degenerate equations (2.107) and (2.108). The only differences correspond to changes in the prefactors, as well as to the fact that only derivatives w.r.t. the slow angles and actions θ s and J s are present in the quasi-Keplerian context. Relying on these strong similarities, we may follow the same method as in Appendix 2.B to derive the kinetic equation satisfied by the averaged DF F . Because of these analogies, we do not repeat here this derivation, but refer to Appendix B in Fouvry et al. (2016d) for a detailed presentation of this derivation. As a brief summary, let us recall the main steps of this calculation. The first step is to solve equation (6.52) to obtain the system autocorrelation C as a functional of the system's 1-body DF F . To do so, one relies on Bogoliubov's ansatz, which assumes that F evolves on timescales much larger than the one associated with C. Injecting this inverted expression of C in equation ( 6.51), one finally obtains the closed kinetic equation satisfied by F only. This is the degenerate inhomogeneous Balescu-Lenard equation, that we will now present in detail.
6.5.1 The one-component Balescu-Lenard equation
Once the two coupled equations (6.51) and (6.52) are solved, one gets the degenerate inhomogeneous Balescu-Lenard equation reading
\frac{∂F}{∂τ} = \frac{π(2π)^{2k-d}}{N} \frac{∂}{∂J^s_1}{\cdot}\Bigg[ \sum_{m^s_1,m^s_2} m^s_1 \int dJ_2 \, \frac{δ_D(m^s_1{\cdot}Ω^s_1 - m^s_2{\cdot}Ω^s_2)}{|D_{m^s_1,m^s_2}(J_1,J_2,m^s_1{\cdot}Ω^s_1)|^2} \Big( m^s_1{\cdot}\frac{∂}{∂J^s_1} - m^s_2{\cdot}\frac{∂}{∂J^s_2} \Big) F(J_1,τ) \, F(J_2,τ) \Bigg] .    (6.53)
Let us first strongly emphasise how this degenerate Balescu-Lenard equation ressembles the nondegenerate one from equation (2.67). Let us recall now some important properties of this diffusion equation. In equation (6.53), we noted as d the dimension of the physical space, k the number of dynamical degeneracies of the underlying zeroth-order potential (here the Keplerian potential induced by the BH). The r.h.s. of equation (6.53) is the degenerate inhomogeneous Balescu-Lenard collision operator. It captures the secular diffusion of Keplerian wires induced by finite-N fluctuations, i.e. it describes the distortion of these wires as their actions diffuse through their self-interaction. Of course, because it was obtained thanks to a kinetic development at order 1/N , the r.h.s. of equation (6.53) vanishes in the limit N → +∞. Equation (6.53) also encompasses a resonance condition via the Dirac delta
δ_D(m^s_1{\cdot}Ω^s_1 - m^s_2{\cdot}Ω^s_2) (with the shortened notation Ω^s_i = Ω^s(J_i)), where m^s_1, m^s_2 ∈ Z^k are integer resonance vectors. It is important to note here that this resonance condition only involves the precession frequencies of the Keplerian wires. As already noted previously, for a given resonance vector m^s_1, the diffusion in action space will occur along the discrete direction given by this vector. One should finally interpret the integration over the dummy variable J_2 as a scan of action space looking for regions where the resonance condition is satisfied. These resonant distant encounters between precessing Keplerian wires are the drivers of the collisional evolution. In analogy with figure 2.3.2, we illustrate in figure 6.5.1 this resonance condition on precession frequencies.

[Figure 6.5.1: Illustration of two resonant Keplerian wires. The two wires satisfy a resonance condition for their precession frequencies. Uncorrelated sequences of such resonant interactions will lead to a secular diffusion of the system's orbital structure following equation (6.53). These resonances are non-local in the sense that the two resonant orbits need not be close in position nor in action space. As emphasised in section 6.6.1, in razor-thin axisymmetric discs, the system's symmetry enforces m^s_1 = m^s_2, i.e. the two orbits are caught in the same resonance.]

We also note that equation (6.53) involves the antisymmetric operator, m^s_1{\cdot}∂/∂J^s_1 - m^s_2{\cdot}∂/∂J^s_2, which, when applied to the quadratic term F(J_1)F(J_2), "weighs" the relative number of pairwise resonant orbits caught in the resonant configuration. Because it accounts for collective effects, i.e. the dressing of fluctuations by the system's susceptibility, equation (6.53) involves the dressed susceptibility coefficients 1/D_{m^s_1,m^s_2}(J_1,J_2,ω). In the quasi-Keplerian context, the dressed susceptibility coefficients from equation (2.50) become

\frac{1}{D_{m^s_1,m^s_2}(J_1,J_2,ω)} = \sum_{p,q} ψ^{(p)}_{m^s_1}(J_1) \, \big[I - M(ω)\big]^{-1}_{pq} \, ψ^{(q)*}_{m^s_2}(J_2) ,    (6.54)

where I stands for the identity matrix and M is the system's averaged response matrix. In the quasi-Keplerian context, the response matrix from equation (2.17) becomes

M_{pq}(ω) = (2π)^k \sum_{m^s} \int dJ \, \frac{m^s{\cdot}∂F/∂J^s}{ω - m^s{\cdot}Ω^s} \, ψ^{(p)*}_{m^s}(J) \, ψ^{(q)}_{m^s}(J) .    (6.55)
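The role of the resonance condition δ_D(m^s_1·Ω^s_1 − m^s_2·Ω^s_2) in equation (6.53) can be visualised by scanning action space for resonant pairs. The sketch below does this for a purely illustrative one-dimensional precession profile Ω^s(J), which is not the self-consistent one of this chapter, and for a few resonance-vector pairs.

```python
import numpy as np

def Omega_s(J):
    """Toy precession frequency profile, decreasing with action (illustration only)."""
    return 1.0 / (1.0 + J) ** 2

J_grid = np.linspace(0.0, 10.0, 2001)
pairs = [((1,), (1,)), ((1,), (2,)), ((2,), (1,))]

J1 = 3.0                                   # reference wire
for m1, m2 in pairs:
    # solve m1 * Omega_s(J1) = m2 * Omega_s(J2) for J2 on the grid
    target = m1[0] * Omega_s(J1) / m2[0]
    mismatch = np.abs(Omega_s(J_grid) - target)
    J2 = J_grid[np.argmin(mismatch)]
    print(f"m1={m1}, m2={m2}:  resonant partner at J2 ~ {J2:.2f}")
```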
One can note that in equations (6.54) and (6.55), we had to rely on the matrix method (see section 2.2.2) to relate the DF's perturbations to the induced potential perturbations. In the non-degenerate case, this requires the introduction of a biorthonormal basis of potentials and densities (ψ (p) , ρ (p) ) satisfying equation (2.12). In the degenerate quasi-Keplerian context, we rely on the same method and introduce a basis of potentials and densities satisfying similarly
ψ^{(p)}(x) = \int dx' \, ρ^{(p)}(x') \, U(|x-x'|) \,; \quad \int dx \, ψ^{(p)}(x) \, ρ^{(q)*}(x) = -δ^q_p .    (6.56)

Equation (6.56) is identical to equation (2.12), except for the fact that equation (6.56) involves the rescaled interaction potential, U, from equation (6.13). Once the basis elements ψ^{(p)} are specified, one can define their average following equation (6.26). Finally, following the convention from equation (2.6), we define their Fourier transform w.r.t. the slow angles θ^s as

ψ^{(p)}(E) = \sum_{m^s} ψ^{(p)}_{m^s}(J) \, e^{i m^s{\cdot}θ^s} \,; \quad ψ^{(p)}_{m^s}(J) = \int \frac{dθ^s}{(2π)^k} \, ψ^{(p)}(E) \, e^{-i m^s{\cdot}θ^s} .    (6.57)
Inspired by the various rephrasings presented in section 2.3.4, it is straightforward to rewrite equation (6.53) as an anisotropic self-consistent non-linear diffusion equation, by introducing the associated drift and diffusion coefficients; the non-degenerate equation (2.68) then has a direct quasi-Keplerian analog. Similarly, when collective effects are neglected, i.e. when the dressed susceptibility coefficients 1/D_{m^s_1,m^s_2} are replaced by their bare counterparts A_{m^s_1,m^s_2}, equation (6.53) reduces to the degenerate Landau equation

\frac{∂F}{∂τ} = \frac{π(2π)^{2k-d}}{N} \frac{∂}{∂J^s_1}{\cdot}\Bigg[ \sum_{m^s_1,m^s_2} m^s_1 \int dJ_2 \, δ_D(m^s_1{\cdot}Ω^s_1 - m^s_2{\cdot}Ω^s_2) \, |A_{m^s_1,m^s_2}(J_1,J_2)|^2 \Big( m^s_1{\cdot}\frac{∂}{∂J^s_1} - m^s_2{\cdot}\frac{∂}{∂J^s_2} \Big) F(J_1,τ) \, F(J_2,τ) \Bigg] ,    (6.60)
where, as in equation (2.74), we introduced the averaged bare susceptibility coefficients A_{m^s_1,m^s_2}(J_1,J_2) as

A_{m^s_1,m^s_2}(J_1,J_2) = \int \frac{dθ^s_1}{(2π)^k} \int \frac{dθ^s_2}{(2π)^k} \, U_{12}(E_1,E_2) \, e^{-i(m^s_1{\cdot}θ^s_1 - m^s_2{\cdot}θ^s_2)} ,    (6.61)

where the averaged rescaled wire-wire interaction potential U_12 was introduced in equation (6.37). As a final remark, let us note that the degenerate Balescu-Lenard equation (6.53) (and similarly the associated Landau equation (6.60)), while defined on the full action space J = (J^s, J^f), does not allow for changes in the fast actions J^f. Indeed, let us define the marginal DF, P_F, as P_F = \int dJ^s \, F(J). Equation (6.53) then gives

\frac{∂P_F}{∂τ} = 0 .    (6.62)
As a consequence, in the degenerate context, the collisional secular diffusion only occurs in the directions J f = cst. Such a conservation of the individual fast actions of the particles is a direct consequence of the adiabatic invariance of these actions, whose associated intrinsic frequencies are much faster than the precession frequencies involved in the collisional diffusion.
6.5.2 The multi-component Balescu-Lenard equation
Similarly to what was presented in section 2.3.6, one may also generalise the degenerate Balescu-Lenard equation (6.53) to a system of multiple components. This is of particular relevance for quasi-Keplerian systems such as galactic nuclei, for which one expects that the joint presence of multiple type of stars or black holes orbiting a central super massive black hole could be of importance for the system's fate, by inducing for example relative segregation. As in equation (2.76), let us assume that the considered system is made of various components indexed by the letters "a" and "b". The particles of the component "a" have an individual mass µ a and follow the DF F a . As detailed in Appendix 6.B (which also details all the normalisations used), the evolution of each DF is then given by the multi-component degenerate inhomogeneous Balescu-Lenard equation reading
∂F a ∂τ = π(2π) 2k-d ∂ ∂J s
In equation (6.67), let us emphasise that the total drift coefficients are multiplied by the dimensionless mass η a of the considered component. This essentially captures the process of segregation, when multiple masses are involved, as components with larger individual masses will secularly tend towards narrower steady states. This can be seen for example in equation (6.67) by seeking asymptotic stationary states to equation (6.67) by nulling the curly brace in its r.h.s. As a final remark, let us note that the demonstration in section 2.3.7 that the non-degenerate Balescu-Lenard equation (2.67) satisfies a H-theorem for Boltzmann's entropy naturally extends to the degenerate case. Therefore, the degenerate Balescu-Lenard equation (6.53) and Landau equation (6.60) satisfy a Htheorem for the system's entropy defined similarly to equation (2.83) as S(τ ) = -dJ 1 s(F (J 1 )). Finally, following equation (2.91), the multi-component degenerate Balescu-Lenard equation (6.63) satisfies similarly a H-theorem for the system's total entropy defined as S tot (τ ) = -dJ 1 a (1/η a )s(F a (J 1 )).
Applications
In the previous sections, we detailed how one could describe the secular dynamics of a system composed of a finite number of particles orbiting a central massive object. In the derivation of the degenerate Balescu-Lenard equation (6.53), we especially assumed in equation (6.49) that the symmetry of the considered system was such that DFs F depending only on the actions would lead to self-consistent potentials Φ depending as well only on the actions. Let us now examine in turn some more specific configurations for which the assumption from equation (6.49) is indeed satisfied, and discuss how the previous results can be further extended for these specific geometries. This will allow us to underline the wealth of possible physical implications one can draw from this general framework. Sections 6.6.1 and 6.6.2 will respectively consider the cases of razor-thin axisymmetric discs and 3D spherical systems, while in section 6.6.3, we will detail how the present formalism allows us to recover the phenomenon of the relativistic Schwarzschild barrier (Merritt et al., 2011), recently discovered in N-body simulations.
Razor-thin axisymmetric discs
As a first case of interest, let us specialise the Balescu-Lenard equation (6.53) to razor-thin axisymmetric discs. The dimension of the physical space is d = 2, while the number of degeneracies of the Keplerian dynamics is k = 1. For such systems, the resonance condition from equation (6.53) becomes a simpler 1D resonance condition naively reading m s 1 Ω s 1 -m s 2 Ω s 2 = 0. Let us now detail how the Balescu-Lenard equation (6.53) can be further simplified in the case of razor-thin discs, as a consequence of additional symmetries of the pairwise interaction potential. For razor-thin discs, the Delaunay angle-action coordinates from equation (6.19) become
$$
(J, \theta) = (J_{1}, J_{2}, \theta_{1}, \theta_{2}) = (J^{s}, J^{f}, \theta^{s}, \theta^{f}) = (L, I, g, w) . \quad (6.69)
$$
Introducing the polar coordinates (R, φ), the rescaled interaction potential U_12 from equation (6.13) can be written as
$$
U_{12} = -\frac{G M_{\bullet}}{|x_{1}-x_{2}|} = -\frac{G M_{\bullet}}{\sqrt{R_{1}^{2}+R_{2}^{2}-2 R_{1} R_{2} \cos(\phi_{1}-\phi_{2})}} . \quad (6.70)
$$
The polar coordinates of a Keplerian wire can be expressed in terms of the Delaunay variables as
$$
R = a \big(1 - e \cos(\eta)\big) \; ; \quad \phi = g + f , \quad (6.71)
$$
where we introduced the semi-major axis a, the eccentricity e, the true anomaly f, and the eccentric anomaly η as
$$
a = \frac{I^{2}}{G M_{\bullet}} \; ; \quad e = \sqrt{1-(L/I)^{2}} \; ; \quad f = \tan^{-1}\!\left[\frac{\sqrt{1-e^{2}}\,\sin(\eta)}{\cos(\eta)-e}\right] \; ; \quad w = \eta - e \sin(\eta) . \quad (6.72)
$$
These mappings allow us to rewrite the interaction potential from equation (6.70) as
$$
U_{12} = U(g_{1}-g_{2}, w_{1}, w_{2}, J_{1}, J_{2}) \;\; \Longrightarrow \;\; \overline{U}_{12} = \overline{U}(g_{1}-g_{2}, J_{1}, J_{2}) . \quad (6.73)
$$
Because of this dependence, the bare susceptibility coefficients from equation (6.61) satisfy
$$
A_{m^{s}_{1},m^{s}_{2}}(J_{1},J_{2}) = \delta_{m^{s}_{1}}^{m^{s}_{2}}\, A_{m^{s}_{1},m^{s}_{1}}(J_{1},J_{2}) . \quad (6.74)
$$
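As an illustration of how the bare coefficients of equation (6.61) could be estimated in practice for a razor-thin disc, the Python sketch below combines the Keplerian mapping of equations (6.71)-(6.72) with a brute-force quadrature: the fast-angle average of equation (6.37) is performed on a grid in (w_1, w_2), and a single Fourier quadrature in the relative slow angle g_1-g_2 then exploits the symmetry of equation (6.74). This is only a schematic sketch under simplifying assumptions: units are such that G M_• = 1, the grid sizes are deliberately coarse, and a softening length eps is introduced by hand to tame the divergence of the wire-wire interaction for nearby wires discussed at the end of this section.

```python
import numpy as np

def kepler_eta(w, e, n_iter=20):
    """Solve Kepler's equation w = eta - e sin(eta) by Newton iteration."""
    eta = np.copy(w)
    for _ in range(n_iter):
        eta -= (eta - e * np.sin(eta) - w) / (1.0 - e * np.cos(eta))
    return eta

def wire_position(L, I, g, w):
    """Polar coordinates (R, phi) along a razor-thin Keplerian wire (G M_bullet = 1)."""
    a = I**2
    e = np.sqrt(max(1.0 - (L / I)**2, 0.0))
    eta = kepler_eta(w, e)
    R = a * (1.0 - e * np.cos(eta))
    f = np.arctan2(np.sqrt(1.0 - e**2) * np.sin(eta), np.cos(eta) - e)  # true anomaly
    return R, g + f

def bare_coefficient(m, J1, J2, eps=1e-2, n_w=64, n_g=64):
    """Rough quadrature estimate of A_{m,m}(J1, J2) for a razor-thin disc."""
    L1, I1 = J1
    L2, I2 = J2
    w = np.linspace(0.0, 2.0 * np.pi, n_w, endpoint=False)
    dg = np.linspace(0.0, 2.0 * np.pi, n_g, endpoint=False)
    A = 0.0 + 0.0j
    for delta_g in dg:
        R1, phi1 = wire_position(L1, I1, delta_g, w[:, None])   # vary w1
        R2, phi2 = wire_position(L2, I2, 0.0, w[None, :])       # vary w2
        # softened rescaled interaction potential, cf. equation (6.70)
        U = -1.0 / np.sqrt(R1**2 + R2**2
                           - 2.0 * R1 * R2 * np.cos(phi1 - phi2) + eps**2)
        Ubar = U.mean()                      # fast-angle average of equation (6.37)
        A += Ubar * np.exp(-1j * m * delta_g) / n_g   # Fourier quadrature in g1 - g2
    return A

if __name__ == "__main__":
    print(bare_coefficient(m=1, J1=(0.9, 1.0), J2=(0.8, 1.2)))
```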
Let us now show that a similar relation also holds for the dressed susceptibility coefficients from equation (6.54). When computing the response matrix for razor-thin stellar discs in chapter 4, we already emphasised in equation (4.5) that the basis elements for a razor-thin disc may generically be written as
$$
\psi^{(p)}(R,\phi) = \mathrm{e}^{\mathrm{i}\ell_{p}\phi}\, U^{\ell_{p}}_{n_{p}}(R) , \quad (6.75)
$$
where ℓ_p and n_p are two integer indices, and U^ℓ_n are radial functions (see figure 4.3.5 for an illustration of possible radial functions). With the decomposition from equation (6.75), one can note that the azimuthal and radial dependences have been disentangled. In the mapping from equation (6.71), only the azimuthal angle φ depends on the slow angle g, so that the Fourier transformed averaged basis elements from equation (6.57) satisfy
$$
\psi^{(p)}_{m^{s}}(J) = \delta_{m^{s}}^{\ell_{p}}\, \psi^{(p)}_{m^{s}}(J) . \quad (6.76)
$$
As a consequence, the system's response matrix from equation (6.55) satisfies
$$
M_{pq}(\omega) = \delta_{\ell_{p}}^{\ell_{q}}\, M_{pq}(\omega) . \quad (6.77)
$$
The two properties from equations (6.76) and (6.77) finally allow us to rewrite the dressed susceptibility coefficients from equation (6.54) as
$$
\frac{1}{D_{m^{s}_{1},m^{s}_{2}}(J_{1},J_{2},\omega)} = \delta_{m^{s}_{1}}^{m^{s}_{2}}\, \frac{1}{D_{m^{s}_{1},m^{s}_{1}}(J_{1},J_{2},\omega)} , \quad (6.78)
$$
so that both the bare and dressed susceptibility coefficients satisfy similar relations. The two additional symmetry properties from equations (6.74) and (6.78) allow us to simplify the resonance condition of the Balescu-Lenard equation (6.53). For razor-thin discs, one can write
$$
\frac{\partial F}{\partial \tau} = \frac{\pi}{N} \frac{\partial}{\partial J^{s}_{1}} \left[ \int \! \mathrm{d}J_{2}\, \frac{\delta_{\mathrm{D}}\big(\Omega^{s}(J_{1})-\Omega^{s}(J_{2})\big)}{|D_{\mathrm{tot}}(J_{1},J_{2})|^{2}} \left( \frac{\partial}{\partial J^{s}_{1}} - \frac{\partial}{\partial J^{s}_{2}} \right) F(J_{1})\, F(J_{2}) \right] , \quad (6.79)
$$
where we introduced the system's unique total dressed susceptibility coefficient as
$$
\frac{1}{|D_{\mathrm{tot}}(J_{1},J_{2})|^{2}} = \sum_{m^{s}_{1}} \frac{|m^{s}_{1}|}{|D_{m^{s}_{1},m^{s}_{1}}(J_{1},J_{2},m^{s}_{1}\Omega^{s}(J_{1}))|^{2}} . \quad (6.80)
$$
If one neglects collective effects, equation (6.79) becomes the associated Landau equation, for which the total dressed susceptibility coefficient 1/|D_tot(J_1,J_2)|² should be replaced by the bare one |A_tot(J_1,J_2)|², reading
$$
|A_{\mathrm{tot}}(J_{1},J_{2})|^{2} = \sum_{m^{s}_{1}} |m^{s}_{1}|\, |A_{m^{s}_{1},m^{s}_{1}}(J_{1},J_{2})|^{2} . \quad (6.81)
$$
The Landau analogue of equation (6.79) for razor-thin axisymmetric discs, with the bare susceptibility coefficients from equation (6.81), was also recently derived in Sridhar & Touma (2016c) via Gilbert's equation. Thanks to these additional symmetries, the degenerate Balescu-Lenard equation (6.79) for razor-thin discs involves a simpler resonance condition, which constrains resonant encounters to occur only between Keplerian wires caught in the same resonance, as illustrated in figure 6.5.1. Finally, in order to effectively compute the diffusion flux from equation (6.79), one can follow the exact same approach as detailed in section 4.2.5 to deal with the resonance condition. We do not repeat this method here; more details can be found in section 6.1 of Fouvry et al. (2016d). Thanks to equations (6.36) and (6.103), one can compute the two quasi-stationary potentials Φ and Φ_r, which respectively capture the contributions from the self-induced potential and from the relativistic corrections. One can then estimate the associated precession frequencies Ω^s, thanks to which the critical resonant lines γ(ω) = {J | Ω^s(J) = ω} can be determined. These curves characterise the set of all orbits which precess at the same frequency ω. As already emphasised in equation (4.29), the calculation of the diffusion flux then only involves a simple one-dimensional integral of a regular integrand along these resonant lines. In the context of quasi-Keplerian systems, one expects two additional difficulties. The first one is associated with the calculation of the wire-wire interaction potential U_12 from equation (6.37), which exhibits a diverging behaviour as one considers the interaction of nearby wires (see, e.g., Touma et al. (2009) and Appendix A in Touma & Sridhar (2012)). The second difficulty arises from the calculation of the system's response matrix given by equation (6.55), which, as already emphasised in section 4.2.3, can be a cumbersome and delicate task.
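The following Python sketch illustrates the last step of this procedure: it extracts a critical resonant line γ(ω) = {J | Ω^s(J) = ω} from a precession-frequency map and evaluates a one-dimensional integral of a regular integrand along it. Both precession_frequency and flux_integrand are arbitrary placeholders; in an actual computation they would be obtained from equations (6.36) and (6.103) and from the dressed susceptibility coefficients and DF gradients entering equation (6.79).

```python
import numpy as np
from scipy.optimize import brentq

def precession_frequency(L, I):
    """Placeholder precession frequency Omega^s(L, I); replace with eqs. (6.36)+(6.103)."""
    return 1.0 / (I**3 * L**2) - 0.5 / I**3

def flux_integrand(L, I):
    """Placeholder for the regular integrand appearing along the resonant line."""
    return np.exp(-(L - 0.5)**2) * np.exp(-(I - 1.0)**2)

def resonant_line(omega, I_grid, L_min=0.05):
    """Return the points (L, I) of the critical line Omega^s(L, I) = omega."""
    points = []
    for I in I_grid:
        L_hi = 0.999 * I                             # L <= I for bound wires
        f = lambda L: precession_frequency(L, I) - omega
        if f(L_min) * f(L_hi) < 0.0:                 # a root is bracketed
            points.append((brentq(f, L_min, L_hi), I))
    return np.array(points)

def line_integral(omega, I_grid):
    """Crude trapezoid-like integral of the integrand along gamma(omega)."""
    pts = resonant_line(omega, I_grid)
    if len(pts) < 2:
        return 0.0
    dL, dI = np.diff(pts[:, 0]), np.diff(pts[:, 1])
    ds = np.sqrt(dL**2 + dI**2)                      # arclength elements
    vals = flux_integrand(pts[:, 0], pts[:, 1])
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * ds)

if __name__ == "__main__":
    I_grid = np.linspace(0.8, 1.5, 100)
    print(line_integral(omega=1.0, I_grid=I_grid))
```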
Spherical clusters
Let us now specify the degenerate Balescu-Lenard equation (6.53) to 3D spherical systems. The dimension of the physical space is d = 3, while the number of degeneracies of the Keplerian dynamics is k = 2. As a consequence, the resonance condition from equation (6.53) becomes two-dimensional. In the 3D context, the Delaunay variables from equation (6.19) become
$$
(J, \theta) = (J^{s}_{1}, J^{s}_{2}, J^{f}_{3}, \theta^{s}_{1}, \theta^{s}_{2}, \theta^{f}_{3}) = (L, L_{z}, I, g, h, w) , \quad (6.82)
$$
where, as previously, g stands for the angle from the ascending node to the periapse, h for the longitude of the ascending node, and w for the mean anomaly, i.e. the Keplerian orbital phase. As in the previous section, let us now detail how the 3D spherical geometry allows us to simplify the degenerate Balescu-Lenard equation. Within the spherical coordinates (R, θ, φ), the rescaled interaction potential U from equation (6.13) can be written as
$$
U_{12} = -\frac{G M_{\bullet}}{|x_{1}-x_{2}|} = -G M_{\bullet} \Big[ R_{1}^{2}+R_{2}^{2}-2 R_{1} R_{2} \big( \sin(\theta_{1})\sin(\theta_{2})\cos(\phi_{1}-\phi_{2}) + \cos(\theta_{1})\cos(\theta_{2}) \big) \Big]^{-1/2} . \quad (6.83)
$$
Following equation (5.20) of Merritt (Astrophysical Black Holes), the mapping from the physical spherical coordinates to the Delaunay angle-action ones takes the form
$$
R = a\big(1 - e\cos(\eta)\big) \; ; \quad \phi = h + \tan^{-1}\!\big[ \cos(i) \tan(g+f) \big] \; ; \quad \theta = \cos^{-1}\!\big[ \sin(i) \sin(g+f) \big] , \quad (6.84)
$$
where a, e, f and η were previously introduced in equation (6.72). We also introduced the orbit's inclination i as cos(i) = L z /L. When used in equation (6.83), these mappings immediately give the dependences
$$
U_{12} = U(g_{1}, g_{2}, h_{1}-h_{2}, w_{1}, w_{2}, J_{1}, J_{2}) \;\; \Longrightarrow \;\; \overline{U}_{12} = \overline{U}(g_{1}, g_{2}, h_{1}-h_{2}, J_{1}, J_{2}) . \quad (6.85)
$$
Computing the averaged bare susceptibility coefficients from equation (6.61), one immediately gets
$$
A_{m^{s}_{1},m^{s}_{2}}(J_{1},J_{2}) = \delta_{m^{s}_{1,h}}^{m^{s}_{2,h}}\, A_{m^{s}_{1},m^{s}_{2}}(J_{1},J_{2}) , \quad (6.86)
$$
where we wrote the resonance vectors as m s 1 = (m s 1,g , m s 1,h ), so that the coefficient m s 1,h is the one associated with the slow angle h.
As in section 6.6.1, let us briefly emphasise how such a property also holds for the dressed susceptibility coefficients from equation (6.54). For 3D systems, as already emphasised in equation (4.67), the basis elements may generically be cast under the form
$$
\psi^{(p)}(R,\theta,\phi) = Y_{\ell_{p}}^{m_{p}}(\theta,\phi)\, U^{\ell_{p}}_{n_{p}}(R) , \quad (6.87)
$$
where ℓ_p, m_p, and n_p are three integer indices, Y_ℓ^m are the usual spherical harmonics, and U^ℓ_n are radial functions. We note in the mappings from equation (6.84) that only the azimuthal angle φ depends on the slow angle h. Because the spherical harmonics are of the form Y_ℓ^m(θ, φ) ∝ P_ℓ^m(cos(θ)) e^{imφ}, where P_ℓ^m are the associated Legendre polynomials, one immediately gets from equation (6.57) that the averaged Fourier transformed basis elements satisfy
$$
\psi^{(p)}_{m^{s}}(J) = \delta_{m^{s}_{h}}^{m_{p}}\, \psi^{(p)}_{m^{s}}(J) . \quad (6.88)
$$
The expression (6.55) of the system's response matrix then straightforwardly satisfies
$$
M_{pq}(\omega) = \delta_{m_{p}}^{m_{q}}\, M_{pq}(\omega) . \quad (6.89)
$$
The combination of the two properties from equations (6.88) and (6.89) finally allows us to rewrite the dressed susceptibility coefficients from equation (6.54) as
$$
\frac{1}{D_{m^{s}_{1},m^{s}_{2}}(J_{1},J_{2},\omega)} = \delta_{m^{s}_{1,h}}^{m^{s}_{2,h}}\, \frac{1}{D_{m^{s}_{1},m^{s}_{2}}(J_{1},J_{2},\omega)} , \quad (6.90)
$$
so that they satisfy the same symmetry relation as the bare susceptibility coefficients. Thanks to the two additional symmetry properties from equations (6.86) and (6.90), one may now simplify the resonance condition of the Balescu-Lenard equation (6.53), so that in the context of 3D spherical systems it becomes
$$
\frac{\partial F}{\partial \tau} = \frac{2\pi^{2}}{N} \frac{\partial}{\partial J^{s}_{1}} \cdot \left[ \sum_{m^{s}_{1},\, m^{s}_{2,g}} m^{s}_{1} \int \! \mathrm{d}J_{2}\, \frac{\delta_{\mathrm{D}}\big(m^{s}_{1}\cdot\Omega^{s}_{1} - (m^{s}_{2,g}, m^{s}_{1,h})\cdot\Omega^{s}_{2}\big)}{|D_{m^{s}_{1},(m^{s}_{2,g},m^{s}_{1,h})}(J_{1},J_{2},m^{s}_{1}\cdot\Omega^{s}_{1})|^{2}} \left( m^{s}_{1}\cdot\frac{\partial}{\partial J^{s}_{1}} - (m^{s}_{2,g}, m^{s}_{1,h})\cdot\frac{\partial}{\partial J^{s}_{2}} \right) F(J_{1})\, F(J_{2}) \right] , \quad (6.91)
$$
where the resonance vectors were written as m^s_1 = (m^s_{1,g}, m^s_{1,h}). It is straightforward to obtain the Landau equation associated with equation (6.91): this only amounts to neglecting collective effects, and therefore to performing the substitution 1/|D|² → |A|². Let us note that in the 3D case, the 1.5PN relativistic precession frequencies obtained in equation (6.104) depend on the action L_z, so that, at this stage, further simplifications of equation (6.91) are no longer possible. To effectively evaluate the diffusion flux in equation (6.91), one may follow the method presented in section 4.2.5, by identifying the system's critical surfaces of resonance. We do not detail these calculations here.
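Before moving on, let us note that the azimuthal selection rule of equation (6.88), which underpins the simplification above, can be checked numerically in a few lines of Python: averaging the basis element over the slow angle h with the mapping of equation (6.84) gives a non-zero result only when the resonance coefficient m^s_h matches the azimuthal number m_p of the basis element. The sketch below assumes scipy's sph_harm argument convention, and the chosen values of (ℓ_p, g, i, f) are arbitrary.

```python
import numpy as np
from scipy.special import sph_harm

def averaged_fourier_in_h(m_p, ell_p, m_h, g, inc, f, n_h=256):
    """h-average of Y_{ell_p}^{m_p}(theta, phi) e^{-i m_h h} at fixed (g, i, f).

    Uses the mapping (6.84): phi = h + (a function of g, f, i), while theta does
    not depend on h; a branch-continuous arctan2 is used for the azimuthal offset.
    """
    h = np.linspace(0.0, 2.0 * np.pi, n_h, endpoint=False)
    phi = h + np.arctan2(np.cos(inc) * np.sin(g + f), np.cos(g + f))
    theta = np.arccos(np.sin(inc) * np.sin(g + f))
    # scipy's sph_harm signature: sph_harm(m, l, azimuthal angle, polar angle)
    Y = sph_harm(m_p, ell_p, phi, theta)
    return np.mean(Y * np.exp(-1j * m_h * h))

if __name__ == "__main__":
    pars = dict(ell_p=2, g=0.3, inc=0.4, f=1.1)
    print(abs(averaged_fourier_in_h(m_p=1, m_h=1, **pars)))   # non-zero
    print(abs(averaged_fourier_in_h(m_p=1, m_h=2, **pars)))   # ~ 0
```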
Relativistic barrier crossing
As a final discussion of the physical content of the Balescu-Lenard equation (6.53), let us now illustrate how this degenerate diffusion equation allows for a qualitative description of the Schwarzschild barrier encountered by stars as they diffuse towards the central BH. This Schwarzschild barrier was discovered in Merritt et al. (2011) via simulations of spherically symmetric clusters. Here, in order to clarify the upcoming discussion, we will consider the case of a razor-thin axisymmetric disc of stars, whose secular evolution is described by equation (6.79), but the same idea applies to the 3D case. In the resonance condition from equation (6.79), let us recall that the precession frequency Ω^s, as defined in equation (6.50), is composed of two components. The first one comes from the system's self-consistent potential (equation (6.36)) and reads
$$
\Omega^{s}_{\mathrm{self}}(L_{1}, I_{1}) = \frac{\partial \Phi(L_{1},I_{1})}{\partial L_{1}} = \frac{\partial}{\partial L_{1}} \int \! \mathrm{d}E_{2}\, F(E_{2})\, U_{12} . \quad (6.92)
$$
The second contribution comes from the relativistic effects occurring in the vicinity of the BH. These precession frequencies are briefly recovered in Appendix 6.A. For a razor-thin disc, they read
$$
\Omega^{s}_{\mathrm{rel}}(L, I) = \frac{1}{2\pi} \frac{M_{\bullet}}{M_{\star}} \frac{(G M_{\bullet})^{4}}{c^{2}} \left[ -\frac{3}{I^{3} L^{2}} + \frac{G M_{\bullet}}{c} \frac{6 s}{I^{3} L^{3}} \right] . \quad (6.93)
$$
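For orientation, the short sketch below evaluates the relativistic precession frequency of equation (6.93), as reconstructed above, in illustrative units G = M_• = c = 1, with a placeholder stellar-to-BH mass ratio and spin parameter, and shows its steep rise in magnitude as L decreases at fixed fast action I, i.e. as the wire becomes more eccentric and approaches the BH.

```python
import numpy as np

def omega_rel(L, I, s=0.5, mass_ratio=1e-3):
    """Relativistic precession frequency of eq. (6.93), in units G = M_bh = c = 1.

    mass_ratio = M_star / M_bh (placeholder value) enters through the rescaled
    time tau; s is the dimensionless BH spin (placeholder value).
    """
    prefac = 1.0 / (2.0 * np.pi * mass_ratio)        # (1 / 2 pi) (M_bh / M_star)
    schwarzschild = -3.0 / (I**3 * L**2)             # 1PN periapse precession
    lense_thirring = 6.0 * s / (I**3 * L**3)         # 1.5PN frame dragging
    return prefac * (schwarzschild + lense_thirring)

if __name__ == "__main__":
    I = 1.0
    for L in (0.9, 0.5, 0.2, 0.1):
        print(L, omega_rel(L, I))    # |Omega_rel| grows steeply as L decreases
```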
Let us now study how these precession frequencies depend on the distance to the central BH. Following the timescale comparisons of Kocsis & Tremaine (2011), one expects the relativistic precession frequency Ω^s_rel to dominate close to the BH (and diverge as stars get closer to capture), while the self-consistent one, Ω^s_self, will be the largest in the vicinity of the considered disc. Figure 6.6.1 illustrates the typical behaviour of these precession frequencies. In order to induce a diffusion, the Balescu-Lenard equation (6.79) requires the resonance condition Ω^s_tot(J_1) - Ω^s_tot(J_2) = 0 to be satisfied. In figure 6.6.1, we illustrate that for a given value of the precession frequency ω_s, one can identify the associated locations in the disc where the resonance condition is satisfied. Let us also recall that equation (6.79) involves the quadratic factor F(J_1) F(J_2), i.e. the product of the system's DF at the two locations which are in resonance. As a consequence, because the disc is only located in the outer regions around the BH, the resonant coupling between two locations within the disc will be much stronger than the resonant coupling involving one resonant location in the very inner regions of the system close to the BH. In figure 6.6.1, this corresponds to the fact that the resonant coupling between the two outer dots will be much larger than the couplings involving the inner dot in the vicinity of the BH. As stars migrate even closer to the BH, the situation gets even worse, because the precession frequency required to allow for a resonant coupling then becomes too large to resonate with any part of the disc. In such a situation, no efficient resonant couplings are possible and the diffusion is drastically suppressed. As a conclusion, the divergence of the relativistic precession frequencies in the neighbourhood of the BH implies that stars whose orbits diffuse inwards closer to the BH will experience a steep rise in their own precession frequency, which prevents them from being able to resonate with the disc, leading to a strong suppression of any further inward diffusion. This is the so-called Schwarzschild barrier. Such an explanation of the Schwarzschild barrier via the notion of resonant coupling is directly related to the explanation proposed in Bar-Or & Alexander (2014), which relies on adiabatic invariance.

[Figure 6.6.1: Typical dependence of the precession frequencies Ω^s_self and Ω^s_rel (equations (6.92) and (6.93)) on the distance to the central BH. Ω^s_rel diverges as stars get closer to the BH, while Ω^s_self is typically the largest for stars in the neighbourhood of the considered disc. The black dots mark all the disc locations whose precession frequency equals ω_s (dotted horizontal line); because these locations are in resonance, they contribute to the Balescu-Lenard equation (6.53). Since equation (6.53) involves the product of the system's DF at the two resonating locations, the coupling between the two outer points (within the disc) is much stronger than the couplings involving the inner point close to the BH; as stars move inward, the relativistic corrections raise their precession frequencies until no resonant coupling with the disc remains possible, which drastically suppresses the diffusion and induces a diffusion barrier.]

In this picture, a test star may undergo resonant relaxation if the timescale of its relativistic precession is longer than the coherence time of the perturbations induced by the field stars and felt by this test star. As the typical coherence time of the perturbations scales like the inverse of the typical frequency of the field stars (which lie within the cluster), the requirement for an efficient resonant diffusion from the adiabatic invariance point of view is equivalent to the requirement from the point of view of the resonance condition of the Balescu-Lenard equation.
Let us illustrate this diffusion barrier in the neighbourhood of the BH by considering the orbit-averaged motion of individual Keplerian wires. The degenerate Balescu-Lenard equation (6.79) for razor-thin discs is a diffusion equation in action space, which describes self-consistently the evolution of the whole system's DF. Instead of describing the dynamics of the system's DF, one could also be interested in describing the associated stochastic evolution of individual Keplerian wires. From the ensemble average of these individual dynamics, one should recover the self-consistent DF's diffusion equation (6.79). Following Appendix 6.C, let us rewrite equation (6.79) as
$$
\frac{\partial F}{\partial \tau} = \frac{\partial}{\partial L} \left[ A(J,\tau)\, F(J,\tau) + D(J,\tau)\, \frac{\partial F}{\partial L} \right] . \quad (6.94)
$$
The stochastic dynamics of an individual test wire consistent with this diffusion equation is then described by the Langevin equation
$$
\frac{\mathrm{d}L}{\mathrm{d}\tau} = h(L,I,\tau) + g(L,I,\tau)\, \Gamma(\tau) \; ; \quad \frac{\mathrm{d}I}{\mathrm{d}\tau} = 0 . \quad (6.95)
$$
In equation (6.95), the 1D Langevin coefficients h and g describe the diffusion of the wire in the L-direction. They follow from equation (6.130) and read
$$
h = -A + \frac{\partial D}{\partial L} - \sqrt{D}\, \frac{\partial \sqrt{D}}{\partial L} \; ; \quad g = \sqrt{D} , \quad (6.96)
$$
while the stochastic Langevin force Γ(τ) follows the statistics from equation (6.129). In equation (6.95), as already underlined in equation (6.62), we recover the fact that the individual fast action J^f = I is conserved during the wire's resonant relaxation.
Let us insist on the fact that equation (6.95) is a rewriting of the Balescu-Lenard equation (6.79) to capture individual dynamics. The Langevin equation (6.95) therefore describes the diffusion of an individual test Keplerian wire embedded in the self-induced noisy environment described by the drift and diffusion coefficients from the Balescu-Lenard equation (6.79). Let us note that because the rewriting from equation (6.95) is a self-consistent rewriting of the system's dynamics, it could be used iteratively to integrate the degenerate Balescu-Lenard equation (6.79) forward in time. Rather than having to integrate forward in time the system's DF as a whole, equation (6.95) only requires the integration of first-order stochastic differential equations. To do so, one would discretise equation (6.95) in time as L_{i+1} = L_i + (dL/dτ)_i Δτ, while sampling the initial L_0 so as to match the system's initial DF, as sketched below. Let us emphasise that the individual stochastic equations (6.95) share some striking similarities with the individual Hamilton's equations associated with the total Hamiltonian from equation (6.7). However, the gain of the present Langevin rewriting is to allow for individual timesteps Δτ orders of magnitude larger than the original ones required to solve for the trajectories of individual stars. Indeed, the Langevin equation focuses directly on the dynamics of Keplerian wires instead of the stars themselves. The integration of the fast Keplerian orbital motion does not need to be performed. In addition, it also deals seamlessly with the relativistic corrections, which are already integrated over the fast angles.
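A minimal sketch of such an integration, under simplifying assumptions, is given below: an ensemble of test wires sampled from some initial distribution in L (at fixed I) is advanced with an Euler-Maruyama scheme for equation (6.95). The Langevin coefficients h_coeff and g_coeff are placeholders; in a self-consistent integration they would be recomputed from the evolving DF through equations (6.96) and (6.79).

```python
import numpy as np

def h_coeff(L, I, tau):
    """Placeholder Langevin drift coefficient h(L, I, tau) of eq. (6.96)."""
    return -0.1 * (L - 0.5)

def g_coeff(L, I, tau):
    """Placeholder Langevin diffusion coefficient g(L, I, tau) = sqrt(D)."""
    return 0.05 * np.ones_like(L)

def evolve_wires(L0, I0, dtau, n_steps, rng=None):
    """Euler-Maruyama integration of dL/dtau = h + g Gamma(tau), with I conserved."""
    rng = np.random.default_rng(rng)
    L = np.array(L0, dtype=float)
    for step in range(n_steps):
        tau = step * dtau
        # <Gamma(t) Gamma(t')> = 2 delta(t - t'), hence the factor sqrt(2 dtau)
        noise = rng.standard_normal(L.shape) * np.sqrt(2.0 * dtau)
        L += h_coeff(L, I0, tau) * dtau + g_coeff(L, I0, tau) * noise
        np.clip(L, 1e-3, I0, out=L)        # keep wires bound: 0 < L <= I
    return L

if __name__ == "__main__":
    L0 = np.random.default_rng(0).uniform(0.3, 0.9, size=10_000)  # sample initial DF
    L_final = evolve_wires(L0, I0=1.0, dtau=0.01, n_steps=1000, rng=42)
    print(L_final.mean(), L_final.std())
```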
Let us finally illustrate qualitatively in figure 6.6.2 the dynamics of individual wires as given by equation (6.95).

[Figure 6.6.2: Diffusion of individual Keplerian wires in the (j, a) = (L/I, I²/(GM_•)) space, as given by the Langevin equation (6.95). The grey region is the capture region, within which stars inevitably sink into the BH; the background contours are lines of constant precession frequency (j, a) → Ω^s(j, a); the blue and red wires precess at the same frequency ω_s, as they belong to the same critical line γ_{ω_s}, and can therefore resonate with one another.]

Figure 6.6.2 illustrates the diffusion of wires in the (j, a) = (L/I, I²/(GM_•)) space. As emphasised in equation (6.95), the fast action I of the stars is conserved during the diffusion, so that they diffuse only in the j-direction, along a = cst. lines. Individual wires may resonate with other wires precessing at the same frequency, such as the blue and red wires in figure 6.6.2. However, as already illustrated in figure 6.6.1, because of the relativistic corrections, the precession frequencies diverge as stars get closer to the BH. This increase in the precession frequencies then forbids any resonant coupling between a star in the inner, fast-precessing region and stars belonging to the disc itself, where the precession frequencies are much smaller. Resonances becoming impossible, the diffusion is suppressed and wires cannot keep diffusing closer to the central BH. This strong suppression of the diffusion is the Schwarzschild barrier. The present explanation of the Schwarzschild barrier is essentially the same as the one proposed in Bar-Or & Alexander (2014), which relied on the adiabatic invariance of the angular momentum induced by the fast relativistic precessions in the vicinity of the BH.
Our previous calculations explained the existence of a Schwarzschild barrier, which strongly suppresses the supply of tightly bound matter to the BH. As a final remark, let us note that the numerical analysis of Merritt et al. (2011) suggested that, in practice, this suppression is most probably tempered by simple two-body relaxation (not accounted for in the orbit-averaged approach followed in this section). Two-body relaxation then provides an additional mechanism to transport stars even closer to the BH, once resonant relaxation becomes inefficient. This was recently demonstrated in detail in Bar-Or & Alexander (2016), who showed that adiabatic invariance (i.e. the damping of resonant relaxation) limits the effects of resonant relaxation to a region well away from the loss lines. The dynamics of accretion of stars by the BH is then only very moderately affected by such resonant diffusion.
Conclusion
Super massive BHs absorb stars and debris whose orbits reach the loss-cone (Frank & Rees, 1976; Vasiliev & Merritt, 2013), the region of phase space associated with unstable orbits, which take them directly into the BH or close enough to strongly interact with it. Such accretions affect the secular evolution of the BH's mass and spin, which is of particular interest for understanding BH demographics and AGN feedback (Volonteri et al., 2016). These accretion events also provide information on stars, debris and gas blobs in the vicinity of the BH: for example, the continuous loss of stars can effectively reshape the central stellar distribution of the cluster (e.g., Genzel et al., 2000). All these processes have specific observable signatures, such as the binary capture and ejection of hyper-velocity stars (Hills, 1988), tidal heating and disruption (Frank & Rees, 1976), and eventually gravitational wave emission produced by inspiraling compact remnants (Abbott et al., 2016). These various mechanisms also offer the possibility for indirect detections of BHs, and for tests of general relativity in the strong field limit (Blanchet, 2014). A new generation of interferometers, such as Gravity (Jocou et al., 2014), now has as its primary goal to understand the dynamics of stars in the vicinity of super massive BHs.
In this chapter, we presented how the generic Balescu-Lenard formalism can be tailored to describe the secular evolution of quasi-Keplerian systems, such as galactic nuclei, by appropriately dealing with the dynamical degeneracy of the mean motion. We therefore derived the collisional degenerate kinetic equation (6.53) describing the secular evolution of such systems at order 1/N . Because purely Keplerian orbits do not precess, the dynamical evolution of degenerate systems may significantly differ from that of fully self-gravitating systems, such as the stellar discs considered in the previous chapters. In the quasi-Keplerian context, stars behave as if they were smeared out onto their orbit-averaged Keplerian wires, and the secular collisional evolution of the system is then described by accounting for the dressed interactions between such wires. These wires undergo a resonant relaxation, sourced by the system's intrinsic Poisson shot noise, leading to the appearance of sequences of uncorrelated polarised density waves, whose net effect is to diffuse the system's orbital structure on secular timescales. The degenerate Balescu-Lenard equation (6.53) satisfies some essential properties. It is quadratic in the angle-averaged system's DF, accounts for the system's self-gravity as well as possible post-Newtonian corrections. This equation is sourced by the discreteness of the cluster and describes the resonant coupling between the system's wires. It can also account for a spectrum of masses via equation (6.63). The degenerate Balescu-Lenard equation (6.53) is therefore the quasilinear self-consistent master equation quantifying the effects of resonant relaxation. It provides a rich framework in which to describe the evolution of quasi-Keplerian systems on cosmic times, such as galactic centres, or debris discs, which are an interesting venue in the context of planet formation (Tremaine, 1998).
The principal ingredient used in the proposed derivation was the phase averaging of the first two equations of the BBGKY hierarchy over the fast angle associated with the BH-induced dominant Keplerian motion. Direct consequences of this phase average include that the associated fast actions are adiabatically conserved. As such, this description of the dynamics of Keplerian wires does not allow for the capture of the effects associated with mean motion resonances and direct 2-body relaxation. However, this is usually appropriate because the derivation ignored terms of order O(1/N²), so that it is valid for timescales of order N T_sec. This timescale is expected to be much shorter than the 2-body relaxation time. In sections 6.6.1 and 6.6.2, we specified the degenerate Balescu-Lenard equation to razor-thin axisymmetric discs and spherical systems. Finally, in section 6.6.3, we investigated in particular the properties of resonant relaxation in the vicinity of super massive BHs. We showed that the degenerate Balescu-Lenard equation naturally captures the presence of the Schwarzschild barrier (see figure 6.6.2), where the efficiency of the resonant collisional diffusion is significantly suppressed.
Various recent papers have also tackled the question of describing the long-term dynamics of quasi-Keplerian systems. The closest to the present derivation is the recent sequence of papers Sridhar & Touma (2016a,b), which obtained evolution equations equivalent to equations (6.45) and (6.46), by following a different route based on the approach of Gilbert (1968), which itself extended the works of Balescu (1960) and Lenard (1960) from plasma physics. In Sridhar & Touma (2016c), they relied on the "passive response approximation" when considering razor-thin axisymmetric discs, which only allowed for the recovery of the 2D razor-thin bare susceptibility coefficients from equation (6.81) and the Landau version of equation (6.79).
Another very efficient way of modelling such quasi-Keplerian systems is by relying on Monte Carlo methods, for which the internal Poisson shot noise due to the finite number of wires is treated as an externally imposed perturbation (e.g., Madigan et al., 2011; Bar-Or & Alexander, 2014). This is a very flexible method, especially if one wants to account for additional perturbations external to the cluster, such as those coming from the near neighbourhood of the BH. The η-formalism recently introduced in Bar-Or & Alexander (2014) and implemented in detail in Bar-Or & Alexander (2016) is one such scheme. After imposing plausible constraints on the power spectrum of the self-induced discreteness noise, they recovered the location of the Schwarzschild barrier (interpreted in terms of adiabatic invariance), and investigated the role of 2-body relaxation in the loss-cone problem. They showed in particular that on longer timescales, 2-body non-resonant relaxation erases the Schwarzschild barrier, and argued that resonant relaxation is effective only in a restricted region of action space away from the loss lines, so that its overall effect on plunge rates remains small. These approaches suffer from two shortcomings, namely the need for ad hoc assumptions on the statistical characteristics of the cluster's shot noise, and the difficulty of accounting for the cluster's self-gravity. These two elements are self-consistently accounted for in the Balescu-Lenard equation. Finally, at the heart of the η-formalism lies an important distinction between field and test stars. Indeed, the dynamics of the test stars is followed as they undergo the stochastic perturbations generated by the field stars. Such a split was also used in the recent restricted N-body simulations presented in Hamers et al. (2014). In these simulations, the motion of each field star is followed along its precessing Keplerian orbit (with a precession induced by both relativistic effects and the cluster's self-consistent potential), but interactions among field stars are ignored. The test stars are then followed by direct integration of their motion in the time-varying potential due to the field stars. Such a method is especially useful in order to characterise the typical properties of the stochastic perturbations generated by the field stars. Similarly to the η-formalism, this approach ignores interactions among field stars (and among test stars), and there is no back-influence of the test stars on the field ones. Let us finally note that in the course of this chapter, we also presented in Appendix 6.C a Langevin rewriting of the Balescu-Lenard equation. This approach combines the flexibility of Monte Carlo realisations with the self-consistent treatment offered by the Balescu-Lenard approach. A subsequent improvement of this stochastic rewriting lies in the possibility of adding to the dominant resonant relaxation the secondary effects of two-body relaxation and gravitational wave losses.
Future works
The previous specialisation of the Balescu-Lenard formalism to quasi-Keplerian systems offers possibilities for numerous follow-up works. Let us first note that here we mainly focused on the dynamics of galactic centres, but such methods could also be applied to the secular dynamics of protoplanetary systems and debris discs, which form another vast class of quasi-Keplerian systems.
In the presence of external perturbations, one should be in a position to generalise the collisionless formalism from section 2.2 to account for the long-term effects of stochastic perturbations. The main difficulty here is that because the dynamics was described w.r.t. the central BH (see section 6.2), such external perturbations may decentre the system and introduce non-trivial fictitious forces, whose effects on secular timescales have to be carefully studied. Another generalisation of this formalism would be to consider the secular dynamics of quasi-stationary lopsided configurations. Because such configurations precess as a whole, this would first involve identifying new angle-action variables within which the system could be considered as quasi-stationary, and then extending the formalism to such configurations.
In order to illustrate qualitatively the mechanisms described in this chapter, one would benefit from implementing the inhomogeneous degenerate Landau equation (i.e. without collective effects) for razor-thin axisymmetric discs. Most of the methods required for such a computation were already presented in chapter 4. The most challenging part of such a computation is the wire-wire interaction potential (Touma et al., 2009;Touma & Sridhar, 2012), thanks to which the self-consistent precession frequencies as well as the bare susceptibility coefficients can be estimated.
We emphasised in section 6.6.3, that as stars diffuse closer to the central BH, their precession frequencies increase up to a point where resonant relaxation becomes inefficient: this is the Schwarzschild barrier. The dynamics of stars in such a configuration is then driven by 2-body relaxation effects, which allow stars to diffuse even further in. Such effects are induced by direct star-star interactions and cannot be accounted for in the present orbit-averaged approach which focuses only on wire-wire interactions. In order to get a better estimate of the infalling rate onto the BH, the next step would therefore be to improve the present formalism to also account for direct 2-body relaxation, which has proven essential for the late stages of the diffusion.
In Appendix 6.C, we described how the self-consistent Balescu-Lenard equation could be rewritten as a stochastic Langevin equation to describe the collisional evolution of individual Keplerian wires. Such a rewriting offers a promising way to integrate the diffusion equation forward in time. As this directly involves Keplerian wires, one does not need to integrate the fast Keplerian motion induced by the central BH. This offers a significant timescale speed-up for such N-wire implementations. This integration in time would also allow us to self-consistently account for the growth rate and spin up of the central BH, as stars get absorbed. In this context, one could also investigate multi-component situations, where mass segregation is expected to play an important role.
Let us finally write the explicit expression of the averaged potential corrections Φ_r appearing in equations (6.45) and (6.46). One has to pay careful attention to the normalisation conventions introduced in equations (6.2), (6.14), and (6.38). One then obtains equation (6.104). Let us finally note that gravitational waves and the associated dissipation (Hopman & Alexander, 2006) are not accounted for in equation (6.104), hence the possibility to obtain a Hamiltonian formulation for these precessions.
6.B Multi-component BBGKY hierarchy
In this Appendix, let us detail how one can adapt the formalism presented in section 6.2 to the case where the system is composed of multiple components. The different components are indexed by the letters "a", "b", etc. We assume that the component "a" is made of N_a particles of individual mass µ_a. The total mass of the component "a" is written as M_a. When accounting for multiple components and placing ourselves within the democratic heliocentric coordinates from equation (6.3), the system's total Hamiltonian from equation (6.7) takes the form given in equation (6.105), where we noted as Γ^a_i = (x^a_i, v^a_i) the position and velocity of the i-th particle of the component "a". In equation (6.105), the various terms are respectively the kinetic energy of the particles, the Keplerian potential due to the central BH, the relativistic potential corrections Φ_r, the self-gravity among a given component, the interactions between particles of different components, and finally the additional kinetic terms introduced by the change of coordinates from equation (6.3). One should also pay attention to the normalisation of the relativistic component Φ_r, as we wrote this potential as µ_a M_⋆ Φ_r, where we introduced the system's total active mass as M_⋆ = Σ_a M_a, to have a writing similar to equation (6.7). Let us now introduce the system's total PDF P_tot(Γ^a_1, .., Γ^a_{N_a}, Γ^b_1, .., Γ^b_{N_b}, ...), which gives the probability of finding, at time t, the particle 1 of the component "a" at position x^a_1 with velocity v^a_1, etc. We normalise P_tot following the convention from equation (2.94). Similarly to equation (6.8), P_tot evolves according to Liouville's equation, given in equation (6.106). Following equation (2.97), we define the system's reduced PDFs P^{a_1,...,a_n}_n by integrating P_tot over all particles except n particles belonging respectively to the components a_1, ..., a_n. Our aim is now to write the first two equations of the associated BBGKY hierarchy. In order to clarify the upcoming calculations, let us from now on neglect any contributions associated with the last kinetic terms from equation (6.105). Indeed, in the single-component case, we justified in equation (6.41) that these terms, once averaged over the fast Keplerian motion, do not contribute to the system's dynamics at the order considered in our kinetic developments. To get the evolution equation for P^a_1, one integrates equation (6.106) over all phase space coordinates except Γ^a_1 and relies on the symmetry of P_tot w.r.t. permutations of identical particles; this yields equation (6.107). In equation (6.107), we used the same notations as in equation (6.9), and introduced as F^{1_a}_0 the force exerted by the BH on particle 1_a, F^{1_a}_r the force acting on particle 1_a due to the relativistic corrections, and finally F_{ij} the force between two particles. In order to get the second equation of the BBGKY hierarchy, one should proceed similarly and integrate equation (6.106) over all phase space coordinates except those of two particles; this yields equations (6.108) and (6.109). Let us now adapt the definition of the reduced DFs from equation (2.99) to the multi-component case. We therefore introduce the system's renormalised DFs f^a_1, f^{ab}_2, and f^{abc}_3 as
$$
f^{\mathrm{a}}_{1} = \mu_{\mathrm{a}} N_{\mathrm{a}} P^{\mathrm{a}}_{1} \; ; \quad f^{\mathrm{aa}}_{2} = \mu_{\mathrm{a}}^{2} N_{\mathrm{a}} (N_{\mathrm{a}}-1) P^{\mathrm{aa}}_{2} \; ; \quad f^{\mathrm{ab}}_{2} = \mu_{\mathrm{a}} \mu_{\mathrm{b}} N_{\mathrm{a}} N_{\mathrm{b}} P^{\mathrm{ab}}_{2} \; , \quad (6.110)
$$
together with the associated decomposition introduced in equation (6.111), where one should note that the sum over "b" runs over all components. Similarly, equations (6.108) and (6.109) can both be cast under the same generic form, given in equations (6.115) and (6.116). In equation (6.117), one should pay attention to the slight change in the normalisation of C_ab. This ensures a symmetric rescaling w.r.t. "a" and "b".
Let us now follow equations (6.13) and (6.14) to rescale the pairwise interaction potential as well as the relativistic corrections. Following these various renormalisations, equation (6.115) becomes
$$
\frac{\partial F_{\mathrm{a}}}{\partial t} + v^{\mathrm{a}}_{1} \cdot \frac{\partial F_{\mathrm{a}}}{\partial x^{\mathrm{a}}_{1}} + F^{1_{\mathrm{a}}}_{0} \cdot \frac{\partial F_{\mathrm{a}}}{\partial v^{\mathrm{a}}_{1}} + \varepsilon \sum_{\mathrm{b}} \left[ \int \! \mathrm{d}\Gamma^{\mathrm{b}}_{2}\, F_{1_{\mathrm{a}} 2_{\mathrm{b}}}\, F_{\mathrm{b}}(\Gamma^{\mathrm{b}}_{2}) \right] \cdot \frac{\partial F_{\mathrm{a}}}{\partial v^{\mathrm{a}}_{1}} + \varepsilon F^{1_{\mathrm{a}}}_{r} \cdot \frac{\partial F_{\mathrm{a}}}{\partial v^{\mathrm{a}}_{1}} + \varepsilon \sum_{\mathrm{b}} \int \! \mathrm{d}\Gamma^{\mathrm{b}}_{2}\, F_{1_{\mathrm{a}} 2_{\mathrm{b}}} \cdot \frac{\partial C_{\mathrm{ab}}}{\partial v^{\mathrm{a}}_{1}} = 0 , \quad (6.118)
$$
where we introduced the small parameter ε = M_⋆/M_• = (Σ_a M_a)/M_•. Similarly, equation (6.116) becomes
$$
\begin{aligned}
& \frac{\partial C_{\mathrm{ab}}}{\partial t} + v^{\mathrm{a}}_{1}\cdot\frac{\partial C_{\mathrm{ab}}}{\partial x^{\mathrm{a}}_{1}} + v^{\mathrm{b}}_{2}\cdot\frac{\partial C_{\mathrm{ab}}}{\partial x^{\mathrm{b}}_{2}} + F^{1_{\mathrm{a}}}_{0}\cdot\frac{\partial C_{\mathrm{ab}}}{\partial v^{\mathrm{a}}_{1}} + F^{2_{\mathrm{b}}}_{0}\cdot\frac{\partial C_{\mathrm{ab}}}{\partial v^{\mathrm{b}}_{2}} + \varepsilon F^{1_{\mathrm{a}}}_{r}\cdot\frac{\partial C_{\mathrm{ab}}}{\partial v^{\mathrm{a}}_{1}} + \varepsilon F^{2_{\mathrm{b}}}_{r}\cdot\frac{\partial C_{\mathrm{ab}}}{\partial v^{\mathrm{b}}_{2}} \\
& + \varepsilon \eta_{\mathrm{b}}\, F_{1_{\mathrm{a}} 2_{\mathrm{b}}}\cdot\frac{\partial F_{\mathrm{a}}}{\partial v^{\mathrm{a}}_{1}}\, F_{\mathrm{b}}(\Gamma^{\mathrm{b}}_{2}) + \varepsilon \eta_{\mathrm{a}}\, F_{2_{\mathrm{b}} 1_{\mathrm{a}}}\cdot\frac{\partial F_{\mathrm{b}}}{\partial v^{\mathrm{b}}_{2}}\, F_{\mathrm{a}}(\Gamma^{\mathrm{a}}_{1}) + \varepsilon \sum_{\mathrm{c}} \left[\int \! \mathrm{d}\Gamma^{\mathrm{c}}_{3}\, F_{1_{\mathrm{a}} 3_{\mathrm{c}}}\, F_{\mathrm{c}}(\Gamma^{\mathrm{c}}_{3})\right]\cdot\frac{\partial C_{\mathrm{ab}}}{\partial v^{\mathrm{a}}_{1}} + \varepsilon \sum_{\mathrm{c}} \left[\int \! \mathrm{d}\Gamma^{\mathrm{c}}_{3}\, F_{2_{\mathrm{b}} 3_{\mathrm{c}}}\, F_{\mathrm{c}}(\Gamma^{\mathrm{c}}_{3})\right]\cdot\frac{\partial C_{\mathrm{ab}}}{\partial v^{\mathrm{b}}_{2}} \\
& + \varepsilon \sum_{\mathrm{c}} \int \! \mathrm{d}\Gamma^{\mathrm{c}}_{3}\, F_{1_{\mathrm{a}} 3_{\mathrm{c}}}\, C_{\mathrm{bc}}(\Gamma^{\mathrm{b}}_{2},\Gamma^{\mathrm{c}}_{3})\cdot\frac{\partial F_{\mathrm{a}}}{\partial v^{\mathrm{a}}_{1}} + \varepsilon \sum_{\mathrm{c}} \int \! \mathrm{d}\Gamma^{\mathrm{c}}_{3}\, F_{2_{\mathrm{b}} 3_{\mathrm{c}}}\, C_{\mathrm{ac}}(\Gamma^{\mathrm{a}}_{1},\Gamma^{\mathrm{c}}_{3})\cdot\frac{\partial F_{\mathrm{b}}}{\partial v^{\mathrm{b}}_{2}} = 0 , \quad (6.119)
\end{aligned}
$$
where we introduced in the second line the small parameter η_a = µ_a/M_⋆, of order 1/N_a. Equations (6.118) and (6.119) are the direct multi-component equivalents of equations (6.15) and (6.16).
As presented in section 6.3, let us now rewrite the two previous BBGKY equations within the angle-action coordinates appropriate for the Keplerian motion induced by the central BH. Let us consequently perform the degenerate angle-average from equation (6.26) and assume that F_a and C_ab satisfy the crucial assumptions from equations (6.34) and (6.39). One can then rewrite equation (6.118) in the averaged form given in equations (6.120)-(6.121), where the averaged potential Φ_a due to the component "a" follows from equation (6.36) and reads
$$
\Phi_{\mathrm{a}}(E_{1}) = \int \! \mathrm{d}E_{2}\, F_{\mathrm{a}}(E_{2})\, U_{12}(E_{1},E_{2}) . \quad (6.122)
$$
In equation (6.122), we relied on the averaged wire-wire interaction potential U_12 from equation (6.37). Following the same approach, equation (6.119) can be rewritten as
$$
\begin{aligned}
& \frac{\partial C_{\mathrm{ab}}}{\partial \tau} + \Big[ C_{\mathrm{ab}}(E_{1},E_{2}),\, \Phi(E_{1})+\Phi_{r}(E_{1}) \Big]^{(1)} + \Big[ C_{\mathrm{ab}}(E_{1},E_{2}),\, \Phi(E_{2})+\Phi_{r}(E_{2}) \Big]^{(2)} + \sum_{\mathrm{c}} \int \! \mathrm{d}E_{3}\, C_{\mathrm{bc}}(E_{2},E_{3}) \Big[ F_{\mathrm{a}}(E_{1}),\, U_{13} \Big]^{(1)} \\
& + \sum_{\mathrm{c}} \int \! \mathrm{d}E_{3}\, C_{\mathrm{ac}}(E_{1},E_{3}) \Big[ F_{\mathrm{b}}(E_{2}),\, U_{23} \Big]^{(2)} + \frac{1}{(2\pi)^{d-k}} \Big\{ \eta_{\mathrm{b}} \Big[ F_{\mathrm{a}}(E_{1}) F_{\mathrm{b}}(E_{2}),\, U_{12} \Big]^{(1)} + \eta_{\mathrm{a}} \Big[ F_{\mathrm{a}}(E_{1}) F_{\mathrm{b}}(E_{2}),\, U_{21} \Big]^{(2)} \Big\} = 0 . \quad (6.123)
\end{aligned}
$$
The two coupled evolution equations (6.120) and (6.123) are the direct multi-component equivalents of equations (6.45) and (6.46). The main differences here are the changes in the mass prefactors in the last term (the source term) of equation (6.123). Indeed, it mixes the two small parameters η_a = µ_a/M_⋆ and η_b = µ_b/M_⋆. This change is the one which allows for mass segregation in multi-component systems, as briefly discussed in section 6.5.2. Starting from equations (6.120) and (6.123), one can then follow the method presented in section 6.5 to derive the associated kinetic equation for F_a. This is the multi-component inhomogeneous degenerate Balescu-Lenard equation (6.63).
6.C From Fokker-Planck to Langevin
The degenerate inhomogeneous Balescu-Lenard equation (6.53) is a self-consistent integro-differential equation describing the evolution of the system's DF as a whole under the effect of its own graininess. Instead of describing the statistical dynamics of the full system's DF, one could be interested in characterising the individual dynamics of one test particle in this system. Following [START_REF] Risken | The Fokker-Planck Equation[END_REF] (6.124) where, following the notations from equation (6.59), we introduced the system's total drict vector A(J , τ ) and diffusion tensor D(J , τ ) as Let us recall here that the Balescu-Lenard equation being self-consistent, the drift and diffusion coefficients depend secularly on the system's DF, F , but this was not written out explicitly to shorten the notations. Following the notations from equation (4.94a) in [START_REF] Risken | The Fokker-Planck Equation[END_REF], let us rewrite equation (6.124) as ∂F ∂τ = ∂ ∂J s • -D (1) (J , τ ) F (J , τ ) + ∂ ∂J s • D (2) (J , τ ) F (J , τ ) , (6.126)
A(J , τ ) =
where we introduced the first-and second-order diffusion coefficients as D (1) (J , τ ) = -A(J , τ ) + ∂ ∂J s •D(J , τ ) ; D (2) (J , τ ) = D(J , τ ) . (6.127)
Here, let us emphasise that the diffusion of the Keplerian wires takes place in the full action domain J, while equation (6.126) only involves gradients w.r.t. the slow actions J^s. This leads, amongst other things, to the conservation of the fast actions J^f during the resonant diffusion, as noted in equation (6.62). Of course, by enlarging the diffusion coefficients D^(1) and D^(2) with zero coefficients for all the adiabatically conserved fast actions J^f, it is straightforward to rewrite equation (6.126) as a diffusion equation in the full action space involving derivatives w.r.t. all action coordinates J.
Let us now focus on the dynamics of a given test Keplerian wire. We denote as J(τ) its position in action space at time τ. On secular timescales, this test particle undergoes an individual stochastic diffusion consistent with the system's averaged diffusion captured by the diffusion equation (6.126). This diffusion follows a stochastic Langevin equation reading
$$
\frac{\mathrm{d}J}{\mathrm{d}\tau} = h(J,\tau) + g(J,\tau) \cdot \Gamma(\tau) , \quad (6.128)
$$
where we introduced the Langevin vector and tensor h and g, as well as the stochastic Langevin forces Γ(τ), whose statistics are given by
$$
\langle \Gamma(\tau) \rangle = 0 \; ; \quad \langle \Gamma(\tau) \otimes \Gamma(\tau') \rangle = 2\, \mathrm{I}\, \delta_{\mathrm{D}}(\tau-\tau') , \quad (6.129)
$$
with I the identity matrix. Following equation (3.124) of Risken (1996), let us finally express the Langevin coefficients from equation (6.128) as a function of the drift and diffusion coefficients appearing in equation (6.126). The second-order diffusion tensor D^(2) is positive definite, so that we may introduce √D^(2) as one of its square roots. One then has the component relations
$$
h_{i} = D^{(1)}_{i} - \sum_{j,k} \sqrt{D^{(2)}}_{kj}\, \frac{\partial \sqrt{D^{(2)}}_{ij}}{\partial x_{k}} \; ; \quad g_{ij} = \sqrt{D^{(2)}}_{ij} . \quad (6.130)
$$
Thanks to equation (6.130), one can fully specify the detailed characteristics of the diffusion of an individual orbit as described by the Langevin equation (6.128). The self-consistency of the diffusion imposes that the diffusion coefficients D^(1) and D^(2), and therefore the Langevin coefficients h and g, be updated as the system's DF F secularly changes. Let us finally emphasise that the previous presentation of the associated Langevin equation was made for quasi-Keplerian systems governed by the degenerate Balescu-Lenard equation (6.53). It is straightforward to follow the same approach to write the Langevin equation associated with the non-degenerate Balescu-Lenard equation (2.67), which can indeed also be cast as an anisotropic Fokker-Planck equation, as in equation (2.68).
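To make the component relations (6.130) concrete, the Python sketch below evaluates the Langevin coefficients at a given point of action space: the positive-definite tensor D^(2) is square-rooted by symmetric eigendecomposition, and the gradient term is obtained by centred finite differences. The fields D1 and D2 used here are synthetic placeholders standing for the tabulated drift and diffusion coefficients of equation (6.127).

```python
import numpy as np

def sqrt_spd(D):
    """Symmetric square root of a positive-definite matrix D."""
    vals, vecs = np.linalg.eigh(D)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def langevin_coefficients(D1, D2, x, dx=1e-4):
    """Langevin h and g of eq. (6.130) at point x, for fields D1(x), D2(x).

    D1 : callable x -> first-order coefficient (vector, shape (n,))
    D2 : callable x -> second-order coefficient (SPD matrix, shape (n, n))
    """
    n = len(x)
    g = sqrt_spd(D2(x))
    # finite-difference gradient of sqrt(D2): dgdx[k, i, j] = d g_ij / d x_k
    dgdx = np.empty((n, n, n))
    for k in range(n):
        xp, xm = np.array(x, float), np.array(x, float)
        xp[k] += dx
        xm[k] -= dx
        dgdx[k] = (sqrt_spd(D2(xp)) - sqrt_spd(D2(xm))) / (2.0 * dx)
    # h_i = D1_i - sum_{j,k} g_kj d g_ij / d x_k
    h = D1(x) - np.einsum('kj,kij->i', g, dgdx)
    return h, g

if __name__ == "__main__":
    D1 = lambda x: -0.1 * np.asarray(x)
    D2 = lambda x: np.array([[0.02 + 0.01 * x[0]**2, 0.005],
                             [0.005, 0.03 + 0.01 * x[1]**2]])
    h, g = langevin_coefficients(D1, D2, x=[0.5, 1.0])
    print(h)
    print(g)
```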
Chapter 7
Conclusion
Overview
Since the seminal works of Einstein and Langevin, physicists understand how blue ink slowly diffuses in a glass of water. The fluctuations of the stochastic forces acting on water molecules drive the diffusion of the ink in the fluid. This is the archetype of the so-called fluctuation-dissipation theorem, which relates the rate of diffusion to the autocorrelation of the fluctuating forces. For galaxies, a similar process occurs but with two main differences related to the long-range nature of the gravitational force: (i) for the diffusion to be effective, stars need to resonate, i.e. present commensurable frequencies, otherwise they follow the mean path imposed by the mean field, (ii) the amplitude of the induced fluctuating forces is significantly boosted by collective effects, i.e. the fact that, because of self-gravity, each star polarises its neighbours. This thesis was concerned with studying the secular implications of this fluctuation-dissipation theorem by considering either externally-driven or self-induced fluctuations.
Self-gravitating systems are highly complex objects which undergo a wide variety of dynamical processes, depending on their internal "temperature", i.e. depending on whether they are pressure or centrifugally supported. Astrophysics is now in a position to investigate the secular dynamics of these systems. Of particular interest are cold systems which have the opportunity to reshuffle their orbital structure towards more likely configurations. First, the success of the ΛCDM model now offers a consistent paradigm in which to statistically characterise the cosmic environment. Self-gravitating systems should be seen as embedded in a lively environment, with which they interact throughout their lifetime. In addition, recent developments in kinetic theories now offer various self-consistent frameworks allowing for the description of these systems' secular dynamics. Whether they are external or internal, the long-term resonant effects of perturbations can be accounted for in detail. Let us also emphasise that the steady increase in computing power now allows for detailed simulations of ever greater resolution and complexity. It not only allows for simulations of isolated and idealised setups, but also for cosmological simulations, where environmental effects can play a role. Finally, upcoming observations, such as GAIA for the Milky Way, or Gravity for the Galactic centre, will soon offer unprecedented detailed surveys of the phase space structure of these systems.
These joint advances in the characterisation of the cosmic environment, the diffusion theory, the simulation power and the details of the observations offer the ideal context in which to study the secular dynamics of self-gravitating systems. While the seminal works of Goldreich, Lynden-Bell, Kalnajs, Toomre had opened the way to deriving a self-consistent linear response theory for self-gravitating systems, it should be emphasised that thanks to the recent works of Binney & Lacey (1988), Weinberg (2001a), Heyvaerts (2010), etc., galactic dynamics has now entered a phase of quantitative statistical predictability on secular timescales. This thesis has contributed to illustrating how this line of work could be applied to varied challenges such as radial migration, disc thickening, or black hole feeding. Let us now detail the main conclusions drawn from each chapter.
In chapter 2, we presented two complementary formalisms to describe the secular evolution of self-gravitating systems. Because they are assumed to be quasi-stationary and stable, such systems can only evolve driven by fluctuations. These may first originate from an external perturber, leading to a collisionless diffusion. Another source of fluctuations is associated with the system's own discreteness. This induces finite-N effects, whose contributions on secular timescales are described by the inhomogeneous Balescu-Lenard equation. To derive these equations, we relied on angle-action coordinates to deal with the complexity of individual orbits. For both formalisms, we emphasised the importance of accounting for the system's self-gravity. It dresses perturbations and can very significantly hasten the system's secular evolution. These polarisation effects are especially important in cold dynamical systems such as stellar discs. These two frameworks allow for quantitative comparisons of the respective effects of nurture vs. nature, i.e. externally-induced vs. self-induced effects, on the long-term evolution of self-gravitating systems.
In chapter 3, we considered the case of tepid razor-thin stellar discs. In order to overcome the two principal hurdles associated with the diffusion formalisms, namely the need to consider angle-action coordinates and the difficulty to estimate the system's non-local gravitational susceptibility, we relied on two additional assumptions. These were the epicyclic approximation, i.e. the restriction to cold quasi-circular orbits, as well as a tailored WKB approximation, i.e. the restriction to radially tightly wound perturbations. We illustrated how the WKB formalism offers simple quadratures for the diffusion flux in both collisionless and collisional frameworks. This provided us with a straightforward tool to estimate the loci of maximum diffusion within the disc. When applied to a discrete stable self-gravitating razor-thin stellar disc, we recovered qualitatively the formation of ridges of resonant orbits observed in numerical simulations. One additional strength of the Balescu-Lenard formalism is to offer explicit estimations of the collisional timescale of diffusion in the disc. We noted a discrepancy between the prediction from the WKB kinetic theory and the much shorter timescale inferred from numerical simulations. This was interpreted as due to the incompleteness of the WKB basis, which cannot account for the strong dressing of loosely wound perturbations, a well known linear mechanism coined swing amplification.
In chapter 4, we returned to the case of discrete razor-thin stellar discs. In order to fully account for the disc's self-gravity, we implemented the matrix method. When combined with the collisional Balescu-Lenard equation, our prediction for the initial diffusion flux recovered the formation of the resonant ridge observed in numerical simulations. Because we had correctly taken into account self-gravity, we also matched the timescales of diffusion. To fully emphasise the relevance of the Balescu-Lenard formalism, we resorted to our own N -body simulations. We recovered the expected scalings of the system's response with the number of particles and the fraction of mass within the disc, as predicted by the collisional theory. When considered on even longer timescales, we recovered the mechanism of dynamical phase transition identified in Sellwood (2012). Stable and quasi-stationary systems can become dynamically unstable on the long-term, as the result of the slow, progressive, and irreversible build-up of collisional effects. This is a striking outcome of the large dynamical freedom given by the interactions captured by the second-order equation of the BBGKY hierarchy, which allows for spontaneous reshufflings of orbital structures towards states of higher entropy.
In chapter 5, we investigated the secular dynamics of thickened stellar discs. Similarly to the razor-thin case, we devised a new thickened WKB approximation offering explicit expressions for both collisionless and collisional diffusion fluxes. As a side product, this formalism also offered a new thickened Q stability parameter. Following these derivations, we considered various mechanisms of secular thickening, such as internal Poisson shot noise, series of central bars, or even the diffusion acceleration induced by giant molecular clouds. We emphasised how each of these mechanisms has different signatures in the diffusion features appearing in the disc. As already noted in the razor-thin case, while qualitatively correct, one limitation of the thickened WKB approximation is the underestimation of the diffusion timescale, which can be significantly hastened by swing amplification in cold stellar discs.
In chapter 6, we focused on quasi-Keplerian systems, and in particular galactic centres. The particularity of such systems is that their dynamics is mostly dominated by one central massive object. This makes the system dynamically degenerate. Individual particles follow Keplerian wires, which get slowly distorted on secular timescales. By paying careful attention to this degeneracy, we detailed how one could tailor the collisional formalism to describe the long-term evolution of such systems. We especially emphasised how this new kinetic equation is the master equation to describe the resonant relaxation of Keplerian wires. In the context of galactic centres, we illustrated how the divergence of the relativistic precession frequencies as stars move closer to the central black hole leads to a drastic reduction of the diffusion efficiency. This is the "Schwarzschild barrier" phenomenon first observed in numerical simulations.
Outlook and future works
Throughout this thesis, we detailed at the end of each chapter some possible future works w.r.t. the themes investigated there. As a closing section for this thesis, let us place these various prospects in a broader context.
The aim of this thesis was to offer galactic dynamics an analytical framework in which to describe evolutions on cosmic times. It appears now as a powerful approach because it allows for a tractable capture of numerous complex non-linear processes. In the continuation of the initial seminal works describing the linear response of self-gravitating systems, one now has at one's disposal a self-consistent formulation to understand analytically the non-linear and secular response of these systems. The recent developments of kinetic theory offered us a unique opportunity: implementing for the first time in astrophysics a new diffusion equation, the inhomogeneous Balescu-Lenard equation. In the course of this manuscript, we emphasised how these approaches now provide us with the master equations to describe simultaneously and self-consistently a vast class of astrophysical processes. These include the mechanism of stellar migration (both churning and blurring) and disc thickening for stellar discs, but also resonant relaxation and BH feeding in galactic centres. Analytic galactic dynamics has entered the cosmic framework.
Such a formalism is rewarding both for its conceptual contributions and for its practical usefulness. From the conceptual point of view, these approaches encompass all the wealth and complexity of self-gravitating systems' dynamics. The example of galactic centres illustrates it very clearly. This framework captures the non-trivial effects that a system's discreteness can have on the long-term, and provides a fascinating illustration of the fluctuation-dissipation theorem. It may also be used to study and understand entropy production. Secular dynamical phase transitions are as well an important prediction of this formalism. It describes how the slow and irreversible build up of collisional effects inevitably leads, on the long-term, to a destabilisation of secularly metastable states.
From the practical point of view, this framework can account for polarisation, which considerably accelerates the diffusion in cold systems. It also naturally offers a new dressed multi-component Langevin rewriting, which allows for much larger timesteps. As an example, between two timesteps, a series of swing amplifications can take place: there is no need anymore to resolve them individually. Finally, it can be used to propagate statistics. From detailed measures of environmental perturbations, one can infer the typical evolutions that will be undergone by the systems. One can now treat statistically a galaxy on multiple orbital times, while accounting for the dynamical wealth associated with self-gravity.
When considering the long-term evolution of self-gravitating systems, an important dichotomy has to be drawn between self-induced and externally-induced secular dynamics. One should also pay attention to the system's initial reservoir of free energy, which differs greatly between, e.g., spirals and ellipticals. These distinctions allow us to disentangle the respective roles of nature vs. nurture in sourcing secular evolution, as one can quantify the diffusion signatures associated with different sources of fluctuations. These may then be compared to observations. For example, for stellar discs, one could investigate the expected diffusion associated with various sources of fluctuations: discreteness noise, clumps within the halo, central bars, or tidally induced spirals. Once all these mechanisms are statistically characterised, their predictions (e.g., for the metallicity-dispersion relation) could be compared to detailed observations of the structure of the Milky Way's DF, e.g., soon provided by GAIA. This would allow us to disentangle a posteriori the importance of these various mechanisms throughout the evolution of the Milky Way. This framework is expected to be very powerful to provide explicit predictions in the context of Galactic archeology (Binney, 2013a).
All the applications presented in this thesis were restricted to computing the initial diffusion flux at t = 0 + . In order to probe later stages of the secular evolution, one would have to integrate forward in time these diffusion equations. There are at least two anticipated difficulties. The first arises from the self-consistency of the diffusion equations. The system's drift and diffusion coefficients depend on the current value of the system's DF and have to be updated as the system evolves (Weinberg, 2001b). The integration in time has to be made step by step, with successive updates of the system's DF, potential, and diffusion flux. Another difficulty is that these equations describe the diffusion of the system's DF as a whole. Integrating such a partial differential equation is a cumbersome numerical calculation, which most likely requires to rely on finite elements methods. An alternative approach, inspired from Monte Carlo simulations, follows from the Langevin rewriting of the diffusion equation presented in section 6.C. One samples the system's DF with individual particles, and integrates the first-order stochastic ordinary differential equations describing the dynamics of test particles. With these equations, the involved timesteps are commensurable with a Hubble time. Such integrations of the diffusion could be used as valuable probes to validate the accuracy and robustness of N -body codes on secular timescales. Indeed, one of the only theoretical predictions to which N -body implementations are compared are derived from linear theory: one aims at recovering unstable modes in integrable systems (see Appendix 4.C). These tests can then only check the relevance of numerical codes on a few dynam-ical times. The formalisms developed in this thesis provide new test cases to quantify the validity of N -body implementations on secular timescales.
These various diffusion equations would also benefit from being generalised to describe wider classes of dynamical processes. As already detailed, one can naturally extend these approaches to account for multiple components and describe the corresponding expected mass segregation. See, e.g., section 5.7.6 for an illustration of the role played by giant molecular clouds. Similarly, the system's DF can also be extended with additional parameters, such as metallicity. This drives the interplay between dynamics and chemistry. See section 3.8.1 for an illustration of how to construct such extended DFs. In this regime, one accounts for the birth (and possibly death) of stars, i.e. for the possibility of sources and sinks of particles. Focusing on open systems is another promising regime for which these investigations should be pursued. Similarly, this formalism could be generalised to systems with a small or fluctuating number of "effective" particles, for example as a result of the progressive dissolution of overdensities.
Note that we assumed here integrability, i.e. the existence of angle-action coordinates. It was either guaranteed by the system's symmetry or by additional assumptions, such as the epicyclic approximation. When this is not the case, the system's dynamics may become chaotic and its secular evolution can possibly be driven by chaotic diffusion. For example, in the context of stellar discs, these chaotic effects are prone to play a role in the presence of a central bar. Indeed, the bar's potential makes the system's dynamics chaotic in some regions. Such secular dynamics associated with the formation of strong nonaxisymmetric structures were not investigated in the present thesis. They definitely deserve a thorough investigation on their own. Finally, in some regimes, the resonant orbital diffusion described by the Balescu-Lenard equation may vanish. This can for example occur in galactic centres, as illustrated by the Schwarzschild barrier driven by the divergence of the relativistic precession frequencies as stars move closer to the BH. This suppression can also be imposed by symmetry, e.g., the Balescu-Lenard collision operator vanishes for 1D homogeneous systems. Finally, it may occur on secular timescales if steady states for the Balescu-Lenard equation exist and can be reached by the system (though this is not always possible for self-gravitating systems). In such regimes, the dynamics is not described anymore by the Balescu-Lenard equation, and additional effects have to be accounted for. For galactic centres, this can be direct 2-body effects, i.e. star-star interactions, which allow stars to diffuse even closer to the BH. In addition to strong collisions, resonant effects associated with 1/N 2 correlations can also drive the dynamics. This requires to consider the third equation of the BBGKY hierarchy and focus on slower effects associated with 3-body correlations. Generalising the Balescu-Lenard formalism to account for these two contributions is another interesting direction of improvement.
Figure 1
1 Figure 1.1.2: Two examples of galactic morpohologies located in the nearby Virgo Cluster. Left panel: Spiral galaxy NGC 4321 (=M100) (credit: ESO). Spiral galaxies possess a disc shape and display spiral patterns. Right panel: Giant elliptical galaxy NGC4486 (=M87) (credit: Australian Astronomical Observatory). Elliptical galaxies possess very little substructures and have a roughly ellipsoidal shape.
Figure 1
1 Figure 1.3.1: Illustration of the phase space diagram of a harmonic oscillator. Left panel: Illustration of the parti-
Figure 1
1 Figure 1.3.4: Inspired from figure 2 ofLynden-Bell (1967). Illustration of phase mixing, similarly to figure 1.3.3, in angle-action space, for various times. Here, within the angle-action coordinates, as a result of the conservation of actions, trajectories are simple straight lines. Provided that the intrinsic frequencies Ω = ∂H/∂J change with the actions, particles of different actions dephase. This phase mixing in the angles θ is one of the main justifications for the consideration of orbit-averaged diffusion, i.e. the assumption that the system's mean DF depends only on the actions. This is at the heart of both diffusion equations presented in chapter 2.
Figure 1
1 Figure 1.3.5: Extracted from figure 4.28 of Binney & Tremaine (2008). Illustration of the mechanism of violent re-
Figure 2
2 Figure 2.3.1: Illustration in angle-action space of the resonance condition δD(m1 •Ω1 -m2 •Ω2) occurring in the Balescu-Lenard equation (2.67).In angle-action space, the trajectories of the particles are straight lines, with an intrinsic frequency Ω(J). This frequency depends only on the actions and is illustrated by the left-hand curve. Here the frequency associated with the red particle is twice the one of the blue particle: the particles are in resonance. These resonant encounters in angle-action space are the ones captured by the Balescu-Lenard equation (2.67).
Figure 2
2 Figure 2.3.2: Illustration of the resonance condition δD(m1 •Ω1 -m2 •Ω2) of the Balescu-Lenard equation (2.67)in the case of a razor-thin stellar disc. Top-left panel: A set of two resonant orbits in the inertial frame. Topright panel: The same two orbits in the rotating frame in which they are in resonance -here through an ILR-COR coupling (see figure 3.7.4). Bottom panel: Fluctuations in action space of the system's DF sourced by finite-N effects, exhibiting overdensities for the blue and red orbits. The dashed lines correspond to 3 contour levels of the intrinsic frequency respectively associated with the resonance vector m1 (grey lines) and m2 (black lines). The two sets of orbits satisfy the resonance condition m1 •Ω1 -m2 •Ω2 = 0, and therefore lead to a secular diffusion of the system's orbital structure according to the Balescu-Lenard equation (2.67). Let us emphasise that resonant orbits need not be caught in the same resonance (m1 = m2), be close in position space nor in action space.
Figure 3
3 Figure 3.2.2: Illustration of an epicyclic orbit for a razor-thin Mestel disc (see section 3.7.1), following the angle-
Figure 3.3.1 illustrates the radial dependence of these basis elements, while figure 3.3.2 illustrates them in the polar (R, φ)-plane.The next step of the construction of these WKB basis elements is to compute the associated surface density basis elements Σ [k φ ,
Figure 3
3 Figure 3.3.1: Illustration of the radial dependence of two WKB basis elements from equation (3.11). Each Gaussianis centred around a radius R0, with a typical extension given by the decoupling scale σ, and is modulated at the radial frequency kr.
Figure 3
3 Figure 3.4.1: Illustration of the behaviour of the function χr → K 0 (χr), for which one can identify the maximum amplification K 0 max 0.534 reached for χ 0 max 0.948. This maximum is directly related to Toomre's Q parameter.
Figure 3.7.1: Illustration of the active surface density Σstar of the tapered Mestel disc from equation (3.92). Because of the two tapers from equation (3.90), the self-gravity of the disc is turned off in its inner and outer regions.
Figure 3
3 Figure 3.7.2: Contours of the initial distribution function Fstar from equation (3.91) in action space (J φ , Jr). Contours are spaced linearly between 95% and 5% of the distribution function maximum.
Figure 3
3 Figure 3.7.3: Dependence of the local Toomre's parameter Q as a function of the angular momentum (i.e. the position within the disc) for the tapered Mestel disc from equation (3.92). It is scale invariant except in the inner and outer regions as a result of the presence of the tapering functions Tinner and Touter.
Figure 3.7.4: Inspired from figure 1.10 of[START_REF] Kormendy | Secular Evolution in Disk Galaxies[END_REF]. Illustration of stellar orbits -within the epicyclic
Figure 3
3 Figure 3.7.6: Extracted fromSellwood (2012). Illustration in S12's simulation of the dependence of the peak overdensity δmax = δΣstar/Σstar as a function of time and the number of particles in the simulation (represented by different colors). Because of the initial Poisson shot noise in the sampling, the larger the number of particles, the smaller the initial value of δmax, which decreases like 1/ √ N . The initial systematic steep rises in δmax in the very first dynamical times correspond to the swing amplification of the system's initial Poisson shot noise. Two phases can then be identified in the growth of δmax. The first slow phase, up to δmax 0.02, corresponds to a slow collisional dynamics driven by finite-N effects, which gets slower as the number of particles increases. The second faster phase, for δmax 0.02, corresponds to an unstable collisionless evolution whose growth rate is independent of the number of particles used. See section 4.4 for a detailed discussion on these various dependences.
Figure 3
3 Figure 3.7.7: Variations of the response matrix amplification eigenvalues λ from equation (3.47) with the WKB radial frequency kr, for m = (m φ , mr) = (2, 0) and two different values of J φ . The curve that peaks at large kr is for the smaller value of J φ . We illustrated as well the domain of maximum amplification given by kr ∈ [k inf r ; k sup r ] for which one has λmax/2 ≤ λ(kr) ≤ λmax, where λmax = λ(k max r
Figure 3
3 Figure 3.7.8: Dependence of the maximum amplification factor 1/(1-λmax) for various resonances as a function of the position J φ within the disc. The amplification associated with the COR is always larger than the one associated with the ILR or OLR. Self-gravity is turned off in the inner and outer regions as a result of the tapering functions from equation (3.90).
Figure 3
3 Figure 3.7.9: Illustration of the norm of the collisionless diffusion flux |F tot| summed over the three resonances (ILR, COR and OLR) in bold lines. The contours are spaced linearly between 95% and 5% of the maximum norm.The grey vector gives the direction of the particle's diffusion vector associated with the norm maximum (arbitrary length). The background thin lines correspond to the diffused distribution from S12, which exhibits a narrow resonant ridge of diffusion.
Figure 3
3 Figure 3.7.10: Illustration of the norm of the collisionless diffusion flux |F tot| summed over the three resonances (ILR, COR and OLR), as one varies the disc's active fraction ξ. From left to right: ξ = 0.65, 0.68, 0.71. Such values of ξ still comply with the stability constraint Q(Rg) > 1 everywhere in the disc. The contours are spaced linearly between
Figure 3
3 Figure 3.7.11: Map of N div(F tot) where the total flux F tot has been summed over the three resonances (ILR,
Figure 3
3 Figure 3.7.13: Overlay of the WKB predictions for the divergence of the diffusion flux N div(F tot) on top of the contours of the DF in action space measured in S12's simulation. The black background contours are the level contours of the DF at time tS12 = 1400 (see the lower panel of figure 7 of S12). These contours are spaced linearly between 95% and 5% and clearly exhibit the appearance of a resonant ridge. The coloured transparent contours correspond to the predicted values of N div(F tot), within the WKB approximation, using the same conventions as in figure3.7.11. One can note that the late time developed ridge is consistent with the predicted depletion (red) and enrichment (blue) of orbits.
.102) Equation (3.102) describes the dynamics of each Z-slice of the extended DF with its specific source term. Each Z-slice follows an independent diffusion equation, except for the fact that the drift and diffusion coefficients appearing in Diff[F, .] are sourced by the same reduced DF, F , integrated over all metallicities. The Z-slices therefore only see each other through these shared coefficients, i.e. the diffusions are self-consistent and simultaneous.These considerations allow us to describe more precisely how the radial migration of stars interplays with the disc's chemical structure and leads for example to the appearance of metallicity gradients within the disc. One crucial strength of the previous formalisms is that they allow for detailed comparisons of the relative strengths of different diffusion mechanisms, i.e. different characteristics for the diffusion operator Diff[F, .]. One can for example characterise the statistical properties of the dark matter overdensities (i.e. clumps) and investigate how the potential fluctuations they induce may lead to a secular diffusion in the stellar disc.
Figure 3.8.1: Illustration of the dark matter density in a zoomed dark matter only simulation run with the AMR code Ramses(Teyssier, 2002). The two snapshots were taken at the same time and are centred on the same dark matter halo. The left-hand panel corresponds to a cubic region of extension 500kpc, while the right-hand panel only extends up to 100kpc. The halo was chosen to be quiet, i.e. did not undergo any recent major mergers. On large scales, one can note the presence of various clumps in the dark matter density, which get much fainter as one gets closer to the centre of the halo. Here, any infalling clump gets rapidly dissolved by dynamical friction (see figure1.3.6). On the scale of the inner galactic disc (approximately 10kpc), these clumps are therefore expected to be screened by the dark matter halo, and the disc shielded from them. Such simulations seem to indicate that the perturbations induced by the dark matter halo are weak and will not trigger a strong diffusion in the disc.
.11) Finally, in equation (4.10), one should note that θ 1 [r] and (θ 2 -φ)[r] only depend on r, thanks to the mappings from equation (4.8). If U p n p is a real function, then the coefficients W m1p m2n p are real as well. Because these coefficients involve two intricate integrals, they are numerically expensive to compute. However, they satisfy by parity the symmetry relation W(-m1)
Figure 4
4 Figure 4.2.1: Illustration of truncation of the (rp, ra)-domain in small subregions to allow for the calculation of the response matrix. Each region is centred on the position (r i p , r i a ) with an extension characterised by ∆r.
.22) To effectively compute equation (4.22), one has to introduce a bound m max 1
Figure 4
4 Figure 4.3.1: Illustration of four different resonant critical lines in the (rp, ra)-space. As introduced in equation (4.28), a critical line is characterised by two resonance vectors m1, m2, and a location J1 = (r 1p , r 1 a ) in action space. Each of the four plotted critical lines is associated with the same location (r 1 p , r 1 a ) represented by the black dot, but with a different choice for the resonance vectors m1 and m2, among the three resonances ILR, COR and OLR. One can note that for m1 = m2, the critical line goes through the point (r 1 p , r 1 a ).
Figure 4
4 Figure 4.3.2: Map of the diffusion flux -N F tot computed for m1, m2 ∈ mILR, mCOR, mOLR . As defined in the rewriting from equation (2.72), -N F tot corresponds to the direction along which individual particles diffuse in action space.
Figure 4
4 Figure 4.3.3: Left panel: Extracted from Sellwood (2012), figure 7. Illustration of the contours of the changes inthe DF between the time tS12 = 1400 and tS12 = 0, for a run with N = 50×10 6 particles. As in the right panel, red contours correpond to negative differences, i.e. regions emptied from their orbits, while blue contours correspond to positive differences, i.e. regions where the system's DF increases during the diffusion. Right panel: Map of N div(F tot), where the total flux has been computed with m1, m2 ∈ mILR, mCOR, mOLR . Red contours, for which N div(F tot) < 0, correspond to regions from which the orbits will be depleted during the diffusion, while blue contours, for which N div(F tot) > 0, are associated with regions for which the value of the DF will increase as a result of diffusion. The contours are spaced linearly between the minimum and maximum of N div(F tot). The maximum value for the positive blue contours is given by N div(F tot) 350, while the minimum value for the negative red contours is N div(F tot) -250. The contours in both panels are aligned with the ILR direction
Figure 4.3.4: Map of the N div(F bare tot ) corresponding to the bare diffusion flux (i.e. without accounting for collective effects), following the same conventions as in figure 4.3.3. The contours are spaced linearly between the minimum and maximum of N div(F bare tot). The maximum value for the positive blue contours is given by N div(F bare tot ) 0.30, while the minimum value for the negative red contours reads N div(F bare tot ) -0.50. One should note that turning off collective effects led to the disappearance of the strong narrow radial ridge obtained in figure4.3.3. This figure is qualitatively similar to the results presented in figure 3.7.11, obtained via the razor-thin WKB limit of the Balescu-Lenard equation.
Figure 4.3.5: Illustration of the radial dependence of Kalnajs basis elements for = 2 and kKa = 7, as defined in Appendix 4.A. These basis elements are the ones used to estimate the Balescu-Lenard diffusion flux in section 4.3.1.One can note that as the radial index n increases, the basis elements get more and more radially wound.
Figure 4
4 Figure 4.3.6: Map of N div(F WKB tot
Figure 4
4 Figure 4.4.1: Ilustration of the behaviour of the function t → h(t, N ) from equation (4.34), for an active fraction ξ = 0.5.The function is averaged over 32 different realisations with particle numbers
Figure 4
4 Figure 4.4.2: Left panel: Illustration of the behaviour of the function log(N ) → log( h0(N )), where N was rescaled by a factor 10 -5 to clarify the representation. Dots are associated with the values computed thanks to figure 4.4.1, while the line corresponds to a linear fit reading log( h0(N )) 11.75-1.02 log(N ). The coefficients h0(N ) have been uniformly renomarlised to clarify the representation. Right panel: Same conventions as the left panel. Illustration of the behaviour of the function log(N ) → log( h2(N )), whose linear fit takes the form log( h2(N )) 12.36-1.91 log(N ).
Figure 4
4 Figure 4.4.3: Left panel: Illustration of the behaviour of the function t → V (t, N ) defined in equation (4.43) when averaged over 32 different realisations for particle numbers N ∈8, 12, 16, 24, 32, 48, 64 ×10 5 , along with their associated linear fits. To effectively compute V (t, N ), we used the same binning of action space as in figure4.4.1. As predicted in equation (4.44), one recovers that for a fixed value of N , the function t → V (t, N ) is linear. The horizontal dashed line illustrates the threshold value V thold which was used to determine the threshold time t thold as defined in equation (4.45). Right panel: Illustration of the behaviour of the function N → t thold (N ) and its associated linear fit. As predicted by the Balescu-Lenard equation in equation (4.46), one recovers a linear dependence of t thold (N ) with N .
Figure 4.4.4: Top panel: Illustration of the behaviour of the function t → h(t, N ) for an active fraction ξ = 0.6, following the same conventions as in figure 4.4.1. As expected, increasing the active fraction hastens the secular diffusion and therefore hastens the growth of the function h(t, N ). Bottom left panel: Behaviour of the function log(N ) → log( h0(N )) for an active fraction ξ = 0.6, following the conventions from figure 4.4.2. The associated linear fit reads log( h0(N )) 11.90-1.07 log(N ). One recovers the expected scaling of the initial Poisson shot noise sampling as obtained in equation (4.38). Bottom right panel: Illustration of the behaviour of the function log(N ) → log( h2(N )) for an active fraction ξ = 0.6, following the conventions from figure 4.4.2. The associated linear fit takes the form log( h2(N )) 15.50-1.84 log(N ). One recovers the expected Balescu-Lenard collisional scaling obtained in equation (4.42).
Figure 4
4 Figure 4.4.5: Map of N div(F tot) with an active fraction ξ = 0.6, following the same conventions as in figure 4.3.3.
Figure 4
4 Figure 4.4.6: Illustration of the behaviour of the function t → √ N Σ2(t, N ), as introduced in equation (4.50), as one varies the number of particles. The prefactor √ N was added to mask Poisson shot noise allowing for the initial values of√ N Σ2 to be independent of N . This illustrates the out-of-equilibrium transition between the initial Balescu-Lenard collisional evolution, for which low values of Σ2 are expected, and the collisionless Vlasov evolution, for which the system loses its mean axisymmetry and larger values of Σ2 are reached. As expected, the larger the number of particles, the later the transition.
Figure 4
4 Figure 4.4.7: Illustration the disc's active surface density Σstar for a N -body run with N = 8×10 5 particles and restricted to the radial range R ≤ 6. Left panel: At an early time t = 60, for which the mean disc remains axisymmetric.In this regime, the dynamics of the disc is collisional and governed by the Balescu-Lenard equation (2.67). Right panel: At a much later time t = 2400, for which the disc is strongly non-axisymmetric. In this regime, the dynamics of the disc is collisionless and governed by Vlasov equation.
Figure 4 .
4 Figure 4.C.1: Left panel: Zoomed-in Nyquist contours in the complex plane of the function ω0 → det I-M(ω0, η)
Figure 4 .
4 Figure 4.C.2: Illustration of the dominant mp = 2 unstable mode for a truncated νt = 4 Mestel disc as recovered via the matrix method presented in section 4.2.3. Only positive level contours are shown and they are spaced linearly between 10% and 90% of the maximum norm. The three resonance radii, associated with the resonance ILR, COR, and OLR have been represented, as defined by ω0 = mpΩp = m•Ω(Rm), where the intrinsic frequencies Ω(R) = (Ω φ (R), κ(R)) are computed within the epicyclic approximation, as in equation (3.87). See figure 3.7.4 for an illustration of the signification of these resonance radii.
Figure 4.C.3 illustrates such measurements for different values of the truncation index ν t .
Figure 4 .
4 Figure 4.C.3: Illustration of the measurements of the growth rates η (left panel) and pattern speeds ω0 (right panel)
4.C.2, figure 4.C.4 illustrates the unstable mode of the same truncated ν t = 4 Mestel disc, as recovered from N -body simulations.
Figure 4 .
4 Figure 4.C.4: Illustration of the dominant mp = 2 unstable mode for a truncated νt = 4 Mestel disc as recovered via direct N -body simulations. Only positive level contours are shown and they are spaced linearly between 20% and 80% of the maximum norm. Similarly to figure 4.C.2, the radii associated with the resonances ILR, COR, and OLR are represented.
Figure 4.C.5: Measurements of the pattern speed ω0 = mpΩp and growth rate η for unstable mp = 2 modes in truncated Mestel discs.The velocity dispersion is characterised by q = (V0/σr) 2 -1 = 6, and the power indices of the inner taper are given by νt = 4, 6, 8. The theoretical values were obtained from a tailored linear theory inEvans & Read (1998b). Our measurements were performed either via the response matrix as in section 4.C.1, or via direct N -body simulations as in section 4.C.2.
are illustrated in figure 4.D.1.
Figure 4 .
4 Figure 4.D.1: Left panel: Illustration of the spherical harmonics Y m used to construct the 3D basis elements from equation (4.67). From top to bottom, the lines are associated with = 0, 1, 2, and on a given line, the harmonics are represented for -≤ m ≤ . Right panel: Illustration of the radial dependence of the basis function U =2 n , based on spherical Bessel functions and introduced in Weinberg (1989), for various values of the radial index n ≥ 1. Here, the basis elements are defined on a finite radial range R ≤ Rsys.
Figure 4 .
4 Figure 4.D.2: Simulations run by Rebekka Bieri. Illustration of the gas density in a hydrodynamical simulation performed with the AMR code Ramses(Teyssier, 2002). The stellar and gaseous discs are embedded in a static NFW DM halo and are seen from the top (top panel) and the edge (bottom panel). A supernova feedback recipe was implemented followingKimm et al. (2015). As can be seen in these two snapshots, this leads to fluctuations in the gas density, which resonantly couple to the DM halo and induce therein a resonant diffusion.
Figure 4 .
4 Figure 4.D.3: Illustration of the behaviour of the basis coefficients t → bp(t), for three different basis elements, i.e. different values of the basis indices ( , m, n). Each color corresponds to a different realisation. One can note the presence of potential fluctuations associated with supernova feedback. The autocorrelation of these fluctuations, captured by the matrix C, is the driver of a feedback-induced secular diffusion in the DM halo.
Figure 5
5 Figure 5.2.1: Illustration of an epicyclic orbit (left panel) as one respectively increases its radial action Jr (middle panel) or its vertical action Jz (right panel).
Figure 5.3.1: Illustration of the sharp cavity (solid lines), introduced in equation (5.11), consistent with the mean underlying vertical density (dotted-dashed lines). The cavity is constructed to match the total volume of the vertical density profile. In this figure, the mean density profile corresponds to a Spitzer profile as defined in equation (5.71).
Figure 5 Figure 5
55 Figure 5.3.2: Illustration of the quantisation relations for the vertical frequency imposed by the sharp cavity from equation (5.11). Dimensionless quantities are represented using the notations x = kzh and x0 = krh. The top dotteddashed curve is associated witht the symmetric quantisation relation from equation (5.14), which imposes the quantised dimensionless frequencies x s 1 , x s 2 , ... The bottom dashed line is associated with the antisymmetric quantisation relation from equation (5.100) imposing the frequencies x a 1 , x a 2 , ... One can already note the specific role played by the fundamental symmetric frequency x s 1 , which is the only dimensionless frequency inferior to π/2.
Figure 5.4.1: Illustration of the effect of the disc thickness on the WKB amplification eigenvalues for the thickened Mestel disc presented in section 5.7.1, at the location J φ = 2 and for the corotation resonance m = mCOR = (2, 0, 0). The different curves are associated with different values of the disc scale height z0 from equation (5.71). For z0 = 0, we computed λ(kr, k min z
Figure 5
5 Figure 5.4.2: Illustration of the function χr → K(χr, γ), introduced in equation (5.36), for various values of γ. The razor-thin case corresponds to γ = 0 and was already illustrated in figure 3.4.1. As expected, accounting for the finite thickness of the disc reduces the amplification eigenvalues.
Figure 5.4.3: Illustration of the behaviour of the function γ → Kmax(γ) along with its approximation K approx. max (γ), introduced in equation (5.37), as one varies the disc thickness characterised by γ.
.43) where C pq was introduced in equation (2.26) and stands to the autocorrelation of the external perturbations. At this stage, let us note that the Fourier transformed basis elements from equation (5.22) (resp. (5.105)) involve a δ even mz (resp. δ odd mz ) for the symmetric (resp. antisymmetric) elements. As a consequence, in equation (5.43), as ψ (p) m and ψ (q)
Following equation (3.55), let us first express the basis coefficients b p as a function of the external perturbation δψ e . After some calculation, one gets b
Figure 5
5 Figure 5.7.1: Illustration of the active surface density Σstar for the thickened disc presented in section 5.7.1. As a result of the tapering functions, the disc's self-gravity is turned off in its inner and outer regions.
Figure 5 . 7 . 2 :
572 Figure 5.7.2: Left panel: Illustration of the contours of the initial quasi-isothermal DF from equation (5.5) in the plane (J φ , Jr, Jz = 0). Contours are spaced linearly between 95% and 5% of the DF maximum. Right panel: Contours of the quasi-isothermal DF in the plane (J φ , Jr = 0, Jz) following the same conventions as the left panel.
Figure 5
5 Figure 5.7.3: Illustration of the behaviour of the intrinsic frequencies ω = m•Ω as a function of the position within the disc (given by Rg) and the resonance vector m = (m φ , mr, mz). The grey lines correspond to the pattern speed mpΩp introduced in the bar perturbations from equation (5.87) and considered in figure 5.7.13.
Figure 5.7.4: Illustration of the evolution of the function FZ from equation (5.75), as observed in the direct numerical simulation UCB1 of So12. Left panel: Initial contours of FZ(Rg, Jz, t) for t = 0. Such a representation illustrates the distribution of vertical actions Jz as a function of the position within the disc given by the guiding radius Rg.Contours are spaced linearly between 95% and 5% of the function maximum. The red curve gives the mean value of Jz for a given Rg. Right panel: Same as in the left panel but at a much later stage of the evolution t = 3500. In the inner regions of the disc, one clearly notes the formation on secular timescales of a narrow ridge of enhanced vertical actions Jz.
Figure 5
5 Figure 5.7.7: Illustration of the initial contours of ∂FZ/∂t t=0 as predicted by the WKB collisionless diffusion equation from section 5.5, when considering a secular forcing sourced by Poisson shot noise as in equation (5.79).Red contours, for which ∂FZ/∂t t=0 < 0, are associated with regions from which the particles will be depleted and are spaced linearly between 90% and 10% of the function minimum. Blue contours, for which ∂FZ/∂t t=0 > 0 are associated with regions where the number of orbits will increase during the diffusion and are spaced linearly between 90% and 10% of the function maximum. The background contours illustrate the initial contours of FZ(t = 0) spaced linearly between 95% and 5% of the function maximum and computed for the quasi-isothermal DF from equation (5.5).
Figure 5
5 Figure 5.7.13: Illustration of the initial contours of ∂FZ/∂t|t=0 using the same conventions as in figure 5.7.7. Here we consider a secular collisionless forcing by a series of bars, whose perturbations are approximated by equation (5.87), for various precession rates Ωp and temporal decays σp. The diffusion in the disc's inner regions has been turned off by considering a perturbation amplitude A b (Rg) = H[Rg -Rcut] with Rcut = 2.5. One can predict the positions of the various resonance radii thanks to the behaviours of the various intrinsic frequencies ω = m•Ω illustrated in figure 5.7.3. Top-left panel: Ωp = 0.4 and σp = 0.03, i.e. long-lived fast bars. Top-right panel: Ωp = 0.25 and σp = 0.03, i.e. long-lived slow bars. Bottom-left panel: Ωp = 0.4 and σp = 0.06, i.e. short-lived fast bars. Bottomright panel: Ωp = 0.25 and σp = 0.06, i.e. short-lived slow bars.
Figure 5 .
5 Figure 5.B.1: Illustration for χ = 1 of the behaviour of the reduction functions s → F(s, χ) (left panel) and
Figure 5 .
5 Figure 5.B.2: Illustration of the asymptotic behaviours of the modified Bessel function In as given by equa-
Figure 5 .
5 Figure 5.B.3: Illustration of the behaviour of the function λ → f M (λ), whose roots are the eigenvalues of the arrowhead response matrix from equation (5.115).
129) then becomes lim thin C[k 1 z,s ] = 4h C[v = 0]. Using this relation in equation (5.131), one finally gets lim thin D sym m (J ) = dk p r J 2 mr 2Jr
Figure 6
6 Figure 6.1.1: Extracted from figure 16 of Gillessen et al. (2009). Observations of the individual trajectories of twenty stars orbiting in the vicinity of Sgr A * , the super massive black hole at the centre of the Milky Way. Because of the dominant mass of the central BH, the stars follow quasi-Keplerian orbits.
Figure 6
6 Figure 6.5.1: Illustration of the resonance condition δD(m s 1 •Ω s 1 -m s 2•Ω s 2 ) appearing in the degenerate inhomogeneous in the case of a razor-thin axisymmetric disc. Top-left panel: A set of two resonant Keplerian wires precessing at the same frequency ωs. Top-right panel: The same two wires in the rotating frame at frequency ωs in which the two orbits are in resonance. Bottom panel: Fluctuations of the system's DF in action space caused by finite-N effects and showing overdensities for the blue and red wires. The dashed line corresponds to the critical resonant line in action space along which the resonance condition Ω s = ωs is satisfied. The two wires satisfy a resonance condition for their precession frequencies. Uncorrelated sequences of such resonant interactions will lead to a secular diffusion of the system's orbital structure following equation (6.53). These resonances are non-local in the sense that the two resonant orbits need not be close in position nor in action space. As emphasised in section 6.6.1, in razor-thin axisymmetric discs, the system's symmetry enforces m s 1 = m s 2 , i.e. the two orbits are caught in the same resonance.
.70) Following equations (5.20) and (5.22) of[START_REF] Merritt | Astrophysical Black Holes[END_REF], the mapping from the physical polar coordinates to the 2D Delaunay angle-action coordinates reads R = a(1-e cos(η)) ; φ = g+f , (6.71)
Figure 6.6.1: Illustration of the typical dependence of the precession frequencies Ω s self and Ω s rel (equations (6.92) and (6.93)) as a function of the distance to the central BH. The relativistic precession frequencies Ω s rel diverge as stars get closer to the BH, while the self-consistent precession frequencies Ω s self are typically the largest for stars in the neighbourhood of the considered disc. The black dots give all the locations in the disc, whose precession frequency is equal to ωs, as illustrated by the dotted horizontal line. Because these disc's locations are in resonance they will contribute to the Balescu-Lenard equation (6.53). Equation (6.53) involves the product of the system's DF in the two resonating locations. As a consequence, here the resonant coupling between the two outer points, which both belong to the region where the disc dominates, will be much stronger, than the couplings involving the inner point, which does not belong to the core of the disc. As stars move inward, because of the relativistic corrections, their precession frequencies increase up to a point where it prevents any resonant coupling with the disc's region. This drastically suppresses the diffusion and induces a diffusion barrier.
94) is the self-consistent anisotropic Fokker-Planck equation which describes the evolution of the disc's DF as a whole. In Appendix 6.C, we show how one can obtain from equation (6.94) the corresponding stochastic Langevin equation, which captures the secular dynamics of individual test Keplerian wires. Let us therefore denote as J (τ ) = (L(τ ), I(τ )) the position at time τ of a test wire in the 2D action space J = (L, I). Following equation (6.128), the dynamics of this test wire takes the form
Figure 6.6.2: Qualitative illustration of the individual diffusion of Keplerian wires in the (j, a) = (L/I, I 2 /(GM•))
50), this relativistic potential correction immediately leads to the associated precession frequencies Ω s rel w.r.t. the slow angles θ s , which reads
tot w.r.t. interchanges of particles of the same component. One gets
particles do not belong to the same component, the second equation of the hierarchy becomes
=
µ a µ b µ c N a N b N c P abc 3, where "a", "b", and "c" are associated with different components. These detailed normalisations allow us to rewrite equation (6.107) under the general form
+
M • F 1 a 0 +M F 1 a r • ∂f ab 2 ∂v a 1 + M • F 2 b 0 +M F 2 b r •∂f ab (6.12), let us now introduce the system's 1-body DF F a and 2-body autocorrelation C ab as
.118) as∂F a ∂τ + F a , Φ+Φ r + b dE 2 C ab (E 1 , E 2 ), U 12 (1) = 0 , (6.120)where we introduced the rescaled time τ = (2π) d-k εt from equation (6.44) with ε = M /M • . Following equation (6.36), we also introduced the total averaged self-consistent potential Φ as
, let us recall how one may obtain the stochastic Langevin equation describing such an individual dynamics. Let us start from the generic writing of the degenerate Balescu-Lenard equation (6.58) written as an anisotropic Fokker-Planck equation. It reads∂F ∂τ = ∂ ∂J s • A(J , τ ) F (J , τ ) + D(J , τ )• ∂F ∂J s ,
m s m s A m s (J , τ ) ; D(J , τ ) = m s m s ⊗m s D m s (J , τ ) . (6.125)
It is however defined on the (large) space of configurations (Γ 1 , ..., Γ N ). In order to reduce the dimension of the space where the evolution equations are defined, let us introduce the reduced PDFs P n for 1 ≤ n ≤ N asP n (Γ 1 , ..., Γ n , t) = dΓ n+1 ...dΓ N P N (Γ 1 , ..., Γ N , t) .Relying on the symmetry of P N w.r.t. permutations of its arguments, we may integrate equation (2.95) w.r.t. dΓ n+1 ...dΓ N to obtain the evolution equation satisfied by P n . This gives the general equation of the BBGKY hierarchy which reads
this stage, let us insist on the fact that Liouville's equation (2.95) is an exact equation, which encompasses the same information as Hamilton's equation (2.92). (2.97)
.133)Let us now use the two previous rewritings, as well as equation (2.130), to rewrite the collision operator from equation (2.131) in angle-action space. After integrating w.r.t. θ 1 , θ 2 , θ 1 , and θ 2 , it reads
t. J 1 and m 1 . These only act on the two last lines of equation (2.138). As previously, to perform this calculation, we rely on the intrinsic definition of the dressed susceptibility coefficients from equation (2.137). One has to deal with two distinct contributions: the first one C 1 [F ] associated with the term in m 1 •∂F/∂J 1 F (J 2 ) and the second one C 2 [F ] associated with the term in m 2 •∂F/∂J 2 F (J 1 ). The first contribution C 1 [F ] takes the form C 1
Let us now consider the integration w.r.t. ω in equation (2.143). First, one can note that the fourth term of equation (2.143) vanishes when integrated upon ω. Indeed, by construction, the Bromwich contour B has to pass above all the singularities of the functions of +ω. The contour B can then be closed in the upper half complex plane and, because it surrounds no singularities, gives a vanishing result for this term. Equation (2.143), when rearranged, becomes
1/|ω | 2 . One gets
(2.144) = lim p→0 g(ω, -ω+ip) . (2.146)
sourcing of the evolution of 1-body DF under the effect of the 2-body autocorrelation. Similarly, A 2 C encompasses the usual 2-body Vlasov advection term, D 2 C corresponds to the dressing of particles by collective effects, and S 2 is a source term depending only on F , which sources the dynamics of C.
Relying on basic manipulation, one can rewrite equation (2.159) as
, one gets a constraint of the form A 2 C +D 2 C +S 2 = 0, which couples F and C. This constraint must then be inverted to give C = C[F ]. One then uses this substitution in equation (2.159), and functionally integrates this equation w.r.t. λ 1 , to obtain a kinetic equation involving F only. This gives the Balescu-Lenard equation (or the Landau equation when collective effects are not accounted for). This approach is identical to the direct resolution of the BBGKY hierarchy presented in Appendix 2.B. However, based on the rewriting from equation (2.161), Jolicoeur & Le Guillou (1989) suggested a different strategy. One may indeed first integrate functionally equation (2.161) w.r.t. C, to obtain a constraint of the form E[F, λ 1 , λ 2 ] = 0. Once inverted, this offers a relation of the form λ 2
.164)
In order to obtain a closed kinetic equation involving F only, the traditional approach would be to start from equation (2.159) and proceed in the following way. By functionally integrating equation (2.159) w.r.t. λ 2
T → +∞.
Let us finally recall the formula
lim T →+∞ e iT ∆ω -1 ∆ω = iπδ D (∆ω) , (2.171)
so that equation (2.170) immediately gives
lim
Of course, this rewriting immediately illustrates that the larger the number of particles, the slower the secular evolution. One also recovers the fact that the Balescu-Lenard equation was obtained thanks to a kinetic development at first order in the small parameter 1/N 1. Let us therefore introduce the rescaled time
τ = t N , (3.95)
so that equation (3.94) becomes
∂F ∂τ = C BL [F ] , (3.96)
so as to rewrite the Balescu-Lenard equation without any explicit appearance of N . This allows us to quantitatively compare the time during which S12's simulation was performed to the diffusion timescale predicted by the Balescu-Lenard formalism.
Lenard collisional operator, i.
e. the r.h.s. of equation (2.67) multiplied by N = M tot /µ.
the diffusion operator as Diff[F, F ], for both the collisionless and collisional diffusion equation. In this operator, the first occurence of "F " stands for the bath DF, i.e. the DF which secularly sources the drift and diffusion coefficients. The second occurence of "F " stands for the diffusing DF, i.e. the DF whose time and action gradients appear in equation (3.100). See, e.g.,Chavanis (2012b) for a discussion on the distinction between these two DFs. Because the collisionless and collisional diffusion equations are self-consistent, these two DFs are the same. Adding the source term from equation (3.99), the generic diffusion equation (3.100) can be written as , one can then intervert the integration w.r.t. Z and the derivatives w.r.t. the actions occurring in the diffusion operator. By considering each Z-slice independently, one can untangle this equation to obtain a sourced diffusion equation for the extended DF F Z reading
dZ ∂F Z ∂t = Diff F, dZ F Z + dZ ∂F s ∂t . (3.101)
In equation (3.101)
.99) Here, ∂F s /∂t quantifies the amount of new stars created per unit time. The collisionless and collisional diffusion equations (2.31) and (2.67) describe the dynamics of the system's reduced DF, F , and can be written under the shortened form ∂F ∂t = Diff F, F , (3.100)
where we wrote
∝ exp[-i(ω 0 +iη)t], where ω 0 = m p Ω p is the pattern speed of the mode and η its growth rate. If an unstable mode is present in the disc, one therefore expects the relations
d Re log(b p (t)) dt = η ; d Im log(b p (t)) dt = -ω 0 , (4.64)
.63) Because we are looking for unstable modes, we expect the coefficients b p (t) to have a temporal depen-dence of the form b p (t)
.106) w.r.t. all particles except two. At this stage, two different cases should be investigated, depending on whether one considers P aa 2 or P ab 2 (with a = b). Let us first consider the diffusion equation satisfied by P aa 2 , which ensues from equation (6.106) by integrating it w.r.t. all phase space coordinates except Γ a 1 and Γ a 2 . It reads ∂P aa
2 ∂t +v a 1 • ∂P aa 2 ∂x a 1 +v a 2 • ∂P aa 2 ∂x a 2 + µ a F 1 a 2 a • ∂P aa 2 ∂v a 1 +µ a F 2 a 1 a • ∂P aa 2 ∂v a 2
+ M • F 1 a 0 +M F 1 a r • ∂P aa 2 ∂v a 1 + M • F 2 a 0 +M F 2 a r • ∂P aa 2 ∂v a 2
+ (N a -2) µ a dΓ a 3 F 1 a 3 a • ∂P aaa 3 1 ∂v a +F 2 a
Another possibility allowing self-gravitating systems to reach more probable and hotter configurations is for them to spontaneously develop an instability, such as a bar(Hohl, 1971), leading as well to an efficient rearrangement of the orbital structure. Such outcomes are not investigated in the present thesis.
Any double primitive of 1/x would work as well.
There is a subtlety with the first term on the second line of equation (2.105). While being of order 1/N 2 , it can become arbitrarily large when particle 2 approaches particle 1, due to the divergence of the interaction force at small separation. This term describes strong collisions and is not accounted for in the present formalism of resonance-driven diffusion.
Heyvaerts (2010) very interestingly notes that, if one was to account for contributions associated with strong collisions, such as in the first term of the second line of equation (2.105), the previous property of separability would not hold anymore.
This should not be mixed up with the angle-action coordinates (θ, J) from inhomogeneous dynamics.
We refer the reader toDehnen (1999) for a detailed discussion. It especially notices that the epicyclic frequencies (Ω φ (J φ ), κ(J φ )) from equations (3.5) and (3.7) do not satisfy the constraint from Schwarz' theorem: ∂Ω φ /∂Jr = ∂κ/∂J φ , and therefore suggests to replace Ω φ by Ω φ +(dκ/dJ φ )Jr. We do not consider such improvements in the upcoming calculations.
As the basis effectively used may be significantly truncated, one could need to regularise the inversion of [I-M] to avoid Gibbs rigging. This was not needed in the numerical applications presented here.
Boltzmann's DFs of the form F (J) ∝ exp[-βH(J)], when physically reachable, are obvious stationary states of the Balescu-Lenard equation. Let us emphasise that self-gravitating systems cannot in the strict sense reach statistical equilibrium, as entropy is not bounded from above(Padmanabhan, 1990;[START_REF] Chavanis | Dynamics and thermodynamics of systems with long range interactions[END_REF]. Indeed, for a self-gravitating system, it only takes two particles to satisfy the conservation of energy (by bringing them arbitrarily close to each other) and another two to satisfy the conservation of angular momentum (by sending one of them arbitrarily far from the cluster). Lynden-Bell &[START_REF] Lynden-Bell | [END_REF] have shown that, when given the opportunity, waves within the system will reshuffle orbits so that mass flows inwards and angular momentum outwards, which leads to an increase in entropy.
3 Similar dynamical phase transitions have been observed in the long-range interacting HMF (Hamiltonian Mean Field) toy model(Campa et al., 2008). During the slow collisional evolution, finite-N effects get the system's DF to change. In some situations, the system may then become (dynamically) unstable and undergoes a rapid phase transition from a homogeneous phase to an inhomogeneous one. This transition can be monitored by the magnetisation (see figure1inCampa et al. (2008)), which is an order parameter playing a role similar to Σ 2 here.
(J 1 ) . (6.68)
Chapter 3
Razor-thin discs
The work presented in this chapter is based on Fouvry et al. (2015d); Fouvry & Pichon (2015); Fouvry et al. (2015a,b).
Chapter 6
Quasi-Keplerian systems
The work presented in this chapter is based on Fouvry et al. (2016d).
Appendix
2.A Derivation of the BBGKY hierarchy
In this Appendix, let us briefly recover the fundamental equations of the BBGKY hierarchy. This decomposition is at the heart of the derivation of the Balescu-Lenard equation presented in Heyvaerts (2010). Such a derivation with similar notations is presented in Fouvry et al. (2016a). Let us consider an isolated system made of N identical particles of individual mass µ = M tot /N , where M tot is the total mass of the system. The individual dynamics of these particles is exactly described by Hamilton's equation which read
Appendix
4.A Kalnajs 2D basis
Let us detail the 2D basis introduced in Kalnajs (1976) and used in section 4.3 to compute the diffusion flux. A similar rewriting of the basis normalisations can also be found in Earn & Sellwood (1995). This basis depends on two parameters, namely k Ka ∈ N and a scale radius R Ka > 0. In order to shorten the notations in the upcoming expressions, let us write r for the dimensionless quantity r/R Ka . As introduced in equations (4.5) and (4.6), the 2D basis elements depend on two indices: the azimuthal number ≥ 0 and the radial number n ≥ 0. The radial component of the potential elements is given by
α Ka (k Ka , , n, i, j) r 2i+2j , (4.51)
while the radial component of the density elements reads
β Ka (k Ka , , n, j) (1-r 2 ) j . (4.52)
In equations (4.51) and (4.52), we introduced the coefficients P(k, , n) and S(k, , n) as
In equations (4.51) and (4.52), we also introduced the coefficients α Ka and β Ka as
where the two previous expressions relied on the rising Pochhammer symbol [a] i defined as
4.B Calculation of ℵ
In this Appendix, we briefly detail how the analytical function ℵ, introduced in equation (4.21) to compute the response matrix, may be estimated. In order to ease the effective numerical implementation of
Appendix
5.A Antisymmetric basis
In section 5.3, we restricted ourselves to the construction of the symmetric thick WKB basis elements.
One can proceed similarly for the antisymmetric ones. Assuming ψ z (-z) = -ψ z (z), the ansatz from equation (5.12) immediately imposes D = -A and C = -B, while the continuity conditions from equation (5.13) become
(5.99)
Similarly to equation (5.14), we obtain the antisymmetric quantisation relation
which is illustrated in figure 5.3.2. One can also note that the antisymmetric elements also follow the typical step distance ∆k z obtained in equation (5.17). Following equation (5.18), the full expression of the antisymmetric elements is given by
(5.101)
and kr,R0,n] (R, φ, z) Θ z h .
(5.102)
Similarly to equation (5.20), the amplitude of the antisymmetric elements is given by (5.103) where, in analogy with equation (5.21), β n is a numerical prefactor reading
(5.104)
As illustrated in figure 5.3.2, let us note that the antisymmetric quantisation relation (5.100) imposes for the antisymmetric vertical frequency to satisfy k 1 z > π/(2h), and in this domain, one has 1.3 ≤ β n ≤ 1.5. Similarly to equation (5.22), the Fourier transformed antisymmetric basis elements are given by
(5.105)
5.B A diagonal response matrix
In this Appendix, let us detail why we may assume, as in equation (5.23), that the disc's response matrix is diagonal in the thickened WKB limit. Let us first note that because the symmetric (resp. antisymmetric) Fourier transformed basis elements from equation (5.22) (resp. (5.105)) involve a δ even mz (resp. δ odd mz ),
5.C.2 The collisional case
Let us now follow the same approach for the collisional diffusion. We will especially show how one should estimate the system's susceptibility coefficients in the case where the disc is too thin to rely on the continuous expression from equation (5.67), and that this approach allows for the recovery of the razor-thin results previously obtained in section 3.6. As already noted in the previous section, let us recall that in the razor-thin limit, for which h → 0, the quantised vertical frequencies k n z are such that k n z → +∞, except for the fundamental symmetric frequency k 1 z,s . In the expression (5.66) of the dressed susceptibility coefficients, let us also note the presence of a prefactor 1/h, so that in the razor-thin limit, one has to study the behaviour of a term of the form
(5.133)
In the razor-thin limit, because all the other terms appearing in equation ( 5.66) are bounded, one therefore gets
(5.134)
In addition, in the razor-thin limit, the sum on n p appearing in equation ( 5.66) may also be limited to the only fundamental term n p = 1. Moreover, in order to have non-vanishing susceptibility coefficients, as already justified in the collisionless case, only symmetric diffusion coefficients associated with m z 1 = 0 and J 1 z = 0 will not vanish in the razor-thin limit. Finally, let us recall that in the razor-thin limit, one has lim thin λ p = λ thin and lim thin α 1 = 1. Thanks to these restrictions, in the razor-thin limit, the symmetric susceptibility coefficients from equation (5.66) become
where 1/D thin m1,m1 stands for the razor-thin WKB susceptibility coefficients obtained in equation (3.80). In order to recover the razor-thin WKB Balescu-Lenard diffusion flux, let us now consider the expression (5.69) of the thickened WKB drift coefficients and study their behaviour in the razor-thin limit. Let us first rewrite the thick quasi-isothermal DF from equation (5.5) as
where we wrote F thin for the razor-thin quasi-isothermal DF from equation (3.10). In order to illustrate the gist of this calculation, let us now focus only the remaining dependences w.r.t. J 2 z in equation (5.69). This corresponds to an expression of the form
where we relied on the formula from equation (3.42), as well as on the fact that in the razor-thin limit
Using equation (5.137) into the general expression (5.69) of the drift coefficients, one gets (5.138) where one has to restrict oneself to m z 1 = 0 and J 1 z = 0. Following the same approach, the razor-thin limit of the collisional diffusion coefficients from equation (5.70) is straightforward to compute and reads
(5.139)
This concludes our calculations, as we note that the two razor-thin limits obtained in equations (5.138) and (5.139) are in full agreement with the razor-thin results previously obtained in equations (3.83) and (3.84).
degenerate one and multi-component Keplerian Balescu-Lenard equations, whose important properties will be discussed. Finally, in section 6.6, we present some applications of this new degenerate collisional formalism, respectively to razor-thin axisymmetric discs, 3D spherical clusters, and to understand the suppression of resonant relaxation as stars move closer to the central BH, a phenomenon coined the Schwarzschild barrier.
The associated BBGKY hierarchy
Let us consider a set of N stars of individual mass µ, orbiting a central BH of mass M • . We assume the system to be quasi-Keplerian so that defining the total stellar mass M = µN , one has
We place ourselves within an inertial frame and denote as X • the position of the BH and X i the position of the i th star. The total Hamiltonian of the system reads
in which we introduced the canonical momenta as P • = M • Ẋ• and P i = µ Ẋi . We also introduced as U (|X|) the binary interaction potential, i.e. U (|X| = -G/|X|) in the gravitational context. Let us now detail the various interaction terms appearing in equation ( 6.2). The first two terms correspond to the kinetic energy of the BH and the stars. The third term corresponds to the Keplerian potential of the BH, while the fourth term captures the pairwise interactions among stars. Finally, the last term of equation (6.2) accounts for the relativistic correction forces occurring in the vicinity of the BH, such as the Schwarzschild and Lense-Thirring precessions, as detailed in Appendix 6.A. Let us emphasise the normalisation prefactor µM of these relativistic corrections, which was introduced for later convenience.
One can note that equation (6.2) does not contain any additional external potential contributions. As such contributions may offset the system and introduce non-trivial inertial effects, they were not accounted for to clarify the presentation (see item III of section 6.4 for a discussion of how such external contributions may also drive the system's secular dynamics). The Hamiltonian from equation (6.2) is therefore the direct equivalent, in the context of quasi-Keplerian systems, of the Hamiltonian introduced in equation (2.36) when deriving the non-degenerate inhomogeneous Balescu-Lenard equation. Following the method from Appendix 2.A, our aim is now to derive an appropriate BBGKY hierarchy for the Hamiltonian from equation (6.2), to get a better grasp of how finite-N effects may source the long-term evolution of quasi-Keplerian systems. To do so, let us first rewrite the Hamiltonian from equation (6.2) as N decoupled Kepler Hamiltonians plus some perturbations. Such dynamical problems dominated by one central body are extensively studied in the context of planetary dynamics. We follow Duncan et al. (1998) to perform a canonical change of coordinates to a new set of coordinates, the democratic heliocentric coordinates. Let us define the new coordinates (x • , x 1 , ..., x N ) as
In equation ( 6.3), we introduced the total mass of the system M tot = M • +M , and one should pay attention to the fact that this differs from the definition of M tot used in the previous sections. These new coordinates are such that x • corresponds to the position of the system's centre of mass, while x i gives the location of the i th star w.r.t. the BH. These relations can easily be inverted as
Following Duncan et al. (1998), the associated canonical momenta (p • , p 1 , ..., p N ) are given by
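For reference, the standard democratic heliocentric change of variables of Duncan et al. (1998), written here with the present notation (a reconstruction of the usual transformation, to be compared with equations (6.3)-(6.5)), reads
\begin{align}
x_{\bullet} &= \frac{1}{M_{\mathrm{tot}}} \Big( M_{\bullet}\, X_{\bullet} + \mu \sum_{i=1}^{N} X_{i} \Big) , &
x_{i} &= X_{i} - X_{\bullet} , \\
p_{\bullet} &= P_{\bullet} + \sum_{i=1}^{N} P_{i} , &
p_{i} &= P_{i} - \frac{\mu}{M_{\mathrm{tot}}} \Big( P_{\bullet} + \sum_{j=1}^{N} P_{j} \Big) ,
\end{align}
so that x_• is the position of the centre of mass, x_i the position of the i-th star w.r.t. the BH, and p_• the total momentum of the system.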
These new canonical coordinates allow us to rewrite the Hamiltonian from equation (6.2) as
In equation (6.6), let us note that the first two terms correspond to N coupled Kepler problems (with relativistic corrections), completed with the presence of the last two additional kinetic terms. The coordinates being canonical, the evolution of the total momentum p_• is given by Hamilton's equation
Without loss of generality, let us therefore assume that p • = 0. The evolution of the system's barycentre is then given by ẋ• = ∂H/∂p • = p • /M tot = 0, so that we may set as well x • = 0. Let us finally introduce the notation v n = p n /µ = ẋn , so that the Hamiltonian from equation (6.6) becomes
Let us emphasise how the Hamiltonian from equation (6.7) is similar to the one considered in equation (2.36) to describe isolated long-range systems. In equation ( 6.7), one should also note the presence of two additional potential contributions due to the central BH and the relativistic corrections. These only affect each particle individually, which makes them easy to deal with. A second difference comes from the additional kinetic terms in equation (6.7) associated with the change of coordinates from equation (6.3). As will be fully justified in section 6.4, we will be in a position to neglect these contributions at the order considered in our kinetic developments.
Starting from the Hamiltonian from equation (6.7), let us now proceed as in section 2.A to derive the associated BBGKY hierarchy. The upcoming calculations being very similar to the ones presented in section 2.A, we mainly emphasise here the important changes in the quasi-Keplerian context. Following the normalisation convention from equation (2.94), one can obtain a statistical description of the system by considering its N -body probability distribution function P N (Γ 1 , ..., Γ N , t), where we introduced the phase coordinates Γ = (x, v). The dynamics of P N is fully given by Liouville's equation (2.95), which reads
Here, the dynamics of individual particles is given by Hamilton's equations µ dx_i/dt = ∂H/∂v_i and µ dv_i/dt = -∂H/∂x_i, for the total Hamiltonian H from equation (6.7). Following equation (2.97) and the associated conventions, let us define the system's reduced PDFs P_n, and subsequently the reduced DFs f_n following equation (2.99) and its normalisations. The generic BBGKY equation (2.100) for f_n becomes, in the quasi-Keplerian context,
In equation (6.9), we introduced the force exerted by particle j on particle i as µF_ij = -µ ∂U_ij/∂x_i, with the shorthand notation U_ij = U(|x_i - x_j|). We also wrote the force exerted by the BH on particle i as
Finally, the force associated with the relativistic corrections on particle i was written as M F ir = -M ∂Φ rel /∂x i . As expected from the presence of the additional kinetic terms in equation (6.7), one can note that equation (6.9) differs in particular from equation (2.100) via two additional kinetic contributions.
In the evolution equation (6.9), in order to isolate the contributions arising from correlations among stars, let us follow equations (2.101) and (2.102) and introduce the cluster representation of the DFs. Here, we are especially interested in the system's 1-body DF as well as in its 2- and 3-body correlation functions g_2 and g_3. Relying on the normalisations obtained in equation (2.103), and because the individual mass of the stars scales like µ ∼ 1/N, one immediately has |f_1| ∼ 1, |g_2| ∼ 1/N, and |g_3| ∼ 1/N^2. Thanks to this cluster decomposition, and starting from equation (6.9), one can write the first two equations of the associated hierarchy.
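For completeness, the cluster representation invoked here is the standard decomposition of equations (2.101) and (2.102), namely
\begin{align}
f_{2}(\Gamma_{1},\Gamma_{2}) &= f_{1}(\Gamma_{1})\, f_{1}(\Gamma_{2}) + g_{2}(\Gamma_{1},\Gamma_{2}) , \\
f_{3}(\Gamma_{1},\Gamma_{2},\Gamma_{3}) &= f_{1}(\Gamma_{1})\, f_{1}(\Gamma_{2})\, f_{1}(\Gamma_{3})
 + f_{1}(\Gamma_{1})\, g_{2}(\Gamma_{2},\Gamma_{3})
 + f_{1}(\Gamma_{2})\, g_{2}(\Gamma_{1},\Gamma_{3})
 + f_{1}(\Gamma_{3})\, g_{2}(\Gamma_{1},\Gamma_{2})
 + g_{3}(\Gamma_{1},\Gamma_{2},\Gamma_{3}) ,
\end{align}
from which, together with the normalisations of equation (2.103), the scalings |g_2| ∼ 1/N and |g_3| ∼ 1/N^2 quoted above follow.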
Appendix
6.A Relativistic precessions
In this Appendix, let us briefly detail the content of the averaged relativistic corrections encompassed by the potential Φ r present in equations (6.45) and (6.46). As we aim for explicit expressions of these corrections, let us use the 3D Delaunay variables from equation (6.19). In addition, we assume for simplicity that the spin of the BH is aligned with the z-direction and introduce its spin parameter 0 ≤ s ≤ 1. We follow [START_REF] Merritt | Astrophysical Black Holes[END_REF] in order to recover explicit expressions for these precession frequencies.
The first relativistic correction is associated with a 1PN effect (i.e. a correction of order 1/c^2), called the Schwarzschild precession. Equation (5.103) in [START_REF] Merritt | Astrophysical Black Holes[END_REF] gives us that during one Keplerian orbit of duration T_Kep = 2π/Ω_Kep = 2π I^3/(G M_•)^2, the slow angle g is modified by an amount
. (6.97)
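For orientation, equation (6.97) corresponds to the classic 1PN apsidal advance per radial period; in textbook form (and hence presumably equivalent to the expression above once rewritten with the Delaunay actions), it reads
\begin{equation}
\Delta g^{\mathrm{1PN}}_{\mathrm{rel}} \,=\, \frac{6\pi\, G M_{\bullet}}{c^{2}\, a\, (1-e^{2})} \,=\, \frac{6\pi\, (G M_{\bullet})^{2}}{c^{2} L^{2}} ,
\end{equation}
with a the semi-major axis, e the eccentricity, and L the (specific) angular momentum.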
This precession corresponds to a precession of the orbit's pericentre, while the orbit remains in its orbital plane. To the change from equation (6.97), one can straightforwardly associate an averaged precession frequency ġ_rel^1PN = Δg_rel^1PN / T_Kep. For the spin-induced (1.5PN) correction, we recall that we assume that the BH's spin is aligned with the z-direction, and we also introduced the orbit's inclination i such that L_z = L cos(i). One can then straightforwardly associate a precession frequency ġ_rel^1.5PN, given in equation (6.102). Such a Hamiltonian also induces relativistic precessions w.r.t. the second slow angle h associated with the slow action L_z; we do not detail here how these precessions are indeed correctly described by the Hamiltonian H_rel^1.5PN. | 598,546 | [
"749029"
] | [
"164"
] |
01480158 | en | [
"shs",
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480158/file/978-3-642-36818-9_14_Chapter.pdf | Yusuke Tsuruta
email: [email protected]
Mayumi Takaya
Akihiro Yamamura
email: [email protected]
CAPTCHA Suitable for Smartphones
Keywords: CAPTCHA, Smartphones, Touchscreens, Embodied knowledge
Introduction
A CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is one of the reverse Turing tests ( [7]) that distinguish an access from a computer program such as a crawler from an access from human beings using the difference between humans' shape recognition ability and the machine recognitions ( [START_REF] Ahn | Telling Humans and Computers Apart Automatically[END_REF][START_REF] Ahn | reCAPTCHA: Human-Based Character Recognition via Web Security Measures[END_REF]). The computer program might access to network services to acquire a large amount of accounts of the web mail service aiming at an illegal network utilization such as sending spam mails or carrying out APT attacks. A CAPTCHA can be applied to a web security technique preventing such illegal accesses to network service. When facing the existing CAPTCHA, a human recognizes a word, maybe a nonsense sequence of characters, in the image on the display and is required to respond by typing the word through the keyboard (Fig. 1). The character image is distorted in some fashion, and computer programs cannot recognize easily the characters. For example, an OCR program cannot recognize the character, whereas human beings can do it without difficulty. CAPTCHAs have been analyzed and several methods based on different principles have been proposed ( [START_REF] Elson | Asirra: a CAPTCHA that Exploits Interest-Aligned Manual Image Categorization[END_REF]).
Smartphones play an important role in today's information-communication society, and the development of cloud computing has promoted their spread and influenced how the Internet is used. Accesses from smartphones to Internet services are increasing rapidly, and many web sites have begun to support smartphone users. From the point of view of the human-computer interface, there are many differences between smartphones and earlier computer models: on smartphones, data is input by hand through the touchscreen, and the display is comparatively small. A CAPTCHA login is requested when a user accesses Internet services through a smartphone exactly as it is for accesses from desktop PCs. When we use a smartphone and face a CAPTCHA, both the challenge image and the virtual keyboard are displayed, and we have to type in the word shown in the image. However, the virtual keyboard occupies almost half of the display, so the CAPTCHA image must be small. To solve this problem, a new image-based CAPTCHA is proposed in [START_REF] Gossweiler | What's up CAPTCHA?: a CAPTCHA Based on Image Orientation[END_REF]. In this paper, we propose yet another CAPTCHA suitable for smartphones using embodied knowledge of human beings. Our approach is different from [START_REF] Gossweiler | What's up CAPTCHA?: a CAPTCHA Based on Image Orientation[END_REF].
Embodied knowledge is the control of the muscle acquired by practice and the motor learning of the brain such as the skill remembered in childhood. Realizing embodied knowledge by a computer is one of the challenging problems in artificial intelligence research. The proposed technique is to decide whether or not the response to the challenge is created by human beings or computer programs by checking the existence of embodied knowledge. One-stroke sketch is taken up as an ingredient in the embodied knowledge of our proposal. Human-computer interaction through a touchscreen that is one of the features of smartphones is suitable for faithfully acquiring one-stroke sketch data. One-stroke sketch input data with humans' finger is characterized as a continuous locus resulted by the human hand's physicality and realized as a series of coordinates on the display. The entire character image is not drawn at the same time but it is drawn continuously on the curve along the shape of the character little by little following the tracks of the tip of a finger according to the operation of the arm and the hand.
It is also necessary to give the data a continuous ordering at the pixel level in order for a computer program to compose legitimate input data. Therefore, the proposed CAPTCHA is based not only on the hardness of image recognition but also on embodied knowledge, which is an important theme in artificial intelligence.
The Proposed Technique
Basic Idea
As a standard authentication protocol, a CAPTCHA server sends a challenge and the user has to respond to it in a correct way. In the case of the proposed CAPTCHA, the server sends a challenge image that includes a character (or a symbol), and then the user is requested to trace the character by a finger tip and sends back the data representing a one-stroke sketch as a response to the challenge. The smartphone interprets the date inputted as an ordered series of coordinates in which the order is given as the time series. The server receives the ordered series of coordinates and check whether or not it is acceptable as a data obtained by a human using implicitly embodied knowledge. If the data received is determined as an output of a computer program then the access is rejected.
The touchscreen is partitioned into a grid of small zones. When the user traces the character included in the image on the display with a finger, the coordinates of the points touched are acquired through the drag operation on the touchscreen. The server determines whether or not the series of coordinates is acceptable by checking that the locus is correct and that the data input is continuous, in addition to the correctness of the starting point and the terminal point.
For instance, suppose the image of the character "J" is displayed as a challenge in Fig. 2. The correct response is obtained by dragging the finger along the shape of "J" on a touchscreen of a smartphone. Coordinates of the input data, that is, the series of coordinates of the locus are checked if the first coordinate is included in the small area 4, and if the following coordinates are in the small area 9 and so on. If the series of ordered coordinates is nearly in the order of the small areas 4, 9, 14, 19, 24, 29, 28, and 27, it is accepted (Fig. 2). If the coordinates is in the order of the small areas 2, 8, 14, 19, and 20, then the data is rejected. We shall explain the proposed technique in detail in the expanded version of the paper.
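The acceptance check just described can be made concrete as follows. The snippet below is a minimal sketch only, assuming a 5 × 6 grid of 35 × 35-pixel zones numbered row by row and a simple "expected zones must appear in order" tolerance rule; the grid layout, function names and tolerance rule are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the server-side acceptance check described above.
# The grid layout (5 columns x 6 rows), the expected zone path for "J",
# and the tolerance rule are illustrative assumptions.

GRID_COLS, GRID_ROWS = 5, 6          # zones numbered 0..29, row by row
ZONE_W, ZONE_H = 35, 35              # zone size in pixels

def zone_of(x, y):
    """Map a touch coordinate to its zone index."""
    return (y // ZONE_H) * GRID_COLS + (x // ZONE_W)

def accept(trace, expected):
    """trace: ordered list of (x, y) touch points; expected: required zone sequence."""
    zones = [zone_of(x, y) for (x, y) in trace]
    # compress consecutive repetitions (the finger stays in a zone for many samples)
    compressed = [z for i, z in enumerate(zones) if i == 0 or z != zones[i - 1]]
    if not compressed:
        return False
    # starting point and terminal point must be correct
    if compressed[0] != expected[0] or compressed[-1] != expected[-1]:
        return False
    # the expected zones must appear in the given order (small detours tolerated)
    it = iter(compressed)
    return all(any(z == e for z in it) for e in expected)

# Example: the "J"-shaped path of Fig. 2, with zones relabelled for this sketch.
expected_J = [4, 9, 14, 19, 24, 29, 28, 27]
```

A trace whose compressed zone sequence visits 4, 9, 14, 19, 24, 29, 28, 27 in that order would be accepted, while one visiting 2, 8, 14, 19, 20 would be rejected, matching the example above.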
Security
The main objective of CAPTCHAs is to prevent computer programs from accessing to network services for evil purposes. Therefore, the attacker is a computer program disguising as a human being and trying to obtain a legitimate authority to access. Then the security of a CAPTCHA technology is evaluated by the intractability for computer programs to obtain the access permit ( [START_REF] Ahn | CAPTCHA: Using Hard AI Problems for Security[END_REF][START_REF] Mori | Recognizing Objects in Adversarial Clutter : Breaking a Visual CATCHA[END_REF]). We analyze conceivable attacks against the proposal CAPTCHA.
Suppose that the image displayed as a challenge is monochrome, and the character is drawn in the white ground in black. In this case, the coordinate data of the area where the character is drawn can be acquired accurately by examining RGB of the challenge image. Then a computer program should enumerate a series of coordinates at which black RGB is appointed and give order to these coordinates according to the correct writing, that is, following one-stroke sketch. For example, if a human write "J" then the input data trace the small areas 4, 9, 14, 19, 24, 29, 28, and 27 like in Fig. 2. The number of the coordinates should be nearly same as the standard input by human beings to disguise. In general, a computer program has no information on one-stroke sketch, which is considered as an embodied knowledge of human beings. Each human being has learned such an embodied knowledge from their childhood. A computer program should pick one position from the area of a coordinates with black RGB as the starting point and also as the terminal and then compose a series of coordinates with black RGB that connects the starting and terminal points in a correct order. It is impossible to execute this task if there is no information on the stroke order. If many responses are permitted to the same challenge image, the brute force attack becomes possible in principle. However, the brute force attack can be avoided by permitting only one response to each challenge. Moreover, it is realistic to put the limitation on the challenge frequency. Now assume that an attack program has the database concerning characters and the correct order of writing. If a series of coordinates can be correctly obtained, then information on the correct order of writing might be able to be obtained from the database. We note that it costs a lot to make such a database for attack against the CAPTCHA and so this already has some deterrent effect. In addition, the challenge is not necessarily based on a character or a symbol. An arbitrary curve can be used for a challenge instead of a character and then making the database is impossible in principle. We shall discuss this issues in the future work. We may execute transformations on the shapes and colors of the character to perplex computer programs. If the transformation processing is a continuous transformation, this occurs no trouble for human beings and so such a transformation is allowed. It seems difficult for computer programs to respond correctly (Fig. 5). If the challenge is a (not necessarily monochrome) color image, the attacker's program has to carry out an edge detection and specify the character. Using the existing CAPTCHA techniques such as adding the distortion to the character, the attacker's program has the difficulty to detect the character. Moreover, not only adding the distortion transformation but also camouflaging the background with the dazzle paint makes the attacker's program hard to detect the character. Therefore, the security of the proposed CAPTCHA is at least the existing CAPTCHAs because their techniques can be employed to our CAPTCHA as well. In addition, the method requiring the user to input more than one stroke traces is effective to improve the security level. The security level can be adjusted according to the system requirement. To understand the security of the proposed CAPTCHA well, we should examine human embodied knowledge from the standpoint of the cognitive psychology.
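To illustrate the asymmetry exploited here, the small sketch below shows what an attacking program can easily obtain from a monochrome challenge — the unordered set of black pixels — and comments on what it cannot obtain, namely the writing order. Pillow is assumed to be available; the file path and threshold are placeholders.

```python
# Extracting the black-pixel area of a monochrome challenge is easy, but it
# yields an *unordered* set of points; nothing in it encodes the stroke order.

from PIL import Image

def black_pixels(path, threshold=64):
    """Return the set of (x, y) coordinates whose grey value is below threshold."""
    img = Image.open(path).convert("L")
    w, h = img.size
    return {(x, y) for y in range(h) for x in range(w) if img.getpixel((x, y)) < threshold}

# A forged response, however, must be an *ordered* series of coordinates that
# follows the writing order of the character; recovering that order is exactly
# the embodied knowledge the scheme relies on.
```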
Comparison with the Existing CAPTCHA
We discuss the usefulness of the proposed CAPTCHA compared with the existing techniques, provided that the user is accessing the service with a smartphone. Note that the screen size of smartphones is about 3.5-5 inches. When using smartphones, both a CAPTCHA image and a virtual keyboard are displayed (Fig. 6), and the size of the CAPTCHA image is almost half of the screen. It is very inconvenient for most users to respond to a CAPTCHA challenge with this limited-size image and the virtual keyboard. One has to type more than once to input a character when using a virtual keyboard. For example, when typing the lowercase letter "c", one has to press the button for "c" three times (Fig. 7). When typing the uppercase letter "C", one needs additional operations to change from the "lowercase mode" to the "uppercase mode". Therefore, the total number of operations becomes enormous if the words are arbitrarily generated using lowercase letters, uppercase letters and figures. Moreover, a wrong character may be input by an unintentional typing mistake. For this reason, some existing CAPTCHAs use only figures (0, 1, 2, . . . , 9) without using alphabets to improve the user's convenience. Note that if uppercase and lowercase letters are allowed in addition to the figures, 62 (= 10 + 52) characters can be used; restricting to figures therefore deteriorates the security, since for a challenge word consisting of n characters there are only 10^n cases compared with 62^n cases.
When using the proposed CAPTCHA, the entire display is used for showing the challenge image, and the input is comparatively easy (Fig. 8); no additional operations such as changing modes are required.
Line Trace Attack
It may seem possible to use the line trace program, which is often used in a robot, to trace the black coordinate area of the challenge image for attacking the CAPTCHA. For this attack, the line trace program has to trace on the black area from the starting point of the character to the terminal point in order to compose the response data. The attack using a line trace program seems the most plausible attack as of now. It is necessary for a line trace program to find the starting point to begin the tracing, however, it seems intractable to find the starting point because the line trace program checks the local area and determines the next action and the starting point is usually given as the input to the program by a human being. A human being looks at the image, comprehends the character and finds the starting point using the embodied knowledge. On the other hand, choosing a starting point is intractable for a line trace program. If a human takes part in the attack, the attacker consists of not only a program but a human, and so this approach is excluded as an attack against the proposed CAPTCHA. For an objective of a CAPTCHA is to prevent programs from accessing without human beings assistance. Even if the starting point is obtained in some ways without human assistance, our approach allows challenges such as separated images (Fig. 3) or deformed images (Fig. 4) to perplex the line trace program, which give no trouble to human beings as we see in the subsequent section. Therefore, an attack using line trace programs seems intractable.
Experiment of Attack Using Line Trace Program
In the following experiments of attacking against the proposed CAPTCHA, a line trace program tries to make an acceptable response to the challenge images (Fig. 3, 5, 9, 10, 11, 12, 13, 14). Each experiment is executed provided the starting point is given to the program beforehand by a human. We use a simulation line trace program [8] in this experiment. The line trace program succeeded in making an acceptable response only to the challenge image in the image 4 (Fig. 12), and it failed against the other images (see Table 1). By these experiments, we conclude that countermeasures leading the line trace program to a dead end or putting the pause in the character shape are considerably effective whereas these do not cause any troubles to human beings. The line trace program also fails to trace when the angle formed in the character shape is too big. As we have already mentioned that the line trace program is given the starting point as an input by human beings. However the actual attack must be carried out without human beings' assistance. Therefore, a simple attack using a line trace program does not seem a serious threat against the proposed CAPTCHA. We shall report the detail of the experiments and discuss more about the results in the expanded version of this paper.
Validity of the Proposed CAPTCHA
We examine the validity of the proposed CAPTCHA by experiments; 22 subjects (humans) are asked to respond to several challenge images that represent the symbol "α".
Experiments
We use a handheld computer (Android 3.1, NVIDIA Tegra 2 mobile processor) equipped with a 9.4-inch WXGA liquid crystal display with a built-in capacitive (ITO grid) touchscreen as the user machine. The server platform is Windows 7 Professional 64bit with 2048MB memory and an Intel Core i3, and the authentication program is written using the c/c++ compiler MinGW. The size of the challenge images is 1200 × 700 pixels. A response is accepted if the locus passes the prescribed zones in the correct order. The purposes of the experiments are summarized as follows (see Table 2). In experiments 1 and 2, the small zone is set to 70 × 70 pixels and 35 × 35 pixels, respectively, and we investigate the difference between these two cases. In experiment 3, the small zone is set to 35 × 35 pixels and we specify the entry speed and the input position, and investigate the difference between these cases. In experiment 4, we investigate the effect caused by the change of character. In experiment 5, we investigate the case where the response is accepted only when all set coordinates are passed. In experiment 6, we investigate the tolerance for human non-intentional errors.
experiment 1: The instruction "Please trace on the character shape by one stroke" is displayed together with the challenge image Fig. 9. The small zone on the grid is 70 × 70 pixels (Fig. 15).
experiment 2: The instruction "Please trace on the character shape by one stroke" is displayed together with the challenge image Fig. 9. The small zone on the grid is 35 × 35 pixels (Fig. 16).
experiment 3: The instruction "Please trace on the character shape by one stroke within 5 seconds" is displayed together with the challenge image Fig. 9. The small zone on the grid is 35 × 35 pixels (Fig. 16).
experiment 4: The instruction "Please trace on the character shape by one stroke within 5 seconds" is displayed together with the challenge image Fig. 10. The small zone on the grid is 35 × 35 pixels (Fig. 17).
experiment 5: The instruction "Please trace on the character shape by one stroke within 5 seconds" is displayed together with the challenge image Fig. 10. However, the response is accepted only when every small zone from 1 to 40 is passed in order. The small zone on the grid is 35 × 35 pixels (Fig. 17).
experiment 6: The instruction "Please perform the input which is not related to the displayed character" is displayed together with the challenge image Fig. 9. The small zone on the grid is 35 × 35 pixels (Fig. 16).
Table 2. Correspondence Table
                Im1  Im2  Im3  Im4  Im5  Im6
Character α      √    √    √    -    -    √
Character β      -    -    -    √    √    -
35 × 35 pixel    -    √    √    √    √    √
70 × 70 pixel    √    -    -    -    -    -
Result of Experiments
The results of the experiments in Section 4.1 are summarized in Table 3. In the experiment 1, 2, and 3, one subject is rejected because the responding data is in the order corresponding to the alphabet "a". Recall that the image indicates the symbol "α". When one writes "α", the order is different from the alphabet "a" although the shape is similar. The difference of the handwritten input of "α" and "a" is due to the culture and a social background in which the subject has grown up, and this is considered an embodied knowledge. By the results of experiments 1 and 2, we can conclude that if the zone is bigger, then higher acceptance rate is achieved, on the other hand, if the zone is smaller, then acceptance rate decreased. By the results of experiments 2 and 4, we can conclude that the shape of the character does not affect the acceptance rate and the acceptance rate is stable for any (simple) character. We are convinced that other characters which are written as one-stroke sketch other than "α" can be used in the proposed CAPTCHA as well. By the result of experiment 3, we can conclude that if we allow users to write slowly then the acceptance rate will increase but the transmission data gets larger, which is not desired for network congestions. By the results of experiments 4 and 5, we can conclude that it is necessary to permit width of the order of the inputted coordinates to some degree, that is, we must be tolerant to small errors data, possibly caused by an unintentional errors. More experiments and detailed analysis will be reported in the expanded version of the paper.
Future Work and Summary
We shall discuss several issues on the proposed CAPTCHA for future research.
The response data to a challenge image of the proposed CAPTCHA consists of a series of coordinates. One coordinate consists of a pointer ID, x coordinate, and y coordinate and each data is 9 bytes, where, pointer ID indicates a human action on the touchscreen. The number of input data comprises about 150-200 coordinates in our experiment using the symbol "α". One coordinate is inputted per 0.01-0.02 seconds. Therefore, 1.35-1.8 kilobytes transmission is required for each response for the proposed CAPTCHA. The response data for the existing CAPTCHA is 6-10 characters, and the transmission data is several bytes. Thus, the transmission data is bigger than the existing CAPTCHA. The proposed CAPTCHA is required more computation to check whether or not a response can be accepted than the existing CAPTCHA. We will study how to reduce amount of transmission data and server's information processing.
In our experiment we made the challenge images and the authentication programs by hand. Automatic generation of the challenge image is necessary when we use it in a real system. Because one-stroke sketch is an embodied knowledge, it is important to devise a method to put embodied knowledge in challenge images and to apply continuous transformations to a character in order not to change the writing order. It should be note that there is no difference in the programs on Android OS but the adjustment of coordinates for platform smartphones is necessary. We will discuss these issues in the extended version of the paper.
In this paper, we propose a new CAPTCHA technique utilizing touchscreens to solve an inconvenience caused by the existing CAPTCHAs when they are used on smartphones. We implement the proposed technique and carry out experiments to examine its usefulness and to compare it with the existing techniques. Using a touchscreen, a one-stroke sketch is captured and represented as an ordered series of coordinates. One-stroke sketching can be considered one of the embodied knowledges of human beings, and so computer programs have difficulty understanding it. Our technique is based on embodied knowledge of human beings, and so computer programs cannot respond correctly to a challenge image. It is necessary to study one-stroke sketching as an embodied knowledge of human beings, as well as the validity and security of the proposed technique, further in the context of artificial intelligence and cognitive science.
Fig. 1. A typical CAPTCHA
Fig. 2. Example ("J")
Fig. 3. Separator Image
Fig. 4. Deformed Image
Fig. 5. Distorted Image
Fig. 6. Existing (num)
Fig. 7. Existing (char)
Fig. 8. Proposed Method
Fig. 9. Test Image 1
Fig. 10. Test Image 2
Fig. 11. Test Image 3
Fig. 12. Test Image 4
Fig. 13. Test Image 5
Fig. 14. Test Image 6
Fig. 15. exp 1
Fig. 16. exp 2-3
Fig. 17. exp 4-5
Table 1. Correspondence Table
          Trace Success   Trace Failure
Image1         -                √
Image2         -                √
Image3         -                √
Image4         √                -
Image5         -                √
Image6         -                √
Fig. 3         -                √
Fig. 5         -                √
Table 3. Test Result
                              Test1  Test2  Test3  Test4  Test5  Test6
Number of Subjects (People)     22     22     22     22     22     22
Acceptance Number (Time)        21     20     21     22      9      0
Acceptance Rate (%)           95.5   90.9   95.5    100   40.9      0
| 23,603 | [
"1003058",
"1001267",
"1001268"
] | [
"472230",
"472230",
"472230"
] |
01480168 | en | [
"shs",
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480168/file/978-3-642-36818-9_15_Chapter.pdf | Pinaki Sarkar
email: [email protected]
Aritra Dhar
email: [email protected]
Code Based KPD Scheme With Full Connectivity: Deterministic Merging
Keywords: Key predistribution (KPD), Reed Solomon (RS) codes, Combinatorial Designs, Deterministic Merging Blocks, Connectivity, Security
Key PreDistribution (KPD) is one of the standard key management techniques of distributing the symmetric cryptographic keys among the resource constrained nodes of a Wireless Sensor Network (WSN). To optimize the security and energy in a WSN, the nodes must possess common key(s) between themselves. However there exists KPDs like the Reed Solomon (RS) code based schemes, which lacks this property. The current work proposes a deterministic method of overcoming this hazard by merging exactly two nodes of the said KPD to form blocks. The resultant merged block network is fully connected and comparative results exhibit the improvement achieved over existing schemes. Further analysis reveal that this concept can yield larger networks with small key rings.
Introduction
The increasing necessity of dealing with classified information from hazardous deployment area is enhancing the popularity of Wireless sensor networks (WSN). Such networks typically consists of Key Distribution Server (KDS) or Base Station (BS), identical (low cost) ordinary sensors (or nodes) and at times some special nodes. The BS links the network to the user and sometimes these networks have more than one BS. This along with other flexibilities like lack of fixed infrastructure imply that such networks are Ad Hoc in nature. Each entity constituting a WSN typically consists of a power unit, a processing unit, a storage unit and a wireless transceiver. Capacities of each such unit in any ordinary node is quite limited for any WSN while the KDS is quite powerful. Resource constrained nodes are supposed to gather specific information about the surrounding, process them and communicate to the neighboring nodes, i.e., nodes within their (small) radius of communication. The processed data is relayed up to the KDS which has relatively (large) radius of communication for further analysis before informing the user. In spite of all the weaknesses in the basic building blocks of WSNs, these networks have several military applications like monitoring enemy movements, etc. Besides they are utilized for other scientific purposes like smoke detection, wild fire detection, seismic activity monitoring etc. In all its applications, WSNs once deployed are expected to work unattended for long duration of time while its constituent nodes deals with lot of sensitive data.
Related Works
Most recent applications of WSNs require secure message exchange among the nodes. One ideally likes to apply lightweight symmetric key cryptographic techniques in order to avoid heavy or costly computations within the resources constraints nodes. Such cryptographic techniques demands the communicating parties to possess the same key prior to message exchange. Standard online key exchange techniques involving public parameters or using trusted authorities are generally avoided. Instead, Key PreDistribution (KPD) techniques are preferred. Eschenauer and Gligor [START_REF] Eschenauer | A key-management scheme for distributed sensor networks[END_REF] suggested the pioneering idea of KPD scheme where:
-Preloading of Keys into the sensors prior to deployment.
-Key establishment: this phase consists of • Shared key discovery: establishing shared key(s) among the nodes;
• Path key establishment: establishing path via other node(s) between a given pair of nodes that do not share any common key. Random preloading of keys means that the key rings or key chains are formed randomly. In [START_REF] Eschenauer | A key-management scheme for distributed sensor networks[END_REF], key establishment is done using challenge and response technique. Schemes following similar random preloading and probabilistically establishing strategy are called random KPD schemes. More examples of such schemes are [START_REF] Chan | Random key predistribution schemes for sensor networks[END_REF][START_REF] Liu | Establishing pairwise keys in distributed sensor networks[END_REF]. C ¸amptepe and Yener [2] presents an excellent survey of such schemes. On the other hand, there exists KPD schemes based on deterministic approach, involving Mathematical tools. C ¸amptepe and Yener [START_REF] Dhar | Combinatorial design of key distribution mechanisms for wireless sensor networks[END_REF] were first to propose a deterministic KPD scheme where keys are preloaded and established using Combinatorial Designs. Following their initial work, numerous deterministic KPD schemes based on combinatorial designs like [START_REF] Lee | A combinatorial approach to key predistribution for distributed sensor networks[END_REF][START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF][START_REF] Sarkar | Secure Connectivity Model in Wireless Sensor Networks Using First Order Reed-Muller Codes[END_REF] have been proposed. There exists hybrid KPDs like [START_REF] Chakrabarti | A key pre-distribution scheme for wireless sensor networks: merging blocks in combinatorial design[END_REF][START_REF] Sarkar | Assured Full Communication by Merging Blocks Randomly in Wireless Sensor Networks based on Key Predistribution Scheme using RS code[END_REF] that use both random and deterministic techniques. There exists some interesting designs like one in [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF] using Reed Solomon (RS) code which can be viewed as a combinatorial design. One may refer to [START_REF] Lee | A combinatorial approach to key predistribution for distributed sensor networks[END_REF] for discussions about various combinatorial designs necessary for this paper. For the sake of completeness, an outline on combinatorial designs is presented in Section 3. The said section establishes that the RS code based KPD [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF] can be treated as a Group-Divisible Design (GDD) or Transversal Design (TD).
Contributions in this paper
The original scheme of [START_REF] Lee | A combinatorial approach to key predistribution for distributed sensor networks[END_REF] lacks full communication among nodes as a pair of nodes may not share a common key. This involves an intermediate node which increase the communication overhead. As a remedial strategy, Chakrabarti et al. [START_REF] Chakrabarti | A key pre-distribution scheme for wireless sensor networks: merging blocks in combinatorial design[END_REF] first suggested the idea of random merging of nodes to form blocks. Their strategy was to randomly merge 'z' nodes of [START_REF] Lee | A combinatorial approach to key predistribution for distributed sensor networks[END_REF] to form blocks having bigger key rings. The resultant network thus possessed ' N /z ' blocks where N is the number of nodes in the original KPD [START_REF] Lee | A combinatorial approach to key predistribution for distributed sensor networks[END_REF]. However full communication was still not guaranteed and many aspects of their design, like the basic concept of merging, choice of nodes while merging, the heuristic in [3, Section 4] etc.have not been explained. A similar random merging concept was proposed by Sarkar and Dhar [START_REF] Sarkar | Assured Full Communication by Merging Blocks Randomly in Wireless Sensor Networks based on Key Predistribution Scheme using RS code[END_REF] for the RS code based KPD [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF]. Full connectivity is guaranteed for z ≥ 4 (see [START_REF] Chakrabarti | A key pre-distribution scheme for wireless sensor networks: merging blocks in combinatorial design[END_REF]Theorem 1]) The solution to be presented here is entirely different and performs more efficiently. Motivated by the merging concept, the present authors thought of proposing a deterministic merging technique. Here exactly two (2) nodes of the KPD [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF] are merged. Theorem 2 of Section 4 establishes that merging two nodes of the KPD [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF] in a certain fashion results in full communication among the newly formed (merged) blocks. The resiliency and scalability are also comparable.
Basics of Combinatorial Design
This section briefly describes some basic notion of combinatorial design necessary for understanding Ruj and Roy [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF] scheme. Group-Divisible Design (GDD) of type g u , block size k: is a triplet (X , H , A ):
1. X is a finite set with |X| = gu.
2. H is a partition of X into u parts, that is, H = {H_1, H_2, H_3, . . . , H_u} with X = H_1 ∪ H_2 ∪ H_3 ∪ . . . ∪ H_u, |H_i| = g ∀ 1 ≤ i ≤ u and H_i ∩ H_j = φ ∀ 1 ≤ i ≠ j ≤ u.
3. A is the collection of blocks of X, each of size k, having the following properties:
|H ∩ A| ≤ 1 ∀ H ∈ H, ∀ A ∈ A,
and every pair of elements of X lying in distinct parts of H occurs together in exactly λ blocks of A (λ = 1 for the designs considered in this paper).
Let (X, A) be a (v, b, r, k)-configuration. Recall from [6] that (X, A) is called a µ-common intersection design (µ-CID) if |{A_α ∈ A : A_i ∩ A_α ≠ φ and A_j ∩ A_α ≠ φ}| ≥ µ whenever A_i ∩ A_j = φ. For the sake of consistency, in case A_i ∩ A_j ≠ φ ∀ i, j, one defines µ = ∞.
Maximal CID: For any given set of parametric values of (v, b, r, k), such that a configuration can be obtained with them, one would like to construct a configuration with maximum possible µ. This maximal value of µ is denoted µ * . Theorem 14. of [6, Section IV] establishes T D(k, n) designs are k(k -1) * -CID.
KPD Using Reed Solomon (RS) codes
This section is devoted to the description of KPD scheme proposed by Ruj and Roy in [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF]. The scheme uses Reed Solomon (RS) codes to predistribute and establish the communication keys among the sensor nodes. The construction of RS codes has been given in [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF]. Salient features are being sketched below: To construct (n, q l , d, q) RS code having alphabet in the finite field F q (q: prime or prime power > 2), consider the following set of polynomials over F q :
P = {g(y) : g(y) ∈ F_q[y], deg(g(y)) ≤ l - 1}.
Thus the number of elements in P denoted by |P| = q l . Let F * q = {α 1 , α 2 , α 3 , . . . , α q-1 } be the set of non-zero elements of F q . For each polynomial p m (y) ∈ P, Define cp m = (p m (α 1 ), p m (α 2 ), . . . , p m (α q-1 )) to be the m th codeword of length n = q -1. Let C = {cp m : p m (y) ∈ P} be the collection of all such code words formed out of the polynomials over F q . This results in a RS code. Since the number of code-words is q l , the system can support up to q l nodes. Now the polynomial p m and the corresponding codeword cp m are given to the m th node. For the codeword cp m = (a 1 , a 2 , . . . , a n ), one assigns the keys having keyidentifiers (a 1 , α 1 ), (a 2 , α 2 ), . . . , (a n , α n ) where a j = p m (α j ), j = 1, 2, . . . , n to the m th node. The node id of the m th node is obtained by evaluating the polynomial p m at x = q and taking only the numerical value. That is the m th node has the node id p m (q) (without going modulo 'p'). A WSN with 16 nodes based on RS code parameters q = 4, n = 3 and l = 2 is presented in Table 3.1. Here '2' means the polynomial 'x' and '3' means the polynomial 'x + 1' modulo the irreducible polynomial x 2 +x + 1 over F 2 [x] which are commonly referred to as x and x + 1. Thus 0, 1, 2, 3 forms the finite field F 4 . The nodes' polynomials i+ jy ∈ F 4 [y] for 0 ≤ i, j ≤ 3 are given in 2 nd row of Table 3.1. By evaluating these polynomials at non-zero points, the keys (p m (b), b), 0 = b ∈ F q , 0 ≤ i, j ≤ 3 have been derived and tabulated in the corresponding columns. Table 3.1 constructed by similar computations is being presented in a slightly different manner from Ruj and Roy [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF]. This GDD form of presentation helps one realize the similarity of the RS code based KPD of [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF] with the T D(q -1, q) with parameters q -1, q of [START_REF] Lee | A combinatorial approach to key predistribution for distributed sensor networks[END_REF]. Though in Theorem 6 of [6, Section III], constructions T D(k, p), 2 ≤ k ≤ q, p a prime is given, it can be extended to T D(k, q), q = p r . Since the construction of the KPDs T D(k, p) of [START_REF] Lee | A combinatorial approach to key predistribution for distributed sensor networks[END_REF] utilized the field properties of F p , one can extend it to F q = F p r . Extending the base field from F p to F q = F p r and following similar constructions as given in [6, Section III] yields T D(k, q), q = p r , r ∈ N. Now taking k = q -1 results in T D(q -1, q). However it is important to state that in T D(q -1, q) design is different from RS code. In T D(q -1, q) design, the evaluation is done for y = 0, 1, . . . , q -2 while in RS code based design, it is done at non-zero points, y = 1, 2, 3, . . . , q -1. N 0 to N 15 denotes the nodes with ids ranging from 0 to 15 whose polynomials are represented in the column immediately below it. Key ids contained in a node are presented in the columns below each node. [START_REF] Chakrabarti | A key pre-distribution scheme for wireless sensor networks: merging blocks in combinatorial design[END_REF]. One notes that the scheme under consideration is a (q -1)(q -2)-CID as the number of keys per node = k = q -1 (see Section 2). Thus for nodes not sharing any key, there are enough nodes which can play the role of the intermediate node in multi-hop (2-hop) process. 
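The key-ring construction just described is easy to reproduce computationally. The sketch below is a minimal illustration only, for a prime q (prime powers such as q = 4 require full finite-field arithmetic, which is omitted here); the function and variable names are our own and not taken from the paper.

```python
# Minimal sketch of the RS-code key predistribution for l = 2 and a prime q.

def key_ring(i, j, q):
    """Node with polynomial p(y) = i + j*y over F_q (q prime).

    Returns the node id p(q) = i + j*q (no reduction modulo q) and the key
    identifiers {(p(b), b) : b = 1, ..., q-1}.
    """
    node_id = i + j * q
    keys = {((i + j * b) % q, b) for b in range(1, q)}
    return node_id, keys

def shared_keys(ring_a, ring_b):
    """Common key identifiers of two nodes (at most one when l = 2)."""
    return ring_a & ring_b

# Example for q = 5: nodes with polynomials 2 + y (id 7) and 1 + 2y (id 11).
_, r1 = key_ring(2, 1, 5)
_, r2 = key_ring(1, 2, 5)
print(shared_keys(r1, r2))   # {(3, 1)}: the two lines intersect at y = 1
```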
This encourages the search for a deterministic merging design with exactly two nodes per block yielding full communication among the blocks.
V/C denotes the distinct Variety Classes H_1, H_2, H_3, where H_d = {(i, d) : 0 ≤ i ≤ 3} for d = 1, 2, 3.
Weakness of the above RS code based KPD
Apart from other possible weaknesses, the RS code based KPD presented in [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF] lacks full communication among nodes (by the discussions above). So multi-hop communications occur among the nodes. Here multi-hop means some third party node other than the sender and receiver decrypts and encrypts the ciphertext. Other than increasing the cost of communication, this enhances the chances of adversarial attacks on such communication. Thus the energy efficiency as well as the security of message exchange of the entire network might be grossly affected.
Remedy: Deterministic Merging of Nodes
Lack of direct communication for any arbitrarily chosen pair of nodes of the KPD [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF] can be tackled by merging certain number of nodes. For this, observe that Table 3.1 indicates the network having 16 nodes can be partitioned into 4 classes each containing 4 nodes on the basis of their key sharing. These classes are separated by double column partitioning lines after each set of 4 nodes: N 0 , N 1 , N 2 , N 3 ;
Nodes N 0 N 1 N 2 N 3 N 4 N 5 N 6 N 7 N 8 N 9 N 10 N 11 N 12 N 13 N 14 N 15 V /C 0y + 0 1 2 3 y y + 1 y + 2 y + 3 2y 2y + 1 2y + 2 2y + 3 3y 3y + 1 3y + 2 3y + 3 H 1 (0, 1) (1, 1) (2, 1) (3, 1) (1, 1) (0, 1) (3, 1) (2, 1) (2, 1) (3, 1) (0, 1) (1, 1) (3, 1) (2, 1) (1, 1) (0, 1) H 2 (0, 2) (1, 2) (2, 2) (3, 2) (2, 2) (3, 2) (0, 2) (1, 2) (3, 2) (2, 2) (1, 2) (0, 2) (1, 2) (0, 2) (3, 2) (2, 2) H 3 (0, 3) (1, 3) (2, 3) (3, 3) (3, 3) (2, 3) (1, 3) (0, 3) (1, 3) (0, 3) (3, 3) (2, 3) (2, 3) (3, 3) (0, 3) (1, 3)
Table 1. Polynomials, Node and Key identifiers for q 2 = 16 nodes. 1(a). Any pair of nodes, other than the ones lying in the same row or column shares exactly 1 key as the equations: ( jj )y = (ii) has unique solution over non-zero points of F 4 (since q = 4) that is with 0 ≤ i = i , j = j ≤ 3. Merging of nodes in pairs for the case q = 4 can be now achieved as indicated by the slanted line in Figure 1(a). Basically the strategy is to merge the nodes: N (i, j) and N (i⊕1, j⊕1) where ⊕: addition in F 4 (addition modulo 2), for j = 0, 2.
A natural deterministic merging strategy of 2 nodes can now be visualized for q = 2 r =⇒ N = q 2 = 2 2r . Figures 1(b) demonstrate the strategy. Nodes occurring at the ends of a slanted line are merged. Idea is to break up the network into pairs of rows, i.e. {1, 2}; {3, 4}, . . . , {2 r-1 , 2 r } and apply similar process. Before explaining the general odd case, it is useful to visualize the case when q = 5, i.e. a network with q 2 = 5 2 = 25 nodes as presented in Figure 2(a). Rest of the discussion is similar to that of the case q = 4 = 2 2 except for the merging of last three rows. As usual the arrows indicate the merging strategy. The strategy indicated in Figure 2(a) is same for the first and second rows while differs in the last three rows when compared to Figure 1(a). This is because a similar strategy like previous cases would imply one row is left out. Of course for q = p = 5 all arithmetic operations are 'modulo 5' arithmetic operations as done in F 5 .
For the general odd prime power case q = p^r (p an odd prime), any pair of nodes among {N_(i,j) : 0 ≤ j ≤ q-1} for every 0 ≤ i ≤ q-1 (i.e., nodes with ids i + jq occurring in the i-th row; q fixed) do not share any common key. The same holds for the nodes {N_(m,j) : 0 ≤ m ≤ q-1} for every 0 ≤ j ≤ q-1 with ids m + jq (occurring in the j-th column of Figure 2(b)). For any other pair of nodes, equating the corresponding linear polynomials yields exactly one (1) common shared key between them (since l = 2). The general case of q^2 nodes (l = 2) can be visualized as a q × q 'square-grid' as in Figure 2(b).
(b) MB Design 1: General even case for q = 2^r, r ∈ N =⇒ N = 2^2r nodes
Fig. 1. Deterministic Merging Blocks Strategy for all even prime power cases q = 2^r, r ∈ N
The idea is to look at two rows at a time and form blocks containing one node of each, except for the last 3 rows. For fixed 0 ≤ i ≤ q-2, merge the nodes N_(i,j) and N_(i⊕1, j⊕1) (⊕: addition modulo q), for 0 ≤ j ≤ q-3 (with increments of 2), ∀ q > 4. The last node of every odd row is merged with the first node of the row above it. Since q is odd, taking two rows at a time would have left one row out, so the top three rows are combined separately. Note that, had the merging been done randomly, one may have ended up with merged pairs like N_(0,0) ∪ N_(0,1) and N_(0,2) ∪ N_(0,3) (for q ≥ 4) which do not share any common key and thus would not be able to communicate even after merging.
Assured full communication: theoretical results
Equating the polynomials of the 4 nodes constituting any 2 merged blocks yields:
Theorem 1. The proposed Deterministic Merging Block Strategy where 2 nodes of the RS code based KPD [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF] are clubbed to form the merged blocks results in full communication among the merged blocks.
Proof. Consider any two arbitrary blocks A and B. It is evident from the construction that at least node from block A will never lie in the horizontal line as well as the vertical line of either of the two nodes the other block B (refer to Figures 1(a), 1(b) 2(a) and 2(b) for q = 4, 2 r , 5 and for general case). This implies that these two nodes will have a common key as discussed in Section 4. Hence the blocks A and B can communicate through this key. As the two blocks were arbitrarily chosen, one is assured of full communication in the new network consisting of blocks constructed by merging two nodes in the manner explained above (see Figures 1(a), 1(b) 2(a) and 2(b) for q = 4, 2 r , 5 and for general case respectively).
Theorem 2. The resulting Merged Block Design has a minimum of one to a maximum of four common keys between any two given pair of (merged) blocks.
Proof. Any two nodes can share at most one key in the original RS code based KPD in [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF]. So there are at most 4 keys common between two blocks. This situation occurs only when each of the four node pairs formed across the two blocks shares a key and the four shared keys are all distinct.
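Both theorems can also be checked by brute force for a concrete instance. The sketch below is a generic verifier only: it does not reproduce the block construction of Figures 1-2, which the caller must supply as a partition of the nodes into pairs, and it assumes a prime q; all names are our own.

```python
# Brute-force check of Theorems 1 and 2 for a given merged design.

from itertools import combinations

def key_ring(i, j, q):
    """Keys of the node with polynomial p(y) = i + j*y over F_q (q prime)."""
    return {((i + j * b) % q, b) for b in range(1, q)}

def check_merged_design(blocks, q):
    """Return (min_shared, max_shared) over all pairs of merged blocks.

    Theorem 1 corresponds to min_shared >= 1, Theorem 2 to max_shared <= 4.
    Each block is a pair of nodes, each node given as a tuple (i, j).
    """
    rings = {node: key_ring(*node, q) for block in blocks for node in block}
    shared_counts = []
    for blk_a, blk_b in combinations(blocks, 2):
        keys_a = rings[blk_a[0]] | rings[blk_a[1]]
        keys_b = rings[blk_b[0]] | rings[blk_b[1]]
        shared_counts.append(len(keys_a & keys_b))
    return min(shared_counts), max(shared_counts)
```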
(a) MB Design 1: Special case for q = p = 5 =⇒ N = 25 = 5 2 nodes.
(b) MB Design 1: General odd prime / prime power q = p r , r ∈ N =⇒ q 2 nodes.
Fig. 2. Deterministic Merging Blocks Strategy for all odd prime power cases q = 2 r , r ∈ N Remark 1. Some important features of the merging block design are as follows:
- Merging does not mean that the two nodes combine physically to become one; they are merely treated as one unit.
- The resultant merged block design has full communication among the blocks through at least one common key between any given pair of merged blocks, ensuring full communication in the resultant network.
- Full communication cannot be assured when nodes are merged randomly to form larger blocks. Probably this is the main reason why the authors of [START_REF] Chakrabarti | A key pre-distribution scheme for wireless sensor networks: merging blocks in combinatorial design[END_REF] could not justify several issues in their random merging model.
- The current authors feel that it is mandatory to have inter-nodal communication. Any communication received by either of the two constituent nodes of a block can be passed on to the other node and hence make the other node connected. As such, while proposing the merged block design, this consideration was given importance.
- Therefore the total number of links in the merged scheme is the same as that of the original RS code based KPD of Ruj and Roy in [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF]. This fact will be recalled later while discussing the resiliency of the system.
Network Parameters
Some important aspects of the combined scheme like communication probability, computational overhead, resiliency and scalability will be presented in this section.
Communication Probability; Overhead; Network Scalability
Communication Probability or Connectivity is defined to be the ratio of number of links existing in the network with respect to the total number of possible links. A link is said to exists between two nodes they share at least one common key. So that the merged network has N mb = 1200 nodes. Clearly the present Merging Block Design provides much better communication than the original RS code based KPD [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF] and the random models of [START_REF] Chakrabarti | A key pre-distribution scheme for wireless sensor networks: merging blocks in combinatorial design[END_REF][START_REF] Sarkar | Assured Full Communication by Merging Blocks Randomly in Wireless Sensor Networks based on Key Predistribution Scheme using RS code[END_REF] (both almost same) even when no. of keys per nodes, (k) decreases. For the RS code based KPD [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF], taking the key ring of node (i, j) as {(p i+ jq (α c ), α c ) : h ≤ α c ≤ q -1}, 1 ≤ h ≤ q -2 yields decreasing key rings for increasing h. The present Merging Block Design over [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF] possesses full connectivity ∀ k ≥ q+7 4 . This follows from the observations that any 2 block share a min. of 4k -6 there are q keys in the network and the pigeon-hole-principle. Present design connectivity = 1 ∀ k ≥ 14 as q = 49. Communication overhead measures the computational complexity of both the key establishment and the message exchange protocols. During key establishment polynomials of (l -1) th degree are equated which involves computing inverses over F q . Since quadratic, cubic and quartic equations can be solved in constant time, this design is valid for for l = 2, 3, 4 and 5. However complexity of quadratic, cubic and quartic is quite high, specially for the nodes. Even the resiliency falls drastically with increasing value of l. So practically l = 2 is considered. For message exchange, complexity is same as that of original RS code based KPD [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF]. Of course the complexity depends on the cryptosystem being used. Any cryptosytem meant for embedded systems like AES-128, is applicable.
Resiliency
Before proceeding with further analysis in the remaining part of the paper, some terminology needs to be introduced. The term 'uncompromised node(s)' refers to node(s) that are not compromised/captured. The link between any two uncompromised nodes is said to be disconnected if all their shared key(s) get exposed due to the capture of s nodes. The standard resiliency measure E(s) (recalled below) is considered while analyzing/comparing the scheme with existing KPDs. E(s) measures the ratio of the number of links disconnected due to the capture of s nodes (here blocks) with respect to the total number of links in the original setup. Mathematically: E(s) = (number of links broken due to the capture of s nodes (here blocks)) / (total number of links in the original setup). One can refer to [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF], Section 6.4, for an estimated upper bound of E(s). The construction of the merging blocks clearly indicates that a merged network of size N corresponds to an original KPD of size ≈ 2N, while the capture of s merged blocks is almost equivalent to the capture of 2s nodes of the original. Thus the ratio of links broken, E(s), remains almost the same (≈ the old E(s)), as observed experimentally. Providing accurate theoretical explanations is rather difficult due to the randomness of node capture in both the original and its merged KPDs.
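For concreteness, one way to estimate E(s) experimentally is by Monte Carlo simulation over random block captures. The sketch below is illustrative only: how the `block_keys` mapping is built (e.g. from the RS-code key rings of the two nodes of each block) is left to the caller, and the parameter names are our own.

```python
# Monte Carlo sketch of the resiliency measure E(s) for a merged design.

import random
from itertools import combinations

def estimate_E(block_keys, s, runs=100):
    """block_keys: dict mapping each merged block to the set of keys it holds."""
    blocks = list(block_keys)
    links = [(a, b) for a, b in combinations(blocks, 2)
             if block_keys[a] & block_keys[b]]            # existing links
    ratios = []
    for _ in range(runs):
        captured = set(random.sample(blocks, s))
        exposed = set().union(*(block_keys[b] for b in captured))
        broken = sum(1 for a, b in links
                     if a not in captured and b not in captured
                     and (block_keys[a] & block_keys[b]) <= exposed)
        ratios.append(broken / len(links))
    return sum(ratios) / runs
```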
Simulation and Comparative Results
The results after 100 runs for each data set are tabulated in Table 2. N_RS (= q^2) denotes the number of nodes of the original KPD scheme in [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF], while N_MB denotes the number of blocks in the merged network; clearly N_MB = N_RS/2. Here, p is a prime number and q = p^r is a prime power. Any given pair of nodes has at most 1 key in common as l = 2. Let s_MB and s_RS be the number of blocks and nodes captured in the merged and the original KPD respectively. Then E_RS(s) and E_MB(s) denote the resiliency coefficients in the original and the merged setup respectively. The table compares the simulated values of the ratio of links disconnected, E(s), in the Merging Block model with its original KPD [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF].
Conclusions and Future Works
Deterministic merging of the nodes of the RS code based KPD [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF] has been proposed in this paper; as a result, full communication between nodes is achieved. Though the merging is done for the particular KPD scheme chosen in [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF], the approach can be modified and generalized to other schemes, which enhances the applicability of the concept. To understand why the deterministic strategy is better than its random counterpart, the design is viewed combinatorially: algebraic as well as design-theoretic analysis of the underlying KPD provides the logical basis for the deterministic merging strategy. Remark 1 of Section 4.1 highlights some of the important features of the deterministic merging strategy. One can readily visualize some immediate future research directions. Applying a similar deterministic merging concept to networks based on other KPDs that lack full communication among their nodes may result in interesting work. The reason for preferring such a merging strategy over its random counterpart has been sketched, and the deterministic approach can be generalized to other schemes like [START_REF] Lee | A combinatorial approach to key predistribution for distributed sensor networks[END_REF]. A generic survey of deterministic versus random schemes (like the current scheme versus [START_REF] Chakrabarti | A key pre-distribution scheme for wireless sensor networks: merging blocks in combinatorial design[END_REF][START_REF] Sarkar | Assured Full Communication by Merging Blocks Randomly in Wireless Sensor Networks based on Key Predistribution Scheme using RS code[END_REF]) yielding fully communicating networks can also be a future research topic. The assurance of connectivity with fewer keys (refer to Figure 3(b)) paves a direction towards fully communicating deterministic schemes having high resiliency. A priori, one must look to design schemes having good node support, small key rings, high resilience and scalability; mathematical solutions to such fascinating problems will be interesting. The deterministic property of the merging technique may enable it to be combined with other deterministic techniques like the one proposed in [START_REF] Sarkar | Secure Connectivity Model in Wireless Sensor Networks Using First Order Reed-Muller Codes[END_REF]. This will ensure that the merged design is free of the 'selective node attack' which it still suffers from, as the original KPD did.
Fig. 2. MB Design 1 (particular case q = 4 = 2^2, so N = 16 = 4^2): (a) the square-grid arrangement of nodes; (b) nodes in the same row or column have no key in common, while any other pair of nodes shares exactly one common key. The merging strategy is indicated in Figure 2(b) by slanted arrows as before; nodes occurring at the ends of a slanted line are merged.
Figure 3 presents a comparison between the present scheme and existing schemes in terms of E(s) and connectivity. The graph in Figure 3(a) plots the number of nodes in the network on the x-axis against E(s) on the y-axis; it compares the resiliency of the Merged Block network with other existing schemes. The graph in Figure 3(b) plots the number of keys per node, k, on the x-axis against connectivity on the y-axis. The original KPD [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF] is assumed to have q = 49, i.e., N = 2401 nodes.
Fig. 3. Graphs showing the comparison between the MB design over the RS code based KPD and existing schemes with regard to connectivity and resiliency (E(s)) values.
Given any pair of varieties x ∈ H_i, y ∈ H_j with i ≠ j, there exists a unique A ∈ A such that x, y ∈ A. Transversal Designs (TD(k, n)) are a special type of GDD with g = n and u = k, while the parameter k remains the same. These can be shown to form an (nk, n^2, n, k)-configuration. One is referred to [6, Section III] for the definition of a configuration and other related concepts.
Common Intersection Design (CID):
Table adapted from Section 3.1 of Ruj and Roy [START_REF] Ruj | Key Predistribution Schemes Using Codes in Wireless Sensor Networks[END_REF]. Alternative presentation: Group-Divisible Design (GDD) form. The 16 nodes fall into the classes N_0, N_1, N_2, N_3; N_4, N_5, N_6, N_7; N_8, N_9, N_10, N_11; and N_12, N_13, N_14, N_15. Every class has the property that the coefficient of y in the respective polynomials is the same. Equating each other's polynomials i + jy, with 0 ≤ i ≤ 3 for some fixed j = 0, 1, 2 or 3, results in no common solution, hence no common key. For example, for j = 1 and i = 0, 1, 2, 3, the corresponding 4 polynomials 0 + y, 1 + y, 2 + y, 3 + y do not have any common solution; hence there are no shared keys for the corresponding nodes. The other case in which a pair of nodes shares no common key is when their constant terms are the same, since only non-zero values of y are allowed. This gives rise to an alternative type of partition: N_0, N_4, N_8, N_12; N_1, N_5, N_9, N_13; N_2, N_6, N_10, N_14; and N_3, N_7, N_11, N_15. This motivates one to visualize the key sharing of the 16 nodes, N_0 to N_15, as a 'square grid', as presented in the corresponding figure.
"1003059"
] | [
"367774",
"487125"
] |
01480179 | en | [
"shs",
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480179/file/978-3-642-36818-9_25_Chapter.pdf | Yansheng Feng
email: [email protected]
Hua Ma
email: [email protected]
Xiaofeng Chen
email: [email protected]
Hui Zhu
email: [email protected]
Secure and Verifiable Outsourcing of Sequence Comparisons ⋆
Keywords: Outsourcing, Garbled Circuit, Verifiable Computation
With the advent of cloud computing, secure outsourcing techniques for sequence comparisons are becoming increasingly valuable, especially for clients with limited resources. One of the most critical functionalities in data outsourcing is verifiability. However, there are very few secure outsourcing schemes for sequence comparisons in which the clients can verify whether the servers honestly execute the protocol or not. In this paper, we tackle the problem by integrating the technique of garbled circuits with homomorphic encryption. Compared to existing schemes, our proposed solution enables clients to efficiently detect the dishonesty of servers. In particular, our construction re-garbles the circuit only for malformed responses and hence is very efficient. Besides, we also present a formal analysis of our proposed construction.
Introduction
Several trends are contributing to a growing desire to outsource computing from a device with constrained resources to a powerful computation server. This requirement becomes more urgent in the coming era of cloud computing. In particular, cloud services make this desirable for clients who are unwilling or unable to do the work themselves. However attractive these new services are, concerns about security have prevented clients from storing their private data in the cloud. Aiming to address this problem, secure outsourcing techniques have been developed.
Among the existing secure outsourcing techniques, one specific type has received great attention, namely sequence comparisons. Atallah et al. first proposed this concept in [START_REF] Atallah | Secure and private sequence comparisons[END_REF], which achieves the nice property of allowing resource-constrained devices to enjoy the resources of powerful remote servers without revealing their private inputs and outputs. Subsequent works [START_REF] Atallah | Secure outsourcing of sequence comparisons[END_REF][START_REF] Atallah | Secure outsourcing of sequence comparisons[END_REF] are devoted to obtaining efficiency improvements of such protocols. Techniques for securely computing the edit distance have been studied in [START_REF] Huang | Faster secure two-party computation using garbled circuits[END_REF][START_REF] Jha | Toward practical privacy for genomic computation[END_REF], which partition the overall computation into multiple sub-circuits to achieve the same goal. Besides, [START_REF] Kolesnikov | Improved garbled circuit building blocks and applications to auctions and computing minima[END_REF][START_REF] Szajda | Toward a practical data privacy scheme for a distributed implementation of the Smith-Waterman genome sequence comparison algorithm[END_REF] introduce the Smith-Waterman sequence comparison algorithm for enhancing data privacy. Blanton et al. [START_REF] Blanton | Secure outsourcing of DNA searching via finite automata[END_REF] utilize finite automata and develop techniques for secure outsourcing of oblivious evaluation of finite automata without leaking any information. In particular, considering highly sensitive individual DNA information, it is indispensable to process these data privately; for example, [START_REF] Wang | A New Effcient Veriable Fuzzy Keyword Search Scheme[END_REF] encrypts sensitive data before outsourcing. Furthermore, when the length of the sequences is large, it is natural to relieve clients of these laborious tasks by outsourcing the related computation to servers.
Recently, Blanton et al. [START_REF] Blanton | Secure and Efficient Outsourcing of Sequence Comparisons[END_REF] proposed a non-interactive protocol for sequence comparisons in which a client obtains the edit path by transferring the computation to two servers. However, the scheme [START_REF] Blanton | Secure and Efficient Outsourcing of Sequence Comparisons[END_REF] is impractical to some extent in that it does not achieve verifiability. Also, the scheme [START_REF] Blanton | Secure and Efficient Outsourcing of Sequence Comparisons[END_REF] has to garble each circuit anew whenever a sub-circuit is processed and hence is not efficient. As far as we know, there are few available techniques for secure outsourcing of sequence comparisons that provide verifiability and enjoy desirable efficiency simultaneously. Among them, [START_REF] Blanton | Secure and Verifiable Outsourcing of Large-Scale Biometric Computations[END_REF] also achieves verifiability, by providing fake input labels and checking whether the output from the servers matches a precomputed value, but it verifies only with some probability, using commitments and a Merkle hash tree, which differs from our scheme.
Our Contribution. In this paper, we propose a construction for secure and verifiable outsourcing of sequence comparisons, which enables clients to efficiently detect the dishonesty of servers by integrating the technique of garbled circuits with homomorphic encryption. In particular, our solution is efficient in that it re-garbles the circuit only for malformed responses returned by the servers. Formal analysis shows that the proposed construction achieves all the desired security notions.
Preliminaries
Edit Distance
We briefly review the basic algorithm for the edit distance [START_REF] Atallah | Secure and private sequence comparisons[END_REF]. Let M(i, j), for 0 ≤ i ≤ m, 0 ≤ j ≤ n, be the minimum cost of transforming the prefix of λ of length j into the prefix of µ of length i. So, M(0, 0) = 0, M(0, j) = Σ_{k=1}^{j} D(λ_k) for 1 ≤ j ≤ n, and M(i, 0) = Σ_{k=1}^{i} I(µ_k) for 1 ≤ i ≤ m. Furthermore, we can recurse to obtain the results:
M(i, j) = min{ M(i−1, j−1) + S(λ_j, µ_i), M(i−1, j) + I(µ_i), M(i, j−1) + D(λ_j) }
where S(λ_j, µ_i) denotes the cost of replacing λ_j with µ_i, D(λ_j) the cost of deleting λ_j, and I(µ_i) the cost of inserting µ_i.
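A direct implementation of this recurrence is given below as an illustration; the unit insertion/deletion costs and the 0/1 substitution cost are placeholder assumptions, whereas in the protocol the tables I(·), D(·) and S(·,·) are supplied by the client.

```python
def edit_matrix(lam, mu, I=lambda c: 1, D=lambda c: 1,
                S=lambda x, y: 0 if x == y else 1):
    """M[i][j] = minimum cost of transforming lam[:j] into mu[:i]."""
    m, n = len(mu), len(lam)
    M = [[0] * (n + 1) for _ in range(m + 1)]
    for j in range(1, n + 1):                 # M(0, j): delete all of lam[:j]
        M[0][j] = M[0][j - 1] + D(lam[j - 1])
    for i in range(1, m + 1):                 # M(i, 0): insert all of mu[:i]
        M[i][0] = M[i - 1][0] + I(mu[i - 1])
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            M[i][j] = min(M[i - 1][j - 1] + S(lam[j - 1], mu[i - 1]),
                          M[i - 1][j] + I(mu[i - 1]),
                          M[i][j - 1] + D(lam[j - 1]))
    return M

print(edit_matrix("kitten", "sitting")[-1][-1])  # 3
```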
Grid Graph Measures
The dependencies among the entries of the M-matrix induce an (m + 1) × (n + 1) grid directed acyclic graph (DAG). It is apparent that the string editing problem can be regarded as a shortest-path problem on this DAG. An l_1 × l_2 grid DAG is a directed acyclic graph whose vertices are the l_1 l_2 points of an l_1 × l_2 grid, and such that the only edges from point (i, j) are to points (i, j + 1), (i + 1, j), and (i + 1, j + 1). Figure 1 shows an example of such a DAG; point (i, j) is at the i-th row from the top and the j-th column from the left. As special cases of the above definition, the meanings of M(0, 0) and M(m, n) are easy to obtain. The edit scripts that transform λ into µ are in one-to-one correspondence with the edit paths of G that start at the source (which represents M(0, 0)) and end at the sink (which represents M(m, n)).
Yao's Garbled Circuit
We summarize Yao's protocol for two-party computation [START_REF] Yao | How to generate and exchange secrets[END_REF], which was initiated by the millionaires' problem, as also considered in [START_REF] Vivek | A Special Purpose Signature Scheme for Secure Computation of Traffic in a Distributed Network[END_REF]. For more details, we refer to Lindell and Pinkas' description [START_REF] Lindell | A proof of Yao's protocol for secure two-party computation[END_REF].
We assume two parties, A and B, wish to compute a function F over their private inputs a and b. First, A converts F into a circuit C. A garbles the circuit and obtains G(C), and sends it to B, along with garbled input G(a). A and B then engage in a series of OTs so that B obtains G(b) with A learning nothing about b. B then applies the garbled circuit with two garbled inputs to obtain a garbled version of the output: G(F (a, b)). A then maps this into the actual output.
In more detail, A constructs the garbled circuit as follows. For each wire w in the circuit, A chooses two random values k_w^0, k_w^1 ←R {0, 1}^λ to represent 0/1 on that wire. Once A has determined every wire value, it then forms a garbled version of each gate g (see Fig. 2). Let g be a gate with input wires w_a and w_b and output wire w_z. Then the garbled version G(g) consists of simply four ciphertexts:
γ_{ij} = E_{k_a^i}(E_{k_b^j}(k_z^{g(i,j)})), where i ∈ {0, 1}, j ∈ {0, 1} (1)
Given k_a^α, k_b^β, and the four values γ_{ij}, it is possible to compute k_z^{g(α,β)}. B then transmits k_z^{g(α,β)} to A, who can map it back to a 0/1 value and hence obtain the output of the function.
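The sketch below shows one common hash-based way to realize the four ciphertexts of Eq. (1), using a short all-zero tag so that the evaluator can recognize the correct row; this instantiation of E is an assumption for illustration and not necessarily the encryption scheme intended by the authors.

```python
import hashlib, os, random

LBL = 16           # label length in bytes
TAG = b"\x00" * 4  # redundancy used to recognise the correct row on evaluation

def pad(ka, kb):
    return hashlib.sha256(ka + kb).digest()[:LBL + len(TAG)]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_gate(g, wire_a, wire_b, wire_z):
    """wire_* = (label_for_0, label_for_1); returns the four ciphertexts of Eq. (1)."""
    rows = [xor(wire_z[g(i, j)] + TAG, pad(wire_a[i], wire_b[j]))
            for i in (0, 1) for j in (0, 1)]
    random.shuffle(rows)        # hide which row corresponds to which input pair
    return rows

def eval_gate(rows, ka, kb):
    for ct in rows:
        plain = xor(ct, pad(ka, kb))
        if plain.endswith(TAG):  # only the matching row yields the zero tag
            return plain[:LBL]
    raise ValueError("no row decrypted correctly")

wa, wb, wz = [(os.urandom(LBL), os.urandom(LBL)) for _ in range(3)]
table = garble_gate(lambda i, j: i & j, wa, wb, wz)   # an AND gate
assert eval_gate(table, wa[1], wb[1]) == wz[1]        # AND(1, 1) -> label for 1
```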
Security Model and Definitions
In our work, we transfer the problem of edit distance computation by a client C for strings µ_1, ..., µ_m and λ_1, ..., λ_n to two computational servers S_1 and S_2. The security requirement is that neither S_1 nor S_2 learns anything about the client's inputs or outputs except the lengths of the input strings and the alphabet size. More formally, we assume that S_1 and S_2 are semi-honest and non-colluding: they follow the computation but might attempt to learn extra information. We further assume that S_2 is more powerful than S_1, so in the later proof we only show that S_2's attack fails; since the more powerful adversary cannot attack the client successfully, neither can the weaker one. Security in this case is guaranteed if both S_1's and S_2's views can be simulated by a simulator with no access to either inputs or outputs other than the basic parameters, and such a simulation is indistinguishable from the real protocol. We introduce several definitions in the following: Definition 1. We say that a private encryption scheme (E, D) is Yao-secure if the following properties are satisfied:
– Indistinguishable encryptions for multiple messages
– Elusive range
– Efficiently verifiable range

Definition 2. (Correctness). A verifiable computation scheme VC_edit is correct if, for any choice of function F, the key generation algorithm produces keys (PK, SK) ← Keygen(F, λ) such that, for all x ∈ Domain(F), if ProbGen_SK(x) → σ_x and Compute_PK(σ_x) → σ_y, then y = F(x) ← Verify_SK(σ_y).
this position is expressed as M(m/2, θ(m/2)). Then, we discard the cells to the top right and lower left of M(m/2, θ(m/2)) and recursively apply this algorithm to the remainder of the matrix. However, this exposes M(m/2, θ(m/2)) to the servers. While protecting θ(m/2), we form two sub-problems of size 1/2 and 1/4 of the original [START_REF] Blanton | Secure and Efficient Outsourcing of Sequence Comparisons[END_REF].
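For readers unfamiliar with this divide-and-conquer step, the following sketch computes the crossing column θ(m/2) in the clear, with one forward pass over the top half of M and one backward pass over the bottom half; unit costs are assumed for brevity, and in the actual protocol this computation is of course carried out inside the garbled circuit.

```python
def dp_row(lam, mu):
    """R[j] = cost of transforming lam[:j] into the whole of mu (unit costs)."""
    prev = list(range(len(lam) + 1))              # M(0, j) = j deletions
    for mc in mu:
        curr = [prev[0] + 1]                      # insert the new mu character
        for j, lc in enumerate(lam, 1):
            curr.append(min(prev[j - 1] + (0 if lc == mc else 1),  # substitute
                            prev[j] + 1,                            # insert mu_i
                            curr[j - 1] + 1))                       # delete lam_j
        prev = curr
    return prev

def crossing_column(lam, mu):
    """theta(m/2): the column where an optimal edit path crosses row m/2."""
    m, n = len(mu), len(lam)
    F = dp_row(lam, mu[:m // 2])                  # F[j] = M(m/2, j)
    B = dp_row(lam[::-1], mu[m // 2:][::-1])      # suffix costs, computed reversed
    return min(range(n + 1), key=lambda j: F[j] + B[n - j])

print(crossing_column("kitten", "sitting"))
# -> 3; the minimum of F[j] + B[n - j] equals the full edit distance (3)
```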
We now describe how this computation can be securely outsourced. First, the client produces the garbled circuit's random labels corresponding to its inputs (two labels per input bit). Then it sends all the label pairs to S_1 for forming the circuit, and one label per wire, corresponding to its actual input value, to S_2. Once the circuit is formed, S_2 evaluates it using these labels. The novelty of this approach is that the scheme is feasible without OT protocols.
An advanced non-interactive protocol has been proposed in [START_REF] Blanton | Secure and Efficient Outsourcing of Sequence Comparisons[END_REF]. Based on that work, we achieve multi-round input verification by integrating the garbled circuit with fully homomorphic encryption [START_REF] Gentry | Fully homomorphic encryption using ideal lattices[END_REF]. Specifically, the client encrypts the labels under a fully homomorphic encryption public key. A new public key is generated for each round's input in order to prevent reuse. The server can then evaluate over the labels and send the results to the client, who decrypts them and obtains F(x). This scheme can reuse the garbled circuit until the client receives a malformed response, which is more efficient than generating a new circuit every time.
The Proposed Construction
The following is a secure and verifiable protocol for computing the edit path. Here C stands for the client, S_1 and S_2 for the two servers, and m, n for the lengths of the two private strings. The overall protocol is as follows.
Input: C has two private sequences as well as their cost tables. C must generate min(m, n) key pairs of the fully homomorphic encryption, one for each of the sub-circuits, and, using the generated keys, C encrypts its private input label values and delivers them into the circuit.
Output: C obtains the edit path; S_1 and S_2 learn nothing.
Protocol VC_edit:
1. Pre-computing: C generates two random labels (l_0^t, l_1^t) for each bit of its input µ_1, ..., µ_m, λ_1, ..., λ_n, I(µ_i) for each i ∈ [1, m], D(λ_j) and S(λ_j, ·) for each j ∈ [1, n], and I(·), D(·), S(·, ·), where t ∈ [1, S_Σ(m + n) + S_C(m + 2n + nσ)]. C also generates min(m, n) key pairs of the fully homomorphic encryption and runs the encryptions σ_x ← Encrypt(PK_E^i, l_{b_t}^t), i ∈ [1, min(m, n)], against all the sub-circuits.
2. Circuit construction: C sends all pairs (l_0^t, l_1^t) to S_1. S_1 uses the pairs of labels it received from C as the input labels in constructing a garbled circuit that produces θ(m/2) together with the strings for the two sub-problems. Let the pairs of output labels that correspond to θ(m/2) be denoted by (l̃_0^t, l̃_1^t), where t ∈ [1, ⌈log(n)⌉]; the pairs of output labels for the first sub-problem by (l'_0^t, l'_1^t), where t ∈ [1, S_Σ(m/2 + n) + S_C(m/2 + n + nσ)]; and the pairs of output labels for the second sub-problem by (l''_0^t, l''_1^t), where t ∈ [1, S_Σ(m/2 + n/2) + S_C(m + n + nσ)/2]. S_1 stores these three types of labels.
3. Keygen: S_1 transfers (l̃_0^t, l̃_1^t) to the client as the private key SK. In the pre-computing above, C has already generated the key pairs of the fully homomorphic encryption, (PK_E, SK_E) ← Keygen(λ). C stores SK and SK_E and exposes PK_E to S_2.
4. Evaluation: C transmits all the labels σ_x from the pre-computing to S_2 for storage and evaluation. S_2 then uses PK_E to compute Encrypt(PK_E, γ_i) and runs Evaluate(C, Encrypt(PK_E, γ_i), Encrypt(PK_E, l_{b_t}^t)); by the homomorphic property, this yields σ_y ← Encrypt(PK_E, l_{b_t}^t), which is stored at S_2 for later use.
5. Sub-circuit evaluation: S_1 and S_2 now engage in the second round of the computation, where for the first circuit S_1 uses the pairs (l'_0^t, l'_1^t) as the input wire labels, as well as the pairs of input wire labels from C that correspond to the cost tables I(·), D(·), and S(·, ·). After the circuit is formed, S_1 sends it to S_2, who uses the encrypted labels stored before to evaluate this circuit. S_2 saves each result value of the evaluation as σ_y^(i).
6. Verification: When S_1 and S_2 reach the bottom of the recursion, S_2 sends all the σ_y^(i) from each circuit to C. C uses the stored SK_E to decrypt σ_y^(i) and obtain l_{b_t}^t. It then converts the output labels into the output of the function (e.g., F(x) = θ(a), a = 1, ..., m) by using SK, from which it can reconstruct the edit path.
Analysis of the Proposed Construction
Robust Verifiability
In practice, the view of the client will change after the evaluation. How should we deal with the situation in which the client receives a malformed response? One option is to ask the server to run the computation again, but this repeated request informs the server that its response was malformed, and the server might then generate forgeries. Alternatively, the client could abort after detecting a malformed response, but this would hinder the execution of our protocol. We consider the issue as follows:
There is indeed an attack if the client does not abort. Specifically, the adversary can learn the input labels one bit at a time by an XOR operation [START_REF] Gennaro | Non-interactive Verifiable Computing Outsourcing Computation to Untrusted Workers[END_REF], and can therefore cheat. When a malformed response reaches the client, the protocol VC_edit should continue running instead of terminating; we therefore require the client to re-garble the circuit every time a malformed response is received.
Security Analysis
Theorem 1. Let E be a Yao-secure symmetric encryption scheme and let the homomorphic encryption scheme be semantically secure. Then the protocol VC_edit is a secure and verifiable computation scheme.
Proof: Since E is a Yao-secure symmetric encryption scheme, VC_Yao is a one-time secure verifiable computation scheme (proof of Theorem 3 in [START_REF] Gennaro | Non-interactive Verifiable Computing Outsourcing Computation to Untrusted Workers[END_REF]). Our method transforms (via a simulation) a successful adversary against the verifiable computation scheme VC_edit into an attacker for the one-time secure protocol VC_Yao. For the sake of contradiction, assume that there is an adversary A such that Adv^Verif_A(VC_edit, F, λ) ≥ ε, where ε is non-negligible in λ. We use A to build another adversary A′ which queries the ProbGen oracle only once, and for which Adv^Verif_{A′}(VC_Yao, F, λ) ≥ ε′, where ε′ is close to ε. Once we prove Lemma 1 below, we have our contradiction and the proof of Theorem 1 is complete.
Lemma 1. Adv^Verif_{A′}(VC_Yao, F, λ) ≥ ε′, where ε′ is non-negligible in λ.
Proof: This proof proceeds by defining a set of experiments.
H^k_A(VC_edit, F, λ), for k = 0, ..., l − 1: Let l be an upper bound on the number of queries that A makes to its ProbGen oracle, and let i be a random index between 1 and l. In this experiment, we change the way the ProbGen oracle computes its answers. For the j-th query:
– if j ≤ k and j ≠ i: the oracle responds by (1) choosing a random key pair (PK_E^j, SK_E^j) for the homomorphic encryption scheme and (2) encrypting random λ-bit strings under PK_E^j;
– if j > k or j = i: the oracle (1) generates a random key pair (PK_E^i, SK_E^i) for the homomorphic encryption scheme and (2) encrypts σ_x (label by label) under PK_E^i.
We denote Adv^k_A(VC_edit, F, λ) = Prob[H^k_A(VC_edit, F, λ) = 1].
– H^0_A(VC_edit, F, λ) is identical to the experiment Exp^Verif_A[VC_edit, F, λ].
Since the index i is selected at random between 1 and l, we have that
Adv^0_A(VC_edit, F, λ) = Adv^Verif_A(VC_edit, F, λ) / l ≥ ε / l (6)
– H^{l−1}_A(VC_edit, F, λ) equals the simulation conducted by A′ above, so
Adv^{l−1}_A(VC_edit, F, λ) = Adv^Verif_{A′}(VC_Yao, F, λ) (7)
If we prove that H^k_A(VC_edit, F, λ) and H^{k−1}_A(VC_edit, F, λ) are computationally indistinguishable, that is, for every A,
| Adv^k_A[VC_edit, F, λ] − Adv^{k−1}_A[VC_edit, F, λ] | ≤ negli(λ) (8)
then this implies that
Adv^Verif_{A′}(VC_Yao, F, λ) ≥ ε/l − l · negli(λ) (9)
The right-hand side of the inequality is the desired non-negligible ε′.
Remark: Eq.( 8) follows from the security of the homomorphic encryption scheme. The reduction of the security of E with respect to Yao's garbled circuits to the basic security of E is trivial. For more details, please refer to [START_REF] Gennaro | Non-interactive Verifiable Computing Outsourcing Computation to Untrusted Workers[END_REF].
Conclusions
This work treats the problem of secure outsourcing of sequence comparisons by a computationally limited client to two servers; specifically, the client obtains the edit path for transforming a string of some length into another. We achieve this by integrating the techniques of garbled circuits and homomorphic encryption. In the proposed scheme, the client can detect the dishonesty of the servers from the responses they return. In particular, our construction re-garbles the circuit only when a malformed response comes from the servers and hence is efficient. The proposed construction is also proved to be secure in the given security model.
Fig. 1. Example of a 3 × 5 grid DAG
Fig. 2. Circuit's garbled table
This verifiable computation VC can be consulted in [START_REF] Parno | How to Delegate and Verify in Public: Verifiable Computation from Attribute-based Encryption[END_REF].
Acknowledgements
We are grateful to the anonymous referees for their invaluable suggestions. This work is supported by the National Natural Science Foundation of China (Nos. 60970144 and 61272455), the Nature Science Basic Research Plan in Shaanxi Province of China (No. 2011JQ8042), and China 111 Project (No. B08038).
We then describe its correctness by an experiment:
Experiment Exp^Verif_A[VC_edit, F, λ]:
(PK, SK) ← Keygen(F, λ);
For i = 1, ..., l = poly(λ), where poly(·) is a polynomial.
The adversary succeeds if it produces an output that convinces the verification algorithm to accept the wrong output for a given input.
Definition 3. (Security). For a verifiable computation scheme VC_edit, we define the advantage of an adversary A in the experiment above as:
(2)
where negli(·) is a negligible function of its input. The F in the above descriptions is the function that computes θ(·) in our protocol. In the one-time version of the experiment, the adversary can query the oracle ProbGen_SK(·) only once and must cheat on that input; VC_Yao is a special case of VC_1 when the input is single.
Similarly to formulas (2) and (3), we can obtain:
4 Secure and Verifiable Outsourcing of Sequence Comparisons
High-level Description
To obtain the edit path, we can use a recursive solution: in the first round, instead of computing all elements of M, we compute the elements in the "top half" and the "bottom half" of the matrix respectively. Then we calculate each M(m/2, j) and determine the position with the minimum sum from top to bottom. In [START_REF] Blanton | Secure and Efficient Outsourcing of Sequence Comparisons[END_REF]
"1003069",
"1003070",
"993512",
"1003071"
] | [
"469153",
"469153",
"469153",
"469153"
] |
01480183 | en | [
"shs",
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480183/file/978-3-642-36818-9_30_Chapter.pdf | Beibei Wu
Ming Xu
Haiping Zhang
email: [email protected]
Jian Xu
Yizhi Ren
Ning Zheng
A Recovery Approach for SQLite History Recorders from YAFFS2
Keywords: Digital forensics, Android, YAFFS2, SQLite, Recovery
Nowadays, forensics on flash memories has drawn much attention. In this paper, a recovery method for SQLite database history records (i.e., updated and deleted records) from YAFFS2 is proposed. Based on the out-of-place-write strategy that YAFFS2 requires of NAND flash memory, the SQLite history records can be recovered and ordered into a timeline by their timestamps. The experimental results show that the proposed method can recover the updated or deleted records correctly. Our method can help investigators find significant information about user actions on Android smart phones from these history records, even though they seem to have disappeared or been deleted.
Introduction
With the growth of Android smart phones, the need for digital forensics in this area has increased significantly. Owing to its small size and fast running speed, SQLite is widely used in application software that needs to save simple data in a systematic manner, and it is adopted in embedded device software. In Android, a large amount of user data is stored in SQLite databases, such as short messages, call logs, and contacts [START_REF] Quick | Forensic Analysis of the Android File System Yaffs2, 9th Australian Digital Forensics Conference[END_REF]. Considering that data are frequently deleted in order to manage storage space or to update them with the latest data, acquiring information about deleted data is just as important as retrieving undamaged information from the database.
Related work
Research on database record recovery began as early as 1983 with Haerder [START_REF] Haerder | Principles of transaction-oriented database recovery[END_REF]. He suggested that deleted records can be recovered using the transaction file. This method can be applied to traditional databases on PCs when information about the deleted records is included in the transaction file.
A study conducted by Pereira [START_REF] Pereira | Forensic analysis of the Firefox 3 Internet history and recovery of deleted SQLite records[END_REF] attempted to recover deleted records in Mozilla Firefox 3 using the rollback journal file. In that paper, an algorithm to recover deleted SQLite entries based on known internal record structures was proposed, exploiting the exception that the rollback journal is not deleted at the end of each transaction when the database is used in "exclusive locking" mode.
While the objective of deleted data recovery in SQLite follows the line of Pereira's work, Sangjun Jeon [START_REF] Jeon | A Recovery Method of Deleted Record for SQLite Database[END_REF] suggested a tool whose recovery method approaches the actual data files instead of a journal file, which offers improved practical availability. They analyzed the file structure of the SQLite database and proposed a method to recover deleted records from the unallocated area in a page. However, it is hard for this method to recover deleted records because the data remaining in the deleted area are only partial and the length of each field is difficult to estimate. For Android phones that implement wear-leveling through YAFFS2, deleted records can instead be recovered from previous versions of the file.
Recovery Elements of SQLite deleted records
From the "out-of-place-write" strategy of YAFFS2 and atomic commit in SQLite [START_REF]Atomic Commit in SQLite[END_REF],
a deleted SQLite file could be recovered. Therefore, the deleted records can also be restored. In YAFFS2, obsolete chunks can only be turned into free chunks by the process of garbage collection. Whenever one or more obsolete chunks exist within a block, the corresponding data is still recoverable until the respective block gets garbage collected. And from the perspective of the storage mechanism of YAFFS2, it can be concluded that every object header corresponds to a version of file. Once a transaction occurs, there will be an object header and a new version is created. So, the deleted file can be recovered until the respective block gets garbage collected. And versions of each database can be restored as much as possible.
The proposed method
The process framework of the proposed algorithm is shown in Fig. 1.
Fig. 1. The process framework of the proposed algorithm: acquiring the image, pre-processing, recovering SQLite database files, extracting records, and constructing a timeline.
There are two ways to acquire an Android image: the physical and the logical method. The physical method is carried out via JTAG [START_REF] Breeuwsma | Forensic imaging of embedded system using JTAG(boundary-scan)[END_REF], while the logical method is carried out with the "DD" or "NANDdump" instruction after rooting has been performed. In this paper, only the logical method is considered.
The pre-processing sequentially records each chunk's objectID, objectType, chunkID, and chunkType in accordance with the allocation order on chip. All the blocks of a flash chip are sorted by sequence number from the largest to the smallest.
And then these sorted blocks are scanned from the one with the largest sequence number to the one with the smallest, and within a block, its chunks are scanned from the last one to the first.
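The scan order can be pictured with the following sketch, which takes a hypothetical list of blocks (each with its sequence number and its chunks in allocation order) and yields the chunks in the order the pre-processing visits them; the data layout is a placeholder, not the actual mtd image format.

```python
def scan_order(blocks):
    """blocks: list of dicts {'seq': int, 'chunks': [chunk, ...]} in any order.

    Yields chunks newest-first: blocks by descending sequence number,
    and within a block from the last allocated chunk back to the first.
    """
    for block in sorted(blocks, key=lambda b: b['seq'], reverse=True):
        for chunk in reversed(block['chunks']):
            yield chunk

# Hypothetical example: chunk tags as (objectID, chunkID, chunkType) tuples.
blocks = [
    {'seq': 101, 'chunks': [(257, 0, 'header'), (257, 1, 'data')]},
    {'seq': 102, 'chunks': [(257, 0, 'header')]},
]
print(list(scan_order(blocks)))
# [(257, 0, 'header'), (257, 1, 'data'), (257, 0, 'header')]
```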
Here num_chunks is the number of chunks the file occupies, length is the file's length, and size_chunk is the size of a page on the NAND flash chip.
block_offset = bsq[⌊k / N⌋] (2)
chunk_offset = N − (k mod N) (3)
chunk_address = block_offset + chunk_offset (4)
Here k refers to the k-th chunk that was stored, N is the number of chunks in a block, block_offset is the chunk's physical block offset, chunk_offset is the chunk's relative offset within a block, and chunk_address is its physical address.
When scanning the whole chip, all SQLite database files indicated by object headers are recovered. During this process, all files are grouped by filename and then distinguished by their history version number. In the next step, all records are extracted from each recovered, integrated database file and stored into a CSV file, and the integrity of the restored file is verified. When all recoverable history versions of the same database file have been recovered, a timeline of SQLite CRUD operations can be constructed from the timestamps recorded in the object headers. By contrasting the SQLite records extracted from two adjacent versions of a file, the SQL event that led from one version to the other can be inferred. Then, through analysis of the low-level SQL events corresponding to each user action, a user-action timeline can be constructed from the entire timeline of SQL events. Finally, we obtain a global awareness of what the user did and when.
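The inference of SQL events from adjacent recovered versions can be sketched as a simple record diff keyed on the primary key; the data layout below is an illustrative assumption, not the actual mmssms.db schema.

```python
def diff_versions(old, new):
    """old/new: dict mapping primary key -> record (a tuple of column values).

    Returns the inferred SQL events between two adjacent recovered versions.
    """
    events = []
    for key in new.keys() - old.keys():
        events.append(('INSERT', key, new[key]))
    for key in old.keys() - new.keys():
        events.append(('DELETE', key, old[key]))
    for key in new.keys() & old.keys():
        if new[key] != old[key]:
            events.append(('UPDATE', key, old[key], new[key]))
    return events

def build_timeline(versions):
    """versions: list of (timestamp, records_dict), ordered by object-header time."""
    timeline = []
    for (t0, v0), (t1, v1) in zip(versions, versions[1:]):
        for event in diff_versions(v0, v1):
            timeline.append((t1, *event))
    return timeline
```

For instance, an ('INSERT', 17, ...) event followed by an ('UPDATE', 17, ...) event at a later timestamp would correspond to a message being received and then read, as in the mmssms.db case discussed in the experiments below.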
Acknowledgements
This work is supported by the NSF of China under Grant No. 61070212 and 61003195, the Zhejiang Province NSF under Grant No. Y1090114 and LY12F02006, the Zhejiang Province key industrial projects in the priority themes under Grant No. 2010C11050, and the science and technology search planned projects of Zhejiang Province (No. 2012C21040).
Experiments
In this part, an experiment on a public dataset is used to verify the effectiveness of the proposed recovery method in a realistic scenario.
The DFRWS created two scenarios for its 2011 forensics challenge [START_REF] Dfrws | DFRWS-2011-challenge[END_REF]. The images for Scenario 2 were acquired through NANDdump so that the OOB area could also be acquired. The 269 MB file mtd8.dd from Scenario 2 was used in this experiment because it is the user image and contains a large amount of user information. 110 SQLite files of different versions were recovered. Comparing our experimental results with the DFRC team's results, the latest version of every file in our result is identical to the DFRC team's result. In addition, older versions of the SQLite files can be recovered using our method, and user actions can be analyzed using these files.
For example, the recovered file mmssms.db contains the short messages sent and received by the user on this device. In our result, 16 versions of this file were recovered, and a timeline (Fig. 2 and Fig. 3) can then be constructed from these files. In Fig. 3, one can clearly see what the user did and when. For instance, inserting a record with id=17 at 04:43:04 indicates that the user received a short message, and updating that record at 04:43:13 indicates that the user read it. Similarly, other database files can be analyzed as well.
Through a joint analysis of the results from all databases, a complete timeline of user behavior can be obtained. The experiment in this part shows that the proposed method is suitable for real cases and can play an important role in forensic work.
Conclusions and Future work
In this paper, a recovery method for SQLite records based on YAFFS2 is proposed. A timeline of SQLite CRUD operations is then constructed by utilizing the timestamps recorded in object headers, which may supply significant information about user behavior to forensic investigations. The experimental results show the efficiency of the proposed method. This paper shows that recovering files from flash chips is practical; however, as the ext4 file system is widely used in Android phones and Linux systems, more forensic research on ext4 is needed from a digital-forensics perspective. Thus, techniques for recovering data records from SQLite on the ext4 file system will be our future research direction.
"1003077",
"1003078",
"773258"
] | [
"128325",
"128325",
"128325",
"128325",
"128325",
"128325"
] |
01480185 | en | [
"shs",
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480185/file/978-3-642-36818-9_31_Chapter.pdf | Yuchao She
email: [email protected]
Hui Li
Hui Zhu
UVHM: Model Checking based Formal Analysis Scheme for Hypervisors
Keywords:
Hypervisors play a central role in virtualization for cloud computing. However, current security solutions, such as installing an IDS module on hypervisors to detect known and unknown attacks, cannot be applied well to virtualized environments. What is more, people have not paid enough attention to the vulnerabilities of hypervisors themselves. Existing works, which mainly focus on analyzing hypervisors' code, can only verify correctness rather than security, or are only suitable for open-source hypervisors. In this paper, we design a binary analysis tool that uses formal methods to discover vulnerabilities of hypervisors. In the scheme, Z notation, VDM, B, Object-Z or the CSP formalism can be utilized as suitable modeling and specification languages. Our proposal sequentially follows the process of disassembly, modeling, specification, and verification. Finally, the effectiveness of the method is demonstrated by detecting the vulnerability of Xen-3.3.0 into which a bug has been added.
Introduction
Cloud computing is a significant technology at present. The software that controls virtualization is termed a hypervisor or virtual machine monitor (VMM), which is seen as an efficient solution for optimal use of hardware and for improved reliability and security.
Although it brings many benefits, cloud computing encounters critical issues of security and privacy. Hypervisors have already become the path of least resistance for one guest operating system to attack another, and also for an intruder on one network to gain access to another network. The most important security issues for hypervisors are typically the risk of information leakage caused by information-flow security weaknesses. Some vulnerabilities of hypervisors have already been reported [START_REF] Marshall | Microsoft Hyper-V gets its first security patch[END_REF] [START_REF]MS11-047 -Vulnerability in Microsoft Hyper-V could cause denial of service[END_REF].
Our contribution. In this paper, we propose UVHM to detect vulnerabilities of hypervisors. In order to find as many vulnerabilities in hypervisors as possible, the evaluation process must include a demonstration of correct correspondence between security policy objectives, security specifications, and the program implementation. Thus, we use model checking theory [3][4] to discover vulnerabilities.
Related Work. Vulnerability analysis of hypervisors basically remains a challenge. Some existing works focus heavily on code verification and hypervisor analysis. VCC [START_REF] Leinenbach | Verifying the Microsoft Hyper-V hypervisor with VCC[END_REF] focuses on verifying the correctness of software rather than its security; moreover, it can only verify C code. The Xenon project [START_REF] Freitas | Formal methods for security in the Xenon hypervisor[END_REF] is only suitable for open-source hypervisors. As for Maude [START_REF] Webster | Detection of metamorphic and virtualization-based malware using algebraic specifi cation[END_REF], its algebraic specification-based approach does not apply to analyzing the vulnerabilities of VMMs. The existing models have many limitations and cannot pretend to address all of the security requirements of a system. Most of the available model checkers [START_REF] Holzmann | The SPIN Model Checker: Primer and Reference Manual[END_REF][9] use a proprietary input model. In summary, new studies basically have to be carried out from scratch.
Formal Analysis on Hypervisors
In UVHM, we develop suitable formal models, verification tools and related security policies according to our own needs to conduct more comprehensive studies on different aspects of hypervisor's security. Practical hypervisors' different design, architectures and working mechanisms will lead to different models, security policies, etc.
Formal Analysis on Binary Code
The scheme follows the process of disassembling – modeling – specification – verification. The general flow chart of UVHM is shown below. We first disassemble the hypervisor's binary file, then formally model the definitions of security, capture the behaviour of the hypervisor's interfaces with this formal model, and verify security using a self-developed prover under the verification conditions.
1) Disassembling
We use static analysis techniques to correctly disassemble binaries and employ at least two different disassemblers. The latter disassembler helps handle some special cases which cannot be handled by the former.
2) System Modeling
Self-developed formal models are needed. The model should have the following characteristics: accurate, unambiguous, simple, abstract, easy to understand, and related only to security. 'Related only to security' means that the model pays attention only to security features and does not involve too many functional or implementation details.
A great many hypervisors need hardware-assisted virtualization. Thus, we can adopt the Z notation, VDM, B, Object-Z or CSP formalism to analyze concurrent processes, and choose these formalisms to define security. The partial orders of the system can be modelled as a lattice [START_REF] Denning | A lattice model of secure information flow[END_REF]. The most important relationship to be captured is probably the triangular dependency between three major entities of the state space: virtual contexts for guest domains, virtual instruction-set processors (VCPUs), and virtual interrupts or event channels. Mutual dependence between key components is a common feature in kernel design.
3) Specification
An unambiguous, precise specification of our requirements is needed, and attention must be paid to the completeness of the security policy's specifications. We can define special hypercall interface sequences in the security policy to identify illegal code which executes in either the guest or the host domain and attempts to access another domain without permission.
For inter-domain security infringement, covert channel analysis is adopted. Meta-flows [START_REF] Shen | A Dynamic Information Flow Model of Secure Systems[END_REF] are combined to construct potential covert channels. Figure 2 shows the scenario in which the extension of f to mf is supervised by a series of rules. In this framework, we define illegal flows in the form of information flow sequences, i.e., we define the flow security policy.
4) Verification
Automated verification of a representative subset will be able to provide some critical insights into the potential difficulties and reveal the approaches that should be avoided.
System Implementation and Testing
We choose Xen-3.3.0 as our experimental subject and use UVHM to verify whether this Xen contains the bug numbered 1492 on Xen's official website.
Before disassembling, we add this bug to Xen and compile it into the hypervisor's binary file. Then we use UVHM to build the complete formal analysis tool.
Adding the Bug
Add "free(buf); buf=NULL" to the file "tools/python/xen/lowlevel/acm/acm.c". Xen with the bug above could not detect the installed DEFAULT policy and reports the DEFAULT policy as "None" after initializing XSM-ACM module successfully.
There are two pictures to make a comparison between the installed Xen with the bug and without it. Figure 3A shows that the DEFAULT security policy in the secure Xen is ACM whose version is 1.0, and it could be used as normal. Figure 3B shows that for the vulnerability added Xen, the DEFAULT security policy could not be used.
Implementation Module
1) Disassembling.
We use IDA Pro and BitBlaze to disassemble the acm.o file. We build up our models by analyzing the assembly code they give us.
2) Modeling.
What we are concerned about is whether the buffer into which the ACM policy is loaded is 'NULL' after the XSM-ACM module has been initialized successfully.
Only the few states related to the buffer are defined. We do not capture the behavior of assignment instructions appearing in the assembly code that have nothing to do with the buffer's state.
3) Specification
If the buffer is 'NULL' then, of course, no policy can be used; we define this situation as a vulnerability. If not, any bug the Xen contains is not the one defined above. Thus, we can define the following security policy:
1) The buffer is 'NULL': this is a vulnerability caused by a wrong operation on the buffer, flag = 1;
2) The buffer is not 'NULL': success, flag = 0.
4) Verification
Combining the model and the specification, we obtain the tool. The input variables and the relations among them can be regarded as an initial state.
Based on the different ranges of the variables, the branch conditions send them to different states. We can judge whether this is the vulnerability we defined by checking the value of the flag. The following chart shows the visible model of the assembly code. The binary analysis tool is now complete, and we can use it to detect whether a Xen hypervisor contains the vulnerability or not.
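A toy rendition of this buffer-state model and flag check is given below in Python purely for illustration (the authors' model and specification are written in C); the abstract operation names are hypothetical placeholders standing for the buffer-related assembly instructions.

```python
def run_buffer_model(operations):
    """Track only the buffer-related state of the disassembled code.

    operations: abstract, buffer-related instructions extracted from the
    assembly; all other instructions are ignored by the model.
    """
    buf_is_null = True
    for op in operations:
        if op == 'load_policy_into_buf':
            buf_is_null = False
        elif op in ('free_buf', 'set_buf_null'):
            buf_is_null = True
        # any other instruction does not affect the modelled state
    return 1 if buf_is_null else 0   # flag: 1 = vulnerability, 0 = success

print(run_buffer_model(['load_policy_into_buf']))                  # 0: policy usable
print(run_buffer_model(['load_policy_into_buf', 'free_buf',
                        'set_buf_null']))                          # 1: injected bug
```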
System Testing and Results Analysis
First, we disassemble the acm.o binary file. According to the assembly code and the defined model, we then sequentially input the needed variables or the relations among them after analyzing the semantics of the assembly code.
1) For the Xen with the bug, we input the following information after analysis: x_handle=32, x_op=1, buf != NULL, errno != EACCES. The system's report tells us that this Xen contains the vulnerability we defined in the security policy.
2) For the Xen without the bug, we input the information: x_handle=6, x_op=-9, buf != NULL, errno != EACCES. The report says this Xen does not contain the vulnerability we defined.
Thus, without installing Xen, we are able to know whether it contains this bug.
This demonstrates the effectiveness of our formal binary analysis framework. The model and the specification are both written in the C language and are linked through the flag.
Fig. 1. The Flow Chart of UVHM
Fig. 2. Framework of Covert Channel Identification
Fig. 3. Comparison Picture
Fig. 4. The Visible Model
Conclusion
There are security challenges in the cloud, and a secure cloud is impossible unless the virtual environment is secure. Aiming at this problem, we present our formal method, which follows the process of disassembling – modeling – specification – verification, to analyze the vulnerabilities of various hypervisors.
We use this idea to realize a system that, by analyzing Xen's binary code, can verify whether it contains the bug that prevents the ACM policy from being used even though the XSM-ACM module has been initialized successfully. This demonstrates the effectiveness of the above method, which can be applied to detect vulnerabilities in various kinds of hypervisors.
"1003079"
] | [
"469153",
"469153",
"469153"
] |
01480190 | en | [
"shs",
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480190/file/978-3-642-36818-9_36_Chapter.pdf | Shuichiro Yamamoto
email: [email protected]
Tomoko Kaneko
Hidehiko Tanaka
email: [email protected]
A Proposal on Security Case based on Common Criteria
Keywords:
It is important to assure the security of systems in the course of development. However, the lack of a requirements analysis method that integrates security functional requirements analysis and validation in the upstream process often has a crucial influence on system dependability. For security requirements, even if the extraction of threats is carried out completely, insufficient countermeasures do not satisfy the customers' security requirements. In this paper, we propose a method to describe security cases based on security structures and threat analysis. The security structure of the method is decomposed according to the Common Criteria (ISO/IEC 15408).
Introduction
It is important to show customers how a claim such as "The system is acceptably secure" is supported by objective evidence. We show a description method that uses the Assurance Case and the Common Criteria as the objective evidence. In Chapter 2, "Related work," we explain assurance cases [START_REF] Kelly | The Goal Structuring Notation -A Safety Argument Notation[END_REF][2][3][START_REF] Omg | [END_REF] and security case approaches [START_REF] Goodenough | Arguing Security -Creating Security Assurance Cases[END_REF][START_REF] Alexander | Security Assurance Cases: Motivation and the State of the Art[END_REF][START_REF] Kaneko | Proposal on Countermeasure Decision Method Using Assurance Case And Common Criteria[END_REF], and give an overview of the Common Criteria (CC) [5]. In Chapter 3, we show security case reference patterns based on CC. In Chapter 4, some considerations on the method are described. Chapter 5 explains future issues.
2 Related work
Assurance case
A Security case is an application of an Assurance case, which is defined in ISO/IEC 15026 Part 2. Security cases are used to assure the critical security levels of target systems. Standards are proposed by ISO/IEC 15026 [2] and by OMG's Argument Metamodel (ARM) [3] and Software Assurance Evidence Metamodel (SAEM) [START_REF] Omg | [END_REF]. ISO/IEC 15026 specifies scope, adaptability, application, the assurance case's structure and contents, and deliverables. The minimum requirements for an assurance case's structure and contents are: to describe claims about system and product properties, systematic argumentation of the claims, and the evidence and explicit assumptions of the argumentation; and to structurally associate evidence and assumptions with the highest-level claims by introducing supplementary claims in the middle of a discussion. One common notation is the Goal Structuring Notation (GSN) [START_REF] Kelly | The Goal Structuring Notation -A Safety Argument Notation[END_REF], which has been widely used in Europe for about ten years to verify system security and validity after identifying security requirements.
Security case
Goodenough, Lipson and others proposed a method to create Security Assurance cases [START_REF] Goodenough | Arguing Security -Creating Security Assurance Cases[END_REF]. They described how the Common Criteria provides catalogs of standard Security Functional Requirements and Security Assurance Requirements. They decomposed the Security case by focusing on the process, such as requirements, design, coding, and operation. Their approach did not use the Security Target structure of the CC to describe the Security case. Alexander, Hawkins and Kelly overviewed the state of the art on Security Assurance cases [START_REF] Alexander | Security Assurance Cases: Motivation and the State of the Art[END_REF]. They showed the practical aspects and benefits of describing a Security case in relation to security target documents. However, they did not provide any patterns for describing Security cases using CC.
Kaneko, Yamamoto and Tanaka recently proposed a security countermeasure decision method using the Assurance case and CC [START_REF] Kaneko | Proposal on Countermeasure Decision Method Using Assurance Case And Common Criteria[END_REF]. Their method is based on a goal-oriented security requirements analysis [START_REF] Kaneko | SARM --a spiral review method for security requirements based on Actor Relationship Matrix[END_REF][START_REF] Kaneko | Specification of Whole Steps for the Security Requirements Analysis Method (SARM)-From Requirement Analysis to Countermeasure Decision[END_REF]. Although the method showed a way to describe a security case, it provided neither a graphical Security case notation nor a seamless relationship between the security structure and the security functional requirements.
Common criteria
The Common Criteria (CC, equivalent to ISO/IEC 15408) [5] specifies a framework for evaluating the reliability of the security assurance level defined by a system developer. In Japan, the Japan Information Technology Security Evaluation and Certification Scheme (JISEC) is implemented to evaluate and certify IT products (software and hardware) and information systems. In addition, based on the CC Recognition Arrangement (CCRA), which recognizes certifications granted by other countries' evaluation and certification schemes, CC-accredited products are recognized and distributed internationally. As an international standard, CC is used to evaluate the reliability of the security requirements of functions built using IT components (including security functions). CC establishes a precise model of the Target of Evaluation (TOE) and its operational environment. Based on the security concept and the relationship of assets, threats, and objectives, CC defines the ST (Security Target) as a framework for evaluating the TOE's Security Functional Requirements (SFR) and Security Assurance Requirements (SAR). The ST is a document that accurately and properly defines the security functions implemented in the target system and prescribes the targets of security assurance. The ST is required for security evaluation and shows the levels of adequacy of the TOE's security functions and security assurance.
Security case reference patterns
Issues to describe Security case
Product and process are both important for assuring system security. In this paper we propose a hierarchical method to describe a Security case: we decompose the Security case based on the Security Target structure in the upper part, and then describe the bottom part of the Security case based on the security analysis process. The figure shows that the Security Target structure and the security analysis process constitute the two decomposition layers. In the first decomposition, the ST overview, TOE description, TOE security environment, security measures and policies, IT security requirements, and TOE summary specification are described. For each decomposed claim, arguments are attached to decompose it further by the security analysis process. For example, to assure the dependability of the TOE security environment, the security analysis process is decomposed into three claims, i.e., analyzing protection policies, clarifying security policies, and threat analysis.
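The two decomposition layers can be pictured as a small claim tree. The sketch below encodes the first decomposition of the top claim and the security-analysis sub-claims of the TOE security environment as nested Python dictionaries; the claim texts are paraphrased from the structure described above and are not taken from an actual GSN diagram.

```python
security_case = {
    "claim": "The target system is acceptably secure (per its Security Target)",
    "subclaims": [                      # first layer: Security Target structure
        {"claim": "ST overview is adequate"},
        {"claim": "TOE description is adequate"},
        {"claim": "TOE security environment is dependable",
         "subclaims": [                 # second layer: security analysis process
             {"claim": "Protection policies are analyzed"},
             {"claim": "Security policies are clarified"},
             {"claim": "Threat analysis is performed"},
         ]},
        {"claim": "Security measures and policies are defined"},
        {"claim": "IT security requirements are satisfied"},
        {"claim": "TOE summary specification is consistent"},
    ],
}

def leaves(node):
    """Bottom-level claims, which must eventually be supported by evidence."""
    subs = node.get("subclaims", [])
    return [node["claim"]] if not subs else [c for s in subs for c in leaves(s)]

print(len(leaves(security_case)))   # 8 bottom-level claims in this sketch
```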
Security case based on Security Target structure
3.3 Security case to assure security requirements against threats
The sample case is created based on the PP [13] provided by IPA (Information-technology Promotion Agency) and is not an example of an actual specific system. Thus, the sample case should be regarded as a reference model of a Security case.
Considerations
Describing a Security case according to the ST structure of CC has the advantage of validating objective assurance levels based on an internationally standardized notation. It is possible to properly define and implement security functions in line with the ST structure and an appropriate threat analysis. We can also implement negotiated security functions based on the structured form of the Security case and the internationally standardized terminology of catalogued security functions in CC.
The relationship between the security structure of CC and the Security case structure is mandatory for compatibility. As shown in the examples of Section 3, the Security case structure corresponds seamlessly to CC. We also confirmed a way to integrate Security cases between the Security Target structure and the security functional requirements, as shown in the goal relationship of the two figures.
5
Future issues
There are some unsolved issues in security case development presented in this paper.
Our study is still in a preliminary phase and further evaluation needs to be done in future. It is necessary to evaluate the proposed method for designing actual system development. The proposed approach provides a reference Security case structure. Therefore, it can also be used to effective validation of the target systems compatibility to CC. This kind of application of our method will provide a simple integration process between security design and validation.
We also have a plan to develop Security case patterns based on this paper. This will ease to reuse Security cases based on CC. This research is an extension of Safety case pattern proposed by Kelly and McDermid [11].
In terms of CC based security requirement analysis, goal oriented methods and use-case based methods are proposed [START_REF] Saeki | Security Requirements Elicitation Using Method Weaving and 3 Common Criteria[END_REF]. Therefore, it is desirable to verify effectiveness of our method by comparing our method with these methods.
Fig. 1 .
1 Fig.1. describes an example pattern for Security case based on CC.
Fig. 1 .
1 Fig.1. Security case pattern for CC based Security Structure
Fig. 2 .
2 Fig.2. describes security case to assure security functional requirements. It consists of the following hierarchical layers, Threats category, Activity of threats, and Security function layers. The security case can be considered as the decomposition of the claim G_6 in Fig.1.
Fig. 2 .
2 Fig.2. Security case pattern for security function specification based on CC | 9,887 | [
"993472",
"1003086",
"1003087"
] | [
"472208",
"487149",
"487144"
] |
01480191 | en | [
"shs",
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480191/file/978-3-642-36818-9_37_Chapter.pdf | Wentao Jia
email: [email protected]
Rui Li
email: [email protected]
Chunyan Zhang
email: [email protected]
An Adaptive Low-Overhead Mechanism for Dependable General-Purpose Many-core Processors
Keywords: Many-Core, Redundant Execution, Adaptive Dependable, Low-Overhead
Future many-core processors may contain more than 1000 cores on single die. However, continued scaling of silicon fabrication technology exposes chip orders of such magnitude to a higher vulnerability to errors. A low-overhead and adaptive fault-tolerance mechanism is desired for general-purpose many-core processors. We propose high-level adaptive redundancy (HLAR), which possesses several unique properties. First, the technique employs selective redundancy based application assistance and dynamically cores schedule. Second, the method requires minimal overhead when the mechanism is disabled. Third, it expands the local memory within the replication sphere, which heightens the replication level and simplifies the redundancy mechanism. Finally, it decreases bandwidth through various compression methods, thus effectively balancing reliability, performance, and power. Experimental results show a remarkably low overhead while covering 99.999% errors with only 0.25% more networks-on-chip traffic.
Introduction
Transistors continue to double in number every two years without significant frequency enhancements and extra power costs. These facts indicate a demand for new processors with more than 1000 cores and an increasing need to utilize such a large amount of resources [START_REF] Borkar | Thousand core chips: a technology perspective[END_REF]. As transistor size decreases, the probability of chip-level soft errors and physical flaws induced by voltage fluctuation, cosmic rays, thermal changes, or variability in manufacturing further increases [START_REF] Srinivasan | The impact of technology scaling on lifetime reliability[END_REF], which causes unavoidable errors in many-core systems.
Redundant execution efficiently improve reliability, which can be applied in most implementations of multithreading such as simultaneous multithreading (SMT) or chip multi-core processors (CMP). Current redundant execution techniques such as SRT, CRT and RECVF [START_REF] Subramanyan | Energy-Efficient Fault Tolerance in Chip Multiprocessors Using Critical Value Forwarding[END_REF] entail either high hardware costs such as load value queue (LVQ), buffer, comparer, and dedicated bus, or significant changes to existing highly optimized micro-architectures, which may be affordable for CMP but not for general-purpose many-core processors (GPMCP). Same core-level costs in many-core processors result in an overhead up to 10 or 100 times higher than that in CMP. High overhead may be reasonable for fixed highreliable applications but not for all applications in GPMCP as the latter induce large overhead for applications that do not require reliability.
GPMCP present two implications. First, not all applications require high reliability. Second, the chip contains more than 1000 cores and some cores usually become idle. We proposed high-level adaptive redundancy (HLAR), an adaptive, low-overhead mechanism based application assistance and system resource usage for GPMCP. Our evaluation-based error injection shows that HLAR is capable of satisfying error coverage with minimal NoC traffic that covers 99.999% errors with 0.25% more NoC traffic.
Background and related work
Many-core architectures such as Tilera Tile64, Intel MIC, and NVIDIA Fermi, among others show good perspective. A challenge for many-core architecture is that hardware-managed Cache entail unacceptable costs. As alternative model, software-managed local memory (LM), exposes intermediate memories to the software and relies on it to orchestrate the memory operations. The IBM C64 is a clear example of a device that benefits from the use of LMs.
Applications in GPMCP are parallel and consist of threads with different reliability requirements. Some applications are critical. Wells [START_REF] Wells | Mixed-mode multicore reliability[END_REF] managed two types of applications: reliable applications and performance applications. Kruijf [START_REF] De Kruijf | Relax: An architectural framework for software recovery of hardware faults[END_REF] argued that some emerging applications are error-tolerant and can discard computations in the event of an error.
The following have prompted this study: DCC [START_REF] Lafrieda | Utilizing dynamically coupled cores to form a resilient chip multiprocessor[END_REF] allows arbitrary CMP cores to verify execution of the other without a static core binding or a dedicated communication hardware. Relax [START_REF] De Kruijf | Relax: An architectural framework for software recovery of hardware faults[END_REF] proposed a software recovery framework in which the code regions are marked for recovery. Fingerprinting [START_REF] Smolens | Fingerprinting: bounding soft-error detection latency and bandwidth[END_REF] proposed compressing architectural state updates into a signature, which lowers comparison bandwidth by orders of magnitude.
HLAR Mechanism
Redundancy overhead. HLAR redundancy are not executed for the whole chip, thus, we define redundancy entry as cores that execute redundancy and outside redundancy entry as cores that do not execute redundancy. Considering processors without redundancy as the baseline, we classify redundancy overhead.
(i) Fixed overhead in all cores because of hardware cost or performance lost due to hardware modification (O F A ).
(ii) Special overhead in redundancy entry (O SRE ).
(iii) Temporal overhead in redundancy entry (O T RE ).
(iv) Overhead outside redundancy entry due to bandwidth and other shared resources (O ORE ). If redundancy cores utilize additional shared resources, other cores may cost more to obtain the resources. NoC bandwidth is a global resource and affects the performance of the die.
HLAR architecture.HLAR is a low overhead GPMCP redundancy mechanism It supports redundancy between arbitrary cores and causes minimal modifications to the core. Fig. 1 shows the HLAR architecture. HLAR employs a many-Fig. 1. HLAR architecture core processor interconnected with a mesh network. For each core, we added a redundancy control module called redundancy manager (RM). RM consists of small control circuits and two FIFO buffers. The RM area is small, compared with cores/networks. HLAR uses the existing chip networks in transferring trace data and supports input replication via a remote value queue (RVQ).
HLAR cores can be a checked core, a checker core, or a non-redundancy core. Non-redundancy cores do not execute redundancy. In the checked core, the RM receives register updates from the core and writes these updates onto the sender FIFO. Typically, the register update information consists of address/value pairs (8bit/32bit). The RM interfaces with the NoC, compresses the trace data, and sends messages to the checker core. The RM of the checker core writes its compressed register updates into its sender FIFO. A comparator compares elements in the two FIFOs. When these two vary, the RM raises a redundancy error exception. The RM controller includes a simple state-machine that stalls the processor when the FIFOs become full.
Input replication. Unlike the previous work, only the remote value but the load value requires replication in HLAR. HLAR supports input replication via RVQ. The checker core reads values from the RVQ rather than access the remote node. Programs and data in the checked core's local memory must be copied onto the checker core's, thus, address space replication is needed. A register named reg addr remap is used to replicate the address space.
Trace compression. We employed CRC-1, CRC-5, and CRC-8 to obtain an adaptive compression rate, and found that these methods adequately satisfy cov-erage. To obtain more effective compression, HLAR summarizes many updates for once CRC check. CRC-1/10 means one-bit CRC-1 check for every set of 10 trace values, hence, CRC-5/100, CRC-8/1000 and so on. NoC overhead is very small in HLAR(for example, CRC-8/1000 only costs 0.025% more bandwidth)
Recovery. Redundant execution is often combined with a checkpoint retry to gain recovery. However, the checkpoint overhead increases, especially if a short checkpoint interval is employed. HLAR indicates the checkpoint interval for a configurable checkpoint mechanism. Aside from the checkpoint, a simple forward error recovery (FER) mechanism is employed, which discards incorrect results and continues to make progress.
Application framework in HLAR. HLAR for applications can be as simple as system devices, in which only require configuring and enabling. The device views simple usage in applications and management in system. The application first configures HLAR through HLAR config() and the control registers in RM are then set. When HLAR enable() is prompted, the Hypervisor selects the appropriate core, copies the state from the checked core to initialize the checker core, and then begins the redundant execution. The hypervisor completes the redundancy and disables the RM until HLAR disable() is called.
Evaluation
Methodology
HLAR is implemented based on the OpenRISC 1200 (OR1200) [START_REF] Lampret | OpenRISC 1200 IP Core Specification[END_REF] core. The OR1200 configuration is used as the default parameter, except for the following: 8 KB local instructions Cache, 8 KB local data Cache, and 32 KB local memory.
Our experimental results are based on OR1200 cores in Altera Stratix IV FPGA prototyping boards. The primary advantage of using prototyping boards instead of simulation is speed. We evaluated the error coverage and latency by error injection and evaluated the overhead based hardware counters and EDA tools(Quartus II 11.0).
Workload and fault model. The applications evaluated include the following: MatrixMult (4M) and BubbleSort (5M). The fault model consists of single bit-flip (95%) and double bit-flip (5%). The number of experiments is 20,000 + 40,000 + 100,000 (fault sites for CRC-1, CRC-5, and CRC-8) * 6 (summarized interval) * 2 (applications) = 1,920,000.
Error injection. One register from the 32 GPRs and 6 control registers (PC, SR, EA, PICMR, PICPR, and PICSR) were randomly chosen. Likewise, 1-or 2-bit out of 32-bit locations were randomly selected and flipped.
Experimental results
Temporal overhead in redundancy entry. O T RE is usually shown as performance degradation. Performance degradation shown in Fig. 2(a) is 0.71% for MM and 0.68% for BS at 100,000 instructions of the checkpoint. These rates increase to 12.2% and 9.8% at 1000 instructions. When a discard recovery mechanism is employed, the degradation is negligible at 0.42% and 0.23%, as shown in Fig. 2(b).
Fixed overhead. The only fixed overhead in HLAR is RM. Logic utilization is shown in Fig. 2(c). RM utilizes 359 combinational ALUTs and a 160 byte memory, which only use 2.46% and 0.25% of the total (OR1200 core and router). The fixed overhead in HLAR is much less compared with Reunion or RECVF. Error coverage. HLAR compresses traces with CRC-1, CRC-5, CRC-8, and summarizes CRC to balance reliability and NoC performance. The uncoverage error rate is shown in Fig. 2(d) and(e). Comparing traces without compression can obtain a 100% coverage. CRC-1/1 reduces bandwidth by up to 40 times without losing coverage, 0.53% for MM, and below 0.001% for BS. When the summarized interval increases by 10 times, uncoverage increases (denoted by a line) on a small scale. Uncoverages in CRC-5 and CRC-8 are low even at intervals of 10,0000 instructions; uncoverage rates are 1.5% and 0.8%, respectively, for MM and are 0.086% and 0.012% for BS. As the interval decreases, uncoverage decreases significantly. An uncoverage rate below 0.001% in Fig. 2(d) and (e) indicates that no SDC occurs after error injection. Minimal uncoverage (below 0.001%) occurs at 10 and 100 instructions in CRC-5 and at 100 and 1000 instructions in CRC-8 in MM and BS, respectively, which means only 0.25% or 0.025% more NoC traffic.
Detection latency. NoC communicating with a distant core may incur greater latency than communicating with an adjacent core. The summarized CRC may also lead to larger latency. However, the results in Fig. 2(f) and (g) show that the detected latency in HLAR is bounded. Mean error detection latency (MEDL) for MM is consistent with the summarized interval, increasing from 27 in CRC-1/1 to 86,490 in CRC-1/100000. CRC-5 and CRC-8 show lower MEDL than CRC-1. For instance, at the interval of 1000 instructions, MEDL is 1424 in CRC-1, 539 in CRC-5, and 515 in CRC-8.
Conclusion
We analysed the redundant execution overhead and proposed HLAR, an adaptive low-overhead redundancy mechanism for GPMCP. Unlike prior mechanisms, HLAR can sense application requirements and system resource usage to reconfigure redundancy. Thus, HLAR decreases the overhead by executing only the necessary redundancy and using the idle core for this redundancy. HLAR expands the local memory within the replication sphere, which provides relaxed input replication, distributes the memory access, and allows the core pairs to progress simultaneously. HLAR is capable of perfect error coverage with a minimal overhead, covering 99.999% of errors with less than 0.25% more commutation.
Fig. 2 .
2 Fig. 2. Result: (a)Temporal overhead for checkpoint and for (b)discard mechanism; (c)Fixed overhead; (d)Error coverage rate for MM and (e)BS; (f)Mean detection latency for MM and (g)BS.
This work was Supported by the National Nature Science Foundation of China under NSFC No. 61033008, 60903041 and 61103080. | 13,759 | [
"1003088",
"1003089",
"1003090"
] | [
"302677",
"302677",
"302677"
] |
01480192 | en | [
"shs",
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480192/file/978-3-642-36818-9_39_Chapter.pdf | Adela Georgescu
Anonymous Lattice-Based Broadcast Encryption
Keywords: broadcast encryption, anonymity, Learning With Errors, Lattices
ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.
Introduction
In this paper, we translate the anonymous broadcast encryption scheme from [START_REF] Libert | Anonymous Broadcast Encryption: Adaptive Security and Efficient Constructions in the Standard Model[END_REF] into the lattices environment. Lattices are more and more studied recently and lattices environment is becoming wider and more populated with different cryptographic primitives. They offer certain undeniable advantages over traditional cryptography based on number theory: hard problems which form the security basis of many cryptographic primitives, great simplicity involving linear operations on small numbers and increasingly efficient implementations. A very important issue is that they are believed to be secure against quantum attacks in an era where quantum computers are a great promise for the near future. It is not surprising that lately we are witnessing a great development of cryptographic constructions secure under lattice-based assumptions. This is the main motivation for our current work: we want to propose a lattice-based variant of this cryptographic primitive (i.e. anonymous broadcast encryption) existent in classical cryptography.
Authors from [START_REF] Libert | Anonymous Broadcast Encryption: Adaptive Security and Efficient Constructions in the Standard Model[END_REF] use two cryptographic primitives in order to achieve anonymous broadcast encryption: IND-CCA public key encryption scheme and anonymous tag-based hint system. We employ variants of both these primitives derived from the Ring-Learning With Errors problem (RLWE) introduced recently in [START_REF] Lyubashevsky | On Ideal Lattices and Learning with Errors Over Rings[END_REF]. This problem is the ring-based variant of Regev's Learning With Errors problem [START_REF] Regev | On lattices, learning with errors, random linear codes, and cryptography[END_REF]. Lyubashevsky et al. [START_REF] Lyubashevsky | On Ideal Lattices and Learning with Errors Over Rings[END_REF] show that their problem can be reduced to the worst-case hardness of short-vector problems in ideal lattices. The advantage of RLWE based cryptographic primitives over LWE-based cryptographic primitives is that they achieve more compact ciphertext and smaller key sizes by a factor of n, thus adding more efficiency.
The RLWE problem has already been used as underlying hardness assumption for many cryptographic constructions, starting with the original cryptosystem from [START_REF] Lyubashevsky | On Ideal Lattices and Learning with Errors Over Rings[END_REF] and continuing with efficient signature schemes [START_REF] Lyubashesky | Lattice Signatures Without Trapdoors[END_REF], [START_REF] Micciancio | Trapdoors for lattices: Simpler,tighter, faster, smaller[END_REF], pseudorandom functions [START_REF] Banerjee | Pseudorandom functions and lattices[END_REF], fully homomorphic encryption [START_REF] Brakerski | Fully homomorphic encryption from ring-lwe and security for key dependent messages[END_REF] and also NTRU cryptosystem [START_REF] Stehlé | Making NTRU as secure as worst-case problems over ideal lattices[END_REF]. So it is a natural question to ask if we can achieve anonymous broadcast encryption from lattices. As one can see in the rest of the paper, we found it is not hard to construct this kind of primitive. IND-CCA cryptosystem based on LWE problem (and also RLWE) were already introduced in the literature (see section 6.3 [START_REF] Micciancio | Trapdoors for lattices: Simpler,tighter, faster, smaller[END_REF] for LWE-based IND-CCA cryptosystem). We also prove that it is feasible to construct tag-based hint anonymous systems from RLWE following the model of DDH hint system from [START_REF] Libert | Anonymous Broadcast Encryption: Adaptive Security and Efficient Constructions in the Standard Model[END_REF]. For this specific task, we deal with the Hermite Normal Form variant of RLWE and with an equivalent version of DDH problem based on RLWE introduced in [START_REF] Georgescu | Lattice-based key agreement protocols[END_REF].
Related work
There is another candidate in the literature for lattice-based broadcast encryption scheme introduced in [START_REF] Wang | Lattice-based Identity-Based Broadcast Encryption[END_REF]. Anyway, there are some important differences between our scheme and this one: the latter does not offer anonymity but it is an identity-based scheme. Our scheme can also be transformed into identitybased broadcast encryption by replacing the LWE-based IND-CCA secure PKE with identity-based encryption (IBE) from LWE as the one from [START_REF] Gentry | Trapdoors for hard lattices and new cryptographic constructions[END_REF]. On the other hand, the CCA-secure PKE scheme from [START_REF] Micciancio | Trapdoors for lattices: Simpler,tighter, faster, smaller[END_REF] we employ in our construction has better efficiency and simplicity due to the simple structure of the new trapdoor they introduce, thus also making our construction more efficient.
Preliminaries
Lattices
Let B = {b 1 , ...b n } ∈ R n×k be linearly independent vectors in R n . The lattice generated by B is the set of all integer linear combinations of vectors from B
L(B) = { n i=1 x i • b i : x i ∈ Z}.
Matrix B constitutes a basis of the lattice. Any lattice admits multiple bases, some bases are better than others.
We introduce here a function that we'll apply in section 3.1, the round(•) function. This function was first used with its basic variant in [START_REF] Regev | On lattices, learning with errors, random linear codes, and cryptography[END_REF] for decryption, and later on to almost all the lattice-based cryptosystems :
round(x) = 1, x ∈ [0, q/2 ] 0, otherwise
In our construction, we use the extended variant of the function which rounds to smaller intervals, namely round(x) = a if x ∈ [a • q/A, (a + 1) • q/A] where A is the total number of intervals. We suggest setting A = 4.
We employ this function in order to derive the same value from numbers that are separated only by a small difference (Gaussian noise).
The Learning With Errors problem
The learning with errors problem (LWE) is a recently introduced (2005, [START_REF] Regev | On lattices, learning with errors, random linear codes, and cryptography[END_REF]) but very famous problem in the field of lattice-based cryptography. Even if it is not related directly to lattices, the security of many cryptographic primitives in this field rely on its hardness believed to be the same as worst-case lattice problems.
Informally, the problem can be described very easily: given n linear equations on s ∈ Z n q which have been perturbed by a small amount of noise, recover the secret s.
We present here the original definition from [START_REF] Regev | On lattices, learning with errors, random linear codes, and cryptography[END_REF].
Definition 1 (The Learning With Errors Problem [START_REF] Regev | On lattices, learning with errors, random linear codes, and cryptography[END_REF]) Fix the parameters of the problem: n ≥ 1, modulus q ≥ 2 and Gaussian error probability distribution χ on Z q (more precisely, it is chosen to be the normal distribution rounded to the nearest integer, modulo q with standard deviation αq where α > 0 is taken to be 1/(poly(n))). Given an arbitrary number of pairs (a, a T s + e) where s is a secret vector from Z n q , vector a is chosen uniformly at random from Z n q and e is chosen according to χ, output s with high probability.
Proposition 1 [START_REF] Regev | On lattices, learning with errors, random linear codes, and cryptography[END_REF] Let α = α(n) ∈ (0, 1) and let q = q(n) be a prime such that αq > 2 √ n. If there exists an efficient (possibly quantum) algorithm that solves LW E q,χ , then there exists an efficient quantum algorithm for approximating SIVP in the worst-case to within O(n/α) factors.
The Ring-Learning With Errors problem
The ring learning with errors assumption introduced by Lyubashevsky et al. [START_REF] Lyubashevsky | On Ideal Lattices and Learning with Errors Over Rings[END_REF] is the translation of the LWE into the ring setting. More precisely, the group Z n q from the LWE samples is replaced with the ring R q = Z q [x]/ x n + 1 , where n is a power of 2 and q is a prime modulus satisfying q = 1 mod 2n. This is in fact a particularization of the ring-LWE problem introduced in the original paper, but for our construction, as for many others, it is enough. The ring Z q [x]/ x n + 1 contains all integer polynomials of degree n -1 and coefficients in Z q . Addition and multiplication in this ring are defined modulo x n + 1 and q.
In ring-LWE [START_REF] Lyubashevsky | On Ideal Lattices and Learning with Errors Over Rings[END_REF], the parameter setting is as follows: s ∈ R q is a fixed secret, a is chosen uniformly from R q and e is an error term chosen independently from some error distribution χ concentrated on "small" elements from R q . The ring-LWE (RLWE) assumption is that it is hard to distinguish samples of the form (a, b = a • s + e) ∈ R q × R q from samples (a, b) where a, b are chosen uniformly in R q . A hardness result based on the worst-case hardness of short-vector problems on ideal lattices is given in [START_REF] Lyubashevsky | On Ideal Lattices and Learning with Errors Over Rings[END_REF]. An important remark is that the assumption still holds if the secret s is sampled from the noise distribution χ rather than the uniform distribution; this is the "Hermite Normal Form (HNF)" of the assumption (HNF-ring-LWE). The advantage of the RLWE problem is that it represents a step forward in making the lattice-based cryptography practical. In most applications, a sample (a, b) ∈ R q × R q from RLWE distribution can replace n samples (a, b) ∈ Z n q × Z q from the standard LWE distribution, thus reducing the key size by a factor of n.
We note that in our construction of the broadcast encryption scheme, we will make use of the HNF form of the RLWE problem.
We present in the following the correspondent of the Decisional Diffie-Hellman based on the Ring-LWE problem, which was first introduced in [START_REF] Georgescu | Lattice-based key agreement protocols[END_REF] and which is derived from the ring-LWE cryptosystem from [START_REF] Lindner | Better Key Sizes (and Attacks) for LWE-Based Encryption[END_REF], section 3.1. The security of this cryptosystem is proven conditioned by the fact that an adversary cannot solve the below problem, which is essentially its view from the cryptosystem.
DDH-RLWE Problem. [START_REF] Georgescu | Lattice-based key agreement protocols[END_REF] Given a tuple (s,
y 1 = s • x + e x , y 2 = s • y + e y , z)
where s is chosen uniformly at random from R q , x, y, e x , e y are sampled from χ distribution, one has to distinguish between the tuple where z = y 1 • y + e 3 , with e 3 sampled independently from χ and the same tuple where z is chosen uniformly and independently from anything else in R q .
We present a hardness result for the above problem but, due to lack of space, we defer a complete proof to [START_REF] Georgescu | Lattice-based key agreement protocols[END_REF].
Proposition 2 [START_REF] Georgescu | Lattice-based key agreement protocols[END_REF] The DDH-RLWE problem is hard if the RLWE problem in its "Hermite normal form" (HNF) is hard.
Anonymous Broadcast Encryption
In this section we recall a general Broadcast Encryption model from [START_REF] Libert | Anonymous Broadcast Encryption: Adaptive Security and Efficient Constructions in the Standard Model[END_REF] which allows anonymity.
Definition 2 A broadcast encryption scheme with security parameter λ and U = {1, ..., n} the universe of users consists of the following algorithms.
Setup(λ, n) takes as input security parameter λ and the number of users and outputs a master public key MPK and master secret key MSK.
KeyGen(M P K, M SK, i) takes as input MPK, MSK and i ∈ U and outputs the private key sk i corresponding to user i. Enc(M P K, m, S) takes as input MPK and a message m to be broadcasted to a set of users S ⊆ U and it outputs a cipheretxt c. Dec(M P K, sk i , c) takes as input MPK, a private key sk i and a ciphertext c and outputs either the message m or a failure symbol.
We provide the same security model as in [START_REF] Libert | Anonymous Broadcast Encryption: Adaptive Security and Efficient Constructions in the Standard Model[END_REF] for the anonymous broadcast encryption scheme we'll describe later.
Definition 3
We define the ANO-IND-CCA security game (against adaptive adversaries) for broadcast encryption scheme as follows.
Setup The challenger runs the Setup to generate the public key MPK and the corresponding private key MSK and gives MPK to the adversary A. Phase 1. A can issues two types of queries:
private key extraction queries to an oracle for any index i ∈ U ; the oracle will respond by returning the private key sk i = KeyGen(M P K, M SK, i) corresponding to i; decryption queries (c, i) to an oracle for any index i ∈ U ; the oracle will respond by returning the Dec(M P K, sk i , c). Challenge. The adversary selects two equal length messages m 0 and m 1 and two distinct sets S 0 and S 1 ⊆ U of users. We impose the same requirements as in [START_REF] Libert | Anonymous Broadcast Encryption: Adaptive Security and Efficient Constructions in the Standard Model[END_REF]: sets S 0 and S 1 should be of equal size and A has not issued any query to any i ∈ (S 0 \S 1 )∪(S 1 \S 0 ). Further, if there exists an i ∈ S 0 ∩S 1 for which A has issued a query, then we require that m 0 = m 1 . The adversary gives m 0 , m 1 and S 0 , S 1 to the challenger. The latter picks a random bit b ∈ {0, 1}, computes c * = Enc(M P K, m b , S b ) and returns it to A.
λ) = |P r[b = b]-1 2 | (
where λ is the security parameter of the scheme.
Generic constructions for anonymous broadcast encryption can be obtained exactly as in Section 3 and 4 from [START_REF] Libert | Anonymous Broadcast Encryption: Adaptive Security and Efficient Constructions in the Standard Model[END_REF], but they require linear time decryption. Thus, we follow the idea of introducing tag-based anonymous hint system as in [START_REF] Libert | Anonymous Broadcast Encryption: Adaptive Security and Efficient Constructions in the Standard Model[END_REF], but we construct it from the ring-LWE problem. The construction has the advantage of achieving constant time decryption.
Tag-Based Anonymous Hint Systems
A tag-based anonymous hint system (TAHS) [START_REF] Libert | Anonymous Broadcast Encryption: Adaptive Security and Efficient Constructions in the Standard Model[END_REF] is a sort of encryption under a tag t and a public key pk. The output is a pair (U, H) where H is a hint. This pair should be hard to distinguish when using two different public keys. Such a system consists of the following algorithms: KeyGen(λ) on input security parameter λ, outputs a key pair (sk, pk). Hint(t, pk, r) takes as input a public key pk and a tag t; outputs a pair (U, H) consisting of a value U and a hint H. It is required that U depends only on random r and not on pk. Invert(sk, t, U ) takes as input a value U, a tag t and a private key sk. It outputs either a hint H or ⊥ if U is not in the appropriate domain.
Correctness implies that for any pair (sk, pk) ← KeyGen(λ) and any random r, if (U, H) ← Hint(t, pk, r), then Invert(sk, t, U ) = H.
Definition 4 [7]
A tag-based hint system as defined above is anonymous if there is no polynomial time adversary which has non-negligible advantage in the following game:
1. Adversary A chooses a tag t and sends it to the challenger. 2. The challenger generates two pairs (sk 0 , pk 0 ), (sk 1 , pk 1 ) ← KeyGen(λ) and gives pk 0 , pk 1 to the adversary. 3. The following phase is repeated polynomially many times: A invokes a verification oracle on a value-hint-tag triple (U, H, t) such that t = t . In reply, the challenger returns bits d 0 , d 1 ∈ {0, 1} where d 0 = 1 if and only if H = Invert(sk 0 , t, U ) and d 1 = 1 if and only if H = Invert(sk 1 , t, U ). 4. In the challenge phase, the challenger chooses random bit b ← {0, 1} and random r ← R q and outputs (U , H ) = Hint(t , pk b , r ). 5. A is allowed to make any further query but not involving target t . To show that this primitive can be constructed in the lattice-based environment, we give an example of an anonymous hint system based on the DDH-RLWE assumption. This is the equivalent of the hint system based on the classical DDH assumption from [START_REF] Libert | Anonymous Broadcast Encryption: Adaptive Security and Efficient Constructions in the Standard Model[END_REF].
Let R q be the ring of polynomial integers as described in section 2.3 i.e. R q = Z n q / x n + 1 where n is a power of 2 and q is a prime modulus such that q = 1 mod 2n. Remember that χ is the noise distribution concentrated on "small" elements from R q ; s is a fixed element from R q .
We draw attention to the fact that, unlike in the tag-based hint system from [START_REF] Libert | Anonymous Broadcast Encryption: Adaptive Security and Efficient Constructions in the Standard Model[END_REF], the Hint algorithm outputs a value H 1 which is slightly different from the value H 2 recovered by Invert algorithm (by a small quantity from χ as shown below) and only the holder of the secret key sk can derive a value H from both H 1 and H 2 . We stress that the final value H is the same for every use of the tag-based hint scheme, just that is somehow hidden by the output of Hint algorithm. KeyGen(λ) take random x 1 , x 2 , y 1 , y 2 , e 1 , e 2 , e 1 , e 2 ← χ and compute
X i = s • x i + e i and Y i = s • y i + e i . The public key is pk = (X 1 , X 2 , Y 1 , Y 2 ) and the private key is sk = (x 1 , x 2 , y 1 , y 2 ). α, β 1 , β 2 ← χ and computes X b,1 = X , X b,2 = X • (-t * ) + s • β 1 , Y b,1 = s • β 2 + X • α and Y b,2 = s • (-β 2 ) • t * . The adversary is given the public keys (X 0,1 , X 0,2 , Y 0,1 , Y 0,1 ) and (X 1,1 , X 1,2 , Y 1,1 , Y 1,1
).
To answer a verification query (U, (V, W ), t) with t = t * coming from adversary A, B can run algorithm Invert(sk 1-b , t, U ) since he knows sk 1-b . As for Invert(sk b , t, U ), he computes
Z 1 = (V -U • β 1 ) • 1/(t -t * ) Z 2 = (w -U • β 2 (t -t * )) • 1/αt and answers that d b = 1 if and only if round(Z 1 ) = round(U • θ 1 • Z 2 • θ 2 ).
First of all, we note that we are working in the ring Z n q / x n + 1 which is a field , since q is prime and x n + 1 is irreducible. Therefore, the multiplicative inverse is defined and we can compute (1/(t -t * )) for example. Finally, in the challenge phase, B constructs the challenge pair (U * , (V * , W * )) as
U * = Y, V * = Y • β 1 , W * = T • αt * . If T = X • y + e xy
with e xy ← R q , then A's view is the same as in Game 0 (except with small probability) while if T is random in R q A's view is the same as in Game 1. Therefore, we have |P r[S 1 ] -P r[S 0 ]| ≤ Adv DDH (B) + "small". Game 2 is identical to Game 1 but in the challenge phase both V * and W * are chosen uniformly in R q and independent of U * . We argue that adversary A cannot see the difference as long as the DDH-RLWE assumption holds. In this game, the challenge is just a sequence of random ring elements and we have P r[S 2 ] = 1/2. By combing the above informations, we obtain Adv anon-hint (A) ≤ 2Adv DDH (B) + 2q/p.
Anonymous Broadcast Encryption
In this subsection we construct the anonymous broadcast encryption scheme from anonymous hint system S hint = (KeyGen, Hint, Invert) based on LWE and LWE-based public key encryption scheme S pke = (Gen, KeyGen, Encrypt, Decrypt).
We also need a LWE-based signature scheme Σ = (G, S, V). We remark that this is precisely the construction from [START_REF] Libert | Anonymous Broadcast Encryption: Adaptive Security and Efficient Constructions in the Standard Model[END_REF], since in this stage of description, we don't have any contribution to it. Our contribution was mainly to translate the TAHS scheme in the lattice-based environment.
Setup(λ, n) : Obtain par ← Gen(λ) and, for i = 1 to n generate encryption key pairs (sk e i , pk e i ) ← S pke .KeyGen(par) and hint key pairs (sk h i , pk h i ) ← S hint .KeyGen(λ); the master public key consists of
M P K = (par, {pk e i , pk h i } n i=1 , Σ)
and the master secret key is M SK = {sk e i , sk h i } n i=1 KeyGen(M P K, M SK, i) : parse M SK = {sk e i , sk h i } n i=1 and output sk i = (sk e i , sk h i ). Enc(M P K, M, S) : to encrypt a message M for a set of users S = {i 1 , ..., i l } ⊆ {1, ..., n}, generate a signature key pair (SK, V K) = G(λ). Then choose random r, e ← χ and compute (U, H j ) = S hint .Hint(V K, pk h ij , r) for j = 1 to l. Then, for each user index j ∈ {1, ..., l} compute a ciphertext C j = S pke .Encrypt(pk e ij , M ||V K). Choose a random permutation π : {1, ..., l} → {1, ..., l} and output the final ciphertext as
C = (V K, U, (H π(1) , C π(1) ), ..., (H π(l) , C π(l) ), σ)
where σ = S(SK, U, (H π(1) , C π(1) ), ..., (H π(l) , C π(l) )) Dec(M P K, sk i , C) : for sk i = (sk e i , sk h i ) and
C = (V K, U, (H π(1) , C π(1) ), ..., (H π(l) , C π(l) ), σ), return ⊥ if V(V K, U, (H π(1) , C π(1) ), ..., (H π(l) , C π(l) ), σ) = 0 or if U is not in the appro- priate space. Otherwise, compute H = S hint .Invert(sk h i , V K, U
). If H = H j for all j ∈ {1, ..., l}, return ⊥. Otherwise, let j be the smallest index such that H = H j and compute M = S pke .Decrypt(sk e i , C j ). If M can be parsed as M = M ||V K, return M. Otherwise, return ⊥.
We already presented an anonymous tag-based hint system secure under Ring-LWE problem. As for the PKE component of the above scheme, we suggest using the IND-CCA secure scheme described in [START_REF] Micciancio | Trapdoors for lattices: Simpler,tighter, faster, smaller[END_REF]. As the authors claim, it is more efficient and compact than previous lattice-based cryptosystems since it uses a new trapdoor which is simpler, efficient and easy to implement. Under the same reasons, we suggest also employing a lattice-based signature scheme from [START_REF] Micciancio | Trapdoors for lattices: Simpler,tighter, faster, smaller[END_REF], section 6.2.
Due to lack of space, we can not present any of these two suggested cryptographic primitives here but we refer the reader to [START_REF] Micciancio | Trapdoors for lattices: Simpler,tighter, faster, smaller[END_REF] for more details. We just mention that they were proven to be secure under LWE-assumption. We note that the tag-based hint system and the PKE cryptosystem employed are independent in our lattice broadcast encryption scheme. Therefore, the fact that the components of the ciphertext are elements from different algebraic structures is not prohibitive. In order to apply the signature scheme, one needs to first apply a hash function on the input with the aim of "smoothing" it. Theorem 1 The above broadcast encryption scheme is ANO-IND-CCA secure assuming that S hint scheme is anonymous, the S pke scheme is IND-CCA secure and the signature scheme Σ is strongly unforgeable.
We remark that the proof of Theorem 4 from [START_REF] Libert | Anonymous Broadcast Encryption: Adaptive Security and Efficient Constructions in the Standard Model[END_REF] is also valid for our theorem since it deals with general IND-CCA encryption scheme and tag-based hint systems, and not with some specific constructions in a certain environment (like traditional cryptography or lattice-based cryptography).
Conclusions
We introduced a lattice-based variant of the anonymous broadcast encryption scheme from [START_REF] Libert | Anonymous Broadcast Encryption: Adaptive Security and Efficient Constructions in the Standard Model[END_REF]. We showed that it is feasible to construct anonymous tag-based hint scheme from the RLWE assumption in order to achieve anonymity of the scheme. We used a variant of RLWE assumption with "small" secrets and proved that the hint scheme is anonymous based on a RLWE-based DDH assumption. For public key encryption, we suggested the use of the IND-CCA secure LWEbased encryption scheme and digital signature scheme from [START_REF] Micciancio | Trapdoors for lattices: Simpler,tighter, faster, smaller[END_REF] as they gain in efficiency and simplicity over the previous similar constructions from lattices.
Phase 2 .
2 A continues to issue private key extraction queries with the restriction that i / ∈ (S 0 \ S 1 ) ∪ (S 1 \ S 0 ); otherwise it is necessary that m 0 = m 1 . A continues to issue decryption queries (c, i) with the restriction that if c = c * then either i / ∈ (S 0 \ S 1 ) ∪ (S 1 \ S 0 ) or i ∈ S 0 ∩ S 1 and m 0 = m 1 . Guess. The adversary A outputs a guess b ∈ {0, 1} and wins the game if b = b . We denote A s advantage by Adv AN O-IN D-CP A A,KT
6 .
6 A outputs a bit b ∈ {0, 1} and wins the game if b = b.
This work was sponsored by the European Social Fund, under doctoral and postdoctoral grant POSDRU/88/1.5/S/56668.
Hint(t, pk, r) choose e, e x , e y from χ distribution and compute (U, H 1 ) as
Invert(sk, t, U ) parse sk as (x 1 , x 2 , y 1 , y 2 ), compute
Let us now check the correctness of the scheme. We note that the output of Hint algorithm is the pair (U, H 1 ) where U = s•r +e. After some simplifications, we obtain
where (e 1 • t + e x + e 2 ) • r and (e 1 • t + e y + e 2 ) • r are "small" since they both belong to the χ distribution.
On the other hand, H 2 will be computed as
Therefore, the difference H 2 -H 1 is small and belongs to χ. Thus, by computing both round(H 1 ) and round(H 2 ), one gets exactly the same value, which is in fact hidden in the output of Hint algorithm.
Lemma 1
The above tag-based hint system is anonymous if the DDH-RLWE assumption holds in the ring R q .
Proof. The proof of this lemma follows closely that of Lemma 1 from [START_REF] Libert | Anonymous Broadcast Encryption: Adaptive Security and Efficient Constructions in the Standard Model[END_REF] adapted to the LWE environment. We will give a sketch of it in the following.
The proof is modeled by a sequence of games, starting with the first game which is the real game.
Game 0 is the real attack game. Game 1 differs from Game 0 in the following two issues: the challenger's bit b is chosen at the beginning of the game and in the adversary's challenge (U * , (V * , W * )), W * is replaced by a random element of R q . We show that a computationally bounded adversary cannot distinguish the adversary's challenge (U * , (V * , W * )) from the one where W * is replaced by a random element from R q , under the DDH-RLWE assumption. We construct a DDH-RLWE distinguisher B for Game 0 and Game 1 which takes as input (s, X = s • x + e x , Y = s • y + e y , Z) where x, y, e x , e y are from χ and aims at distinguishing whether Z = X • y + e z or Z is random in R q . At the beginning of the game, B chooses θ 1 and θ 2 from χ and defines
When the challenge bit b is chosen, B generates pk 1-b by choosing x 1-b,1 , x 1-b,2 , y 1-b,1 , y 1-b,2 , e 1-b,1 , e 1-b,2 , e 1-b,1 , e 1-b,2 ← χ and setting X 1-b,i = s • x 1-b,i + e 1-b,i , for i ∈ {1, 2}. For pk b , B chooses | 27,459 | [
"1003091"
] | [
"302604"
] |
01480196 | en | [
"shs",
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480196/file/978-3-642-36818-9_41_Chapter.pdf | Xingxing Xie
email: [email protected]
Hua Ma
email: [email protected]
Jin Li
Xiaofeng Chen
email: [email protected]
New Ciphertext-Policy Attribute-Based Access Control with Efficient Revocation ⋆
Keywords: Attribute-based encryption, revocation, outsourcing, re-encryption
Attribute-Based Encryption (ABE) is one of the promising cryptographic primitives for fine-grained access control of shared outsourced data in cloud computing. However, before ABE can be deployed in data outsourcing systems, it has to provide efficient enforcement of authorization policies and policy updates. However, in order to tackle this issue, efficient and secure attribute and user revocation should be supported in original ABE scheme, which is still a challenge in existing work. In this paper, we propose a new ciphertext-policy ABE (CP-ABE) construction with efficient attribute and user revocation. Besides, an efficient access control mechanism is given based on the CP-ABE construction with an outsourcing computation service provider.
Introduction
As a relatively new encryption technology, attribute-based encryption(ABE) has attracted lots of attention because ABE enables efficient one-to-many broadcast encryption and fine-grained access control system. Access control is one of the most common and versatile mechanisms used for information systems security enforcement. An access control model formally describes how to decide whether an access request should be permitted or repudiated. Particularly, in the outsourcing environment, designing an access control will introduce many challenges.
However, the user and attribute revocation is still a challenge in existing ABE schemes. Many schemes [START_REF] Hur | Attribute-based Access Control with Efficient Revocation in Data Outsourcing System[END_REF][START_REF] Boldyreva | Identity-Based Encryption with Efficient Revocation[END_REF][START_REF] Liang | Ciphertext Policy Attribute Based Encryption with Efficient Revocation[END_REF] are proposed to cope with attribute-based access control with efficient revocation. The most remarkable is the scheme proposed by J.Hur and D.K.Noh, which realizes attribute-based access control with efficient fine-grained revocation in outsourcing. However, in the phase of key update, the data service manager will perform heavy computation at every time of update, which could be a bottleneck for the data service manager. Moreover, in the outsourcing environment, external service provider [START_REF] Wang | A New Efficient Verifiable Fuzzy Keyword Search Scheme Journal of wireless mobile Networks[END_REF][START_REF] Zhao | A New Trapdoor-indistinguishable Public Key Encryption with Keyword Search[END_REF] is indispensable. Thus, in this paper, we attempt to solve the problem of efficient revocation in attribute-based data access control using CP-ABE for outsourced data.
Related Work
For the ABE, key-policy ABE (KP-ABE) and ciphertext-policy ABE (CP-ABE) are more prevalent than the others. To take control of users' access right by a data owner, we specify CP-ABE as the data outsourcing architecture. Attribute Revocation Recently, several attribute revocable ABE schemes have been announced [START_REF] Bethencourt | Ciphertext-Policy Attribute-Based Encryption[END_REF][START_REF] Boldyreva | Identity-Based Encryption with Efficient Revocation[END_REF][START_REF] Pirretti | Secure Attribute-Based Systems[END_REF]. Undoubtedly, these approaches have two main problems, which consists of security degradation in terms of the backward and forward security [START_REF] Hur | Attribute-based Access Control with Efficient Revocation in Data Outsourcing System[END_REF][START_REF] Rafaeli | A Survey of Key Management for Secure Group Communication[END_REF]. In the previous schemes, the key authority periodically announce a key update, that will lead to a bottleneck for the key authority. Two CP-ABE schemes with immediate attribute revocation with the help of semihonest service provider are proposed in [START_REF] Ibraimi | Mediated Ciphertext-Policy Attribute-Based Encryption and Its Application[END_REF][START_REF] Yu | Attribute Based Data Sharing with Attribute Revocation[END_REF]. However, achieving fine-grained user access control failed. Junbeom et al. [START_REF] Hur | Attribute-based Access Control with Efficient Revocation in Data Outsourcing System[END_REF] proposed a CP-ABE scheme with fine-grained attribute revocation with the help of the honest-but-curious proxy deployed in the data service provider. User Revocation In [START_REF] Ostrovsky | Attribute-Based Encryption with Non-Monotonoc Access Structures[END_REF], a fine-grained user-level revocation is proposed using ABE that supports negative clause. In the previous schemes [START_REF] Ostrovsky | Attribute-Based Encryption with Non-Monotonoc Access Structures[END_REF][START_REF] Liang | Ciphertext Policy Attribute Based Encryption with Efficient Revocation[END_REF], a user loses all the access rights to the data when he is revoked from a single attribute group. Attrapadung and Imai [START_REF] Attrapadung | Conjunctive Broadcast and Attribute-Based Encryption[END_REF] suggested another user-revocable ABE schemes, in which the data owner should take full control of all the membership lists that leads to be not applied in the outsourcing environments.
Our Contribution
In this study, aiming at reducing the overhead computation at data service manager, we propose an ciphertext policy attribute-based access control with efficient revocation. This construction is based on a CP-ABE construction with efficient user and attribute revocation. Compared with [START_REF] Hur | Attribute-based Access Control with Efficient Revocation in Data Outsourcing System[END_REF], in our proposed construction, in the phase of key update, the computation operated by the data service manager will reduce by half. In Table 1 we summarize the comparisons between our proposed scheme and [START_REF] Hur | Attribute-based Access Control with Efficient Revocation in Data Outsourcing System[END_REF] in terms of the computations in the phase of key update. Furthermore, we formally show the security proof based on security requirement in the access control system.
2 Systems and Models
System Description and Assumptions
There are four entities involved in our attribute-based access control system:
-Trusted authority. It is the party that is fully trusted by all entities participating in the data outsourcing system. -Data owner. It is a client who owns data and encrypts the outsourced data.
-User. It is an entity who would like to access the cryptographic data.
-Service provider. It is an entity that provides data outsourcing service. The data servers are responsible for storing the outsourced data. Access control from outside users is executed by the data service manager, which is assumed to be honest-but-curious.
Threat Model and Security Requirements
-Data confidentiality. It is not allowed to access the plaintext if a user's attributes do not satisfy the access policy. In addition, unauthorized data service manager should be prevented from accessing the plaintext of the encrypted data that it stores. -Collusion-resistance. Even if multiple users collaborate, they are unable to decrypt encrypted data by combining their attribute keys. -Backward and forward secrecy. In our context, backward secrecy means that if a new key is distributed for the group when a new member joins, he is not able to decrypt previous messages before the new member holds the attribute. On the other hand, forward secrecy means that a revoked or expelled group member will not be able to continue accessing the plaintext of the subsequent data exchanged (if it keeps receiving the messages), when the other valid attributes that he holds do not satisfy the access policy.
3 Preliminaries and Definitions
Bilinear Pairings
Let G and G T be two cyclic group of prime order p. The Discrete Logarithm Problem on both G and G T are hard. A bilinear map e is a map function e: G × G → G T with the following properties:
1. Bilinearity: For all A, B ∈ G, and a, b ∈ Z * p , e(A a , B b ) = e(A, B) ab . 2. Non-degeneracy: e(g, g) = 1, where g is the generator of G. 3. Computability: There exsits an efficient algorithm to compute the pairing.
Decisional Bilinear Diffie-Hellman Exponent (BDHE)
Assumption [START_REF] Waters | Ciphertext-Policy Attribute-Based Encryption: An Expressive, Effcient, and Provably Secure Realization[END_REF] The decisional BDHE problem is to compute e(g, g) a q+1 s ∈ G T , given a generator g of G and elements
-→ y = (g 1 , • • • , g q , g q+1 , • • • , g 2q , g s ) for a, s ∈ Z * p . Let g i denote g a i .
An algorithm A has advantage ǫ(κ) in solving the decisional BDHE problem for a bilinear map group p, G, G T , e , where κ is the security parameter, if
| P r[B( -→ y , g, T = e(g, g) a q+1 s ) = 0] -P r[B( -→ y , T = R) = 0] |≥ ǫ(κ).
p, G, G T , e is deemed to satisfy the decisional BDHE assumption. when for every polynomial-time algorithm (in the security parameter κ) to solve the decisional BDHE problem on p, G, G T , e , the advantage ǫ(κ) is a negligible function.
System Definition and Our Basic Construction
Let U = {u 1 , • • • , u n }
be the whole of users. Let L = {1, • • • , p} be the universe of attributes that defines, classifies the user in the system. Let G i ⊂ U be a set of users that hold the attribute i. In our scheme, KEK tree will be used to re-encrypt the ciphertext encrypted by the owner, which is constructed by the data service manager as in Fig. 1. Now, some basic properties of the KEK tree will be presented as [START_REF] Hur | Attribute-based Access Control with Efficient Revocation in Data Outsourcing System[END_REF].
Let G = {G 1 , • • • , G p }
¡ ¢ ¡£ ¡¤ ¡ ¥ ¡ ¡ ¦ ¡ § ¡ ¡© ¡ ¦ § ¨¢ © £ ¡ ¡¦ ¡ § ¡¨¡¢
Fig. 1. KEK tree attribute group key distribution
Our Construction
Let e : G × G -→ G T be a bilinear map of prime order p with the generator g.
A security parameter, κ, will decide the size of the groups. We will additionally employ a hash function H : {0, 1} * -→ G that we will model as a random oracle.
System Setup and Key Generation The trusted authority (TA) first runs Setup algorithm by choosing a bilinear map : G × G → G T of prime order δ with a generator g. Then, TA chooses two random α, a ∈ Z p . The public parameters are P K = {G, g, h = g a , e(g, g) α }. The master key is M K = {g α }, which is only known by the TA. After executing the Setup algorithm producing PK and MK, each user in U needs to register with the TA, who verifies the user's attributes and issues proper private keys for the user. Running KeyGen(M K, S, U ), the TA inputs a set of users U ⊆ U and attributes S ⊆ L, and outputs a set of private key components corresponding to each attribute j ∈ S. The key generation algorithm is presented as follows:
1. Choose a random r ∈ Z * p , which is unique to each user. 2. Compute the following secret value to the user u ∈ U as:
SK = (K = g α g ar , L = g r , ∀j ∈ S : D j = H(j) r )
After implementing above operations, TA sends the attribute groups G j [START_REF] Hur | Attribute-based Access Control with Efficient Revocation in Data Outsourcing System[END_REF] for each j ∈ S to the data service manager. KEK Generation After obtaining the attribute groups G j for each j ∈ S from the TA, the data service manager runs KEKGen(U ) and generates KEKs for users in U. Firstly, the data service manager sets a binary KEK tree for the unniverse of users U just like that described above. The KEK tree is responsible for distributing the attribute group keys to users in U ⊆ U. For instance, u 3 stores P K 3 = {KEK 10 , KEK 5 , KEK 2 , KEK 1 } as its path keys in Fig. 2.
Then, in the data re-encryption phase, the data service manager will encrypt the attribute group keys by no means the path keys, i.e. KEKs. The method of the key assignment is that keys are assigned randomly and independently from each other, which is information theoretic.
Data Encryption To encrypt the data M , a data user needs to specify a policy tree T over the universe of attributes L. Running Encrypt(P K, M, T ), the data M is enforced attribute-based access control. The policy tree T is defined as follows.
For each node x in the tree T , the algorithm chooses a polynomial q x , which is chosen in a top-down manner, starting from the root node R and its degree d x is one less than the threshold value k x of the node, that is, d x = k x -1. For the root node R, it randomly chooses an s ∈ Z * p and sets q R (0) = s. Except the root node R, it sets q x (0) = q p(x) (index(x)) and chooses d x other points randomly to completely define q x for any other node x. Let Y be the set of leaf nodes in T . The ciphertext is then constructed by giving the policy tree T and computing
CT = (T , C = M e(g, g) αs , C = g s , ∀y ∈ Y : C y = g aqx(0) • H(y) -s )
After constructing CT , the data owner outsources it to the service provider securely.
Data Re-Encryption On receiving the ciphertext CT , the data service manager re-encrypts CT using a set of the membership information for each attribute group G ⊆ G. The re-encryption algorithm progresses as follows:
1. For all G y ∈ G, chooses a random K y ∈ Z * p and re-encrypts CT as follows:
CT ′ = (T , C = M e(g, g) αs , C = g s , ∀y ∈ Y : C y = (g aqx(0) • H(y) -s ) Ky )
2. After re-encrypting CT , the data service manager needs to employ a method to deliver the attribute group keys to valid users. The method we used is a symmetric encryption of a message M under a key K, in other words, E K : {0, 1} k -→ {0, 1} k , as follow:
Hdr = (∀y ∈ Y : {E_K(K_y)}_{K ∈ KEK(G_y)})
After all the above operations, the data service manager responds with (Hdr, CT′) to any user sending a data request.
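The sketch below assembles such a header for two hypothetical attribute groups. The XOR-with-hash cipher stands in for the unspecified symmetric encryption E_K and is for illustration only; the attribute names and KEK identifiers are placeholders.

```python
import hashlib
import os

def E(K, m):
    """Toy length-preserving cipher E_K (XOR with a SHA-256-derived pad); self-inverse, insecure, illustration only."""
    pad = hashlib.sha256(K + b"pad").digest()[: len(m)]
    return bytes(a ^ b for a, b in zip(m, pad))

# Hypothetical attributes and per-attribute group keys K_y chosen by the data service manager.
attributes = ["dept:research", "role:manager"]
K_y = {y: os.urandom(16) for y in attributes}

# KEK(G_y): the KEKs covering the members of G_y (placeholder keys named after tree nodes).
KEK_of_G = {y: {f"KEK_{i}": os.urandom(16) for i in (4, 6)} for y in attributes}

# Hdr = (forall y in Y : {E_K(K_y)} for K in KEK(G_y))
Hdr = {y: {kid: E(k, K_y[y]) for kid, k in KEK_of_G[y].items()} for y in attributes}

# A member holding KEK_6 on its path recovers K_y, since E is its own inverse here.
recovered = E(KEK_of_G["dept:research"]["KEK_6"], Hdr["dept:research"]["KEK_6"])
assert recovered == K_y["dept:research"]
```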
Data Decryption The data decryption phase consists of the attribute group key decryption from Hdr and the message decryption.
Attribute group key decryption To perform data decryption, a user u_t first decrypts from Hdr the attribute group keys for all attributes in S that the user holds. If the user u_t ∈ G_j, he can decrypt the attribute group key K_j from Hdr using a KEK that is possessed by the user. For example, if G_j = {u_1, u_2, u_5, u_6} in Fig. 2, u_5 can decrypt K_j using the path key KEK_6 ∈ PK_5. Next, u_t updates its secret key as follows:
SK = (K = g^α g^{ar}, ∀j ∈ S : D_j = H(j)^r, L = (g^r)^{1/K_j})
Message decryption Once the user has updated its secret key, he runs the Decrypt(CT′, SK, K_S) algorithm as follows. The user runs a recursive function DecryptNode(CT′, SK, R), where R is the root of T. The recursion is the same as defined in [START_REF] Bethencourt | Ciphertext-Policy Attribute-Based Encryption[END_REF]. If x is a leaf node, then DecryptNode(CT′, SK, x) proceeds as follows when x ∈ S and u_t ∈ G_x:
DecryptNode(CT′, SK, x) = e(C_x, L) · e(C, D_x) = e((H(x)^{-s} · g^{a·q_x(0)})^{K_x}, (g^r)^{1/K_x}) · e(g^s, H(x)^r) = e(g, g)^{r·a·q_x(0)}
Now we consider the recursion when x is a non-leaf node, which proceeds as follows: for every child z of x, the algorithm calls DecryptNode(CT′, SK, z) and stores the output as F_z. Let S_x be an arbitrary k_x-sized set of child nodes z; the algorithm then computes:
F_x = ∏_{z∈S_x} F_z^{Δ_{i,S′_x}(0)}, where i = index(z) and S′_x = {index(z) : z ∈ S_x}
    = ∏_{z∈S_x} (e(g, g)^{r·a·q_z(0)})^{Δ_{i,S′_x}(0)}
    = ∏_{z∈S_x} (e(g, g)^{r·a·q_{p(z)}(index(z))})^{Δ_{i,S′_x}(0)}
    = ∏_{z∈S_x} (e(g, g)^{r·a·q_x(i)})^{Δ_{i,S′_x}(0)}
    = e(g, g)^{r·a·q_x(0)}
Finally, if x is the root node R of the access tree T, the recursive algorithm returns A = DecryptNode(CT′, SK, R) = e(g, g)^{ras}. The algorithm then decrypts the ciphertext by computing C̃/(e(C, K)/A) = C̃/(e(g^s, g^α g^{ra})/e(g, g)^{ras}) = M.
Key Update
In this section, we consider the case where a user wants to leave or join several attribute groups and therefore sends the changed attributes to the TA. Without loss of generality, assume there is a membership change in G_i, which is equivalent to a user coming to hold or drop the attribute i at some instant. The update procedure then proceeds as follows:
1. The data service manager selects a random s′ ∈ Z*_p and a new K′_i ∈ Z*_p, and re-encrypts the ciphertext CT′ using PK as
CT′ = (T, C̃ = M · e(g, g)^{α(s+s′)}, C = g^{(s+s′)}, C_i = (g^{a(q_i(0)+s′)} · H(i)^{-s})^{K′_i}, ∀y ∈ Y \ {i} : C_y = (g^{a(q_y(0)+s′)} · H(y)^{-s})^{K_y}).
2. After updating the ciphertext, the data service manager selects new minimum cover sets for the changed G_i and generates a new header message as follows:
Hdr = ({E_K(K′_i)}_{K ∈ KEK(G_i)}, ∀y ∈ Y \ {i} : {E_K(K_y)}_{K ∈ KEK(G_y)}).
Efficiency Analysis
In this section, we compare the efficiency of the proposed scheme with the scheme of [START_REF] Hur | Attribute-based Access Control with Efficient Revocation in Data Outsourcing System[END_REF]. Table 1 shows the comparison between our scheme and the scheme of [START_REF] Hur | Attribute-based Access Control with Efficient Revocation in Data Outsourcing System[END_REF] in terms of the computation required in the key update phase. In our scheme, the number of exponentiations is reduced to ω + 3, whereas in the scheme of [START_REF] Hur | Attribute-based Access Control with Efficient Revocation in Data Outsourcing System[END_REF] it is 2ω + 3. Thus, our scheme considerably improves the computational efficiency.
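Reading Table 1 numerically (with ω as it is used there), the gap between 2ω + 3 and ω + 3 approaches a factor of two as ω grows; a quick check:

```python
# Exponentiation counts at key update, as given in Table 1.
for omega in (5, 20, 100):
    ours, theirs = omega + 3, 2 * omega + 3
    print(f"omega={omega:3d}: scheme one={theirs:4d}, ours={ours:4d}, saving={(theirs - ours) / theirs:.0%}")
```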
Security
In this section, the security of the proposed scheme is given based on the security requirements discussed in Section 2.
Theorem 1. Collusion Resistance. The proposed scheme is secure against collusion attacks.
Proof. In CP-ABE, the sharing of the secret s is embedded into the ciphertext, and to decrypt a ciphertext, a user or a colluding attacker needs to recover e(g, g)^{αs}.
To recover e(g, g)^{αs}, the attacker must pair C_x from the ciphertext and D_x from another colluding user's private key for an attribute x. However, every user's private key is uniquely generated with a random r. Thus, even if the colluding users are all valid, the attacker cannot recover e(g, g)^{αs}.
Theorem 2. Data Confidentiality. The proposed scheme prevents unauthorized users and the curious service provider from acquiring the privacy of the outsourced data.
Proof. Firstly, if the attributes held by a user do not satisfy the policy tree T, the user cannot recover the value e(g, g)^{ras} and hence cannot decipher the ciphertext. Secondly, when a user is revoked from some attribute groups that satisfy the access policy, he loses the updated attribute group keys. If the user would like to decrypt a node x for the corresponding attribute, he needs to pair C_x from the ciphertext and L, which is blinded by K_x, from his private key. As the user cannot obtain the updated attribute group key K_x, he cannot recover the value e(g, g)^{r·a·q_x(0)}. Finally, since we assume that the service provider is honest-but-curious, the service provider cannot be totally trusted by users. The service provider has access to the ciphertext and each attribute group key. However, none of the private keys for the sets of attributes are given to the data service manager. Thus, the service provider cannot decrypt the ciphertext.
Theorem 3. Backward and Forward Secrecy. For backward and forward secrecy of the outsourced data, the proposed scheme is secure against any newly joining and revoked users, respectively.
Proof. When a user comes to join some attribute groups, the corresponding attribute group keys are updated and delivered to the user securely. Even if the user has stored previously exchanged ciphertexts and his attributes satisfy the access policy, he cannot decrypt those previous ciphertexts. That is because, even though he could succeed in computing e(g, g)^{ra(s+s′)} from the updated ciphertext, this does not help him recover the value e(g, g)^{αs} needed for the previous one. Furthermore, when a user comes to leave some attribute groups, the corresponding attribute group keys are updated and not delivered to the user. As the user cannot obtain the updated attribute group keys, he cannot decrypt any nodes corresponding to the updated attributes. Moreover, even if the user has stored e(g, g)^{αs}, he cannot decrypt the subsequent value e(g, g)^{α(s+s′)}, because the random s′ is not available to him.
Conclusion
In this paper, aiming at improving the efficiency of revocation so that CP-ABE can be widely deployed for access control, we introduced a new CP-ABE construction. In this construction, the overall computation required for key update becomes smaller. Furthermore, a security analysis is given based on the access control security requirements.
4 Ciphertext Policy Attribute-Based Access Control with Efficient Revocation
4.1 KEK Construction
Ciphertext Policy Attribute-Based Access Control with User Revocation.
Definition 1. A CP-ABE with user revocation capability scheme consists of six algorithms:
-Setup: Taking a security parameter k, this algorithm outputs a public key PK and a master secret key MK.
-KeyGen(MK, S, U): Taking the MK, a set of attributes S ⊆ L and users U ⊆ U, this algorithm outputs a set of private attribute keys SK for each user.
-KEKGen(U): Taking a set of users U ⊆ U, this algorithm outputs KEKs for each user, which will be used to encrypt attribute group keys.
-Encrypt(PK, M, T): Let G_i be the attribute group of the users holding attribute i, and let G be the whole of such attribute groups. Let K_i be the attribute group key that is possessed by the users who own the attribute i. Taking the PK, a message M and an access structure T, this algorithm outputs the ciphertext CT.
-Re-Encrypt(CT, G): Taking the ciphertext CT and the attribute groups G, this algorithm outputs the re-encrypted ciphertext CT′.
-Decrypt(CT′, SK, K_S): The decryption algorithm takes as input the ciphertext CT′, a private key SK, and a set of attribute group keys K_S. The decryption can then be done.
Table 1. Result Comparison

              Number of Exponentiations of Key Update
Scheme one    2ω + 3
Our scheme    ω + 3
Acknowledgements
We are grateful to the anonymous referees for their invaluable suggestions. This work is supported by the National Natural Science Foundation of China (Nos. 60970144, 61272455 and 61100224), and China 111 Project (No. B08038).
"1003096",
"1003097",
"1003098",
"993512"
] | [
"469153",
"469153",
"440588",
"469153"
] |
01480197 | en | ["shs", "info"] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480197/file/978-3-642-36818-9_42_Chapter.pdf | Yinghui Zhang
email: [email protected]
Hui Li
Xiaoqing Li
Hui Zhu
Provably Secure and Subliminal-Free Variant of Schnorr Signature
Keywords: Digital signature, Information hiding, Subliminal channel, Subliminal-freeness, Provable security
Subliminal channels present a severe challenge to information security. Currently, subliminal channels still exist in Schnorr signature. In this paper, we propose a subliminal-free variant of Schnorr signature. In the proposed scheme, an honest-but-curious warden is introduced to help the signer to generate a signature on a given message, but it is disallowed to sign messages independently. Hence, the signing rights of the signer is guaranteed. In particular, our scheme can completely close the subliminal channels existing in the random session keys of Schnorr signature scheme under the intractability assumption of the discrete logarithm problem. Also, the proposed scheme is proved to be existentially unforgeable under the computational Diffie-Hellman assumption in the random oracle model.
Introduction
The notion of subliminal channels was introduced by Simmons [START_REF] Simmons | The prisoner' problem and the subliminal channel[END_REF]. He proposed a prison model in which authenticated messages are transmitted between two prisoners and known to a warden. The term of "subliminal" means that the sender can hide a message in the authentication scheme, and the warden cannot detect or read the hidden message. Simmons discovered that a secret message can be hidden inside the authentication scheme and he called this "hidden" communication channel as the subliminal channel. The "hidden" information is known as subliminal information.
As a main part of information hiding techniques [START_REF] Gupta | Cryptography based digital image watermarking algorithm to increase security of watermark data[END_REF][START_REF] Danezis | Differentially private billing with rebates[END_REF][START_REF] Claycomb | Chronological examination of insider threat sabotage: preliminary observations[END_REF][START_REF] Choi | Detection of insider attacks to the web server[END_REF][START_REF] Lee | Extraction of platform-unique information as an identifier[END_REF], subliminal channels have been widely studied and applied [START_REF] Chen | A fair online payment system for digital content via subliminal channel[END_REF][START_REF] Zhou | An anonymous threshold subliminal channel scheme based on elliptic curves cryptosystem[END_REF][START_REF] Zhang | Exploring signature schemes with subliminal channel[END_REF][START_REF] Hwang | Subliminal channels in the identity-based threshold ring signature[END_REF][START_REF] Dai-Rui Lin | A digital signature with multiple subliminal channels and its applications[END_REF][START_REF] Troncoso | Pripayd: Privacyfriendly pay-as-you-drive insurance[END_REF]. However, they also present a severe challenge to information security. To the best of our knowledge, subliminal channels still exist in Schnorr signature [START_REF] Schnorr | Efficient identification and signatures for smart cards[END_REF].
Our Contribution. In this paper, we propose a subliminal-free variant of Schnorr signature scheme, in which an honest-but-curious warden is introduced to help the signer to generate a signature on a given message, but it is disallowed to sign messages independently. In addition, the signer cannot control outputs of the signature algorithm. To be specific, the sender has to cooperate with the warden to sign a given message. Particularly, our scheme is provably secure and can completely close the subliminal channels existing in the random session keys in Schnorr signature scheme.
Related Work. Plenty of researches have been done on both the construction of subliminal channels and the design of subliminal-free protocols [START_REF] Chen | A fair online payment system for digital content via subliminal channel[END_REF][START_REF] Zhou | An anonymous threshold subliminal channel scheme based on elliptic curves cryptosystem[END_REF][START_REF] Zhang | Exploring signature schemes with subliminal channel[END_REF][START_REF] Hwang | Subliminal channels in the identity-based threshold ring signature[END_REF][START_REF] Dai-Rui Lin | A digital signature with multiple subliminal channels and its applications[END_REF][START_REF] Desmedt | Simmons' protocol is not free of subliminal channels[END_REF][START_REF] Simmons | Subliminal communication is easy using the dsa[END_REF][START_REF] Xiangjun | Construction of subliminal channel in id-based signatures[END_REF][START_REF] Xie | A security threshold subliminal channel based on elliptic curve cryptosystem[END_REF]. Since the introduction of subliminal channels, Simmons [START_REF] Simmons | The subliminal channels of the us digital signature algorithm (DSA)[END_REF] also presented several narrow-band subliminal channels that do not require the receiver to share the sender's secret key. Subsequently, Simmons [START_REF] Simmons | Subliminal communication is easy using the dsa[END_REF] proposed a broad-band subliminal channel that requires the receiver to share the sender's secret key. For the purpose of information security, Simmons then proposed a protocol [START_REF] Simmons | An introduction to the mathematics of trust in security protocols[END_REF] to close the subliminal channels in the DSA digital signature scheme. However, Desmedt [START_REF] Desmedt | Simmons' protocol is not free of subliminal channels[END_REF] showed that the subliminal channels in the DSA signature scheme cannot be completely closed using the protocol in [START_REF] Simmons | An introduction to the mathematics of trust in security protocols[END_REF]. Accordingly, Simmons adopted the cut-and-choose method to reduce the capacity of the subliminal channels in the DSA digital signature algorithm [START_REF] Simmons | Results concerning the bandwidth of subliminal channels. Selected Areas in Communications[END_REF]. However, the complete subliminal-freeness still has not been realized. To be specific, the computation and communication costs significantly increase with the reduction of the subliminal capacity. On the other hand, subliminal channels in the NTRU cryptosystem and the corresponding subliminalfree methods [START_REF] Qingjun | Subliminal channels in the NTRU and the subliminalfree methods[END_REF] were proposed. Also, a subliminal channel based on the elliptic curve cryptosystem was constructed [START_REF] Zhou | An anonymous threshold subliminal channel scheme based on elliptic curves cryptosystem[END_REF][START_REF] Xie | A security threshold subliminal channel based on elliptic curve cryptosystem[END_REF]. 
As far as the authors know, the latest research is mainly concentrated on the construction [START_REF] Hwang | Subliminal channels in the identity-based threshold ring signature[END_REF][START_REF] Dai-Rui Lin | A digital signature with multiple subliminal channels and its applications[END_REF][START_REF] Xiangjun | Construction of subliminal channel in id-based signatures[END_REF] of subliminal channels and their applications [START_REF] Chen | A fair online payment system for digital content via subliminal channel[END_REF][START_REF] Troncoso | Pripayd: Privacyfriendly pay-as-you-drive insurance[END_REF][START_REF] Sun | Improvement of a proxy multisignature scheme without random oracles[END_REF][START_REF] Jadhav | Effective detection mechanism for tcp based hybrid covert channels in secure communication[END_REF].
Outline of the Paper. The rest of this paper is organized as follows. In Section 2, we introduce some notations and complexity assumptions, and then discuss subliminal channels in probabilistic digital signature. In Section 3, we lay out the abstract subliminal-free signature specification and give the formal security model. The proposed provably secure and subliminal-free variant of Schnorr signature scheme is described in Section 4. Some security considerations are discussed in Section 5. Finally, we conclude the work in Section 6.
Preliminaries
Notations
Throughout this paper, we use the notations listed in Table 1 to present our construction. In particular: s_1 || s_2 denotes the concatenation of bit strings s_1 and s_2; gcd(a, b) denotes the greatest common divisor of two integers a and b; x^{-1} denotes the modular inverse of x modulo q, such that x^{-1}·x = 1 mod q, where x and q are relatively prime, i.e., gcd(x, q) = 1; and G_{g,p} denotes a cyclic group with order q and a generator g, where q is a large prime factor of p − 1 and p is a large prime. That is, G_{g,p} = {g^0, g^1, ..., g^{q−1}} = ⟨g⟩, which is a subgroup of the multiplicative group GF*(p) of the finite field GF(p).
Complexity Assumptions
Discrete Logarithm Problem (DLP): Let G be a group. Given two elements g and h, find an integer x such that h = g^x, whenever such an integer exists.
Intractability Assumption of DLP:
In group G, it is computationally infeasible to determine x from g and h.
Computation Diffie-Hellman (CDH) Problem:
Given a 3-tuple (g, g^a, g^b) ∈ G^3, compute g^{ab} ∈ G. An algorithm A is said to have advantage ǫ in solving the CDH problem in G if Pr[A(g, g^a, g^b) = g^{ab}] ≥ ǫ,
where the probability is over the random choice of g in G, the random choice of a, b in Z * q , and the random bits used by A. CDH Assumption: We say that the (t, ǫ)-CDH assumption holds in G if no t-time algorithm has advantage at least ǫ in solving the CDH problem in G.
Subliminal Channels in Probabilistic Digital Signature
Probabilistic digital signature [START_REF] Yanai | A certificateless ordered sequential aggregate signature scheme secure against super adverssaries[END_REF] can serve as the host of subliminal channels. In fact, the subliminal sender can embed some information into a subliminal channel by controlling the generation of the session keys. After verifying a given signature, the subliminal receiver uses an extraction algorithm to extract the embedded information. Note that the extraction algorithm is only possessed by the authorized subliminal receiver. Hence, anyone else cannot learn whether there exists subliminal information in the signature [START_REF] Simmons | Subliminal channels: past and present[END_REF], not to mention extraction of the embedded information.
In a probabilistic digital signature scheme, the session key can be chosen randomly, and hence one message may correspond to several signatures. More specifically, if different session keys are used to sign the same message, different digital signatures can be generated. This means that redundant information exists in probabilistic digital signature schemes, which creates a condition for subliminal channels. The subliminal receiver can use these different digital signatures to obtain the subliminal information whose existence can hardly be learnt by the others.
In particular, there exists subliminal channels in a typical probabilistic digital signature, namely Schnorr Signature [START_REF] Schnorr | Efficient identification and signatures for smart cards[END_REF].
3 Definition and Security Model
Specification of Subliminal-Free Signature
A subliminal-free signature scheme consists of three polynomial-time algorithms Setup, KeyGen, an interactive protocol Subliminal-Free Sign, and Verify below. Based on a subliminal-free signature scheme, a sender A performs an interactive protocol with a warden W . And, W generates the final signature σ and transmits it to a receiver B. Note that W is honest-but-curious. That is, W will honestly execute the tasks assigned by the related algorithm. However, it would like to learn secret information as much as possible.
-Setup: It takes as input a security parameter λ and outputs system public parameters Params. -KeyGen: It takes as input a security parameter λ and the system public parameters Params, and returns a signing-verification key pair (sk, pk). -Subliminal-Free Sign: An interactive protocol between the sender and the warden. Given a message M, a signature σ is returned.
-Verify: It takes as input system public parameters Params, a public key pk and a signature message (M, σ). It returns 1 if and only if σ is a valid signature on message M.
Security Model
In the proposed scheme, the warden participates in the generation of a signature, hence the ability of the warden to forge a signature is enhanced. We regard the warden as the adversary. The formal definition of existential unforgeability against adaptively chosen messages attacks (EUF-CMA) is based on the following EUF-CMA game involving a simulator S and a forger F:
1. Setup: S takes as input a security parameter λ, and runs the Setup algorithm. It sends the public parameters to F. 2. Query: In addition to hash queries, F issues a polynomially bounded number of queries to the following oracles:
-Key generation oracle O KeyGen : Upon receiving a key generation request, S returns a signing key. -Signing oracle O Sign : F submits a message M , and S gives F a signature σ. 3. Forgery: Finally, F attempts to output a valid forgery (M, σ) on some new message M , i.e., a message on which F has not requested a signature. F wins the game if σ is valid.
The advantage of F in the EUF-CMA game, denoted by Adv(F), is defined as the probability that it wins. Definition 1. (Existential Unforgeability) A probabilistic algorithm F is said to (t, q H , q S , ǫ)-break a subliminal-free signature scheme if F achieves the advantage Adv(F) ≥ ǫ, when running in at most t steps, making at most q H adaptive queries to the hash function oracle H, and requesting signatures on at most q S adaptively chosen messages. A subliminalfree signature scheme is said to be (t, q H , q S , ǫ)-secure if no forger can (t, q H , q S , ǫ)-break it.
4 Subliminal-Free Variant of Schnorr Signature
Construction
-Setup: Let (p, q, g) be a discrete logarithm triple associated with the group G_{g,p}. Let A be the sender of a message M ∈ {0, 1}*, B be the receiver of M and W be the warden. It chooses t ∈_R (1, q), returns t to W and computes T = g^t mod p. Also, let H_0, H be two hash functions, where H_0 : {0, 1}* → G_{g,p} and H : {0, 1}* × G_{g,p} → (1, q). Then, the public parameters are Params = (p, q, g, H_0, H, T).
-KeyGen: It returns x ∈ R (1, q) as a secret key and the corresponding public key is y = T x mod p.
-Subliminal-Free Sign:
1. W chooses two secret large integers c and d satisfying cd = 1 mod q. W also chooses k_w ∈_R (1, q), so that gcd(k_w, q) = 1. Then W computes α = g^{k_w c} mod p and sends α to A.
2. A chooses k_a ∈_R (1, q), so that gcd(k_a, q) = 1. Then A computes h_0 = H_0(M), β = α^{k_a h_0} mod p and sends (h_0, β) to W.
3. W computes r = β^d = α^{k_a h_0 d} = g^{k_a k_w h_0 cd} = g^{k_a k_w h_0} mod p and v_1 = y^{k_w^{-1}} mod p, and sends (r, v_1) to A.
4. A computes e = H(M || r), f = e^x mod p and v_2 = g^{k_a h_0} mod p. Then A prepares a non-interactive zero-knowledge proof that DL_e(f) = DL_T(y) and sends (e, f, v_2) to W.
5. W computes u = k_w v_1^{-1} f^{-1} v_2^{-1} mod p and θ = u^{-1} t mod p, and sends θ to A.
6. A computes s′ and then sends (M, s′) to W:
s′ = k_a h_0 + θ · (v_1^{-1} f^{-1} v_2^{-1}) · xe
   = k_a h_0 + (u^{-1} t) · (v_1^{-1} f^{-1} v_2^{-1}) · xe
   = k_a h_0 + (v_2 f v_1 k_w^{-1} t) · (v_1^{-1} f^{-1} v_2^{-1}) · xe
   = k_a H_0(M) + k_w^{-1} xte mod q.
7. Sign: Upon receiving (M, s′), W checks whether h_0 = H_0(M) and e = H(M || r). If not, W terminates the protocol; otherwise W computes
s = k_w s′ = k_a k_w H_0(M) + k_w k_w^{-1} xte = k_a k_w H_0(M) + xte mod q.
Then W sends the signature message (M, (e, s)) to B.
-Verify: After receiving the signature message (M, (e, s)), B computes r′ = g^s y^{-e} mod p and e′ = H(M || r′). B returns 1 if and only if e = e′.
Consistency of Our Construction
On one hand, if the signature message (M, (e, s)) is valid, we have s = k_a k_w H_0(M) + xte mod q. Thus,
r′ = g^s y^{-e} = g^{k_a k_w H_0(M) + xte} y^{-e} = g^{k_a k_w H_0(M)} T^{xe} y^{-e} = g^{k_a k_w H_0(M)} y^e y^{-e} = g^{k_a k_w H_0(M)} = r mod p,
and then e′ = H(M || r′) = H(M || r) = e.
On the other hand, if e = e′, the signature message (M, (e, s)) is valid. Otherwise, we would have s ≠ k_a k_w H_0(M) + xte mod q and then r′ ≠ r. However, e′ = H(M || r′) = H(M || r) = e.
Thus, a collision of the hash function H is obtained, which is infeasible for a secure hash function.
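The consistency argument can also be checked mechanically. The sketch below skips the interactive blinding of steps 1-7 and directly forms a signature of the final shape s = k_a k_w H_0(M) + xte mod q, then runs Verify. The tiny parameters and the mapping of H_0 to an exponent (which is how h_0 is used above) are our own assumptions for the purpose of the check.

```python
import hashlib
import random

# Toy parameters (p, q, g) with q | p - 1; far too small for security, just to exercise the algebra.
p, q = 2039, 1019            # 2039 = 2*1019 + 1, both prime
g = pow(2, (p - 1) // q, p)  # element of order q
rnd = random.SystemRandom()

def H0(msg):                 # H_0 : {0,1}* -> exponent space, used here as an element of (1, q)
    return int.from_bytes(hashlib.sha256(b"H0" + msg).digest(), "big") % (q - 1) + 1

def H(msg, r):               # H : {0,1}* x G -> (1, q)
    return int.from_bytes(hashlib.sha256(msg + str(r).encode()).digest(), "big") % (q - 1) + 1

# Setup / KeyGen
t = rnd.randrange(2, q); T = pow(g, t, p)
x = rnd.randrange(2, q); y = pow(T, x, p)

# What the interaction is designed to produce, written non-interactively:
M = b"example message"
k_a, k_w = rnd.randrange(2, q), rnd.randrange(2, q)
h0 = H0(M)
r = pow(g, k_a * k_w * h0, p)            # r = g^{k_a k_w H_0(M)}
e = H(M, r)
s = (k_a * k_w * h0 + x * t * e) % q     # s = k_a k_w H_0(M) + xte mod q

# Verify: r' = g^s y^{-e}, e' = H(M || r'), accept iff e = e'
r_prime = (pow(g, s, p) * pow(pow(y, e, p), -1, p)) % p
assert r_prime == r and H(M, r_prime) == e
print("signature verifies")
```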
5 Analysis of the Proposed Subliminal-Free Signature Scheme
Existential Unforgeability
Theorem 1. If G g,p is a (t ′ , ǫ ′ )-CDH group, then the proposed scheme is (t, q H 0 , q H , q S , ǫ)-secure against existential forgery on adaptively chosen messages in the random oracle model, where
t ≥ t′ − (q_H + 3.2 q_S) · C_Exp,    (1)
ǫ ≤ ǫ′ + q_S · (q_{H_0} + q_S) · 2^{−l_M} + q_S (q_H + q_S) · 2^{−l_r} + q_H · 2^{−l_q},    (2)
where C_Exp denotes the cost of a modular exponentiation in the group G_{g,p}.
Proof. (sketch) Let F be a forger that (t, q H 0 , q H , q S , ǫ)-breaks our proposed scheme. We construct a "simulator" algorithm S which takes ((p, q, g), (g a , g b )) as inputs and runs F as a subroutine to compute the function DH g,p (g a , g b ) = g ab in t ′ steps with probability ǫ ′ , which satisfy the Equalities (1) and [START_REF] Gupta | Cryptography based digital image watermarking algorithm to increase security of watermark data[END_REF]. S makes the signer's verification key y = g a mod p public, where the signing key a is unknown to S. Aiming to translate F's possible forgery (M, (e, s)) into an answer to the function DH g,p (g a , g b ), S simulates a running of the proposed scheme and answers F's queries. S uses F as a subroutine. Due to space limitation, we don't present the details here.
Subliminal-Freeness
It can be seen from the proposed scheme that the receiver B can only obtain the signature message (M, (e, s)) and temporary value r in addition to the verification public key y, thus it is necessary for the sender A to use e, s or r as a carrier when transmitting subliminal information.
In the following, we demonstrate that none of e, s and r can be controlled by A. On one hand, although the parameters (α, v_1, θ) = (g^{k_w c}, y^{k_w^{-1}}, u^{-1} t) mod p can be obtained by A, the secret exponents c, d and the secret parameters t, u are unknown to him. Thus, A cannot obtain any information about k_w and g^{k_w}. In particular, A knows nothing of k_w and g^{k_w} during the whole signing process, hence the value of s = k_w s′ mod q cannot be controlled by A. On the other hand, although the signer A computes e = H(M || r), nothing of k_w and g^{k_w} is available to him. Thus, the value of r = g^{k_a k_w H_0(M)} mod p cannot be controlled by A, and hence the value of e cannot be controlled either. Note that if the value r generated by the warden W is not used by A in Step 4, W can detect this fact in Step 7 and terminate the protocol. Furthermore, if A attempts to directly compute k_w from g^{k_w}, he has to solve the discrete logarithm problem in the group GF*(p), which is infeasible under the intractability assumption of DLP.
Hence, we realize the complete subliminal-freeness of the subliminal channels existing in the random session keys in Schnorr signature scheme.
Conclusions and Future work
In this paper, a subliminal-free protocol for Schnorr signature scheme is proposed. The proposed protocol completely closes the subliminal channels existing in the random session keys in Schnorr signature scheme. More strictly, it is completely subliminal-free in computational sense, and its security relies on the CDH assumption in the random oracle model. In addition, it is indispensable for the sender and the warden to cooperate with each other to sign a given message, and the warden is honest-but-curious and cannot forge a signature independently.
It would be interesting to construct subliminal-free signature schemes provably secure in the standard model.
Table 1. Meaning of notations in the proposed scheme

Notation     Meaning
s ∈_R S      s is an element randomly chosen from a set S.
l_s          the bit length of the binary representation of s.
s_1 || s_2   the concatenation of bit strings s_1 and s_2.
Acknowledgements
We are grateful to the anonymous referees for their invaluable suggestions. This work is supported by the National Natural Science Foundation of China (No.61272457), the Nature Science Basic Research Plan in Shaanxi Province of China (No.2011JQ8042), and the Fundamental Research Funds for the Central Universities (Nos.K50511010001 and K5051201011). In particular, this work is supported by the Graduate Student Innovation Fund of Xidian University (Research on key security technologies of largescale data sharing in cloud computing).
"1003099"
] | [
"469153",
"469153",
"469153",
"469153"
] |
01480199 | en | ["shs", "info"] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480199/file/978-3-642-36818-9_44_Chapter.pdf | Ruxandra F Olimid
email: [email protected]
On the Security of an Authenticated Group Key Transfer Protocol Based on Secret Sharing
Keywords: group key transfer, secret sharing, attack, cryptanalysis
Group key transfer protocols allow multiple parties to share a common secret key. They rely on a mutually trusted key generation center (KGC) that selects the key and securely distributes it to the authorized participants. Recently, Sun et al. proposed an authenticated group key transfer protocol based on secret sharing that they claim to be secure. We show that this is false: the protocol is susceptible to insider attacks and violates known key security. Finally, we propose a countermeasure that maintains the benefits of the original protocol.
Introduction
Confidentiality represents one of the main goals of secure communication. It assures that the data are only accessible to authorized parties and it is achieved by encryption. In case of symmetric cryptography, the plaintext is encrypted using a secret key that the sender shares with the qualified receiver(s). Under the assumption that the system is secure, an entity that does not own the private key is unable to decrypt and thus the data remain hidden to unauthorized parties.
The necessity of a (session) key establishment phase before the encrypted communication starts is immediate: it allows the authorized parties to share a common secret key that will be used for encryption.
Key establishment protocols divide into key transfer protocols -a mutually trusted key generation center (KGC) selects a key and securely distributes it to the authorized parties -and key agreement protocols -all qualified parties are involved in the establishment of the secret key. The first key transfer protocol was published by Needham and Schroeder in 1978 [START_REF] Needham | Using encryption for authentication in large networks of computers[END_REF], two years after Diffie and Hellman had invented the public key cryptography and the notion of key agreement [START_REF] Diffie | New directions in cryptography[END_REF].
The previously mentioned protocols restrict to the case of two users. As a natural evolution, group (or conference) key establishment protocols appeared a few years later. Ingemarsson et al. introduced the first key transfer protocol that permits establishing a private key between multiple parties [START_REF] Ingemarsson | A conference key distribution system[END_REF]. Their protocol generalizes the Diffie-Hellman key exchange.
A general construction for a key transfer protocol when KGC shares a longterm secret with each participant is straightforward: KGC generates a fresh key and sends its encryption (under the long-term secret) to each authorized party. The qualified users decrypt their corresponding ciphertext and find the session secret key, while the unqualified users cannot decrypt and disclose the session key. KGC performs n encryptions and sends n messages, where n is the number of participants. Therefore, the method becomes inefficient for large groups.
Secret sharing schemes are used in group key transfer protocols to avoid such disadvantages. A secret sharing scheme splits a secret into multiple shares so that only authorized sets of shares may reconstruct the secret. Blakley [START_REF] Blakley | Safeguarding cryptographic keys[END_REF] and Shamir [START_REF] Shamir | How to share a secret[END_REF] independently introduce secret sharing schemes as key management systems. The particular case when all shares are required for reconstruction is called all-or-nothing secret sharing scheme; the particular case when at least k out of n shares are required for reconstruction is called (k,n)-threshold secret sharing scheme.
Various group key establishment protocols based on secret sharing schemes exist in the literature. Blom proposed an efficient key transfer protocol in which every two users share a common private key that remains hidden when less than k users cooperate [START_REF] Blom | An optimal class of symmetric key generation systems[END_REF]. Blundo et al. generalized Blom's protocol by allowing any t users to share a private key, while it remains secure for a coalition of up to k users [START_REF] Blundo | Perfectlysecure key distribution for dynamic conferences[END_REF]. Fiat and Naor improved the construction even more by permitting any subset of users to share a common key in the same conditions [START_REF] Fiat | Broadcast encryption[END_REF]. Pieprzyk and Li gave a couple of group key agreement protocols based on Shamir's secret scheme [START_REF] Li | Conference key agreement from secret sharing[END_REF][START_REF] Pieprzyk | Multiparty key agreement protocols[END_REF]. Recently, Harn and Lin also used Shamir's scheme to construct a key transfer protocol [START_REF] Harn | Authenticated group key transfer protocol based on secret sharing[END_REF], which was proved to be insecure and further adjusted [START_REF] Nam | Cryptanalysis of a group key transfer protocol based on secret sharing[END_REF]. Some other examples from literature include Sáez's protocol [START_REF] Sáez | Generation of key predistribution schemes using secret sharing schemes[END_REF] (based on a family of vector space secret sharing schemes), Hsu et al.'s protocol [START_REF] Hsu | A novel group key transfer protocol[END_REF] (based on linear secret sharing schemes) and Sun et al's group key transfer protocol [START_REF] Sun | An authenticated group key transfer protocol based on secret sharing[END_REF], which we will refer for the rest of this paper. Sun et al. claim that their construction is secure and provides several advantages: each participant stores a single long-term secret for multiple sessions, computes the session key by a simple operation and the protocol works within dynamic groups (i.e. members may leave or join the group). We demonstrate that they are wrong. First, we show that the protocol is susceptible to insider attacks: any qualified group member may recover a session key that he is unauthorized to know. Second, we prove that the protocol violates known key security: any attacker who gains access to one session key may recover any other key. We propose an improved version of Sun et al's group key transfer protocol that stands against both attacks and achieves the benefits claimed in the original work.
The paper is organized as follows. The next section contains the preliminaries. Section 3 describes Sun et al.'s authenticated group key transfer protocol. Section 4 introduces the proposed attacks. In Section 5 we analyze possible countermeasures. Section 6 concludes.
Security Goals
Group key transfer protocols permit multiple users to share a common private key by using pre-established secure communication channels with a trusted KGC, which is responsible to generate and distribute the key. Each user registers to KGC for subscribing to the key distribution service and receives a long-term secret, which he will later use to recover the session keys.
We will briefly describe next the main security goals that a group key transfer protocol must achieve: key freshness, key confidentiality, key authentication, entity authentication, known key security and forward secrecy.
Key freshness ensures the parties that KGC generates a random key that has not been used before. Unlike key agreement protocols, the users are not involved in the key generation phase, so the trust assumption is mandatory.
Key confidentiality means that a session key is available to authorized parties only. Adversaries are categorized into two types: insiders -that are qualified to recover the session key -and outsiders -that are unqualified to determine the session key. A protocol is susceptible to insider attacks if an insider is able to compute secret keys for sessions he is unauthorized for. Similarly, it is vulnerable to outsider attacks if an outsider is capable to reveal any session key.
Key authentication assures the group members that the key is distributed by the trusted KGC and not by an attacker. It may also stand against a replay attack: no adversary can use a previous message originated from KGC to impose an already compromised session key.
Entity authentication confirms the identity of the users involved in the protocol, so that an attacker cannot impersonate a qualified principal to the KGC.
Known key security imposes that a compromised session key has no impact on the confidentiality of other session keys: even if an adversary somehow manages to obtain a session key, all the other past and future session keys remain hidden.
Forward secrecy guarantees that even if a long-term secret is compromised, this has no impact on the secrecy of the previous session keys.
Secret Sharing
A secret sharing scheme is a method to split a secret into multiple shares, which are then securely distributed to the participants. The secret can be recovered only when the members of an authorized subset of participants combine their shares together. The set of all authorized subsets is called the access structure. The access structure of a (k, n) threshold secret sharing scheme consists of all sets whose cardinality is at least k. The access structure of an all-or-nothing secret sharing scheme contains only one element: the set of all participants.
Generally, a secret sharing scheme has 3 phases: sharing (a dealer splits the secret into multiple parts, called shares), distribution (the dealer securely transmits the shares to the parties) and reconstruction (an authorized group of parties put their shares together to recover the secret).
Group key establishment protocols use secret sharing schemes due to the benefits they introduce: decrease computational and transmission costs, represent a convenient way to differentiate between principals and their power within the group, permits delegation of shares, accepts cheating detection, permits the sizing of the group, etc. [START_REF] Pieprzyk | Multiparty key agreement protocols[END_REF]. For more information, the reader may refer to [START_REF] Pieprzyk | Multiparty key agreement protocols[END_REF].
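For concreteness, a minimal Shamir-style (k, n) threshold sharing over a prime field might look as follows; the field size and the choice k = 3, n = 5 are illustrative only.

```python
import random

P = 2**61 - 1                     # prime field modulus (illustrative)
rnd = random.SystemRandom()

def share(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [rnd.randrange(P) for _ in range(k - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):          # Lagrange interpolation at x = 0
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

S = rnd.randrange(P)
shares = share(S, 3, 5)
assert reconstruct(shares[:3]) == S        # any 3 shares recover S
assert reconstruct(shares[1:4]) == S
```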
Discrete Logarithm Assumption
Let G be a cyclic multiplicative group of order p with g ∈ G a generator.
The Discrete Logarithm Assumption holds in G if given g a , any probabilistic polynomial-time adversary A has a negligible probability in computing a:
Adv_A = Pr[A(p, g, g^a) = a] ≤ negl(k)    (1)
where a ∈ Z * p is random and k is the security parameter.
Sun et al.'s Group Key Transfer Protocol
Let n be the size of the group of participants, U = {U 1 , U 2 , ..., U n } the set of users, G a multiplicative cyclic group of prime order p with g ∈ G as a generator, H a secure hash function. Sun et al. base their protocol on a derivative secret sharing, which we describe next and briefly analyze in Section 5:
Derivative Secret Sharing [START_REF] Sun | An authenticated group key transfer protocol based on secret sharing[END_REF].
Phase 1: Secret Sharing The dealer splits the secret S ∈ G into two parts n times:
S = s_1 + s̄_1 = s_2 + s̄_2 = ... = s_n + s̄_n    (2)
Phase 2: Distribution The dealer sends the share s̄_i ∈ G to U_i ∈ U via a secure channel.
Phase 3: Reconstruction 1. The dealer broadcasts the shares s 1 , s 2 , ..., s n at once, when the users want to recover the secret S. 2. Any user U i ∈ U reconstructs the secret as:
S = s_i + s̄_i    (3)
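The derivative secret sharing is easy to prototype; the sketch below models the secret and the shares as integers modulo a prime (the paper writes S ∈ G), which is enough to see that every single user reconstructs S from its securely delivered share and the corresponding broadcast one.

```python
import random

P = 2**61 - 1                        # secret space (illustrative; the paper works in G)
rnd = random.SystemRandom()
n = 4

S = rnd.randrange(P)
s_bar = [rnd.randrange(P) for _ in range(n)]      # sent to each U_i over a secure channel
s = [(S - b) % P for b in s_bar]                  # broadcast at reconstruction time

# Any single user U_i recovers S from its own share and the broadcast s_i.
assert all((s[i] + s_bar[i]) % P == S for i in range(n))
```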
Next, we review Sun et. al's group key transfer protocol, which we prove vulnerable in Section 4: [START_REF] Sun | An authenticated group key transfer protocol based on secret sharing[END_REF].
Sun et al.'s Group Key Transfer Protocol
Phase 1: User Registration During registration, KGC shares a long-term secret s̄_i ∈ G with each user U_i ∈ U.
Phase 2: Group Key Generation and Distribution 1. A user, called the initiator, sends a group key distribution request that contains the identity of the qualified participants for the current session {U_1, U_2, ..., U_t} to KGC. 2. KGC broadcasts the received list as a response. 3. Each member U_i, i = 1, ..., t, that identifies himself in the list sends a random challenge r_i ∈ Z*_p to KGC. 4. KGC randomly selects S ∈ G and invokes the derivative secret sharing scheme to split S into two parts t times such that S = s_1 + s̄_1 = s_2 + s̄_2 = ... = s_t + s̄_t. He computes the session private key as K = g^S, t messages M_i = (g^{s_i+r_i}, U_i, H(U_i, g^{s_i+r_i}, s̄_i, r_i)), i = 1, ..., t, and Auth = H(K, g^{s_1+r_1}, ..., g^{s_t+r_t}, U_1, ..., U_t, r_1, ..., r_t). At last, KGC broadcasts (M_1, ..., M_t, Auth) as a single message. 5. After receiving M_i and Auth, each user U_i, i = 1, ..., t, computes h = H(U_i, g^{s_i+r_i}, s̄_i, r_i) using g^{s_i+r_i} from M_i, the long-term secret s̄_i and r_i as chosen in step 3. If h differs from the corresponding value in M_i, the user aborts; otherwise, he computes K′ = g^{s̄_i} · g^{s_i+r_i}/g^{r_i} and checks if Auth = H(K′, g^{s_1+r_1}, ..., g^{s_t+r_t}, U_1, ..., U_t, r_1, ..., r_t). If not, he aborts; otherwise, he considers K′ to be the session key originated from KGC and returns a value h_i = H(s̄_i, K′, U_1, ..., U_t, r_1, ..., r_t) to KGC. 6. KGC computes h′_i = H(s̄_i, K, U_1, ..., U_t, r_1, ..., r_t) using his own knowledge of s̄_i and checks if h′_i equals h_i, certifying that all users possess the same key.
The authors claim that their construction is secure under the discrete logarithm assumption and has multiple advantages: each participant needs to store only one secret share for multiple sessions (the long-term secret s i , i = 1, . . . , n), the dynamic of the group preserves the validity of the shares (if a user leaves or joins the group there is no need to update the long-term secrets), each authorized user recovers the session key by a simple computation. Unlike their claim, in the next section we prove that the protocol is insecure.
The Proposed Attacks
We show that Sun et al.'s protocol is insecure against insider attacks and violates known key security.
Insider Attack
Let U_a ∈ U be an authorized user for a session (k_1), s̄_a his long-term secret, U^{(k_1)} ⊆ U the qualified set of participants of the session, (g^{s_i^{(k_1)}+r_i^{(k_1)}})_{U_i∈U^{(k_1)}} the values that were broadcast as part of (M_i)_{U_i∈U^{(k_1)}} in step 4 and K^{(k_1)} = g^{S^{(k_1)}} the session key.
The participant U_a is qualified to determine the (k_1) session key as:
K^{(k_1)} = g^{s̄_a} · g^{s_a^{(k_1)}+r_a^{(k_1)}} / g^{r_a^{(k_1)}}    (4)
Since g^{s_i^{(k_1)}+r_i^{(k_1)}} and r_i^{(k_1)} are public, he is able to compute g^{s̄_i} for all U_i ∈ U^{(k_1)}:
g^{s̄_i} = K^{(k_1)} · g^{r_i^{(k_1)}} / g^{s_i^{(k_1)}+r_i^{(k_1)}}    (5)
Suppose that U_a is unauthorized to recover the (k_2) session key, (k_2) ≠ (k_1). However, he can eavesdrop on the exchanged messages. Therefore, he is capable of determining
g^{s_j^{(k_2)}} = g^{s_j^{(k_2)}+r_j^{(k_2)}} / g^{r_j^{(k_2)}}    (6)
for all U_j ∈ U^{(k_2)}, where U^{(k_2)} ⊆ U is the qualified set of parties of the session (k_2).
We assume that there exists a participant U_b ∈ U^{(k_1)} ∩ U^{(k_2)} that is qualified for both sessions (k_1) and (k_2). The insider attacker U_a can find the key K^{(k_2)} of the session (k_2) as:
K^{(k_2)} = g^{s̄_b} · g^{s_b^{(k_2)}} = g^{s̄_b + s_b^{(k_2)}}    (7)
In conclusion, an insider can determine any session key under the assumption that at least one mutual authorized participant for both sessions exists, which is very likely to happen.
The attack also stands if there is no common qualified user for the two sessions, but there exists a third one (k 3 ) that has a mutual authorized party with each of the former sessions. The extension is straightforward: let U 1,3 , U 2,3 be the common qualified parties for sessions (k 1 ) and (k 3 ), respectively (k 2 ) and (k 3 ). U a computes the key K (k3) as in the proposed attack due to the common authorized participant U 1,3 . Once he obtains the key K (k3) , he mounts the attack again for sessions (k 3 ) and (k 2 ) based on the common party U 2,3 and gets K (k2) .
The attack extends in chain: the insider U a reveals a session key K (kx) if he is able to build a chain of sessions (k 1 )...(k x ), where (k i ) and (k i+1 ) have at least one common qualified member U i,i+1 , i = 1, . . . , x -1 and U a is authorized to recover the key K (k1) .
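The basic two-session attack can be replayed numerically. In the toy script below, the group parameters are tiny and the KGC logic is condensed to the values an eavesdropper would see, but the computations follow equations (4)-(7): the insider U_a, authorized only for session (k_1), recovers the key of session (k_2) through the common member U_b.

```python
import random

p, q = 2039, 1019                 # toy subgroup parameters, q | p - 1
g = pow(2, (p - 1) // q, p)
rnd = random.SystemRandom()

users = ["Ua", "Ub", "Uc"]
long_term = {u: rnd.randrange(1, q) for u in users}     # long-term secrets shared with KGC

def run_session(members):
    """KGC side: returns the session key and what is broadcast (g^{s_i+r_i}, r_i)."""
    S = rnd.randrange(1, q)
    broadcast = {}
    for u in members:
        s_i = (S - long_term[u]) % q                    # session share = S minus the long-term secret
        r_i = rnd.randrange(1, q)
        broadcast[u] = (pow(g, (s_i + r_i) % q, p), r_i)
    return pow(g, S, p), broadcast

# Session k1: Ua is authorized.  Session k2: Ua is NOT authorized, but Ub is in both.
K1, bc1 = run_session(["Ua", "Ub"])
K2, bc2 = run_session(["Ub", "Uc"])

# Ua legitimately knows K1, and from the public broadcast derives g^{s_bar_b}   (eq. (5))
gs_b, r_b = bc1["Ub"]
g_sbar_b = K1 * pow(g, r_b, p) * pow(gs_b, -1, p) % p

# From session k2's broadcast alone, Ua gets g^{s_b^{(k2)}}                     (eq. (6))
gs_b2, r_b2 = bc2["Ub"]
g_s_b2 = gs_b2 * pow(pow(g, r_b2, p), -1, p) % p

# Multiplying the two yields K2 without authorization                           (eq. (7))
assert g_sbar_b * g_s_b2 % p == K2
print("insider recovered the (k2) session key")
```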
Known Key Attack
Suppose an attacker (insider or outsider) owns a session key K^{(k_1)}. We also assume that he had previously eavesdropped the values r_i^{(k_1)} in step 3 and g^{s_i^{(k_1)}+r_i^{(k_1)}} in step 4 of session (k_1), so that for all U_i ∈ U^{(k_1)} he determines:
g^{s_i^{(k_1)}} = g^{s_i^{(k_1)}+r_i^{(k_1)}} / g^{r_i^{(k_1)}}    (8)
Because the session key K^{(k_1)} is exposed, he can also compute g^{s̄_i} for all U_i ∈ U^{(k_1)}:
g^{s̄_i} = K^{(k_1)} / g^{s_i^{(k_1)}}    (9)
Let (k_2) be any previous or future session that has at least one common qualified participant U_b with (k_1), i.e. U_b ∈ U^{(k_1)} ∩ U^{(k_2)}. As before, the attacker eavesdrops r_b^{(k_2)} and g^{s_b^{(k_2)}+r_b^{(k_2)}} and computes
g^{s_b^{(k_2)}} = g^{s_b^{(k_2)}+r_b^{(k_2)}} / g^{r_b^{(k_2)}}    (10)
The attacker can now recover the key K^{(k_2)}:
K^{(k_2)} = g^{s̄_b} · g^{s_b^{(k_2)}} = g^{s̄_b + s_b^{(k_2)}}    (11)
The attack may also be mounted in chain, similarly to the insider attack. We omit the details in order to avoid repetition.
The first attack permits an insider to reveal any session key he was unauthorized for, while the second permits an attacker (insider or outsider) to disclose any session key under the assumption that a single one has been compromised.
In both cases, the attacker computes the session key as the product of two values: g^{s_b^{(k_2)}} (disclosed by eavesdropping alone) and g^{s̄_b} (revealed by eavesdropping once a session key is known). We remark that the attacker is unable to determine the long-term secret s̄_b if the discrete logarithm assumption holds, but we emphasize that this does not imply that the protocol is secure, since the adversary's main objective is to find the session key, which can be disclosed without knowing s̄_b.
Countermeasures
Sun et al.'s group key transfer protocol fails because the values g^{s̄_i}, i = 1, ..., n, are maintained during multiple sessions. We highlight that the derivative secret sharing scheme suffers from a similar limitation, caused by the usage of the long-term secrets s̄_i, i = 1, ..., n, during multiple sessions: any entity that discloses a secret S determines the values s̄_i = S − s_i by eavesdropping s_i, i = 1, ..., n, and uses them to reveal other shared secrets.
A trivial modification prevents the proposed attacks: KGC replaces the values s̄_i at the beginning of each session. This way, even if the attacker determines g^{s̄_i} in one round, the value becomes unusable. However, this introduces a major drawback: KGC must share a secure channel with each user for every round. If this were the case, KGC could have just sent the secret session key via the secure channel and no other protocol would have been necessary. In conclusion, the group key transfer protocol would have become useless.
Another similar solution exists: during the user registration phase an ordered set of secrets is shared between KGC and each user. For each session, the corresponding secret in the set is used in the derivative secret sharing. Although this solution has the benefit that it only requires the existence of secure channels at registration, it introduces other disadvantages: each user must store a considerable larger quantity of secret information, the protocol can run for at most a number of times equal to the set cardinality, KGC must broadcast the round number so that participants remain synchronized.
We propose next a countermeasure inspired by the work of Pieprzyk and Li [START_REF] Pieprzyk | Multiparty key agreement protocols[END_REF]. The main idea consists of using a different public value α ∈ G to compute the session key K = α^S for each round.
The Improved Version of the Group Key Transfer Protocol.
Phase 1: User Registration During registration, KGC shares a long-term secret s̄_i ∈ G with each user U_i ∈ U.
Phase 2: Group Key Generation and Distribution 1. A user, called the initiator, sends a group key distribution request that contains the identity of the qualified participants for the current session {U_1, U_2, ..., U_t} to KGC. 2. KGC broadcasts the received list as a response. 3. Each member U_i, i = 1, ..., t, that identifies himself in the list sends a random challenge r_i ∈ Z*_p to KGC. 4. KGC randomly selects S ∈ G and invokes the derivative secret sharing scheme to split S into two parts t times such that S = s_1 + s̄_1 = s_2 + s̄_2 = ... = s_t + s̄_t. He chooses α ∈ G at random, computes the session private key as K = α^S, t messages M_i = (α^{s_i+r_i}, U_i, H(U_i, α^{s_i+r_i}, s̄_i, r_i, α)), i = 1, ..., t, and Auth = H(K, α^{s_1+r_1}, ..., α^{s_t+r_t}, U_1, ..., U_t, r_1, ..., r_t, α). At last, KGC broadcasts (M_1, ..., M_t, Auth, α) as a single message. 5. After receiving M_i, Auth and α, each user U_i, i = 1, ..., t, computes h = H(U_i, α^{s_i+r_i}, s̄_i, r_i, α) using α^{s_i+r_i} from M_i, the long-term secret s̄_i and r_i as chosen in step 3. If h differs from the corresponding value in M_i, the user aborts; otherwise, he computes K′ = α^{s̄_i} · α^{s_i+r_i}/α^{r_i} and checks if Auth = H(K′, α^{s_1+r_1}, ..., α^{s_t+r_t}, U_1, ..., U_t, r_1, ..., r_t, α). If not, he aborts; otherwise, he considers K′ to be the session key originated from KGC and returns a value h_i = H(s̄_i, K′, U_1, ..., U_t, r_1, ..., r_t, α) to KGC. 6. KGC computes h′_i = H(s̄_i, K, U_1, ..., U_t, r_1, ..., r_t, α) using his own knowledge of s̄_i and checks if h′_i equals h_i, certifying that all users possess the same key.
The countermeasure eliminates both attacks. Under the discrete logarithm assumption, a value α_{(k_1)}^{s̄_i} from a session (k_1) can no longer be used to compute a session key K^{(k_2)} = α_{(k_2)}^{s̄_i + s_i^{(k_2)}} with (k_2) ≠ (k_1). The values α are authenticated to originate from KGC, so that an attacker cannot impersonate the KGC and use a suitable value (for example α_{(k_2)} = α_{(k_1)}^a for a known a). We remark that the modified version of the protocol maintains all the benefits of the original construction and preserves the computational cost, while the transmission cost increases only negligibly. However, we admit that it conserves a weakness of the original protocol: it cannot achieve forward secrecy. Any attacker that obtains a long-term secret becomes able to compute previous keys of sessions he had eavesdropped before. The limitation is introduced by construction, because the long-term secret is directly used to compute the session key.
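A quick sanity check of the countermeasure, with the same toy parameters as before: the legitimate recovery of step 5 still works under a fresh base α, while the quantity the insider could extract from session (k_1) no longer combines into the key of session (k_2). This only demonstrates the intended effect on one run; it is not a security proof.

```python
import random

p, q = 2039, 1019
g = pow(2, (p - 1) // q, p)
rnd = random.SystemRandom()

s_bar_b = rnd.randrange(1, q)                  # U_b's long-term secret

def session():
    S = rnd.randrange(1, q)
    alpha = pow(g, rnd.randrange(1, q), p)     # fresh public base for this session
    s_b = (S - s_bar_b) % q
    r_b = rnd.randrange(1, q)
    M_b = pow(alpha, (s_b + r_b) % q, p)       # broadcast component alpha^{s_b + r_b}
    return alpha, M_b, r_b, pow(alpha, S, p)

alpha1, M1, r1, K1 = session()
alpha2, M2, r2, K2 = session()

# Authorized U_b still recovers the key exactly as in step 5.
assert pow(alpha1, s_bar_b, p) * M1 * pow(pow(alpha1, r1, p), -1, p) % p == K1

# The insider-attack quantity from session 1 is alpha1^{s_bar_b}; combined with
# session 2's broadcast it no longer yields K2, because K2 lives under alpha2.
stale = K1 * pow(alpha1, r1, p) * pow(M1, -1, p) % p          # alpha1^{s_bar_b}
guess = stale * (M2 * pow(pow(alpha2, r2, p), -1, p)) % p     # alpha1^{s_bar_b} * alpha2^{s_b^{(2)}}
print("attack reproduces K2?", guess == K2)                   # almost surely False
```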
Conclusions
Sun et al. recently introduced an authenticated group key transfer protocol based on secret sharing [START_REF] Sun | An authenticated group key transfer protocol based on secret sharing[END_REF], which they claimed to be efficient and secure. We proved that they are wrong: the protocol is vulnerable to insider attacks and violates known key security. We improved their protocol by performing a slight modification that eliminates the proposed attacks and maintains the benefits of the original work.
Without loss of generality, any subset of U can be expressed as {U1, U2, ..., Ut} by reordering.
Acknowledgments. This paper is supported by the Sectorial Operational Program Human Resources Development (SOP HRD), financed from the European Social Fund and by the Romanian Government under the contract number SOP HDR/107/1.5/S/82514.
"1003101"
] | [
"302604"
] |
01480205 | en | ["shs", "info"] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480205/file/978-3-642-36818-9_51_Chapter.pdf | Hui Zhu
email: [email protected]
Tingting Liu
Guanghui Wei
Beishui Liu
Hui Li
CSP-Based General Detection Model of Network Covert Storage Channels
Keywords: Security modeling, Protocol analysis, Network covert storage channels, Detection, CSP
Introduction & Related Work
A network covert channel is a malicious communication mechanism which can be utilized by attackers to convey information covertly in a manner that violates the system's security policy [START_REF] Snoeren | Single Packet IP Trace back[END_REF]. Such channels are usually difficult to detect and pose a serious security threat to security-sensitive systems. Consequently, there is increasing concern about network covert channels.
There are two types of network covert channels: storage and timing channels. With the widespread diffusion of networks, many methods have been studied by attackers for constructing network covert channels using a variety of protocols, including TCP, IP, HTTP and ICMP [START_REF] Zander | A Survey of Covert Channels and Countermeasures in Computer Network Protocols[END_REF][START_REF] Ahsan | Practical Data Hiding in TCP/IP[END_REF][START_REF] Cauich | Data Hiding in Identification and Offset IP Fields[END_REF]. For example, Abad proposed [START_REF] Abad | IP checksum covert channels and selected hash collision[END_REF] an IP checksum covert channel which uses hash collisions. Fisk [START_REF] Fisk | Eliminating steganography in Internet traffic with active Wardens[END_REF] proposed to use the RST flag of TCP and the payload of ICMP to transfer covert messages. These covert channel implementations are based on common network or application layer Internet protocols. Castiglione et al. presented an asynchronous covert channel scheme using spam e-mails [START_REF] Castiglione | An asynchronous covert channel using spam[END_REF]. Moreover, Fiore et al. introduced a framework named Selective Redundancy Removal (SRR) for hiding data [START_REF] Fiore | Selective Redundancy Removal: A Framework for Data Hiding[END_REF]. It is easy to see that network covert channels are all based on the various protocols.
More attention has recently been placed on network covert channel detection. Tumoian et al. used neural networks to detect passive covert channels in TCP/IP traffic [START_REF] Tumoian | Network based detection of passive covert channels in TCP/IP[END_REF]. Zhai et al. [START_REF] Zhai | A covert channel detection algorithm based on TCP Markov model[END_REF] proposed a covert channel detection method based on a TCP Markov model for different application scenarios. Gianvecchio et al. [START_REF] Gianvecchio | An Entropy-Based Approach to Detecting Covert Timing Channels[END_REF] proposed an entropy-based approach to detect various covert timing channels. The above methods rely either on anomalous data or on unusual traffic patterns in practical network traffic, so they can hardly find potential and unknown covert channel vulnerabilities. In this paper, we establish a CSP-based general model to analyze and detect potential exploits of protocols from the perspective of the original design of the protocols.
The remainder of the paper is organized as follows. Section 2 introduces the basic concepts of CSP. In Section 3, we give the CSP-based general detection model including the details of establishing and detection steps. In section 4, we test our model in Transmission Control Protocol (TCP). The conclusions are in section 5.
CSP (Communicating Sequential Processes)
In CSP [START_REF] Hoare | Communicating Sequential Processes[END_REF][START_REF] Brookes | A theory of Communicating Sequential Processes[END_REF][START_REF] Roscoe | The theory and practice of concurrency. s. l[END_REF][START_REF] Schneider | Concurrent and real-time systems: the CSP approach[END_REF], systems are described in terms of processes, which are composed of instantaneous and atomic discrete events. The relations between processes and the operations on processes are formalized by the operational semantics of the algebra. Several operations on processes and their interrelationships are defined within the algebraic semantics of CSP as follows:
(Prefixing): The process will communicate the event a and then behave as process P. □ (Deterministic Choice): This process can behave either as P or Q, but the environment decides on which process to run. (Prefix Choice): This represents a deterministic choice between the events of the set A which may be finite or infinite. This notation allows representing input and output from channels. : The input can accept any input x of type A along channel c, following which it behaves as P(x). : The output c!v→P is initially able to perform only the output of v on channel c, then it behaves as P.
(Parallel Composition):Let A be a set of events, then the process behaves as P and Q acting concurrently, with synchronizing on any event in the synchronization set A. Events not in A may be performed by either of the pro-cesses independent of the other. Definition 1: trace:The trace is a model which characterizes a process P by its set traces (P): finite sequences of events which it can perform in a certain period of time. Definition 2: refinement: A relationship between two CSP processes: trace model refinement. A process P is trace-refined by a process Q, that is P Q, and only if traces (Q) traces (P).
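As an illustration of Definition 2, a minimal Python sketch of the trace-refinement check on finite trace sets; the processes and traces below are invented for the example and are not part of the TCP model discussed later:

# Check P [T= Q, i.e., Q trace-refines P: every trace of Q is also a trace of P.
def trace_refines(traces_p, traces_q):
    return set(traces_q).issubset(set(traces_p))

# Toy processes: P = a -> b -> STOP, Q = a -> STOP
traces_P = {(), ('a',), ('a', 'b')}
traces_Q = {(), ('a',)}
print(trace_refines(traces_P, traces_Q))   # True: traces(Q) is a subset of traces(P)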
3 The CSP-based general detection model
3.1 The general model framework
We propose a general detection model that focuses on the original design details of protocols. The model is used to analyze and detect covert storage channel vulnerabilities in various layers of protocols. The CSP framework diagram is shown in Figure 1; the model framework includes five steps, as follows.
Step1: The original design specifications of the protocol and the communication procedure of the protocol interacting entities are analyzed, and a CSP-based process is established.
Step2: The header fields of the protocol are classified into three types: secure fields, exploited fields-I and exploited fields-II. The details of the classification are given in Section 3.2.
Step3: Based on the classification of the header fields and the status of the protocol interaction system, a CSP-based search process is established to search for covert storage channel exploits in the header fields of the protocol.
Step4: Based on the hypothesis of a network covert storage channel and the vulnerabilities found in Step3, a CSP-based process of the network covert storage channel is established.
Step5: Finally, the traces (Definition 1) of the processes established in Step1 and Step4 are examined. It is necessary to check whether the two processes satisfy the refinement relationship (Definition 2). If the refinement relationship is satisfied by the traces, then there are covert storage channel vulnerabilities in the header fields of the protocol, and vice versa.
The CSP-based general detection model reduces the existence problem of covert channel exploits in a protocol to the question of whether the CSP description of the covert storage channel is a refinement of the protocol specification, which simplifies the detection of network covert storage channels.
3.2 Classification of header fields
A property named the modified property is defined for every header field. The header fields of a protocol can be classified into three types according to the modified property, as follows:
Secure fields: These fields cannot be modified arbitrarily, because modifying them would impair normal communication, so they are secure. For example, the source port field and the destination port field of the TCP header are secure fields; once they are modified, the TCP connection cannot be set up. Exploited fields-I: These fields can be modified arbitrarily, since they have no effect on normal communication, such as the reserved field of the TCP header, which is designed for future protocol improvements, and the optional header fields of IP. Exploited fields-II: These fields are needed to guarantee normal communication only under certain conditions, so their modification is subject to some restrictions. For example, the TCP urgent pointer field can be modified to convey messages when the urgent flag of the control bits field is not set.
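To make the classification concrete, the following minimal Python sketch assigns TCP header fields to the three types by the modified property; the field names and flag conditions are an informal rendering of Table 1, not an exhaustive rule set:

SECURE     = "secure"        # modification breaks normal communication
EXPLOIT_I  = "exploited-I"   # freely modifiable, no effect on communication
EXPLOIT_II = "exploited-II"  # modifiable only under certain flag conditions

def classify(field, flags):
    if field in {"src_port", "dst_port", "header_len", "URG", "ACK", "PSH",
                 "RST", "SYN", "FIN", "window"}:
        return SECURE
    if field == "reserved":
        return EXPLOIT_I
    if field == "seq_no":
        return EXPLOIT_II if (flags.get("SYN") == 1 and flags.get("ACK") == 0) else SECURE
    if field == "urgent_ptr":
        return EXPLOIT_II if flags.get("URG") == 0 else SECURE
    return SECURE  # conservative default for fields not listed here

print(classify("urgent_ptr", {"URG": 0}))   # exploited-II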
4 The verification of the model
The CSP description of TCP connection
In this section, we test and verify the effectiveness and feasibility of the general model on TCP. In order to simplify the analysis and description of TCP, we assume that our TCP interaction system consists of two hosts A and B, each equipped only with an application layer, a TCP entity and two channels between the layers, as shown in Figure 2. The model of TCP describes the connection between a client and a server, and the TCP protocol state machine has six states. We regard host A as a client and B as a server. Let Tstate denote the set of states. We assume there is a set of different packet types named PacketTypes. The two CSP descriptions for Host A and Host B are shown in Figure 3. The CSP model of the TCP connection is composed of TCP_A_SM and TCP_B_SM.
TCP_CSP=TCP_A_SM |||TCP_B_SM
Classification of header fields in TCP
According to the rules mentioned in Section 3.2, the header fields of TCP are classified into three types, as shown in Table 1. The process C_exploit(X) searches for covert storage channel exploits in TCP according to the rules based on the modified property and the specifications of the TCP connection. Figure 4 shows the CSP process C_exploit(X). After searching for network covert storage channel exploits with the process C_exploit(X), we obtain three results based on the different states of the TCP connection.
The CSP description of covert storage channels
As shown in Figure 2, two malwares C and D hide inside the TCP interaction system. Cstate = {C_closed, C_listen, C_connect, C_c1, C_c2, C_e1, C_e2}.
The search process of Figure 4 is C_exploit(X) = learn?packet : Packet.i → C_exploit(Judge(X ∪ {i})) □ (if X = ∅ then overt → C_exploit(X) else covert → C_exploit(X)), with the subprocess Judge(X ∪ {i}) given in part by Judge(X ∪ {i}) = (if (Tstate == CLOSED) ∧ (cmd.connect.B) → add.seqNo); ...
The malwares monitor the status of the TCP connection and utilize the exploited fields of TCP to transmit the covert message. The CSP process of malware C is shown in Figure 6. The CSP description of malware D is similar to that of malware C, so we omit it here. The process IS_CSP denotes the interaction system of the TCP protocol entities and the two malwares.
Analysis of the verification
In this section, we analyze and verify the existence of network covert storage channels under the normal communication of TCP using trace-model refinement (Definition 2). That is, we check whether the traces of IS_CSP are subsets of the traces of TCP_CSP, which contain precisely the traces that are possible under the normal communication of TCP. Figure 7 shows an example of traces of TCP_CSP and IS_CSP. As we can see, tr(IS_CSP) is a subset of tr(TCP_CSP): tr(IS_CSP) ⊆ tr(TCP_CSP). From the trace of IS_CSP, we can see that the malware C modulates the initial sequence number to convey the covert message, a technique which has already been utilized and reported by previous researchers. For example, Rutkowska [START_REF] Rutkowska | The implementation of passive covert channels in the Linux kernel[END_REF] implemented a network storage channel named NUSHU utilizing the initial sequence number. On the basis of Definition 2, we come to the conclusion that the CSP model IS_CSP refines TCP_CSP:
TCP_CSP ⊑ IS_CSP.
Figure 8 depicts other similar traces as well. These traces show that the malwares C and D utilize the reserved field to convey the covert message. This covert storage channel has been found and studied by Handel [START_REF] Handel | Hiding data in the OSI network model[END_REF].
The verification on TCP suggests that the CSP-based general model can yield similar covert storage channels in other network protocols as well, based on the modified property of their header fields and the specifications of the protocols.
Conclusion
In this paper, we propose a CSP-based general detection model for analyzing and detecting network covert storage channels in network protocols. In our model, we describe the protocol interaction system based on the original design specifications of the protocols. Besides, we define a modified property for every header field and classify the header fields into three types based on this property. We establish a search process for finding potential covert exploits in the header fields of protocols. We then establish a network covert storage channel based on the covert storage channel hypothesis and verify the covert channel based on trace refinement. Finally, the CSP formal model is illustrated and verified on TCP. The result of the verification shows that the general model is effective and feasible in finding covert storage channels. The CSP-based general detection model is modular, so it can be easily extended to describe other network protocols and detect the covert channels hidden in them. In the future, we will try to establish a formalized method for detecting and analyzing covert timing channels.
Fig. 1. The CSP-based general model framework.
Fig. 2. TCP protocol interaction system.
Fig. 4. The CSP process C_exploit(X).
Fig. 5. The finite state diagram of the TCP interaction system. Assume that the malwares C and D set up a covert storage channel, of which C is the sender and D is the receiver. Let the set Cstate denote the states of the covert storage channel, and let IS_state denote the global states of the TCP interaction system, including the states of the covert channel; then IS_state = Tstate ∪ Cstate. Cstate = {C_closed, C_listen, C_connect, C_c1, C_c2, C_e1, C_e2}.
Fig. 6. The CSP process of malware C. The network covert storage channel is described as Covert_channel = C_Channel ||| D_Channel, and the whole interaction system is IS_CSP = TCP_CSP ||| Covert_channel = TCP_A_SM ||| TCP_B_SM ||| C_Channel ||| D_Channel.
Fig. 7. An example of traces of TCP_CSP and IS_CSP.
Fig. 8. Another example of traces of TCP_CSP and IS_CSP.
Trace examples from Figures 7 and 8:
tr(TCP_CSP) = <cmd.connect.B, send.A.B.packet.{syn}, receive.B.A.packet.{syn_ack}, send.A.B.packet.{ack}, ...>
tr(IS_CSP) = <cmd.connect.B, send.A.B.packet.{syn}.(reserved_fields=covert_msg), receive.B.A.packet.{syn_ack}.(reserved_fields=covert_msg), send.A.B.packet.{ack}.(covert_msg), ...>
Tstate = {CLOSED, LISTEN, SYN-SENT, SYN-RECEIVED, ESTABLISHED, FIN_WAIT1}
PacketTypes = {packet.{syn}, packet.{syn_ack}, packet.{ack}, packet.{rst}, packet.{fin}}
Host A:
TCP_A_SM =
Tstate==CLOSED & (chUA!!command → if (command==connect.B) then (chA!!packet.{syn} → Tstate:=SYN_SENT) else (chUA!!Error)) → TCP_A_SM
□ Tstate==SYN_SENT & (chA??packet → if ((packet.SYN==1) ∧ (packet.ACK==1) ∧ (packet.ackNo==seqNo+1)) then (chA!!packet.{ack} → Tstate:=ESTABLISHED)) → TCP_A_SM
□ Tstate==ESTABLISHED & ((chUA??packet → (if (packet.FIN==1) then (Tstate:=FIN_WAIT1; chA!!packet) else (chA!!packet))) | (chA??packet → (if (packet.FIN==1) then (Tstate:=FIN_WAIT2; chUA!!packet) else (chUA!!packet)))) → TCP_A_SM
□ Tstate==FIN_WAIT1 & (chA??packet → (if (packet.FIN==1) then (Tstate:=FIN_WAIT2))) → TCP_A_SM
□ Tstate==FIN_WAIT2 & (wait for close) → TCP_A_SM
Host B:
TCP_B_SM =
Tstate==LISTEN & (chB??packet → (if (packet.SYN==1) then (chB!!packet.{syn_ack} → Tstate:=SYN_RCVD) else (discard.packet))) → TCP_B_SM
□ Tstate==SYN_RCVD & (chB??packet → if ((packet.SYN==1) ∧ (packet.ACK==1) ∧ (packet.ackNo==seqNo+1)) then (Tstate:=ESTABLISHED) else (discard.msg)) → TCP_B_SM
□ Tstate==ESTABLISHED & (chUB??packet → (if (packet.FIN==1) then (Tstate:=FIN_WAIT1; chB!!packet) else (chB!!packet))) → TCP_B_SM
□ Tstate==FIN_WAIT1 & (chB??packet → (if (packet.FIN==1) then (Tstate:=FIN_WAIT2))) → TCP_B_SM
□ Tstate==FIN_WAIT2 & (wait for close) → TCP_B_SM
Fig. 3. The CSP descriptions of TCP in Host A and Host B.
Table 1. The classification of the TCP header
type | number | header field | value
Secure fields | 1 | Source port | 0
Secure fields | 2 | Destination port | 0
Secure fields | 5 | TCP header length | 0
Secure fields | 7 | URG | 0
Secure fields | 8 | ACK | 0
Secure fields | 9 | PSH | 0
Secure fields | 10 | RST | 0
Secure fields | 11 | SYN | 0
Secure fields | 12 | FIN | 0
Secure fields | 13 | Window size | 0
Exploited fields-I | 6 | Reserve field (6 bit) | 1
Exploited fields-II | 3 | SeqNo | SYN==1, ACK==0: V=1
Exploited fields-II | 4 | AckNo | SYN==1, ACK==0: V=1; ACK==1: V=0
Exploited fields-II | 14 | Checksum | V=1
Exploited fields-II | 15 | Urg_p | URG==0: V=1; URG==1: V=0
"1003111"
] | [
"469153",
"469153",
"469153",
"469153",
"469153"
] |
01480209 | en | [
"shs",
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480209/file/978-3-642-36818-9_56_Chapter.pdf | Sarbari Mitra
email: [email protected]
Sourav Mukhopadhyay
email: [email protected]
Ratna Dutta
Unconditionally Secure Fully Connected Key Establishment using Deployment Knowledge
Keywords:
We propose a key pre-distribution scheme to develop a wellconnected network using deployment knowledge where the physical location of the nodes are pre-determined. Any node in the network can communicate with any other node by establishing a pairwise key when the nodes lie within each other's communication range. Our proposed scheme is unconditionally secure against adversarial attack in the sense that no matter how many nodes are compromised by the adversary, the rest of the network remains perfectly unaffected. On a more positive note, our design is scalable and provides full connectivity.
Introduction
Wireless Sensor Networks (WSN) are built up of resource-constrained, battery-powered, small devices, known as sensors, which are capable of wireless communication over a restricted target field. Owing to their immense range of applications, from the home front to the battlefield and environment monitoring such as water quality control, landslide detection and air pollution monitoring, key distribution in sensor networks has become an active area of research over the past decade. Sensor networks are usually meant to withstand harsh environments, and thus secret communication is essential. In a Key Pre-distribution Scheme (KPS), the secret keys are assigned to the nodes before their deployment to enable secure communication.
The bivariate symmetric polynomials were first used in key distribution by Blundo et al. [START_REF] Blundo | Perfectlysecure Key distribution for Dynamic Conferences[END_REF]. The scheme is t-secure, i.e., the adversary cannot gain any information about the keys of the remaining uncompromised nodes if the number of compromised nodes does not exceed t. However, if more than t nodes are captured by the adversary, the security of the whole network is destroyed. Blundo's scheme is used as the basic building block in the key pre-distribution schemes proposed in [START_REF] Li | A Hexagon-Based Key Predistribution Scheme in Sensor Networks[END_REF][START_REF] Liu | Improving Key Pre-Distribution with Deployment Knowledge in Static Sensor Networks[END_REF].
We present a deployment-knowledge-based KPS in a rectangular grid network by dividing the network into subgrids and applying Blundo's polynomial-based KPS in each subgrid in such a way that nodes within communication range of each other can establish a pairwise key. The induced network is fully connected: any two nodes lying within communication range of each other are able to communicate privately by establishing a secret pairwise key. The t-secure property of Blundo's scheme is utilized: a t-degree polynomial is assigned to at most (t - 1) nodes, whereas at least (t + 1) shares are required to determine the polynomial. This results in an unconditionally secure network, i.e., the network is completely resilient against node capture, independently of the number of nodes compromised. The nodes need to store at least (t + 1) log q bits (where q is a large prime), and a fraction of the total nodes needs to store at most 4(t + 1) log q bits. The storage requirement decreases with a decreased radio frequency radius of the nodes. Comparison of the proposed scheme with existing schemes indicates that our network provides better connectivity and resilience and sustains scalability, with reasonable computation and communication overheads and a slightly larger storage for a few nodes.
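As a rough illustration of the Blundo building block used throughout this scheme, the following Python sketch derives a pairwise key from univariate shares of a symmetric bivariate polynomial over a prime field; the prime q, the degree t and the node identifiers are placeholders chosen only for the example:

import random

q = 2**31 - 1          # a large prime (illustrative)
t = 10                 # polynomial degree; the scheme requires t > (2m+1)^2

# symmetric coefficient matrix a[u][v] = a[v][u] defining f(x, y) = sum a[u][v] x^u y^v mod q
a = [[0] * (t + 1) for _ in range(t + 1)]
for u in range(t + 1):
    for v in range(u, t + 1):
        a[u][v] = a[v][u] = random.randrange(q)

def share(node_id):
    # univariate share g_ID(y) = f(ID, y), returned as a coefficient list in y
    return [sum(a[u][v] * pow(node_id, u, q) for u in range(t + 1)) % q
            for v in range(t + 1)]

def pairwise_key(my_share, other_id):
    return sum(c * pow(other_id, v, q) for v, c in enumerate(my_share)) % q

sA, sB = share(17), share(42)
assert pairwise_key(sA, 42) == pairwise_key(sB, 17)   # both nodes derive f(17, 42)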
Our Scheme
Subgrid Formation: The target region is an r × c rectangular grid with r rows and c columns, i.e., there are c cells in each row and r cells in each column of the grid. Each side of a cell is of unit length. A node is placed in each cell of the grid, so the network can accommodate at most rc nodes. Each of the N (≤ rc) nodes is assigned a unique node identifier. All the nodes have equal communication range. Let ρ be the radius of the communication range and d the density of the nodes per unit length. Then m = ρd is the number of nodes lying within the communication radius of each node. We divide this network into a set of overlapping rectangular subgrids SG_{i,j}, for i, j ≥ 1, of size (2m+1) × (2m+1) each. Each subgrid contains (2m+1)^2 cells, and two adjacent subgrids overlap either in m rows or in m columns. By N_{x,y} we denote the node at row x and column y of the rectangular grid. Deployment knowledge is used to obtain the locations of the nodes after their deployment in the target field. We have designed the network so that any pair of nodes lying in the radio frequency range of each other belong to at least one common subgrid. From the construction, each subgrid contains (2m+1)^2 < t nodes, according to our assumption. Let R_i, 1 ≤ i ≤ r, be the i-th row and C_j, 1 ≤ j ≤ c, the j-th column of the rectangular grid. We say that a node is covered if it shares at least one common subgrid with each node within its communication range. Note that the nodes that lie at the intersection of the rows R_i (1 ≤ i ≤ m) and columns C_j (1 ≤ j ≤ m) are covered by subgrid SG_{1,1}. We let subgrids SG_{1,2} and SG_{2,1} overlap with SG_{1,1} in m columns and m rows respectively, so that the nodes at the intersection of R_i and C_j, for {1 ≤ i, j ≤ 2m+1} \ {1 ≤ i, j ≤ m}, become covered. Similarly, SG_{2,2} intersects SG_{1,2} and SG_{2,1} in m rows and m columns respectively. This automatically covers all the nodes N_{x,y} for 1 ≤ x, y ≤ 2m+2. The overlapping of subgrids is repeated as described above to make all the nodes in the network covered.
Polynomial Share Distribution: Now we apply Blundo's KPS in each subgrid. We randomly choose a bivariate symmetric polynomial f_{ij}(x, y) of degree t > (2m+1)^2 for subgrid SG_{i,j}, i, j ≥ 1, and distribute univariate polynomial shares of f_{ij}(x, y) to each of its (2m+1)^2 nodes. Thus any node with identifier ID in subgrid SG_{i,j} receives its polynomial share P_ID(y) = f_{ij}(ID, y) and is able to establish pairwise keys with the remaining nodes in SG_{i,j} following Blundo's scheme. Let us now discuss the scheme in detail for m = 1 in the following example.
Example (m = 1).
Lemma 21. The subgrid SG_{i,j} consists of (2m+1)^2 = 9 nodes N_{x,y}, where 2i-1 ≤ x ≤ 2i+1 and 2j-1 ≤ y ≤ 2j+1.
Proof. From Figure 1, it follows that the result holds for SG_{1,1}. Without loss of generality, assume that the result is true for i = i1 and j = j1, i.e., the nine nodes of the subgrid SG_{i1,j1} are given by N_{x,y}, where 2i1-1 ≤ x ≤ 2i1+1 and 2j1-1 ≤ y ≤ 2j1+1. Now consider the subgrid SG_{i1+1,j1}. Each subgrid is a 3×3 grid. From the construction it follows that the columns of SG_{i1,j1} and SG_{i1+1,j1} are identical and that they overlap in only one row (since m = 1), namely R_{2i1+1}, which can also be written as R_{2(i1+1)-1}. Therefore, SG_{i1+1,j1} consists of the nine nodes lying at the intersection of the rows R_{2(i1+1)-1}, R_{2(i1+1)} and R_{2(i1+1)+1} and the columns C_{2j1-1}, C_{2j1} and C_{2j1+1}. Thus the nodes of SG_{i1+1,j1} are given by N_{x,y}, where 2(i1+1)-1 ≤ x ≤ 2(i1+1)+1 and 2j1-1 ≤ y ≤ 2j1+1. Similarly, it can be shown that the rows of SG_{i1,j1} and SG_{i1,j1+1} are identical and that they overlap in the column C_{2j1+1}, which can also be written as C_{2(j1+1)-1}. Proceeding in the same manner, the nine nodes of the subgrid SG_{i1,j1+1} are N_{x,y}, where 2i1-1 ≤ x ≤ 2i1+1 and 2(j1+1)-1 ≤ y ≤ 2(j1+1)+1. Thus the result holds for the subgrids SG_{i1+1,j1} and SG_{i1,j1+1} whenever it is true for the subgrid SG_{i1,j1}. Also, the result holds for SG_{1,1}. Hence, by the principle of mathematical induction, the result holds for the subgrid SG_{i,j} for all values of i, j.
(a) According to the description of the scheme, each subgrid corresponds to a distinct bivariate polynomial; hence, the total number of polynomials required is equal to the total number of subgrids present in the network. Let us assume that the subgrids form a matrix consisting of r1 rows and c1 columns. Thus, we must have K = r1 c1. It also follows from the construction that the subgrids are numbered in such a way that the coordinates of the k-th node of the subgrid SG_{i,j} are less than or equal to the coordinates of the k-th node of the subgrid SG_{i',j'}, for 1 ≤ k ≤ 9, whenever
1 ≤ i ≤ i' ≤ r1 and 1 ≤ j ≤ j' ≤ c1.
According to the assumption, N_{r,c} ∈ SG_{r1,c1}. From Lemma 21 it follows that 2r1-1 ≤ r ≤ 2r1+1 and 2c1-1 ≤ c ≤ 2c1+1. Hence we must have r1 ≥ (r-1)/2 and c1 ≥ (c-1)/2. Since r1 and c1 are integers (according to the assumption), we have
r1 = (r-1)/2 when r is odd, and r1 = r/2 when r is even; and c1 = (c-1)/2 when c is odd, and c1 = c/2 when c is even.
Hence, considering the possible combinations of the above cases and substituting the values in the equation K = r1 c1, we obtain the expression given in the first column of the table in the statement of the theorem. (b) Let the node N_{x,y}, for 1 ≤ x ≤ r, 1 ≤ y ≤ c, store exactly one univariate polynomial share. The possible values of x and y depend respectively on the number of rows r and the number of columns c of the rectangular grid. Then it follows from the construction, and can be verified from Figure 1, that
x ∈ {1, 2, ..., r} \ {2k+1 : 1 ≤ k ≤ (r-3)/2} when r is odd, and x ∈ {1, 2, ..., r} \ {2k+1 : 1 ≤ k ≤ r/2 - 1} when r is even.
Hence, we get (r+3)/2 and (r+2)/2 possible cases for r odd and even, respectively. Similarly, we get (c+3)/2 cases when c is odd and (c+2)/2 cases when c is even. Hence, considering the possible combinations of the above cases and multiplying the corresponding values, we obtain the expression given in the second column of the table in the statement of the theorem.
Resilience quantifies the robustness of the network against node capture. We consider the attack model of random node capture, where the adversary captures nodes randomly and extracts the keys stored in them. Blundo's scheme has the t-secure property: the adversary cannot gain any information about the keys of the remaining nodes if fewer than t nodes are compromised when univariate shares of a t-degree bivariate polynomial are assigned to the nodes. Here, we have assigned univariate shares of a t-degree bivariate polynomial, where t > (2m+1)^2, to at most (2m+1)^2 nodes in a subgrid. Hence, even if up to (2m+1)^2 - 2 = 4m^2 + 4m - 1 nodes are captured by the adversary, the remaining two nodes will still be able to establish a pairwise key which is unknown to the adversary. This holds for all the pairwise independent bivariate polynomials. Hence, the network is unconditionally secure, i.e., no matter how many nodes are captured by the adversary, the remaining network remains unaffected.
Comparison : In Table 1, we provide the comparison of our scheme with the existing schemes proposed by Blundo et al. [START_REF] Blundo | Perfectlysecure Key distribution for Dynamic Conferences[END_REF], Liu and Ning [START_REF] Liu | Improving Key Pre-Distribution with Deployment Knowledge in Static Sensor Networks[END_REF], Das and Sengupta [START_REF] Das | An Effective Group-Based Key Establishment Scheme for Large-Scale Wireless Sensor Networks using Bivariate Polynomials[END_REF] and Sridhar et al. [START_REF] Sridhar | Key Predistribution Scheme for Grid Based Wireless Sensor Networks using Quadruplex Polynomial Shares per Node[END_REF]. Here, t denotes degree of the bivariate polynomial; q stands for the order of the underlying finite field F q ; N is the total number of nodes in the network; s denotes the number of nodes compromised by the adversary and t in [START_REF] Das | An Effective Group-Based Key Establishment Scheme for Large-Scale Wireless Sensor Networks using Bivariate Polynomials[END_REF] is assumed to be sufficiently larger than √ N , c is a constant and F is the total number of polynomials in [START_REF] Liu | Improving Key Pre-Distribution with Deployment Knowledge in Static Sensor Networks[END_REF].
Conclusion
Utilizing the advantage of deployment knowledge and t-secure property of Blundo's polynomial based scheme, we design a network, which requires reasonable storage to establish a pairwise key between any two nodes within radio frequency range. The network is unconditionally secure under adversarial attack and can be scaled to a larger network without any disturbance to the existing nodes in the network.
Fig. 1. Polynomial assignment to 3×3 overlapping subgrids in a network, where m = 1.
Table 1. Comparison with existing schemes
Schemes | Deployment Knowledge | Storage | Comm. Cost | Comp. Cost | Full Connectivity | Resilience | Scalable
[1] | No | (t + 1) log q | O(log N) | t + 1 | Yes, 1-hop | t-secure | No
[6] | Yes | c(t + 2) log q | c log |F| | t + 1 | No | t-secure | No
[3] | No | (t + 2) log q | O(log N) | t + 1 | Yes, 2-hop | secure | To some extent
[7] | No | 4(t + 1) log q | O(log N) | O(t log^2 N) | No | depends on s | Yes
Ours | Yes | ≤ 4(t + 1) log q | O(log N) | t + 1 | Yes, 1-hop | secure | Yes
The subgrid indices (i, j) of the subgrid(s) SG_{i,j} to which a node N_{x,y} belongs are given by the following cases.
(i) Let both x and y be even. Then i = x/2 and j = y/2.
(ii) Let x be even and y be odd. Then i = x/2, and j = 1 if y = 1, while j = (y-1)/2 or (y+1)/2 otherwise.
(iii) Let x be odd and y be even. Then j = y/2, and i = 1 if x = 1, while i = (x-1)/2 or (x+1)/2 otherwise.
(iv) Let both x and y be odd. Then i = 1 if x = 1 and i = (x-1)/2 or (x+1)/2 otherwise, and similarly j = 1 if y = 1 and j = (y-1)/2 or (y+1)/2 otherwise.
Proof. From the construction of the scheme, it follows that univariate shares of the bivariate symmetric polynomial f_{ij} are distributed to each of the nine nodes of the subgrid SG_{i,j}. Thus our target is to find the coordinates of the subgrid(s) SG_{i,j} to which a node N_{x,y} belongs. Lemma 21 states that the subgrid SG_{i,j} consists of the nodes N_{x,y} with 2i-1 ≤ x ≤ 2i+1 and 2j-1 ≤ y ≤ 2j+1, so the possible values of i are (x-1)/2, x/2 and (x+1)/2. Since i is an integer, we must have i = x/2 when x is even, and i = (x-1)/2 or (x+1)/2 when x is odd. We further observe from Figure 1 that the first coordinate of all the subgrids, and hence of the corresponding bivariate polynomials assigned to the nodes lying in the first row, is always 1. Similarly, the possible values of j are (y-1)/2, y/2 and (y+1)/2, as follows from Lemma 21. As j is also an integer, we have j = y/2 when y is even, and j = (y-1)/2 or (y+1)/2 when y is odd. We also observe from Figure 1 that the second coordinate of all the subgrids, and hence of the corresponding bivariate polynomials assigned to the nodes lying in the first column, is always 1, according to the construction of our design. Hence, i = x/2 if x is even and i = (x-1)/2 or (x+1)/2 otherwise, and j = y/2 if y is even and j = (y-1)/2 or (y+1)/2 otherwise. Hence, combining all the possible cases for the values of x and y, we obtain the expression given in the statement of the lemma.
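A small Python sketch of the above case analysis for m = 1, returning the subgrid indices of a node N_{x,y}; the helper names are ours and the code assumes 1-based row and column indices:

def subgrid_indices(x):
    # valid for m = 1: consecutive subgrid start positions are two cells apart
    if x == 1:
        return [1]
    if x % 2 == 0:
        return [x // 2]
    return [(x - 1) // 2, (x + 1) // 2]

def subgrids_of_node(x, y):
    return [(i, j) for i in subgrid_indices(x) for j in subgrid_indices(y)]

print(subgrids_of_node(3, 4))   # node N_{3,4} lies in SG_{1,2} and SG_{2,2}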
Theorem 23. We define the following variables for our r × c rectangular grid structure, where K is the total number of symmetric bivariate polynomials required and M1, M2 and M3 denote the total numbers of nodes containing only one, two or four polynomial shares, respectively. We further identify the following cases: Case I: r and c are both odd; Case II: r is odd and c is even; Case III: r is even and c is odd; Case IV: r and c are both even. Then K, M1, M2 and M3 take the values given in the corresponding table for each case. Proof. We provide the proofs in (a) and (b) for the expressions of K and M1, respectively, given in the table, and leave the other two out due to page restrictions.
"1003115",
"1003116",
"1003117"
] | [
"301693",
"301693",
"301693"
] |
01480211 | en | [
"shs",
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480211/file/978-3-642-36818-9_59_Chapter.pdf | Huan Wang
Mingxing He
email: [email protected]
Xiao Li
An Extended Multi-Secret Images Sharing Scheme Based on Boolean Operation
Keywords: Visual cryptography, Boolean operation, Image sharing, Multisecret images
An extended multi-secret images scheme based on Boolean operation is proposed, which is used to encrypt secret images with different dimensions to generate share images with the same dimension. The proposed scheme can deal with grayscale, color, and the mixed condition of grayscale and color images. Furthermore, an example is discussed and a tool is developed to verify the proposed scheme.
Introduction
In traditional confidential communication systems, encryption methods are usually used to protect secret information. However, the main idea of the encryption methods is to protect the secret key [START_REF] Shamir | How to Share a Secret[END_REF]. The concept of visual cryptography is introduced by Naor and Shamir [START_REF] Naor | Visual Cryptography[END_REF], which is used to protect the secret key.
Furthermore, there are a lot of works which are based on multiple-secret sharing schemes. Wang et al. [START_REF] Daoshun | Two Secret Sharing Schemes Based on Boolean Operations[END_REF] develop a probabilistic (2, n) scheme for binary images and a deterministic (n, n) scheme for grayscale image. Shyu et al. [START_REF] Shyu | Sharing Multiple Secrets in Visual Cryptography[END_REF] give a visual secret sharing scheme that encodes secrets into two circle shares such that none of any single share leaks the secrets. Chang et al. [START_REF] Chinchen | Sharing a Secret Two-tone Image in Two Gray-level Images[END_REF] report two spatial-domain image hiding schemes with the concept of secret sharing.
Moreover, many works are based on Boolean operation. Chen et al. [START_REF] Tzung | Efficient Multi-secret Image Sharing Based on Boolean Operations[END_REF] describe an efficient (n + 1, n + 1) multi-secret image sharing scheme based on Boolean-based virtual secret sharing to keep the secret image confidential. Guo et al. [START_REF] Teng | Multi-pixel Encryption Visual Cryptography[END_REF] define multi-pixel encryption visual cryptography scheme, which encrypts a block of t(1 ≤ t) pixels at a time. Chen et al. [START_REF] Yihui | Authentication Mechanism for Secret Sharing Using Boolean Operation[END_REF] describe a secret sharing scheme to completely recover the secret image without the use of a complicated process using Boolean operation. Li et al. [START_REF] Peng | Aspect Ratio Invariant Visual Cryptography Scheme with Optional Size Expansion[END_REF] give an improved aspect ratio invariant visual cryptography scheme without optional size expansion.
In addition, visual cryptography is used in some other fields. Wu et al. [START_REF] Yus | Sharing and Hiding Secret Images with Size Constraint[END_REF] propose a method to handle a secret image to n stego images with the 1/t size of the secret image. Yang et al. [START_REF] Chingnung | Misalignment Tolerant Visual Secret Sharing on Resolving Alignment Difficulty[END_REF] design a scheme based on the trade-off between the usage of big and small blocks to address misalignment problem. Bose and Pathak et al. [START_REF] Bose | A Novel Compression and Encryption Scheme Using Variable Model Arithmetic Coding and Coupled Chaotic System[END_REF] find the best initial condition for iterating a chaotic map to generate a symbolic sequence corresponding to the source message.
These works are interesting and efficient but sometimes have weaknesses, such as pixel expansion problems [START_REF] Naor | Visual Cryptography[END_REF] and the requirement that all the secret images have the same dimension. In general, however, the secret images may have different dimensions. Therefore, we propose an extended multi-secret image sharing scheme based on Boolean operations to encrypt multiple secret images with different dimensions. Moreover, the generated share images have the same dimension, so they do not reveal any information about the secret images, including their dimensions.
The rest of this paper is organized as follows. Section 2 gives the basic definitions. In Section 3, the extended multi-secret image sharing scheme is proposed. An experiment is presented in Section 4. Section 5 concludes this paper.
Preliminaries
In this section, an extended-OR operation and an extended-OR operation chain between any two images with different dimensions are defined. For single pixel values, the bitwise exclusive-OR is used: let x = 30 and y = 203; then x ⊕ y = 00011110 ⊕ 11001011 = 11010101 = 213, where "⊕" is the bitwise exclusive-OR operation. Furthermore, the exclusive-OR operation between any two grayscale or color images with the same dimension is defined in [START_REF] Tzung | Efficient Multi-secret Image Sharing Based on Boolean Operations[END_REF].
Definition 1. Let A(a_ij) and B(b_ij) be two images with different dimensions m×n and h×w, respectively, where m×n ≠ h×w, 0 ≤ a_ij ≤ 255 and 0 ≤ b_ij ≤ 255. The extended-OR operation between A and B is defined as follows. 1) A_{m×n} ⊕ B_{h×w} = A_{m×n} ⊕ B'_{m×n}, where B' is a temporary matrix: if m×n ≤ h×w, B' orderly takes m×n pixels from the head of B; otherwise, B' circularly and orderly takes m×n pixels from the head of B. 2) A_{m×n} ⊕ B_{h×w} = A'_{h×w} ⊕ B_{h×w}, where A' is a temporary matrix: if m×n > h×w, A' orderly takes h×w pixels from the head of A; otherwise, A' circularly and orderly takes h×w pixels from the head of A.
Definition 2. Let A_1, A_2, ..., A_k be k (k > 1) images with different dimensions. The extended-OR operation chain is defined as Ψ_{i=1}^{k} A_i = A_1 ⊕ A_2 ⊕ ... ⊕ A_k. Here, A_1 ⊕ A_2 ≠ A_2 ⊕ A_1 unless A_1 and A_2 have the same dimension.
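A minimal NumPy sketch of case 1) of Definition 1, in which the result keeps A's dimension and B is read from its head, circularly when it is smaller; the function name is ours:

import numpy as np

def ext_xor(A, B):
    flat_b = B.reshape(-1)
    need = A.size
    reps = -(-need // flat_b.size)               # ceiling division
    b_prime = np.tile(flat_b, reps)[:need].reshape(A.shape)
    return np.bitwise_xor(A, b_prime)

A = np.random.randint(0, 256, (4, 6), dtype=np.uint8)
B = np.random.randint(0, 256, (3, 5), dtype=np.uint8)
print(ext_xor(A, B).shape)                       # (4, 6): result has A's dimension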
3 The sharing and reconstruction of multi-secret images
In this section, n secret images with different dimensions are encrypted into n + 1 share images with the same dimension. The images S_l, ..., S_m are denoted by S_[l,m].
The sharing process
Sharing algorithm: the sharing process is composed of the following two parts. Part1. For the n secret images G_[0,n-1], n + 1 temporary images S_[0,n] with different dimensions are generated by the following three steps.
(I) A random integer matrix is generated, which is the first temporary image S 0 with the same dimension as G 1 . Here, ∀x ∈ S 0 , 0 ≤ x ≤ 255.
(II) According to S 0 and the n secret images
G_[0,n-1], n - 1 interim matrices B_[1,n-1] are computed by B_k = G_k ⊕ S_0, k = 1, 2, ..., n-1.
(III) The other n temporary images S_[1,n] are computed by: a) S_1 = B_1; b) S_k = B_k ⊕ B_{k-1} for k = 2, ..., n-1; and c) S_n = G_0 ⊕ B_{n-1}.
Part2. The n + 1 share images with the same dimension are generated from the n + 1 temporary images S_[0,n] by the following steps.
(I) Extract the widths w_[0,n-1] and heights h_[0,n-1] of the n secret images G_[0,n-1]. Let G^wh_[0,n-1] be n matrices with the same dimension 2 × 3, which are used to save w_[0,n-1] and h_[0,n-1], respectively. We have
G^wh_i = [ w^1_i  w^2_i  w^3_i ; h^1_i  h^2_i  h^3_i ],
where w_i = w^1_i × w^2_i × w^3_i with 1 ≤ w^k_i ≤ 255, and h_i = h^1_i × h^2_i × h^3_i with 1 ≤ h^k_i ≤ 255.
Therefore, G^wh_[0,n-1] can be considered as n new secret images. Then n + 1 new temporary images S^wh_[0,n] are generated from G^wh_[0,n-1] using Part1. (II) According to S_[0,n] and S^wh_[0,n], the n + 1 share images can be computed by the following steps.
(1) Let M w = max{w i } and M h = max{h i } + 1.
(2) Generate n + 1 empty share images with dimension M_w × M_h and copy all the elements of the temporary images S_[0,n] into them, respectively; the last lines of the share images are left empty. (3) Copy all the elements of S^wh_[0,n] to the last lines of the share images, respectively. (4) Fill the rest of the n + 1 share images with random numbers between 0 and 255.
Finally, the n + 1 share images are generated with the same dimension M_w × M_h. The proposed sharing scheme is shown in Fig. 2.
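A simplified Python sketch of Part1 of the sharing algorithm; to keep the example short it assumes all secret images share one common dimension, so plain bitwise XOR stands in for the extended-OR chain, and it requires n ≥ 2 (the names G, S and B follow the text):

import numpy as np

def share_part1(G):                     # G = [G_0, ..., G_{n-1}]
    n = len(G)
    S = [np.random.randint(0, 256, G[1].shape, dtype=np.uint8)]   # S_0, same shape as G_1
    B = [None] + [G[k] ^ S[0] for k in range(1, n)]               # B_1, ..., B_{n-1}
    S.append(B[1])                                                # S_1 = B_1
    for k in range(2, n):
        S.append(B[k] ^ B[k - 1])                                 # S_k = B_k xor B_{k-1}
    S.append(G[0] ^ B[n - 1])                                     # S_n = G_0 xor B_{n-1}
    return S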
3.2 The reconstruction process
Part1. The width and height of each secret image can be obtained from the share images as follows. (I) For the n + 1 share images, extract the n + 1 temporary images S^wh_[0,n] with the dimension 2 × 3 from the heads of their last lines, respectively. (II) The n + 1 temporary images S^wh_[0,n] are decrypted using Part2 below to obtain the n matrices G^wh_[0,n-1] of dimension 2 × 3.
(III) Let (w^1_i, w^2_i, w^3_i) and (h^1_i, h^2_i, h^3_i) be the first and second lines of G^wh_i; then w_i = w^1_i × w^2_i × w^3_i and h_i = h^1_i × h^2_i × h^3_i are the width and height of the secret image G_i. (IV) The n + 1 temporary images S_[0,n] can then be obtained from the n + 1 share images according to the widths and heights found in step (III).
Part2. The n secret images G_[0,n-1] can be obtained from S_[0,n] as follows.
(I) The first secret image is
G_0 = S_n ⊕ B_{n-1} = S_n ⊕ (S_{n-1} ⊕ B_{n-2}) = S_n ⊕ (S_{n-1} ⊕ (S_{n-2} ⊕ B_{n-3})) = ... = S_n ⊕ (S_{n-1} ⊕ (S_{n-2} ⊕ (... ⊕ (S_2 ⊕ S_1) ...))).
(II) The n - 1 interim matrices B_k are generated by B_1 = S_1 and B_k = S_k ⊕ B_{k-1}, k = 2, ..., n-1.
(III) The other secret images are computed by G_k = B_k ⊕ S_0, 1 ≤ k ≤ n-1.
Theorem 2. Assume that n secret images G [0,n-1] with different dimensions are encrypted to n + 1 share images S [0,n] , then the secret images G [0,n-1] can be correctly reconstructed using the n + 1 share images S [0,n] .
Proof:
If k = 0: we have Ψ_{i=1}^{n} S_i = S_1 ⊕ S_2 ⊕ ... ⊕ S_n = B_1 ⊕ (B_2 ⊕ B_1) ⊕ ... ⊕ (B_{n-1} ⊕ B_{n-2}) ⊕ (G_0 ⊕ B_{n-1}) = G_0.
If k ≥ 1: we have Ψ_{i=0}^{k} S_i = S_0 ⊕ S_1 ⊕ ... ⊕ S_k = S_0 ⊕ B_1 ⊕ (B_2 ⊕ B_1) ⊕ ... ⊕ (B_k ⊕ B_{k-1}) = S_0 ⊕ B_k = G_k.
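Continuing the simplified sharing sketch above (same assumptions and the same share_part1 helper), the reconstruction and a round-trip check corresponding to Theorem 2:

def reconstruct(S):                      # S = [S_0, ..., S_n]
    n = len(S) - 1
    G0 = S[n].copy()
    for k in range(n - 1, 0, -1):        # G_0 = S_n xor S_{n-1} xor ... xor S_1
        G0 ^= S[k]
    B = [None, S[1]]
    for k in range(2, n):
        B.append(S[k] ^ B[k - 1])        # B_k = S_k xor B_{k-1}
    G = [G0] + [B[k] ^ S[0] for k in range(1, n)]
    return G

secrets = [np.random.randint(0, 256, (8, 8), dtype=np.uint8) for _ in range(4)]
shares = share_part1(secrets)
assert all((a == b).all() for a, b in zip(secrets, reconstruct(shares)))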
Color images and the mixed condition of grayscale/color images
The difference between handling color and grayscale images is that each pixel of 24-bit color images can be divided into three pigments, i.e., red (r), green (g), and blue (b). We have A⊕ B = [a i,j,k ⊕ b i,j,k ], where k = r, g, b.
For the mixed condition, each color image is divided into three grayscale images (its red, green and blue channels) of identical dimension. Let A be a grayscale image and B a color image; then A ⊕ B = [a_{i,j} ⊕ b_{i,j,k}], where k = r (red), g (green), b (blue).
Verification and discussion
To verify the correctness of the proposed extended scheme, a tool is developed. Example: There are five secret grayscale images G_0, G_1, G_2, G_3, G_4 with different dimensions, as shown in Fig. 3(a). Here, M_w = 640 and M_h = 480 + 1 = 481. The five secret images are encrypted and extended to six share images with the same dimension 640×481 using our tool, as shown in Fig. 3(b). The reconstructed images are also decrypted using this tool, as shown in Fig. 3(c). However, it is unsatisfactory to encrypt these five secret images using the schemes developed in [START_REF] Daoshun | Two Secret Sharing Schemes Based on Boolean Operations[END_REF][START_REF] Shyu | Sharing Multiple Secrets in Visual Cryptography[END_REF][START_REF] Chinchen | Sharing a Secret Two-tone Image in Two Gray-level Images[END_REF][START_REF] Tzung | Efficient Multi-secret Image Sharing Based on Boolean Operations[END_REF][START_REF] Teng | Multi-pixel Encryption Visual Cryptography[END_REF][START_REF] Yihui | Authentication Mechanism for Secret Sharing Using Boolean Operation[END_REF][START_REF] Peng | Aspect Ratio Invariant Visual Cryptography Scheme with Optional Size Expansion[END_REF][START_REF] Yus | Sharing and Hiding Secret Images with Size Constraint[END_REF][START_REF] Chingnung | Misalignment Tolerant Visual Secret Sharing on Resolving Alignment Difficulty[END_REF][START_REF] Bose | A Novel Compression and Encryption Scheme Using Variable Model Arithmetic Coding and Coupled Chaotic System[END_REF], since for any two secret images some pixels in the larger (dimension) secret image are out of the operation range of the smaller one, and these pixels certainly cannot be encrypted. The comparison of these schemes is shown in Table 1.
Table 1. Comparison of these schemes
Schemes | Pixel expansion | Image distortion | Dimension restriction
In [START_REF] Daoshun | Two Secret Sharing Schemes Based on Boolean Operations[END_REF][START_REF] Bose | A Novel Compression and Encryption Scheme Using Variable Model Arithmetic Coding and Coupled Chaotic System[END_REF] | No | Yes | Yes
In [START_REF] Shyu | Sharing Multiple Secrets in Visual Cryptography[END_REF][START_REF] Chinchen | Sharing a Secret Two-tone Image in Two Gray-level Images[END_REF] | Yes | Yes | Yes
In [START_REF] Tzung | Efficient Multi-secret Image Sharing Based on Boolean Operations[END_REF][START_REF] Yihui | Authentication Mechanism for Secret Sharing Using Boolean Operation[END_REF] | No | No | Yes
In [START_REF] Teng | Multi-pixel Encryption Visual Cryptography[END_REF][START_REF] Peng | Aspect Ratio Invariant Visual Cryptography Scheme with Optional Size Expansion[END_REF] | Yes | No | Yes
In [START_REF] Yus | Sharing and Hiding Secret Images with Size Constraint[END_REF][START_REF] Chingnung | Misalignment Tolerant Visual Secret Sharing on Resolving Alignment Difficulty[END_REF] | No | Yes | Yes
This paper | No | No | No
Conclusions
An extended multi-secret image sharing scheme based on Boolean operations is proposed, which can share multiple secret images with different dimensions. Both grayscale and color images can be handled by our scheme. Furthermore, the scheme can handle the mixed condition of grayscale and color images, and the share images do not suffer from pixel expansion. Moreover, the reconstructed secret images have the same dimensions as the original ones. In addition, the share images do not leak any information about the secret images, including their dimensions.
Fig. 1. An example of the extended-OR operations.
Theorem 1. Assume that n secret images G_[0,n-1] with different dimensions are encrypted into n + 1 share images S_[0,n]. None of the share images reveals any information independently.
Fig. 2. Sharing process and the structure of a share image.
Fig. 3. An example with five secret images.
Acknowledgments. This work is supported by the National Nature Science Foundation of China (No. 60773035), the International Cooperation Project in Sichuan Province (No. 2009HH0009) and the fund of Key Disciplinary of Sichuan Province (No. SZD0802-09-1). | 14,211 | [
"1003121",
"1003122",
"1003123"
] | [
"487165",
"487165",
"487165"
] |
01480216 | en | [
"shs",
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480216/file/978-3-642-36818-9_9_Chapter.pdf | Susmit Bagchi
email: [email protected]
Analyzing Stability of Algorithmic Systems using Algebraic Constructs
Keywords: recursive algorithms, z-domain, stochastic, control theory, perturbation
In general, the modeling and analysis of algorithmic systems involve discrete structural elements. However, the modeling and analysis of recursive algorithmic systems can be done in the form of differential equation following control theoretic approaches. In this paper, the modeling and analysis of generalized algorithmic systems are proposed based on heuristics along with z-domain formulation in order to determine the stability of the systems. The recursive algorithmic systems are analyzed in the form of differential equation for asymptotic analysis. The biplane structure is employed for determining the boundary of the recursions, stability and, oscillatory behaviour. This paper illustrates that biplane structural model can compute the convergence of complex recursive algorithmic systems through periodic perturbation.
Introduction
The algorithm design and analysis are the fundamental aspects of any computing systems. The modeling and analysis of algorithms provide an analytical insight along with high-level and precise description of the functionalities of systems [START_REF] Bisnik | Modeling and analysis of random walk search algorithms in P2P networks[END_REF][START_REF] Olveczky | Formal modeling and analysis of wireless sensor network algorithms in Real-Time Maude[END_REF][START_REF] Ozsu | Modeling and Analysis of Distributed Database Concurrency Control Algorithms Using an Extended Petri Net Formalism[END_REF]. In general, the recursive algorithms are widely employed in many fields including computer-controlled and automated systems [START_REF] Ljung | Analysis of recursive stochastic algorithms[END_REF]. Traditionally, the algorithms are analyzed within the discrete time-domain paying attention to the complexity measures. However, the convergence property and the stability analysis of the algorithms are two important aspects of any algorithmic systems [START_REF] Ljung | Analysis of recursive stochastic algorithms[END_REF]. In case of recursive algorithms, the convergence analysis is often approximated case by case. The asymptotic behaviour of algorithms is difficult to formulate with generalization [START_REF] Olveczky | Formal modeling and analysis of wireless sensor network algorithms in Real-Time Maude[END_REF][START_REF] Ljung | Analysis of recursive stochastic algorithms[END_REF]. The asymptotic behaviour of stochastic recursive algorithms is formulated by constructing models [START_REF] Ljung | Analysis of recursive stochastic algorithms[END_REF], however, such models fail to analyze the stability of the algorithm in continuous time domain throughout the execution. This paper argues that the stability analysis of any algorithm can be performed within the frequency-domain by considering the algorithms as functional building blocks having different configurations. In order to perform generalized frequency-domain analysis, the algorithms are required to be modeled and transformed following the algebraic constructs. Furthermore, this paper proposes that boundary of execution of recursive algorithms can be analyzed following biplane structure and the stability of the algorithms can be observed in the presence of stochastic input by following the traces in the biplane structure bounding the algorithms. The proposed analytical models are generalized without any specific assumptions about the systems and thus, are applicable to wide array of algorithmic systems. This paper illustrates the mechanism to construct analytical model of any complex algorithmic system and methods to analyze the stability of the system under consideration. The rest of the paper is organized as follows. Section 2 describes related work. Section 3 illustrates the modeling and analysis of the algorithms in frequency-domains and their stability analysis using biplane structure. Section 4 and 5 present discussion and conclusion, respectively.
Related Work
The modeling of computer systems and algorithms is useful to gain an insight to the designs as well as to analyze the inherent properties of the systems [START_REF] Reisig | Elements of distributed algorithms: modeling and analysis with petri nets[END_REF][START_REF] Bisnik | Modeling and analysis of random walk search algorithms in P2P networks[END_REF][START_REF] Olveczky | Formal modeling and analysis of wireless sensor network algorithms in Real-Time Maude[END_REF][START_REF] Ozsu | Modeling and Analysis of Distributed Database Concurrency Control Algorithms Using an Extended Petri Net Formalism[END_REF][START_REF] Keinert | Modeling and Analysis of Windowed Synchronous Algorithms[END_REF][START_REF] Ljung | Analysis of recursive stochastic algorithms[END_REF]. For example, the fusion of models of artificial neural network (ANN) and fuzzy inference systems (FIS) are employed in many complex computing systems. The individual models of the ANN and FIS are constructed and their interactions are analyzed in order to establish a set of advantages and disadvantages overcoming the complexities of these systems [START_REF] Abraham | Neuro Fuzzy Systems: State-of-the-Art Modeling Techniques[END_REF]. The other successful applications of modeling techniques to the distributed algorithms and the distributed database in view of Petri Nets are represented in [START_REF] Reisig | Elements of distributed algorithms: modeling and analysis with petri nets[END_REF][START_REF] Ozsu | Modeling and Analysis of Distributed Database Concurrency Control Algorithms Using an Extended Petri Net Formalism[END_REF]. It is illustrated how Petri Nets can be employed to model and analyze complex distributed computing algorithms [START_REF] Reisig | Elements of distributed algorithms: modeling and analysis with petri nets[END_REF]. However, in case of distributed database, the concurrency control algorithms are modeled by formulating extended place/transition net (EPTN) [START_REF] Ozsu | Modeling and Analysis of Distributed Database Concurrency Control Algorithms Using an Extended Petri Net Formalism[END_REF]. The EPTN formalism is a derivative of the Petri Nets. In structured peer-to-peer (P2P) networks, the random-walks mechanism is used to implement searching of information in minimum time. The model of searching by random-walks in P2P network is constructed to obtain analytical expressions representing performance metrics [START_REF] Bisnik | Modeling and analysis of random walk search algorithms in P2P networks[END_REF]. Following the model, an equation-based adaptive search in P2P network is presented. The analysis of probabilistic as well as real-time behaviour and the correctness of execution are the challenges of systems involving wireless sensor networks (WSN). Researchers have proposed the modeling techniques of WSN to analyze the behaviour, correctness and performance of WSN by using Real-Time Maude [START_REF] Olveczky | Formal modeling and analysis of wireless sensor network algorithms in Real-Time Maude[END_REF]. The Real-Time Maude model provides an expressive tool to perform reachability analysis and the checking of temporal logic in WSN systems. On the other hand, the modeling and analysis of hand-off algorithms for cellular communication network are constructed by employing various modeling formalisms [START_REF] Leu | Modeling and analysis of fast handoff algorithms for microcellular networks[END_REF][START_REF] Chi | Modeling and Analysis of Handover Algorithms[END_REF]. 
The modeling of fast hand-off algorithms for microcellular network is derived by using the local averaging method [START_REF] Leu | Modeling and analysis of fast handoff algorithms for microcellular networks[END_REF]. The performance metrics of the fast hand-off algorithms and the necessary conditions of cellular structures are formulated by using the model construction. In another approach, the modeling technique is employed to evaluate the hand-off algorithms for cellular network [START_REF] Chi | Modeling and Analysis of Handover Algorithms[END_REF]. In this case, the model is constructed based on the estimation of Wrong Decision Probability (WDP) and the hand-off probability [START_REF] Chi | Modeling and Analysis of Handover Algorithms[END_REF]. In the image processing systems, the modeling and analysis of signals are performed by designing the sliding window algorithms. Researchers have proposed the Windowed Synchronous Data Flow (WSDF) model to analyze the sliding window algorithms [START_REF] Keinert | Modeling and Analysis of Windowed Synchronous Algorithms[END_REF]. The WSDF is constructed as a static model and a WSDF-balance equation is derived.
The analysis of convergence of any algorithm is an important phenomenon [START_REF] Rudolph | Convergence analysis of canonical genetic algorithms[END_REF][START_REF] Ljung | Analysis of recursive stochastic algorithms[END_REF]. The convergence analysis of canonical genetic algorithms is analyzed by using modeling techniques based on homogeneous finite Markov chain [START_REF] Rudolph | Convergence analysis of canonical genetic algorithms[END_REF]. The constructed model illustrates the impossibility of the convergence of canonical genetic algorithms towards global optima. The model is discussed with respect to the schema theorem. On the other hand, the modeling and analysis of generalized stochastic recursive algorithms are performed using heuristics [START_REF] Ljung | Analysis of recursive stochastic algorithms[END_REF]. The heuristic model explains the asymptotic behaviour of stochastic recursive algorithms. However, the model does not perform the stability analysis of the recursive algorithms in the presence of stochastic input.
Models of Algorithms in z-domain
The z-domain analysis is widely used to analyze the dynamics and stability of discrete systems. Computing algorithms can be modeled in the z-domain in order to carry out both heuristic analysis and stability analysis of the various algorithmic models in terms of their transfer functions.
Singular model
In the singular model, the algorithm is considered as a transfer function with a single-input single-output (SISO) mechanism. The schematic representation of the singular model is presented in Fig. 1. Let the non-commutative composition of any two functions x and y be denoted by (x ∘ y). Thus, the dynamics of the singular algorithmic model can be composed as
v(k) = A_1(f(k)) = (A_1 ∘ f)(k). Let α_1 = (A_1 ∘ f); hence, in the z-domain, v(z) = Σ_{k=0}^{∞} α_1(k) z^{-k} = α_1(z).
The algorithmic transfer function is stable if α_1(z) is a monotonically decreasing function for sufficiently large k.
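A small Python sketch that probes this stability notion numerically by evaluating a truncated z-transform of α_1(k); the gain A_1 and the input f are illustrative choices, not part of the model itself:

def z_transform(seq, z, K=200):
    # truncated sum of seq(k) * z^{-k} over the first K instants
    return sum(seq(k) * z ** (-k) for k in range(K))

A1 = lambda x: 0.5 * x                 # illustrative algorithmic gain
f  = lambda k: 0.9 ** k                # decaying discrete input
alpha1 = lambda k: A1(f(k))            # alpha_1 = A_1 o f

print(abs(z_transform(alpha1, 1.0)))   # finite partial sum: alpha_1(k) decays with k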
Chained model
In the chained model of the algorithmic system, two independent algorithms are put in series as illustrated in Fig. 2.
Fig. 2. Schematic representation of chained model
In the chained model, the two algorithms act as independent transfer functions transforming a discrete input into a discrete output at every instant k. Thus, the overall transfer function of the chained model can be presented as v(k) = (A_2 ∘ α_1)(k) = α_21(k). Hence, in the z-domain, v(z) = α_21(z), and the chained algorithms are stable if α_21(z) is monotonically decreasing for sufficiently large k.
Parallel models
In case of parallel model, two (or more) independent algorithms execute in parallel on a single set of input at every instant k and, the final output of the system is composed by combining the individual outputs of the algorithms. A 2-algorithms parallel model is illustrated in Fig. 3.
Thus, v(k) = (A_1 ∘ f)(k) + (A_2 ∘ f)(k) = α_1(k) + α_2(k).
Hence, in the z-domain, the discrete values of the output can be presented as v(z) = α_1(z) + α_2(z). This indicates that a parallel algorithmic system is stable if either the individual algorithmic transfer functions are monotonically decreasing or the combined transfer function of the system converges for sufficiently large k. On the other hand, the 2-algorithm parallel model can be further extended to a parallel-series model by adding another algorithm in series, as illustrated in Fig. 4. However, in this case algorithm A_3 transforms the output of the parallel computation in a deterministic execution at the discrete instants k. Hence, the final output of the system is
v_f(k) = A_3(α_1(k) + α_2(k)). As v(k) = α_1(k) + α_2(k), we have v_f(k) = α_3(k), where α_3 = (A_3 ∘ v) and v_f(z) = α_3(z). The parallel-series model is stable if α_3(z) is a converging function. This indicates that v_f(z) can be stable even if v(z) is a diverging function, provided A_3(v(z)) is a monotonically converging function.
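A brief Python sketch of this last point: a diverging parallel output v(k) combined with a damping series stage A_3 that still yields a converging v_f(k); both functions are illustrative choices only:

import math

v  = lambda k: 1.1 ** k                    # diverging combined parallel output
A3 = lambda x: math.exp(-x)                # illustrative damping series stage
vf = [A3(v(k)) for k in range(40)]
print(sum(vf), vf[-1] < 1e-12)             # the v_f sequence converges rapidly to zero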
Recursion with stochastic observation
The recursive algorithms are widely used in computing systems. The fundamental aspect of any computing system involving the recursive algorithm is the existence of a feedback path as illustrated in Fig. 5. In the feedback algorithmic model, the feedback path is positive and the feedback gain can be either unity or can have any arbitrary transfer-gain. In pure recursive computing model, the feedback path will have unit gain and the system will be controlled by external input f(k) at k = 0, whereas the further input series to the system will change to v(k) due to positive feedback for k > 0, where f(k > 0) = 0. The behaviour of such system can be analyzed by using two formalisms such as, heuristics and z-domain analysis techniques.
Heuristics analysis
The generalized difference equation of the recursive algorithmic system is given as, v(k) = A 1 (A 2 (v(k-1)) + f(k)). In case of positive feedback with unit gain, the closed-loop difference equation controlling the system is given as,
v(k) = A 1 (v(k-1) + f(k)) (1)
Equation ( 1) represents a recursive algorithm with stochastic input f(k), where A 1 is a deterministic function having a set of regularity conditions. The function f(k) can be generalized by using some function y as,
f(k) = y(v(k-1), f(k-1), e(k)) (2)
where, e(k) is an arbitrary error in the system at the execution instant k.
The stability of the whole system depends on the stability of equation (2). If f(k) is exponentially stable within a small neighborhood around k after some point b (k >> b), then [START_REF] Ljung | Analysis of recursive stochastic algorithms[END_REF],
B(v(b-1), f(b)) = h(v(k)) + r(b)    (3)
where B(v(k-1), f(k)) = A1(v(k-1) + f(k)), h(v(·)) = E[B(v, f(b))] and r(b) is a random variable with zero mean. Thus, equation (1) can be represented as
v(k) = B(v(k-1), f(k)) (4)
Hence, equation (4) can be approximately evaluated between k and k+a (a > 0) as
v(k+a) = v(k) + Σ_{j=k+1}^{k+a} B(v(j-1), f(j)) ≈ v(k) + Σ_{j=k+1}^{k+a} h(v(k)) + Σ_{j=k+1}^{k+a} r(j) ≈ v(k) + a·h(v(k))    (5)
In equation (5), the random-variable term is eliminated as it has zero mean. Hence, the differential equation at point a is given by
lim_{a→0} [v(k+a) - v(k)]/a = dv(k)/da = h(v(k))    (6)
Thus, the asymptotic properties of the equation ( 1) can be further derived from equation ( 6) in the form of derivative for any specific recursive algorithmic system.
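The ODE viewpoint in (6) can be illustrated numerically. The Python sketch below is a hypothetical example, not taken from the text: it runs a recursion whose update direction B(v, f) has a zero-mean disturbance, and compares the stochastic trajectory with the deterministic Euler solution of dv/da = h(v), where h(v) = E[B(v, f)].

import random

theta = 2.0            # hypothetical target value of the recursion
a = 0.05               # small step between successive evaluations

def B(v, noise):       # stochastic update direction
    return (theta - v) + noise

def h(v):              # h(v) = E[ B(v, f) ] because the noise has zero mean
    return theta - v

random.seed(0)
v = 0.0                # stochastic iterate
u = 0.0                # deterministic ODE iterate (Euler on dv/da = h(v))
for k in range(400):
    v += a * B(v, random.gauss(0.0, 0.5))
    u += a * h(u)
print("stochastic recursion :", round(v, 3))
print("ODE approximation    :", round(u, 3), "(both should approach", theta, ")")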
Stability in z-domain
For the stability analysis in the z-domain, it is assumed that A1(k) represents the gain factor of A1 at the k-th instant of execution of the algorithm. Now,
v(k) = (η·v(k-1) + f(k))·A1(k), where f(k) is a singular external input to A1 defined as f(k) = m if k = 0 and f(k) = 0 otherwise. Hence, v(k) = η·A1(k)·v(k-1) + f(k)·A1(k). Initially, at k = 0, v(0) = m·A1(0). Hence, v(k) = η·A1(k)·v(k-1) + m·A1(0). If the system is purely recursive, then the feedback gain is unity (η = 1) and v(k) = A1(k)·v(k-1) + m·A1(0). Thus, in the z-domain the system is represented as v(z) = m·A1(0)·z/(z-1) + Σ_{k=2}^{∞} A1(k)·v(k-1)·z^{-k}. Deriving further, one gets
v(z) = m·A1(0)·z/(z-1) + {A1(1)·v(0)/z + A1(2)·v(1)/z² + …} = m·A1(0)·z/(z-1) + m·A1(0)·Σ_{k=1}^{∞} A1(k)·z^{-k} + Σ_{k=2}^{∞} {Π_{j=k-1}^{k} A1(j)}·v(k-2)·z^{-k} = m·A1(0)·z/(z-1) + m·A1(0)·[A1(z) - A1(0)] + Λ_z    (7)
where
Λ_z = Σ_{k=2}^{∞} {Π_{j=k-1}^{k} A1(j)}·v(k-2)·z^{-k}.
The system is stable if Λ_z diminishes or converges for sufficiently large k.
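A concrete gain profile makes equation (7) easy to evaluate. The Python sketch below is illustrative only; the decaying gain A1(k) and the input magnitude m are hypothetical. It unrolls the recursion used in the text, v(k) = A1(k)·v(k-1) + m·A1(0), and monitors whether the partial sums of Σ v(k)·z^{-k} settle down at a sample point outside the unit circle.

# Hypothetical decaying gain profile A1(k); m is the singular input magnitude.
def A1(k):
    return 0.9 / (1.0 + 0.1 * k)

m = 1.0
K = 200
v = [m * A1(0)]                              # v(0) = m * A1(0)
for k in range(1, K):
    v.append(A1(k) * v[k - 1] + m * A1(0))   # recursion from the text

def partial_sums_vz(z):
    s, tail = 0.0, []
    for k in range(K):
        s += v[k] * z ** (-k)
        tail.append(s)
    return tail

tail = partial_sums_vz(1.2)
print("v(z) partial sums (last three):", [round(x, 4) for x in tail[-3:]])
print("approximately converged:", abs(tail[-1] - tail[-2]) < 1e-6)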
Functional properties
The functional properties of a generalized recursive algorithm with unit positive feedback characterize the stability of the overall system in the presence of oscillation, if any. In addition, the concept of biplane symmetry can be used to analyze the bounds of a recursive algorithmic system. The generalized recursive algorithmic model with positive transfer-gain is represented as v(k) = A1(A2(v(k-1)) + f(k)). Let (A1 ∘ A2) = δ and let f(k) be a singular external input to the algorithm defined as f(k) = m if k = 0 and f(k) = 0 otherwise. Thus, the initial output value is v(0) = A1(m) and v(k) = δ^k(d), where d = A1(m). Now, if A2 is a unit gain factor, then the system reduces to a pure recursive algorithm such that
v(k) = A1^k(d), k > 0.
The stability and behavioral properties of the recursive algorithmic system can be further analyzed as follows.
Stability and Convergence
Let ƒ: R → R be a stochastic function defined on the space R such that δ(d) ∈ ƒ(R) ⊂ R and |ƒ(R)| > 1. Now, for k > 0, δ^k(d) ∈ ƒ^k(R) such that either ƒ^k(R) ∩ ƒ^{k+1}(R) = {φ} or ƒ^k(R) ∩ ƒ^{k+1}(R) ≠ {φ}, depending on the dynamics. A system is bounded if ƒ^{k+1}(R) ⊆ ƒ^k(R). The boundary of δ^k(d) is ∆_k = ∩_{i=1}^{k} ƒ^i(R). An ε-cut of ƒ^k(R) is defined as ƒ_{kε} ⊂ ƒ^k(R) such that ∀a ∈ ƒ_{kε} the following condition is satisfied: ε ∈ ƒ^k(R) and a > ε. The instantaneous remainder of ƒ^k(R) is given by ƒ̄_{kε} = (ƒ^k(R) - ƒ_{kε}). A system is stable at point N if the boundary ∆_N ≠ {φ}, where 1 ≤ |∆_N| ≤ w and w << N. A converging system is a stable system at recursion level N with |∆_N| = 1.
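These set-level notions can be approximated with simple interval arithmetic. The Python sketch below uses a hypothetical monotone contracting map on R = [0, 1] for illustration only: it iterates the image intervals standing in for ƒ^k(R), checks the nesting condition ƒ^{k+1}(R) ⊆ ƒ^k(R), and accumulates the boundary ∆_N as the running intersection.

# Hypothetical monotone map on R = [0, 1]; intervals stand in for f^k(R).
def f_interval(lo, hi):
    # image of [lo, hi] under f(x) = 0.5*x + 0.2 (monotone increasing)
    return 0.5 * lo + 0.2, 0.5 * hi + 0.2

R = (0.0, 1.0)
images = [R]
for _ in range(20):
    images.append(f_interval(*images[-1]))

nested = all(images[k + 1][0] >= images[k][0] and images[k + 1][1] <= images[k][1]
             for k in range(1, len(images) - 1))

# boundary Delta_N: running intersection of the image intervals
lo = max(iv[0] for iv in images[1:])
hi = min(iv[1] for iv in images[1:])
print("bounded (nested images):", nested)
print("boundary Delta_N ≈ [%.4f, %.4f]" % (lo, hi), "-> converging" if hi - lo < 1e-3 else "")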
Divergence in systems
Let, in a system, δ^{k-1}(d) ∈ ƒ_{(k-1)ε} whereas δ^k(d) ∈ ƒ_{kε} and δ^{k+1}(d) ∈ ƒ_{(k+1)ε} such that δ^{k-1}(d) < δ^k(d) < δ^{k+1}(d). The system is divergent if ƒ_{(k-1)ε} ∩ ƒ_{kε} ∩ ƒ_{(k+1)ε} = {φ}. A divergent system is unstable if the limit of recursion k >> 1.
Biplane symmetries
Let, in a system for k ≥ 1, ƒ^k(R) = ƒ(R), and let ƒ*: R → R be such that (ƒ*)^k(R) = ƒ*(R), where ƒ(R) ∩ ƒ*(R) = {φ}. Furthermore, ƒ*_ε is the ε-cut of ƒ*(R) and ƒ_ε is the ε-cut of ƒ(R), whereas the corresponding remainders are ƒ̄*_ε and ƒ̄_ε, respectively. Let δ^p(d) ∈ ƒ(R) for p = 1, 3, 5, … and δ^q(d) ∈ ƒ*(R) for q = p + 1. Now, if x_{p+j} = δ^{p+j}(d) and y_{q+j} = δ^{q+j}(d), j = 0, 2, 4, 6, …, then the following set of predicates can occur in the system:
P1 ⇒ [(x_p ∈ ƒ_ε) ∧ (x_{p+2} ∈ ƒ*_ε) ∧ (x_{p+4} ∈ ƒ_ε) ∧ …]
P2 ⇒ [(x_p ∈ ƒ_ε) ∧ (x_{p+2} ∈ ƒ_ε) ∧ (x_{p+4} ∈ ƒ_ε) ∧ …]
P3 ⇒ [(y_q ∈ ƒ*_ε) ∧ (y_{q+2} ∈ ƒ*_ε) ∧ (y_{q+4} ∈ ƒ*_ε) ∧ …]
P4 ⇒ (x_{p+j} ∈ ƒ_ε)
P5 ⇒ (x_{p+j} ∈ ƒ̄_ε)
P6 ⇒ (y_{q+j} ∈ ƒ*_ε)
P7 ⇒ (y_{q+j} ∈ ƒ̄*_ε)
The possible combinatorial distributions of predicates in a recursive algorithmic system are P13, P23, P46, P47, P56, P57, where Pab = (Pa ∧ Pb). If the distributions P46 and P57 are valid in a recursive algorithmic system, then it is a biplane-symmetric algorithmic system. Otherwise, if the distributions P47 and P56 are valid in a system, then the system is a biplane-asymmetric system. Furthermore, if the distribution P23 is satisfied in a recursive algorithmic system, then the system has dual symmetry between the biplanes ƒ and ƒ* and is represented as [ƒ/ƒ*]. On the other hand, if the distribution P13 is satisfied in a recursive algorithmic system, then the system is called a Bounded-Periodic-Perturbed (BPP) system, represented as (ƒ*_ε|. In this case, the system is bounded within the ƒ and ƒ* planes; however, periodic perturbations occur within the domain ƒ*_ε.
Oscillation in recursive systems
In a biplane-symmetric system, if the following properties hold, then it is called a biplane-symmetric oscillatory recursive system: ∀p, q, |x_p| = |y_q| = |x_{p+j}| = |y_{q+j}| and x_p + y_q = x_{p+j} + y_{q+j} = 0. However, in a [ƒ/ƒ*] system, if the following conditions hold, then the system is called asymmetrically oscillating between the ƒ and ƒ* planes for values of s (s = 0, 4, 6, …): ∀p, q, |x_{p+s}| = |y_{q+s}|, |x_{p+s+2}| = |y_{q+s+2}| and x_{p+s} + y_{q+s} = 0, x_{p+s+2} + y_{q+s+2} = 0. If a recursive algorithmic system is oscillatory, then it is a deterministic but non-converging system. A recursive algorithmic system is deterministic and converging if there exists a constant C such that Σ_{p=1}^{N} (x_p + y_{p+1}) = Σ_{p=1}^{M} (x_p + y_{p+1}) = C, where N ≠ M. This indicates that a deterministic and converging recursive algorithmic system should be in damped (stable) oscillation and should exhibit idempotency. On the other hand, an oscillatory non-converging recursive algorithmic system is non-idempotent, requiring strict consistency conditions.
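A minimal numerical check of these oscillation conditions is shown below; the map δ(x) = -x is a hypothetical example chosen only because it is the simplest case whose iterates alternate between two disjoint ranges, satisfy |x_p| = |y_q| and x_p + y_q = 0, and would therefore be classified as biplane-symmetric oscillatory.

# delta(x) = -x is the simplest biplane-symmetric oscillatory recursion.
def delta(x):
    return -x

d = 1.0
vals = []
x = d
for k in range(1, 11):
    x = delta(x)
    vals.append(x)           # vals[k-1] = delta^k(d)

odd = vals[0::2]             # iterates for p = 1, 3, 5, ...
even = vals[1::2]            # iterates for q = 2, 4, 6, ...
symmetric_osc = (all(abs(a) == abs(b) for a, b in zip(odd, even))
                 and all(a + b == 0 for a, b in zip(odd, even)))
print("biplane-symmetric oscillation:", symmetric_osc)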
Discussion
Traditionally, recursive algorithmic systems are analyzed by using heuristics as well as asymptotic methods based on difference equations. Often, a differential equation is formulated in place of the difference equation in order to conduct the analysis in the continuous plane and avoid complexity. However, the generalized z-domain analysis of algorithmic systems in a discrete plane offers insight into the overall stability of the algorithmic systems.
The perturbation analysis of a system using biplane structure captures the inherent oscillation in the system. The determinism of convergence of the recursive algorithmic systems with stochastic input can be computed using the symmetry of biplane structure. As a result, the idempotent property of the recursive algorithmic systems becomes easily verifiable in case of complex systems. Thus, depending upon the idempotent property of the complex recursive algorithmic systems, the appropriate consistency conditions can be designed.
Conclusion
The analysis of stability and behaviour of any algorithmic system can be accomplished by modeling such a system as a block having transfer-functional properties. The z-domain analysis of algorithmic models captures the overall response trajectory and the stability of the algorithmic systems. Complex recursive algorithmic systems can be analyzed by modeling them in view of the z-domain and the biplane structure. The heuristic and z-domain models of a generalized recursive algorithmic system with stochastic input reduce the overall system to a differential equation presenting the dynamic behaviour of the recursive algorithm. On the other hand, the biplane structure determines the boundaries of the recursive algorithmic systems. In addition, the biplane structural model of recursive algorithmic systems serves as a tool to analyze the oscillatory nature of the recursions as well as the stability of the algorithmic systems. The biplane structural model helps to capture periodic perturbation in the system dynamics and to determine convergence conditions, which enables the design of appropriate consistency conditions.
Fig. 1. Schematic representation of singular model
Fig. 3. Schematic representation of 2-algorithms parallel model
Fig. 4. Schematic representation of 2-algorithms parallel-series model
Fig. 5. Schematic representation of recursive model | 23,423 | [
"1003134"
] | [
"487170"
] |
01480224 | en | [
"shs",
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480224/file/978-3-642-36818-9_11_Chapter.pdf | Nam Chan
Tran Khanh Ngo
Dang
On Efficient Processing of Complicated Cloaked Region for Location Privacy Aware Nearest-Neighbor Queries
Keywords: Location-based service, database security and integrity, user privacy, nearest-neighbor query, complicated cloaked region, group execution
The development of location-based services has brought not only conveniences to users' daily life but also great concerns about users' location privacy. Thus, privacy aware query processing that handles cloaked regions has become an important part in preserving user privacy. However, the state-of-theart private query processors only focus on handling rectangular cloaked regions, while lacking an efficient and scalable algorithm for other complicated cloaked region shapes, such as polygon and circle. Motivated by that issue, we introduce a new location privacy aware nearest-neighbor query processor that provides efficient processing of complicated polygonal and circular cloaked regions, by proposing the Vertices Reduction Paradigm and the Group Execution Agent. In the Vertices Reduction Paradigm, we also provide a new tuning parameter to achieve trade-off between answer optimality and system scalability. Finally, experimental results show that our new query processing algorithm outperforms previous works in terms of both processing time and system scalability.
Introduction
To preserve the LBS user's location privacy, the most trivial method is to remove the direct private information such as identity (e.g., SSID). However, other private information, such as position and time, can also be used to violate the user's location privacy [START_REF] Dang | An open design privacy-enhancing platform supporting location-based applications[END_REF]. In preventing that, the researchers have introduced the Location Anonymizer [START_REF] Dang | An open design privacy-enhancing platform supporting location-based applications[END_REF]. It acts as a middle layer between the user and the LBS Provider to reduce the location information quality in the LBS request. The quality reduction is performed by the obfuscation algorithm which transforms the location to be more general (i.e., from a point to a set of points [START_REF] Duckham | A formal model of obfuscation and negotiation for location privacy[END_REF], a rectilinear region [START_REF] Dang | Anonymizing but Deteriorating Location Databases[END_REF][START_REF] Truong | On Guaranteeing k-Anonymity in Location Databases[END_REF][START_REF] Damiani | The PROBE Framework for the Personalized Cloaking of Private Locations[END_REF][START_REF] Le | Semantic-aware Obfuscation for Location Privacy at Database Level[END_REF][START_REF] To | Bob-tree: an efficient B+-tree based index structure for geographic-aware obfuscation[END_REF][START_REF] To | A Hilbert-based Framework for Preserving Privacy in Location-based Services[END_REF], or a circular region [START_REF] Ardagna | Location privacy protection through obfuscation-based techniques[END_REF], etc.). The request is then sent to the LBS Provider to process without the provider knowing the user's exact location. Due to the reduction in location quality, the LBS Provider returns the result as a candidate set that contains the exact answer. Later, this candidate set can be filtered by the Location Anonymizer to receive the request's exact answer for the LBS user. Consequently, to be able to process those requests, the LBS Provider's Query Processor must deal with the cloaked region rather than the exact location. In this paper, we propose a new Privacy Aware Nearest-Neighbor (NN) Query Processor that extends Casper* [START_REF] Chow | Casper*: Query processing for location services without compromising privacy[END_REF]. Our Query Processor can be embedded inside the untrusted location-based database server [START_REF] Chow | Casper*: Query processing for location services without compromising privacy[END_REF], or plugged into an untrusted application middleware [START_REF] Dang | An open design privacy-enhancing platform supporting location-based applications[END_REF]. The Privacy Aware Query Processor is completely independent of the location-based database server in the LBS Provider and underlying obfuscation algorithms in the Location Anonymizer. Moreover, it also supports various cloaked region shapes, which allows more than one single obfuscation algorithm to be employed in the Location Anonymizer [START_REF] Dang | An open design privacy-enhancing platform supporting location-based applications[END_REF]. In addition, we introduce a new tuning parameter to achieve trade-off between candidate set size and query processing time. Finally, we propose an additional component for the Location Anonymizer, the Group Execution Agent, to strongly enhance the whole system's scalability. Our contributions in this paper can be summarized as follows:
• We introduce a new Privacy Aware NN Query Processor. With its Vertices Reduction Paradigm (VRP), complicated polygonal and circular cloaked regions are handled efficiently. In addition, the performance can be tuned through a new parameter to achieve trade-off between candidate set size and query processing time. • We propose an addition component for the Location Anonymizer, the Group Execution Agent (GEA), to strongly enhance the whole system's scalability. • We provide experimental evidence that our Privacy Aware Query Processor outperforms previous ones in terms of both processing time and system scalability.
The rest of the paper is organized as follows. In section 2 and 3, we highlight the related works and briefly review the Casper* Privacy Aware Query Processor. The proposed Vertices Reduction Paradigm and Group Execution Agent are discussed in section 4 and 5 respectively. Then we present our extensive experimental evaluations in section 6. Lastly, section 7 will finalize the paper with conclusion and future works.
Related Works
The work in [START_REF] Duckham | A formal model of obfuscation and negotiation for location privacy[END_REF] transforms an exact user location into a set of points in a road network based on the concepts of inaccuracy and imprecision. The authors also provide a NN query processing algorithm. The idea is that the user first sends the whole set of points to the server, and the server sends back a candidate set of NNs.
Based on that candidate set, the user can either choose to reveal more information in the next request for more accurate result or terminate the process if satisfied with the candidate set of NNs. The other works in [START_REF] Hu | Range Nearest-Neighbor Query[END_REF][START_REF] Kalnis | Preventing Location-Based Identity Inference in Anonymous Spatial Queries[END_REF] respectively propose algorithms to deal with circular and rectilinear cloaked region, those works find the minimal set of NNs. In a different approach, Casper* only computes a superset of the minimal set of NNs that contains the exact NN, in order to achieve trade-off between query processing time and candidate set size for system scalability [START_REF] Chow | Casper*: Query processing for location services without compromising privacy[END_REF]. In addition, Casper* also supports two more query types: Private and Public Query over Private Data. Among previous works, only Casper* supports Query over Private Data, while the others either only support Query over Public Data [START_REF] Duckham | A formal model of obfuscation and negotiation for location privacy[END_REF] or lack the trade-off for system scalability [START_REF] Hu | Range Nearest-Neighbor Query[END_REF][START_REF] Kalnis | Preventing Location-Based Identity Inference in Anonymous Spatial Queries[END_REF]. However, Casper* is only efficient in dealing with rectangular regions. While it can handle polygonal cloaked regions, the application into these cases needs evaluations and modifications. Moreover, in case of systems like OPM [START_REF] Dang | An open design privacy-enhancing platform supporting location-based applications[END_REF], the Query Processor must have the ability to deal with various kinds of cloaked region because the system supports more than one single obfuscation algorithm. Motivated by those problems, our proposed Privacy Aware NN Query Processor offers the ability to efficiently handle complicated polygonal and circular cloaked regions with its Vertices Reduction Paradigm and a new tuning parameter for system scalability. Furthermore, we provide an addition component for the Location Anonymizer, the Group Execution Agent, to strongly enhance the whole system's scalability.
3 The Casper* Privacy Aware Query Processor
In this section, let us briefly review the Casper* algorithm, starting with its terms and notations. For each vertex v i of the cloaked region A, its NN is called a filter, denoted as
t i if that NN is a public object (public NN) (Fig. 1b, t 1 , t 2 of v 1 , v 2 ).
In case the NN is private, it is denoted as At i . A private object is considered as private NN if it has the minimum distance from its cloaked region's furthest corner to v i (Fig. 1d, At 1 ). The distance between a vertex and its filter is denoted as dist(v i , t i ) (public NN) or minmax-dist(v i , At i ) (private NN). For each edge e ij formed by adjacent vertices v i , v j , a split-point s ij is the intersection point of e ij and the perpendicular bisector of the line segment t i t j (Fig. 1b, s 12 ). For the purpose of the Casper* NN algorithm [START_REF] Chow | Casper*: Query processing for location services without compromising privacy[END_REF], given a cloaked region A, it is to find all the NNs of all the points (1) inside A and (2) on its edges. The algorithm can be outlined in the three following steps below.
• STEP 1 (Filter Selection): We find the filters for all of cloaked region A's vertices.
• STEP 2 (Range Selection): For each edge e ij of cloaked region A, by comparing v i , v j 's filters t i and t j , we consider four possibilities to find candidate NNs and range searches that contain the candidate NNs.
─ Trivial Edge Condition: If t i = t j (Fig. 1a, t 1 = t 2 ), t i (t j ) is the NN to all the points on e ij , so we add t i (t j ) into the candidate set. ─ Trivial Split-Point Condition: In this case, t i ≠ t j , but split-point s ij of e ij takes t i , t j as its NNs (Fig. 1b). This means t i and t j are the NNs to the all points on v i s ij and s ij v j respectively. So we add t i , t j into the candidate set. ─ Recursive Refinement Condition: If two conditions above fail, we will consider to split the edge e ij into v i s ij and s ij v j , then we apply STEP 2 to them recursively.
A parameter refine is used to control the recursive calls for each edge, it can be adjusted between 0 and ∞ initially in the system. For each recursive call, refine will be decreased by 1, and when it reaches 0, we will stop processing that edge. In this case, refine > 0, we decrease it by 1 and process v i s ij and s ij v j recursively. ─ Stopping Criterion Condition: When refine reaches 0, we add the circle centered at s ij of a radius dist(s ij , t i ) as a range query into the range queries set R and stop processing current edge (Fig. 1c). • STEP 3 (Range Search): we execute all the range queries in R, and add the objects into the candidate set. As a result, the candidate set contains NNs for all the points (1) inside cloaked region A and (2) on its edges. After that, the candidate set will be sent back to the Location Anonymizer to filter the exact NN for the user.
In Query over Private Data, STEP 2 is similar to Query over Public Data, with some modifications. Instead of adding At i directly into the candidate set, we will have to add a circle centered at v i of a radius min-max-dist(v i , At i ) as a range query into the range queries set R (Fig. 1d). The same behavior is applied to v j and s ij of edge e ij .
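To make STEP 2 concrete, the sketch below gives a minimal Python rendering of the public-data case. It uses brute-force NN search over a small in-memory point set; the point data and helper names are illustrative conveniences, not part of Casper* itself, and the fallback when no split point lies on the edge is a simplification.

import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest(p, objects):                      # filter of a vertex (public NN)
    return min(objects, key=lambda o: dist(p, o))

def split_point(vi, vj, ti, tj):
    # point on segment vi-vj equidistant from ti and tj (perpendicular bisector of ti-tj)
    d = (vj[0] - vi[0], vj[1] - vi[1])
    a = 2 * (d[0] * (tj[0] - ti[0]) + d[1] * (tj[1] - ti[1]))
    b = (tj[0]**2 + tj[1]**2 - ti[0]**2 - ti[1]**2) \
        - 2 * (vi[0] * (tj[0] - ti[0]) + vi[1] * (tj[1] - ti[1]))
    if abs(a) < 1e-12:
        return None
    s = b / a
    return (vi[0] + s * d[0], vi[1] + s * d[1]) if 0.0 <= s <= 1.0 else None

def process_edge(vi, vj, objects, refine, candidates, ranges):
    ti, tj = nearest(vi, objects), nearest(vj, objects)
    if ti == tj:                               # Trivial Edge Condition
        candidates.add(ti); return
    s = split_point(vi, vj, ti, tj)
    if s is not None and nearest(s, objects) in (ti, tj):   # Trivial Split-Point Condition
        candidates.update([ti, tj]); return
    if refine > 0 and s is not None:           # Recursive Refinement Condition
        process_edge(vi, s, objects, refine - 1, candidates, ranges)
        process_edge(s, vj, objects, refine - 1, candidates, ranges)
    else:                                      # Stopping Criterion: emit a range query
        center = s if s is not None else vi
        ranges.append((center, dist(center, ti)))

# Illustrative data: a rectangular cloaked region and a few public objects.
region = [(0, 0), (4, 0), (4, 2), (0, 2)]
objects = [(-1, 1), (2, 5), (6, 1), (2, -3)]
candidates, ranges = set(), []
for i in range(len(region)):
    process_edge(region[i], region[(i + 1) % len(region)], objects, 2, candidates, ranges)
print("candidate NNs:", candidates, " range queries:", len(ranges))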
Vertices Reduction Paradigm
Although Casper* can deal with a polygonal cloaked region A that has n vertices (n-gon), its runtime significantly depends on A's number of vertices (STEP 1) and edges (STEP 2). As shown in formulas 1 and 2 of Fig. 2a, to process an n-gon, Casper* suffers in two aspects. (1) The processing time of STEP 1 increases because it has to find more filters (4Qt_4 ≤ nQt_n). Besides, the calculation of min-max-dist(v i , At i ) also increases the range query runtime for the n-gon (Qt_4 ≤ Qt_n). (2) The processing time of STEP 2 increases as it has to process more edges (4(2^refine - 1)Qt_4 ≤ n(2^refine - 1)Qt_n).
To ease that problem, we introduce the Vertices Reduction Paradigm (VRP), in which we simplify the polygon so that it has as few vertices as possible before processing it with the Casper* algorithm. For that purpose, the Ramer-Douglas-Peucker (RDP) [START_REF] Douglas | Algorithms for the Reduction of The number of Points required to represent a Digitized Line Or its Caricature[END_REF] algorithm is employed. For each private object (n-gon) in the database, we maintain a vertices-reduced version (VRV, m-gon, m < n) of that private object. The VRV is generated by the RDP algorithm and is stored inside the database until invalidated. For NN query processing, we use the VRVs instead of the original ones to reduce processing time (m ≤ n and Qt_m ≤ Qt_n, as depicted in formula 3 of Fig. 2a).
The purpose of the RDP [START_REF] Douglas | Algorithms for the Reduction of The number of Points required to represent a Digitized Line Or its Caricature[END_REF] algorithm, given an n-gon (ABCDEFGHIJ in Fig. 2b), is to find a subset of fewer vertices from the n-gon's list of vertices. That subset of vertices forms an m-gon that is simpler but similar to the original n-gon (m < n). The inputs are the n-gon's list of vertices and the distance dimension ε > 0. First, we find the vertex that is furthest from the line segment with the first and last vertices as end points. If that furthest vertex is closer than ε to the line segment, any other vertices can be discarded. Otherwise, given the index of that vertex, we divide the list of vertices into two: [1..index] and [index..end]. The two lists are then processed with the algorithm recursively. The output is an m-gon's list of vertices (Fig. 2b,ACFJ).
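A compact Python version of the RDP simplification is shown below. It is a straightforward textbook implementation given only to make the step concrete; the sample polyline is hypothetical, and simplifying a closed cloaked region would additionally require choosing anchor vertices, a detail left out of this sketch.

import math

def _point_line_distance(p, a, b):
    # perpendicular distance from p to the line through a and b
    if a == b:
        return math.hypot(p[0] - a[0], p[1] - a[1])
    num = abs((b[0] - a[0]) * (a[1] - p[1]) - (a[0] - p[0]) * (b[1] - a[1]))
    return num / math.hypot(b[0] - a[0], b[1] - a[1])

def rdp(points, eps):
    # Ramer-Douglas-Peucker: keep the end points, recurse on the farthest vertex.
    if len(points) < 3:
        return list(points)
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _point_line_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= eps:
        return [points[0], points[-1]]
    left = rdp(points[:idx + 1], eps)
    right = rdp(points[idx:], eps)
    return left[:-1] + right

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(rdp(line, eps=1.0))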
In next subsections, we will discuss two different approaches to utilize the VRVs. The first one is to use the VRV directly. The second one, which is better than the first, is to use the bounding polygon of the VRV. In both approaches, the RDP algorithm's overhead in computing the VRVs is insignificant compared to the total processing time of the query. As depicted in Fig. 2b, the dotted polygon ABCDEFGHIJ is the original n-gon while ACFJ and KLMN is the m-gon of the first and second approach respectively. For a circular region, the VRV is its bounding polygon (Fig. 2c, ABDEF) and we use the distance from another vertex to its center plus the radius as min-max-dist of it and that vertex in private NN search (SC+r in Fig. 2c).
Fig. 2. The Vertices Reduction Paradigm
The Direct Vertices Reduction Paradigm Approach
In this approach, by using the m-gons as the cloaked regions of the query and the private objects, we reduce the query processing time in STEP 1 and STEP 2 of the Casper* algorithm (Fig. 2a, formula 3). However, since we use an approximate version of the original cloaked region, we need some modifications in STEP 2 to search for NNs of the parts of n-gon that are outside the m-gon (e.g., ABC in Fig. 2b). During the RDP's recursive divisions, for each simplified edge, we maintain the distance of the furthest vertex that is not inside the m-gon (A, B, E and H in Fig. 2b). The list's size is exactly m. We denote those distances as d (Fig. 2b, distance from H to FJ). The modifications make use of the distance d and only apply to the simplified edges that the discarded vertices are not all inside the m-gon, e.g. AC, CF and FJ in Fig. 2b.
• Modifications for Query over Public Data ─ Trivial Edge and Split-Point Condition: using the corresponding distance d maintained above, we add two range queries centered at v i , v j of radii dist(v i , t i ) + d, dist(v j , t j ) + d into the range queries set R (Fig. 3a). For the Trivial Split-Point Condition, we add one more range query centered at s ij of a radius dist(s ij , t i ) + d into R. As shown in Fig. 3c, the NN E' of any point H on BC (C is a discarded vertex outside the m-gon) must be inside the hatched circle centered at H of radius HE (HE' ≤ HE), which is always inside the two bold circles created by the enlarged (+d) range queries. The same holds for any point in ABC. ─ Stopping Criterion Condition: similarly, we increase the range query's radius by d to ensure that the NNs of the parts of the original n-gon outside the m-gon are included. • Modifications for Query over Private Data: because the private objects are also simplified, we increase the search radius by d + ε so as not to miss them as candidate NNs (as depicted in Fig. 3b, the range query (+d+ε) reaches the simplified edge AB of another private object while the range query (+d) does not).
The Bounding Vertices Reduction Paradigm Approach
One characteristic of the m-gon generated by the RDP algorithm is that it may not contain the original n-gon. In this approach, we want to ensure the m-gon contains the original one. During the RDP's recursive divisions, for each simplified edge (m edges), we maintain the furthest vertex that is not inside the m-gon (A, B, E and H in Fig. 2b). After that, we calculate the m lines that are parallel to the respective edges of the m-gon and through the respective furthest vertices in the list (e.g., KL, LM, MN and NK in Fig. 2b). The intersection of those lines forms a new m-gon that contains the original n-gon inside it (Fig. 2b's KLMN). Therefore, the candidate set of the simplified m-gon is a superset of the original n-gon without directly modifying Casper*.
Although the first approach reduces the query processing time much, it suffers from the moderate increase of the candidate set size. Differently, the second approach achieves both better candidate set size and query processing time than the first one. Firstly, we can add the filters directly into the candidate set without the risk of missing the exact NN because the m-gon contains the original n-gon (no outside parts). Secondly, although the range query's radius is indirectly enlarged through the enlargement of the original n-gons to the bounding m-gons, it is kept minimum (+d+d, an indirect +d of the cloaked region and another +d of the private object). Thus the number of results for each range query is also reduced. Furthermore, the reduction in number of range queries also leads to a slight reduction of processing time.
4.3 The Distance Dimension ε as VRP Tuning Parameter
The Total Processing Time (T) of a query consists of three components. (1) The Query Processing Time (T Q ), which is for the Query Processor to compute the candidate set.
(2) The Data Transmission Time (T X ) which is for the candidate set to be transmitted to the Location Anonymizer for NNs filtration. (3) The Answers Filtration Time (T F ) which is for the candidate set to be filtered for exact NN of the query request. T Q is monotonically decreasing with the decrease of number of vertices, while T X and T F are monotonically decreasing with the decrease of candidate set size. Thus, we can utilize the distance dimension ε as a tuning parameter for VRP since it affects the number of vertices in the VRV and the search radius of range queries in the range query set R. We will consider 2 cases in respect to the ε value: (1) T Q > T X +T F . Initially, the ε is too small that query processing takes too much time. To resolve this, we must increase ε.
(2) T Q < T X + T F . This indicates that the candidate set size is so large that (T X + T F ) is longer than T Q ; we have to decrease ε. Thus, in order to find an optimal value of ε for the best T, we increase ε until it reaches the optimal point O (Fig. 4a).
Group Execution Agent
As shown in Fig. 5, there are many queries with adjacent and overlapped regions at a time (the dotted regions), or even better, a query's region is contained inside another's. Obviously, such queries share a part of or the whole candidate set. To take advantage of that, we propose the Group Execution (GE) algorithm for the Location Anonymizer's additional component, the Group Execution Agent (GEA). The GEA will group as many queries as possible for one query execution before sending them to the Query Processor (N queries into K query groups, K ≤ N, i.e., 9 queries to 3 groups in Fig. 5; the bold G 1,2,3 are used as cloaked regions in NN queries).
Algorithm Group Execution
function GroupExecution(list of query regions, system-defined max A )
  while true and list size > 1 do
    for each region r i , find another region r j that has least enlargement
      if r i is grouped with r j and the new region's area a ≤ max A , add (r i , r j , a) into list(r i , r j , a)
    break if list(r i , r j , a) is empty
    sort list(r i , r j , a) by a ascending
    for each r i in list(r i , r j , a)
      if r i already grouped in this loop
        if r j already grouped in this loop
          continue
        else
          groupedRegions = groupedRegions ∪ {r j }
      else
        groupedRegions = groupedRegions ∪ GroupedRegionOf(r i , r j )
    regions = groupedRegions
  return regions (maximum number of regions grouped, minimum area per group)
The algorithm is outlined in the pseudo code above. Its purpose, given a list of query regions (of size N) and a parameter max A , is to group the regions in the list into K grouped regions (K ≤ N) whose areas are smaller than max A . The queries are then put into query groups accordingly. The grouped regions are used as the cloaked regions of those query groups in NN query processing. A query group's candidate set is a superset of the candidate sets of the queries in the group, so the GEA does not miss any exact NN of those queries. The system benefits from the GEA as shown in Fig. 4b. (1) The query processing time for each query group is the same as for a single query (T Q ) because we only execute the query group once with the grouped region as input. Thus, the sum of all queries' processing times decreases (KT Q ≤ NT Q ). This also leads to a decrease of the average query processing time. (2) The query group's candidate set size increases because it is a superset of the candidate sets of the queries in the group, but the average transmission time decreases as we only transmit the common candidates once (KT' X ≤ NT X ). The average filtration time increases (T' F ≥ T F ), but this is minor in comparison to the benefits above. Furthermore, for optimization, the algorithm's input list must satisfy two conditions: (1) the list's regions are adjacent to each other for easier grouping, and (2) the list size is small enough to avoid scalability problems because the algorithm's complexity is O(n²).
To find those suitable lists, we maintain an R*-Tree [START_REF] Beckman | The R*-tree: an efficient and robust access method for points and rectangles[END_REF] in the Location Anonymizer. When a query is sent to the Anonymizer, its cloaked region is indexed by the R*-Tree. By finding the R*-Tree nodes whose directory rectangle's area is smaller than a predefined area value kmax A , we obtain the suitable lists from those nodes' regions. In Fig. 5, we find two suitable lists from the regions of nodes D 2 and D 3 (D 1 's area > kmax A ). Later, the algorithm returns grouped regions G 1 , G 2 and G 3 , which reduces the number of queries from 9 to 3. In fact, the GEA's speedup depends on how much the regions overlap. The worst case could be that we cannot group any query but still have the overhead of the R*-Tree and the GE algorithm. However, in most cases, when the number of queries is large enough, the GEA strongly reduces the system's average query processing and transmission time and improves the system scalability.
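The grouping step itself can be prototyped with plain bounding rectangles. The Python sketch below is a simplified greedy variant of GroupExecution, not the exact pseudocode above: it repeatedly merges the admissible pair with the least enlarged area until no pair fits under the area budget. The sample query rectangles and the max_area value are hypothetical.

def mbr_union(r1, r2):
    # r = (xmin, ymin, xmax, ymax)
    return (min(r1[0], r2[0]), min(r1[1], r2[1]), max(r1[2], r2[2]), max(r1[3], r2[3]))

def area(r):
    return max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])

def group_regions(regions, max_area):
    regions = list(regions)
    while len(regions) > 1:
        best = None                      # (grouped area, i, j)
        for i in range(len(regions)):
            for j in range(i + 1, len(regions)):
                g = mbr_union(regions[i], regions[j])
                if area(g) <= max_area and (best is None or area(g) < best[0]):
                    best = (area(g), i, j)
        if best is None:
            break                        # no admissible pair left
        _, i, j = best
        merged = mbr_union(regions[i], regions[j])
        regions = [r for k, r in enumerate(regions) if k not in (i, j)] + [merged]
    return regions

queries = [(0, 0, 1, 1), (1, 0, 2, 1), (0.5, 0.5, 1.5, 1.5), (10, 10, 11, 11)]
print(group_regions(queries, max_area=6.0))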
Experimental Evaluations
We evaluate both VRP approaches and the GE algorithm for Private Query over Public and Private Data. The algorithms are evaluated with respect to the tuning parameter ε. For both types of private query, we compare our algorithms with Casper*; the performance evaluations are in terms of total processing time and candidate set size. We conduct all experiments with 100K private data and 200K public data. The polygonal cloaked regions are generated by Bob-Tree [START_REF] To | Bob-tree: an efficient B+-tree based index structure for geographic-aware obfuscation[END_REF][START_REF] To | A Hilbert-based Framework for Preserving Privacy in Location-based Services[END_REF], and the circular ones are generated by the work in [START_REF] Ardagna | Location privacy protection through obfuscation-based techniques[END_REF]. The polygonal cloaked regions' numbers of vertices range from 20 to 30, while the value of ε is varied in the range of 10% to 50% of the Bob-Tree's grid edge size. For the GEA, the number of queries is 10K and the parameters max A and kmax A are 30 and 100 times the above grid's area, respectively.
The charts in Fig. 6 show our experimental results. As shown in the processing time charts, the VRPs achieve significant improvements compared to Casper*. When ε increases, the processing times of the VRPs and the GEA decrease while Casper*'s remains constant, because the larger the ε value is, the larger the reduction we can achieve, which leads to a larger reduction in query processing time, especially for Private Query over Private Data. At the largest ε (50%), the VRPs reduce the total processing time by 98% for Private Query over Private Data (Fig. 6b) with a standard deviation of only 92 ms (10% of the average total processing time). However, the candidate set size increases moderately (Direct VRP) or slightly (Bounding VRP). Lastly, with the additional component GEA, the total processing time and candidate set size are reduced at best (ε = 50%) by 66% and 33% respectively in comparison to Bounding VRP, the best VRP approach. This helps ease the increase in candidate set size of the VRP.
Conclusion and Future Works
In this paper, we introduce a new Privacy Aware Query Processor that extends Casper*. With the new Query Processor's Vertices Reduction Paradigm and its tuning parameter ε, complicated polygonal [START_REF] Damiani | The PROBE Framework for the Personalized Cloaking of Private Locations[END_REF][START_REF] Le | Semantic-aware Obfuscation for Location Privacy at Database Level[END_REF][START_REF] To | Bob-tree: an efficient B+-tree based index structure for geographic-aware obfuscation[END_REF][START_REF] To | A Hilbert-based Framework for Preserving Privacy in Location-based Services[END_REF] and circular [START_REF] Ardagna | Location privacy protection through obfuscation-based techniques[END_REF] cloaked regions are handled efficiently. The main idea is that we employ the Ramer-Douglas-Peucker algorithm to simplify the region's polygon before processing it. Furthermore, we propose the Group Execution Agent to strongly enhance the system scalability. Experimental results show that our works outperform Casper* in dealing with such kinds of region above. For future, we will consider supporting k nearest-neighbor query, continuous query [START_REF] Truong | The Memorizing Algorithm: Protecting User Privacy in Location-Based Services using Historical Services Information[END_REF][START_REF] Chow | Casper*: Query processing for location services without compromising privacy[END_REF] and trajectory privacy [START_REF] Phan | A Novel Trajectory Privacy-Preserving Future Time Index Structure in Moving Object Databases[END_REF] in our Privacy Aware Query Processor.
Fig. 1. The Casper* Algorithm
Fig. 3. Modifications in Vertices Reduction Paradigm
Fig. 4. System Scalability
Fig. 5.
Fig. 6. Experimental Evaluations | 28,226 | [
"993459"
] | [
"491086",
"491086"
] |
01480226 | en | [
"shs",
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480226/file/978-3-642-36818-9_13_Chapter.pdf | Quynh Tran Tri Dang
email: [email protected]
Chi Truong
Tran Khanh Dang
Practical Construction of Face-based Authentication Systems with Template Protection Using Secure Sketch
Keywords: Secure sketch, Face Authentication, Biometric template protection
Modern mobile devices (e.g. laptops, mobile phones, etc.) equipped with input sensors open a convenient way to perform authentication using biometrics. However, if these devices are lost or stolen, the owners face a high-impact threat: their stored biometric templates, either in raw or transformed form, can be extracted and used illegally by others. In this paper, we propose some concrete constructions of face-based authentication systems in which the stored templates are protected by applying a cryptographic technique called secure sketch. We also suggest a simple fusion method for combining these authentication techniques to improve the overall accuracy. Finally, we evaluate accuracy rates among these constructions and the fusion method with some existing datasets.
Introduction
Current mobile devices (laptops, mobile phones, etc.) are used not only for simple tasks like communicating or web browsing, but also for more complex tasks such as learning and working. As a result, some sensitive information is stored on mobile devices for such tasks. To ensure the confidentiality of the personal information, usually an authentication process is implemented. Besides password-based authentication, modern devices equipped with input sensors open a new way of doing authentication: biometrics. Using biometrics for user authentication actually has some advantages [START_REF] O'gorman | Comparing Passwords, Tokens, and Biometrics for User Authentication[END_REF]. However, while passwords can be easily protected by storing only their one-way hash values, it is not easy to do so with biometric data. The problem lies in the noisy nature of biometric data, i.e. the biometric templates captured from the same person at different times will certainly be different. Hence, if we apply a one-way hash function to biometric data, we will be unable to compare the distance between the stored data and the authentication data.
In this paper, we propose one construction that offers face-based authentication and provides protection for stored biometric templates at the same time. We follow the concept of "secure sketch" proposed by Dodis et al. [START_REF] Dodis | Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data[END_REF]. One property of secure sketch is that it allows reconstructing the original biometric template exactly when provided another template that is closed enough to the first one. Because of that property, we can protect the stored templates by using a one-way hash function on them.
The remainder of this paper is structured as follows: in section 2, we review some related works; in section 3, we present our construction technique of secure sketch on 3 different face recognition algorithms; in section 4, our experiment results are reported, and based on them we introduce a simple fusion method to improve the performance of our system; finally, in section 5, we conclude the paper with findings and directions for future research.
Related Works
Using biometric for authentication is not new [START_REF]Biometric Systems: Technology, Design and Performance Evaluation[END_REF]. One perspective of this problem is how to reliably and efficiently recognize and verify the biometric features of people. This is an interesting topic for pattern recognition researchers. Another perspective of this problem is how to protect the stored templates and this is the focus of security researchers as well as this paper.
There are many approaches to the problem of template protection for biometric data. One of which is the "secure sketch" proposed by Dodis et al. [START_REF] Dodis | Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data[END_REF]. In its simplest form, the working of the secure sketch is described in Fig. 1.
Fig. 1. The working of the secure sketch
There are 2 components of the secure sketch: the sketch (SS) and the recover (Rec). Given a secret template w, the SS component generates public information s and discards w. When given another template w' that is closed enough to w, the Rec component can recover the w exactly with the help of s. There are 3 properties of the secure sketch as described in [START_REF] Dodis | Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data[END_REF]:
1. s is a binary string {0, 1}*.
2. w can be recovered if and only if |w -w'| ≤ δ (δ is a predefined threshold).
3. The information about s does not disclose much about w. In our construction, only property 2 is guaranteed. Fortunately, we can easily transform our sketch presentation into binary string to make it compatible with property 1. And although we do not prove property 3 in this paper, we do give a method to measure the reduction of the search space used to brute-force attack this system using public information s. For these reasons, we still call our construction secure sketch.
Our construction is implemented with biometric templates extracted from 3 face recognition methods: the Eigenfaces method proposed by Turk and Pentland [START_REF] Turk | Eigenfaces for Recognition[END_REF], the 2DPCA method proposed by Yang et al. [START_REF] Yang | Two-Dimensional PCA: A New Approach to Appearance-Based Face Representation and Recognition[END_REF], and the Local Binary Patterns Histograms (LBPH) proposed by Ahonen et al. [START_REF] Ahonen | Face Recognition with Local Binary Patterns[END_REF]. We select more than one face recognition method to experiment how generic our construction is when applied to different template formats. Another reason is we want to combine the results of these individual authentications to improve the overall resulted performance.
To recover w exactly when given another w' closed to it, some error correction techniques are needed. In fact, the public information s is used to correct w', but it should not disclose too much about w. We follow the idea presented in [START_REF] Juels | A Fuzzy Commitment Scheme[END_REF] paper to design this error correction technique. Our technique can be applied on discrete domains and it gives reasonable results when experimenting with the Eigenfaces, 2DPCA, and LBPH face recognition methods.
Individual biometric recognition systems can be fused together to improve the recognition performance. The fusion can be implemented at feature extraction level, score level, or decision level [START_REF] Ross | Information Fusion in Biometrics[END_REF]. In this paper, based on the specific results obtained from the experiments with individual features (i.e. Eigenfaces, 2DPCA, and LBPH), we propose a simple fusion technique at the decision level, and in fact, it improves the overall performance significantly.
3 Construction Methods
Processing Stages
The stages of our construction are summarized in Fig. 2. Firstly, the face feature is extracted. The formats of the extracted features depend on the face recognition methods used. Secondly, a quantization process is applied to the features' values in continuous domain to convert them into values in discrete domain. This stage is needed because discrete values allow exact recovery more easily than continuous values do.
The quantized values play the role of w and w' as described in previous section. The sketch generation stage produces s given w. And finally, the feature recovery stage tries to recover the original w given another input w' and s. To validate whether the recovered feature matches with the original feature w, a one-way hash function can be applied to w and the result is stored. Then, the same hash function will be applied to the recovered feature and its result is compared with the stored value.
Fig. 2. The stages of our construction
Feature Extraction
Eigenfaces
Given a set of training face images, the Eigenfaces method finds a subspace that best represents them. This subspace's coordinates are called eigenfaces because they are eigenvectors of the covariance matrix of the original face images and they have the same size as the face images'. The detail of the calculating these eigenfaces was reported in [START_REF] Turk | Eigenfaces for Recognition[END_REF].
Once the eigenfaces are calculated, each image can be projected onto its space. If the number of eigenfaces is N, then each image in this space is presented by an Ndimensional vector.
2DPCA
The 2DPCA [START_REF] Yang | Two-Dimensional PCA: A New Approach to Appearance-Based Face Representation and Recognition[END_REF] works similarly to the Eigenfaces. However, while the Eigenfaces treats each image as a vector, the 2DPCA treats them as matrixes. Then, the 2DPCA tries to find some projection unit vectors X i such that the best results are obtained when the face matrixes are projected on them.
In 2DPCA, the projection of a face matrix M on a vector X i resulted in a transformed vector Y i that has the number of elements equals to the rows of M. If a face matrix has R rows, and the number of projection vectors is P, then the transformed face matrix has a size of RP. In other words, each face image in the 2DPCA method is presented by an N-dimensional vector, in this case N = RP.
Local Binary Patterns Histogram
Local Binary Patterns (LBP), which was first introduced by Ojala et al. [START_REF] Ojala | A Comparative Study of Texture Measures with Classification based on Feature Distributions[END_REF], is used for texture description of images. The LBP operator summarizes the local texture in an image by comparing each pixel with its neighbors. Later, Ahonen et al. proposed LBPH method for face recognition based on the LBP operator [START_REF] Ahonen | Face Recognition with Local Binary Patterns[END_REF].
In this method, a face image is first converted to an LBP image. Each pixel value is computed from its neighbors' values: if the center pixel is greater than or equal to a neighbor's value, that neighbor is denoted with 1, and 0 otherwise. The surrounding pixels thus yield a binary number for the center pixel (Fig. 3). After that, the image is divided into small areas and a histogram is calculated for each area. The feature vector is obtained by concatenating the local histograms. If an image is divided into A areas, then it is presented by an N-dimensional vector, in this case N = 256A (256 grayscale values).
Fig. 3. LBP operator
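A minimal Python sketch of this operator is given below, purely for illustration. It follows the comparison rule exactly as stated in the text (a neighbor gets bit 1 when the center is greater than or equal to it), and the tiny "image" and single-cell histogram are hypothetical stand-ins for real face data.

def lbp_code(img, r, c):
    # 8-neighborhood, clockwise from the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[r][c]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if center >= img[r + dr][c + dc]:
            code |= 1 << bit
    return code

def cell_histogram(img, r0, r1, c0, c1):
    hist = [0] * 256
    for r in range(max(r0, 1), min(r1, len(img) - 1)):
        for c in range(max(c0, 1), min(c1, len(img[0]) - 1)):
            hist[lbp_code(img, r, c)] += 1
    return hist

# The feature vector is the concatenation of per-cell histograms; here a single
# cell covering the interior of a tiny illustrative image is used.
img = [[10, 20, 30, 40],
       [15, 25, 10, 35],
       [30,  5, 25, 20],
       [40, 35, 20, 10]]
hist = cell_histogram(img, 0, 4, 0, 4)
print("non-zero LBP bins:", {i: v for i, v in enumerate(hist) if v})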
Quantization
The purpose of the quantization stage is to convert feature values from a continuous domain to a discrete domain. At first glance, this stage may reduce the security of the authentication process significantly by reducing the continuous search space with infinite elements to a discrete search space with finite elements. However, an informed attacker will understand that biometric authentication is not an exact-matching process, and therefore there is no need to try every possible value, but only values separated by a threshold. In other words, only finitely many values are needed to brute-force attack the continuous template values. Furthermore, we can control the size of the quantized domain by changing the range of continuous values that are mapped to the same quantized value. For these reasons, this stage actually does not affect the security of the authentication system. Our quantization process works as follows: after normalization, the value of each element of a feature vector is a floating point number in [0, 1]. Then, the quantization process transforms this value to an integer in [0, N], with N > 0. Let x be the value before quantization and x' the value after quantization; then the quantization formula can be written as in (1), where the Round function returns the nearest integer to its argument.
x' = Round(N · x)    (1)
Sketch Generation
The sketch generation stage produces the public information s that can be used later to recover the quantized template w. Our construction, based on the idea presented in the [START_REF] Juels | A Fuzzy Commitment Scheme[END_REF] paper, is described below. The domain of w is [0, N]. We create a codebook where the codewords spread along the range [0, N] and the distance between any pair of neighboring codewords is the same. In particular, the distance between the codewords c_i and c_{i+1} is 2δ, where δ is a positive integer. Then, for any value of w in the range [c_i - δ, c_i + δ], the mapping function M returns the nearest codeword of w, i.e., M(w) = c_i. The sketch generation uses the mapping function to return the difference between a value w and its nearest codeword, or
SS(w) = w - M(w)    (2)
The value of the sketch SS(w) is in the range [-δ, δ] irrespective of the particular value of w. So, given SS(w), an attacker only knows that the correct value w is of the form SS(w) + M(w). To brute-force attack the system, the attacker needs to try all possible values of M(w), i.e., every codeword. The larger δ is, the smaller the codeword space is. Note that the codeword space is always smaller than the quantized space [0, N]. When δ = 1, the codeword space is three times smaller than the quantized space, and when δ = 2, this number is five times.
Feature Recovery
Given the authentication input w', the feature recovery stage uses s and w' to reproduce w if the difference between w and w' is smaller than or equal to δ. Call the recovered value w"; it is calculated as
w" = M(w' - SS(w)) + SS(w)    (3)
To prove that the correct w is reproduced when |w - w'| ≤ δ, replace SS(w) by the right-hand side of (2); we have
w" = M(w' - w + M(w)) + w - M(w)    (4)
If |w - w'| ≤ δ, then w' - w + M(w) is in [M(w) - δ, M(w) + δ].
According to the codebook construction, applying the mapping function to any value in this range returns its nearest codeword, which is M(w). Substituting M(w) for M(w' - w + M(w)) in formula (4), we have w" = w.
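The whole element-wise pipeline (quantize, SS, Rec) fits in a few lines of Python. The sketch below is an illustration under one assumption of ours: it uses a codeword spacing of 2δ+1 (rather than exactly 2δ) so that the recovery guarantee |w - w'| ≤ δ is exact on integers without tie-breaking issues; apart from that detail it follows the construction described above.

DELTA = 3
STEP = 2 * DELTA + 1          # codeword spacing; codewords are multiples of STEP

def quantize(x, n=1000):
    # map a normalized feature value x in [0, 1] to an integer in [0, n]
    return round(n * x)

def M(w):
    # nearest codeword of w (each codeword covers [c - DELTA, c + DELTA])
    return STEP * ((w + DELTA) // STEP)

def SS(w):
    return w - M(w)            # public sketch, always in [-DELTA, DELTA]

def Rec(w_prime, s):
    return M(w_prime - s) + s  # recovers w whenever |w - w_prime| <= DELTA

w = quantize(0.4137)           # enrollment value (store hash(w) alongside SS(w))
s = SS(w)
for noise in (-DELTA, 0, DELTA, DELTA + 1):
    w_prime = w + noise        # authentication value
    print("noise", noise, "-> recovered correctly:", Rec(w_prime, s) == w)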
Security Analysis
In this section, we consider the security of our proposed construction against the most basic attack: brute force. Each feature template w can be considered as a vector of N elements. To get the correct w, every element of the vector must be recovered successfully. In the previous section, we demonstrated that every codeword must be tried to brute-force attack an element. In the current construction, we use the same quantization and codebook for every element regardless of its value distribution. So, assuming there are S codewords in a codebook, an attacker must try up to S^N combinations (S^N/2 on average) to get the correct w using a brute-force attack. Of course, the attacker may use the distribution information of the values to reduce the number of searches needed, but that is out of the scope of this paper.
Here are some specific numbers regarding the security of our experiments:
• The quantized range is [0, 1000].
• Eigenfaces: We tested with N = 10, 12, 14, 16, 18. The codeword space ranges from S = 20 (with δ = 25) to 200 (with δ = 2). The minimum and maximum security offered are 20^10 and 200^18.
• 2DPCA: The image height is 200 and the number of projection axes chosen is 1, 2, 3, 4, 5, and 6, so we have N = 200, 400, 600, 800, 1000, and 1200 respectively. The codeword space ranges from S = 2 (with δ = 250) to 10 (with δ = 50). The minimum and maximum security offered are 2^200 and 10^1200.
• LBPH: We divide the images into a 5 x 5 grid, so we have N = 5 x 5 x 256 = 6400. The codeword space ranges from S = 5 (with δ = 100) to 10 (with δ = 50). The minimum and maximum security offered are 5^6400 and 10^6400.
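For intuition, these search-space sizes translate into the following brute-force work factors in bits; the snippet below is simple arithmetic on the numbers listed above, with hypothetical labels.

import math

configs = {
    "Eigenfaces (min)": (20, 10),   "Eigenfaces (max)": (200, 18),
    "2DPCA (min)":      (2, 200),   "2DPCA (max)":      (10, 1200),
    "LBPH (min)":       (5, 6400),  "LBPH (max)":       (10, 6400),
}
for name, (S, N) in configs.items():
    print(f"{name}: S^N ≈ 2^{N * math.log2(S):.0f}")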
Experiments
Our proposed constructions are then tested with the Faces94 database [START_REF]Libor Spacek's Faces94 database[END_REF]. The experiments measure true accept rates (the percentage of times a system correctly accepts a true claim of identity) and true reject rates (the percentage of times a system correctly rejects a false claim of identity) of the different recognition algorithms and with different codeword spaces. The purpose of the experiments is to verify the ability to apply these constructions in real applications with a reasonable threshold. We choose images of 43 people, randomly, from the Faces94 database. We have 2 sets of images per person, one for creating feature vectors and one for recovering feature vectors. For each algorithm, we conduct 43 x 43 tests, in which 43 of them (testing a person against himself/herself) should recover and 1806 (testing a person against others) should not.
Individual Tests
Eigenfaces
Firstly, images of 37 people (2 images from each person, 17 of whom are female) are selected randomly from the Faces94 database to create the eigenfaces. They need not be the same as the 43 people in the training set. Next, we use the first set of the 43 people to create feature vectors. Then, the other set is used to recover the original feature vectors. For every pair of images, x and y, the scheme tries to recover x from y. So, if x and y are from the same person, the system should recover correctly, and if x is different from y, the system should not be able to recover x. The eigenfaces numbers chosen are 10, 12, 14, 16 and 18, and the codewords are chosen with equal spacing with δ ranging from 2 to 25. Our true reject rate is always 100%, so we only show the performance in terms of true accept rate. The true accept rate is depicted in Fig. 4.
Fig. 4. True accept rate for Eigenfaces algorithm
2DPCA
The settings used to measure the true accept rate and true reject rate of this experiment are similar to the settings for Eigenfaces. The number of projection axes chosen is only 2 to 6, because the image height alone is 200 pixels, a large enough dimension value. The codewords are chosen with equal spacing with δ ranging from 50 to 250.
As in the experiment with the Eigenfaces algorithm, our true reject rate is always 100%, so we only show the performance in terms of true accept rate. The true accept rate of this system is depicted in Fig. 5.
Fig. 5. True accept rate for 2DPCA algorithm
Local Binary Patterns Histograms
For the local histogram algorithm, we divide the face images into a 5 x 5 grid, and for each cell of the grid there are 256 values for the 256 grayscale levels. Hence, the dimension of the feature vector in this case is 5x5x256, which is 6400. The codewords are chosen with equal spacing with δ ranging from 50 to 100. The reason we stop at 100 is that beyond this value the true reject rate decreases significantly. Unlike the 2 previous algorithms, the local histogram returns a true reject rate of less than 100% when δ is larger than some threshold. The true accept rate and true reject rate for this algorithm are depicted in Fig. 6.
Fig. 6. True accept rate and true reject rate for LBPH algorithm
Fusion Tests
The face-based secure sketches for the Eigenfaces and 2DPCA algorithms both achieve a true reject rate of 100%, but neither of them achieves a 100% true accept rate under the different experiment parameters. To further examine whether fusing these results could improve the overall performance of our construction, we implement a simple fusion method of these 2 algorithms at the decision-making level. In this case, we want to improve the true accept rate, so our fusion is the Boolean OR function that returns true when either the Eigenfaces or the 2DPCA result matches. Because the Eigenfaces algorithm produces its best result when the number of eigenfaces is 12, and because the 2DPCA algorithm produces its best result when the number of projection axes is 5, we use these settings in the fusion construction. The δ values for Eigenfaces are chosen at 5, 10, 15, 20, and 25; the δ values for 2DPCA are chosen at 50, 75, 100, 125, 150, 175, 200, 225, and 250. The true reject rate of our fusion is also 100%, but there is a significant improvement in the true accept rate of the construction. In fact, when the δ value of the 2DPCA reaches 100, the fusion always returns a 100% true accept rate when selecting the δ value for the Eigenfaces algorithm at 5, 10, 15, 20 and 25.
Conclusion
In this paper, we present a practical construction of face-based authentication with template protection using secure sketch. Although not exactly the secure sketch proposed in the [START_REF] Dodis | Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data[END_REF] paper, we demonstrate a security measure for our method in which brute force is the only attacking technique used. Experiment results show the potential of our construction for use in security applications, in which the true reject rate is always 100%. The true accept rate of our construction is also increased when a simple fusion technique is applied. However, more theoretical work is needed, especially on the security bound when attackers know the distribution of the extracted feature values. Furthermore, the construction also needs to be tested on more complex human face databases to see how it works. Besides improving individual feature authentication, there is a need to improve the fusion method. The fusion is now just a simple Boolean function at the decision level. When the feature is recovered correctly, it can be used to calculate the distance between the original feature and the feature used for authentication. Using this distance in the fusion may give more choices in designing the final result. And finally, as modern devices are equipped with more input sensors to capture other features, fusion between different biometric features is also a possible way to enhance the system.
────
The quantized range is [0, 1000] • Eigenfaces: We tested with N = 10, 12, 14, 16, 18 Codeword space range from S = 20 (with δ = 25) to 200 (with δ = 2) The minimum and maximum security offered are 20 10 and 200 18 • 2DPCA Image height is 200 and the number of projection axes chosen is 1, 2, 3, 4, 5, and 6. So, we have N = 200, 400, 600, 800, 1000, and 1200 respectively ─ Codeword space range from S = 2 (with δ = 250) to 10 (with δ = 50) ─ The minimum and maximum security offered are 2 200 and 10 1200 • LBPH ─ We divide the images into a 5 x 5 grid. So, we have N = 5 x 5 x 256 = 6400 ─ Codeword space range from S = 5 (with δ = 100) to 10 (with δ = 50) ─ The minimum and maximum security offered are 5 6400 and 10 6400
are 5 ,
5 we use these setting in the fusion construction. The δ values for eigenfaces are chosen at 5, 10, 15, 20, and 25; the δ values for 2DPCA are chosen at 50, 75, 100, 125, 150, 175, 200, 225, and 250. The true reject rate of our fusion also return 100%, but there is a significant improvement in the true accept rate of the construction. In fact, when the δ value of the 2DPCA reach 100, the fusion always return 100% true accept rate when selecting the δ value for the Eigenfaces algorithm at 5, 10, 15, 20 and 25.
Fig. 7 .
7 Fig. 7. True accept rate for Fusion test
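To make the decision-level OR fusion and the brute-force figures in the note above concrete, here is a tiny sketch; the accept/reject inputs and the (S, N) pairs are placeholders taken from the parameter summary, not output of the real system.

```python
# Decision-level fusion: accept if either individual matcher accepts (Boolean OR).
def fused_decision(eigenfaces_accepts: bool, twodpca_accepts: bool) -> bool:
    return eigenfaces_accepts or twodpca_accepts

# Brute-force search space of a sketch with N quantized features and S codewords
# per feature is S**N (the "security offered" figures quoted in the note above).
def search_space(S: int, N: int) -> int:
    return S ** N

print(fused_decision(False, True))                    # True: one matcher suffices
print(search_space(20, 10), search_space(2, 200))     # 20^10 and 2^200
```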
Acknowledgements:
The authors would like to give special thanks to POSCO, South Korea, for their financial support.
| 21,920 | ["1003141", "1003142", "993459"] | ["491086", "491086", "491086"] |
01480228 | en | ["info"] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480228/file/978-3-642-36818-9_19_Chapter.pdf
Nguyen Quang-Hung
Pham Dac Nien
Nguyen Hoai
Nguyen Huynh Tuong
Nam Thoai
A Genetic Algorithm for Power-Aware Virtual Machine Allocation in Private Cloud
Energy efficiency has become an important measure of scheduling algorithms for private clouds. The challenge is the trade-off between minimizing energy consumption and satisfying Quality of Service (QoS) requirements (e.g. performance, or resource availability on time for reservation requests). We consider resource needs in the context of a private cloud system that provides resources for teaching and research applications, in which users request computing resources for laboratory classes in advance, with fixed start times and non-interrupted durations of several hours. Many previous works rely on migration techniques to move running virtual machines (VMs) away from low-utilization hosts and turn these hosts off to reduce energy consumption. However, VM migration techniques cannot be used in our case. In this paper, a genetic algorithm for power-aware scheduling of resource allocation (GAPA) is proposed to solve the static virtual machine allocation problem (SVMAP). Due to limited resources (i.e. memory) for executing the simulation, we created a workload that contains a sample one-day timetable of lab hours in our university. We evaluate GAPA against a baseline scheduling algorithm (BFD), which sorts the list of virtual machines by start time (earliest start time first) and uses a best-fit decreasing heuristic (least increase in power consumption), on the same SVMAP instance. As a result, the GAPA algorithm obtains a lower total energy consumption than the baseline algorithm in our simulated experiments.
Introduction
Cloud computing [START_REF] Buyya | Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility[END_REF], popular for its pay-as-you-go utility model, is economy driven. Saving operating costs in terms of energy consumption (Watt-hours) is therefore highly attractive for any cloud provider. Energy-efficient resource management in large-scale datacenters is still a challenge [START_REF] Albers | Energy-efficient algorithms[END_REF][13][9] [START_REF] Beloglazov | Energy-aware resource allocation heuristics for efficient management of data centers for Cloud computing[END_REF]. The challenge for an energy-efficient scheduling algorithm is the trade-off between minimizing energy consumption and satisfying resource demands on time and without preemption. Resource requirements depend on the applications, and we are interested in a virtual computing lab, i.e. a cloud system that provides resources for teaching and research. There are many studies on energy efficiency in datacenters. Some propose energy-efficient algorithms based on processor speed scaling, assuming that the CPU supports dynamic voltage and frequency scaling (DVFS) [START_REF] Albers | Energy-efficient algorithms[END_REF] [START_REF] Laszewski | Power-aware scheduling of virtual machines in DVFS-enabled clusters[END_REF]. Others achieve energy efficiency by scheduling VMs in a virtualized datacenter [START_REF] Goiri | Energyaware Scheduling in Virtualized Datacenters[END_REF] [START_REF] Beloglazov | Energy-aware resource allocation heuristics for efficient management of data centers for Cloud computing[END_REF]. A. Beloglazov et al. [START_REF] Beloglazov | Energy-aware resource allocation heuristics for efficient management of data centers for Cloud computing[END_REF] present the Modified Best-Fit Decreasing (MBFD) algorithm, a best-fit decreasing heuristic for power-aware VM allocation, together with adaptive threshold-based migration algorithms for dynamic consolidation of VM resource partitions. Goiri et al. [START_REF] Goiri | Energyaware Scheduling in Virtualized Datacenters[END_REF] present score-based scheduling, a hill-climbing algorithm that places each VM onto the physical machine with the maximum score. However, the challenge still remains: these previous works do not consider satisfying resource demands on time (i.e. a VM starts at a specified start time) and without preemption, and neither MBFD nor the score-based algorithm finds an optimal solution to the VM allocation problem. In this paper, we introduce our static virtual machine allocation problem (SVMAP). To solve the SVMAP, we propose GAPA, a genetic algorithm that searches for an optimal VM allocation. In simulated experiments, GAPA discovers a better VM allocation (i.e. lower energy consumption) than the baseline scheduling algorithm solving the same SVMAP.
Problem Formulation
Terminology, notation
We describe the notation used in this paper as follows:
- VM_i: the i-th virtual machine
- M_j: the j-th physical machine
- ts_i: the start time of VM_i
- pe_i: the number of processing elements (e.g. cores) of VM_i
- PE_j: the number of processing elements (e.g. cores) of M_j
- mips_i: the total required MIPS (Million Instructions Per Second) of VM_i
- MIPS_j: the total MIPS capacity of M_j
- d_i: the duration time of VM_i, in seconds
- P_j(t): the power consumption (Watts) of the physical machine M_j
- r_j(t): the set of indexes of the virtual machines allocated on M_j at time t
Power consumption model
In this section, we introduce the factors used to model the power consumption of a single physical machine. The power consumption (Watts) of a physical machine is the sum of the power of all components in the machine. In [START_REF] Fan | Power provisioning for a warehouse-sized computer[END_REF], the peak power (Watts) of a typical server (with 2x CPU, 4x memory, 1x hard disk drive, 2x PCI slots, 1x mainboard, 1x fan) is estimated to be spent mainly on the CPU (38%), memory (17%), hard disk drive (6%), PCI slots (23%), mainboard (12%) and fan (5%). Several papers [START_REF] Fan | Power provisioning for a warehouse-sized computer[END_REF] [4] [6] [START_REF] Beloglazov | Energy-aware resource allocation heuristics for efficient management of data centers for Cloud computing[END_REF] show that there is a relationship between power and resource utilization (e.g. CPU utilization). As in [START_REF] Fan | Power provisioning for a warehouse-sized computer[END_REF][4][6] [START_REF] Beloglazov | Energy-aware resource allocation heuristics for efficient management of data centers for Cloud computing[END_REF], we assume that the power consumption of a physical machine (P(.)) is a linear function of resource utilization (e.g. CPU utilization). The total power consumption of a single physical server (P(.)) is:
P(U_cpu) = P_idle + (P_max - P_idle) · U_cpu

U_cpu(t) = Σ_{c=1}^{PE_j} Σ_{i ∈ r_j(t)} mips_{i,c} / MIPS_{j,c}
where:
- U_cpu(t): the CPU utilization of the physical machine at time t, with 0 ≤ U_cpu(t) ≤ 1
- P_idle: the power consumption (Watts) of the physical machine when idle, i.e. at 0% CPU utilization
- P_max: the maximum power consumption (Watts) of the physical machine at full load, i.e. at 100% CPU utilization
- mips_{i,c}: the requested MIPS of the c-th processing element (PE) of VM_i
- MIPS_{j,c}: the total MIPS of the c-th processing element (PE) of the physical machine M_j

The number of MIPS that a virtual machine requests can change with the application it runs. Therefore, the utilization of the machine may also change over time. We link the utilization to the time t and rewrite the total power consumption of a single physical server (P(.)) with U_cpu(t) as:
P(U_cpu(t)) = P_idle + (P_max - P_idle) · U_cpu(t)

and the total energy consumption of the physical machine (E) over the period [t_0, t_1] is defined by:

E = ∫_{t_0}^{t_1} P(U_cpu(t)) dt
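A small sketch of the linear power model and the energy integral above, approximating the integral with piecewise-constant utilization samples; the host description (per-core MIPS capacities) and all numeric values are illustrative assumptions, not measurements.

```python
# Linear power model P(U) = P_idle + (P_max - P_idle) * U, and energy as the
# time integral of P, approximated with utilization samples taken every dt seconds.
def utilization(requested_mips, capacity_mips):
    """Sum of requested/capacity over the (VM, core) pairs active on the host."""
    return min(1.0, sum(r / c for r, c in zip(requested_mips, capacity_mips)))

def power(u, p_idle=40.0, p_max=115.0):          # illustrative Watts
    return p_idle + (p_max - p_idle) * u

def energy_kwh(utilization_samples, dt_seconds):
    watt_seconds = sum(power(u) * dt_seconds for u in utilization_samples)
    return watt_seconds / 3.6e6                  # Joules -> kWh

# One host with 4 cores of 2933 MIPS each, two VMs requesting 1000 MIPS on one core each:
u = utilization([1000, 1000], [2933, 2933])
print(power(u), energy_kwh([u] * 90, 60))        # power now, energy over 90 minutes
```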
Static Virtual Machine Allocation Problem (SVMAP)
Given a set of n virtual machines {VM_i(pe_i, mips_i, ts_i, d_i) | i = 1, ..., n} to be placed on a set of m physical parallel machines {M_j(PE_j, MIPS_j) | j = 1, ..., m}. Each virtual machine VM_i requires pe_i processing elements and a total of mips_i MIPS; VM_i starts at time ts_i and finishes at time ts_i + d_i, with neither preemption nor migration during its duration d_i. We do not limit the resource type to CPU; the model can be extended to other resource types such as memory, disk space, network bandwidth, etc. We assume that every physical machine M_j can host any virtual machine, and that its power consumption model P_j(t) is proportional to its resource utilization at time t, i.e. power consumption has a linear relationship with resource utilization (e.g. CPU utilization) [START_REF] Fan | Power provisioning for a warehouse-sized computer[END_REF][2] [START_REF] Beloglazov | Energy-aware resource allocation heuristics for efficient management of data centers for Cloud computing[END_REF]. The scheduling objective is to minimize energy consumption while fulfilling the maximum requirements of all n VMs.

Algorithm 1: GAPA Algorithm
Start: Create an initial population of s chromosomes randomly (where s is the population size).
Fitness: Calculate the evaluation value of each chromosome in the given population.
New population: Create a new population by carrying out the following steps:
  Selection: Choose two parent individuals from the current population based on their evaluation values.
  Crossover: With the crossover probability, create new children by recombining the parents' chromosomes.
  Mutation: With the mutation probability, mutate the chromosome at some positions.
  Accepting: The new children become part of the next generation.
Replace: Use the newly generated population as the current generation for the next iteration.
Test: If the stop condition is satisfied, the algorithm stops and returns the individual with the highest evaluation value. Otherwise, go to the next step.
Loop: Go back to the Fitness step.
The GAPA Algorithm
GAPA, which is a kind of Genetic Algorithm (GA), solves the SVMAP. GAPA performs the steps shown in Algorithm 1.
In GAPA, we use a tree structure to encode the chromosome of an individual. This structure has three levels:
- Level 1: a root node that does not have significant meaning.
- Level 2: a collection of nodes representing the set of physical machines.
- Level 3: a collection of nodes representing the set of virtual machines.
With this representation, each instance of the tree structure describes an allocation of a collection of virtual machines onto a collection of physical machines. The fitness function calculates the evaluation value of each chromosome as shown in Algorithm 2.
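The actual GAPA is implemented in Java and integrated into CloudSim; the following Python sketch only illustrates the encoding and fitness ideas described above — a chromosome maps each VM to a host, and the fitness is the inverse of the summed host power (cf. Algorithm 2). The selection, crossover and mutation steps are deliberately minimal, and the workload and power numbers are made up.

```python
# Minimal GA sketch: chromosome[i] = index of the host that VM i is placed on.
import random

P_IDLE, P_MAX = 100.0, 250.0                      # illustrative host power model
HOST_MIPS = [4 * 2933, 16 * 2200]                 # two hosts (IBM-like, Dell-like)
VM_MIPS = [1000] * 16                             # 16 single-core VMs

def host_power(load_mips, capacity):
    u = min(1.0, load_mips / capacity)
    return 0.0 if load_mips == 0 else P_IDLE + (P_MAX - P_IDLE) * u  # idle hosts off

def fitness(chromosome):
    load = [0.0] * len(HOST_MIPS)
    for vm, host in enumerate(chromosome):
        load[host] += VM_MIPS[vm]
    if any(l > c for l, c in zip(load, HOST_MIPS)):
        return 0.0                                # infeasible placement
    total = sum(host_power(l, c) for l, c in zip(load, HOST_MIPS))
    return 1.0 / total                            # Algorithm 2: 1 / datacenter power

def evolve(pop_size=10, generations=500, p_cross=0.25, p_mut=0.01):
    pop = [[random.randrange(len(HOST_MIPS)) for _ in VM_MIPS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = [p[:] for p in pop[:2]]
        children = [p[:] for p in parents]        # keep the two best (elitism)
        while len(children) < pop_size:
            a, b = [p[:] for p in parents]
            if random.random() < p_cross:         # one-point crossover
                cut = random.randrange(1, len(a))
                a[cut:], b[cut:] = b[cut:], a[cut:]
            for child in (a, b):
                for i in range(len(child)):
                    if random.random() < p_mut:   # random reassignment mutation
                        child[i] = random.randrange(len(HOST_MIPS))
                if len(children) < pop_size:
                    children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print("placement:", best, "datacenter power (W):", 1.0 / fitness(best))
```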
Scenarios
We consider resource allocation for virtual machines (VMs) in a private cloud that belongs to a college or university. In a university, a private cloud is built to provide computing resources for teaching and research needs. In the cloud, the software and operating systems (e.g. Windows, Linux, etc.) needed for lab hours are deployed in virtual machine images (i.e. disk images), and these images are stored on file servers. A user can start, stop and access a VM to run their tasks. We consider the following needs:
(i) A student can start a VM to do his homework.
(ii) A lecturer can request, in advance, a schedule to start a group of identical VMs for his/her students' lab hours at a specified start time. The lab hours require that the group of VMs starts on time and runs continuously over several time slots (e.g. 90 minutes).
(iii) A researcher can start a group of identical VMs to run his/her parallel application.
Workload and simulated cluster
We use a workload derived from a one-day schedule of laboratory hours for six classes at our university, shown in Table 1. The workload is simulated with a total of 211 VMs and 100 physical machines (hosts). We consider two kinds of servers in our simulated virtualized datacenter, with the power consumption models of the IBM server x3250 (1 x [Xeon X3470 2933 MHz, 4 cores], 8GB) and of the Dell Inc. PowerEdge R620 (1 x [Intel Xeon E5-2660 2.2 GHz, 16 cores], 24 GB) server with 16 cores, given in Table 2. The baseline scheduling algorithm (BFD), which sorts the list of virtual machines by start time (i.e. earliest start time first) and uses best-fit decreasing (i.e. least increase in power consumption, for example MBFD [5]), will use four IBM servers to allocate 16 VMs (each VM requesting a single processing element). Our GAPA can find a better VM allocation: it chooses one Dell server to allocate these 16 VMs and, as a result, consumes less total energy than the best-fit heuristic does.

Experiments

We show the results of the experiments in Table 3 and Figure 1. We use CloudSim [START_REF] Calheiros | CloudSim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms[END_REF][6], a popular simulation toolkit for virtualized datacenters, to simulate our datacenter and the workload. GAPA is implemented as a VM allocation algorithm and integrated into CloudSim version 3.0. In the simulated experiments, the total energy consumption of the BFD algorithm is 16.858 kWh, while GAPA achieves an average of 13.007 kWh; the energy consumption of BFD is thus approximately 130% of that of GAPA. The GAPA configurations use a mutation probability of 0.01, a population size of 10, a number of generations in {500, 1000}, and a crossover probability in {0.25, 0.5, 0.75}.
Related works
B. Sotomayor et al. [START_REF] Sotomayor | Provisioning Computational Resources Using Virtual Machines and Leases[END_REF] proposed a lease-based model and First-Come-First-Serve (FCFS) and backfilling algorithms to schedule best-effort, immediate and advance-reservation jobs. The FCFS and backfilling algorithms consider only performance metrics (e.g. waiting time, slowdown). To maximize performance, these scheduling algorithms tend to choose lightly loaded servers (i.e. those with the highest-ranking scores) when allocating a new lease. Therefore, a lease with a single VM can be allocated on a big, multi-core physical machine. This can waste energy, and neither FCFS nor backfilling considers energy efficiency. S. Albers et al. [START_REF] Albers | Energy-efficient algorithms[END_REF] reviewed energy-efficient algorithms that minimize flow time by adapting processor speed to job size. G. Laszewski et al. [13] proposed scheduling heuristics and presented application experience for reducing the power consumption of parallel tasks in a cluster with the Dynamic Voltage Frequency Scaling (DVFS) technique. We do not use the DVFS technique to reduce the energy consumption of the datacenter.

Some studies [9][3][5] proposed algorithms that solve the virtual machine allocation problem in a private cloud with the goal of minimizing energy consumption. A. Beloglazov et al. [3][5] presented a best-fit decreasing heuristic for VM allocation, named MBFD, and VM migration policies under adaptive thresholds. MBFD tends to allocate a VM to the active physical machine that would take the minimum increase in power consumption (i.e. MBFD prefers the physical machine with the smallest power increase). However, MBFD cannot find an optimal allocation for all VMs; in our simulation, for example, GAPA finds a better VM allocation (lower energy consumption) than this minimum-increase (best-fit decreasing) heuristic.

Another study on the allocation of VMs [START_REF] Goiri | Energyaware Scheduling in Virtualized Datacenters[END_REF] developed a score-based allocation method that calculates a score matrix for the allocation of m VMs to n physical machines, where a score is the sum of many factors such as power consumption. These studies are only suitable for service allocation, in which each VM executes a long-running, persistent application, whereas we consider user jobs with a limited duration time. In addition, our GAPA can find an optimal schedule for the static VM allocation problem with the single objective of minimum energy consumption. In a recent work, J. Kolodziej et al. present evolutionary algorithms for energy management. None of these solutions solves the same problem as our SVMAP.
Conclusions and Future works
In conclusion, a genetic algorithm can be applied to the static virtual machine allocation problem (SVMAP) and helps minimize the total energy consumption of computing servers. In a simulation with a workload of one day of lab hours at a university, the energy consumption of the baseline scheduling algorithm (BFD) is approximately 130% of that of the GAPA algorithm. A disadvantage of the GAPA algorithm is its longer computation time compared to the baseline scheduling algorithm.

In future work, we will investigate methods to reduce the computation time of GAPA. We will also consider additional constraints, e.g. job deadlines, and study migration policies and history-based allocation algorithms.
Algorithm 2: Construct fitness function
powerOfDatacenter := 0
For each host ∈ collection of hosts do
    utilizationMips := host.getUtilizationOfCpu()
    powerOfHost := getPower(host, utilizationMips)
    powerOfDatacenter := powerOfDatacenter + powerOfHost
End For
Evaluation value(chromosome) := 1.0 / powerOfDatacenter
Fig. 1. The total energy consumption (kWh) for earliest start time first with best-fit decreasing (BFD) and the GAPA algorithms
Table 1. Workload of a university's one-day schedule

Day | Subject | Class ID | Group ID | Students | Lab. Time | Duration (sec.)
6 | 506007 | CT10QUEE | QT01 | 5 | -456---- | 8100
6 | 501129 | CT11QUEE | QT01 | 5 | 123----- | 8100
6 | 501133 | DUTHINH6 | DT04 | 35 | 123----- | 8100
6 | 501133 | DUTHINH5 | DT01 | 45 | -456---- | 8100
6 | 501133 | DUTHINH5 | DT02 | 45 | -456---- | 8100
6 | 501133 | DUTHINH6 | DT05 | 35 | 123----- | 8100
6 | 501133 | DUTHINH6 | DT06 | 41 | 123----- | 8100
Table 2. Two power models of (i) the IBM server x3250 (1 x [Xeon X3470 2933 MHz, 4 cores], 8GB) [START_REF]SPECpower ssj[END_REF] and (ii) the Dell Inc. PowerEdge R620 (1 x [Intel Xeon E5-2660 2.2 GHz, 16 cores], 24 GB) [START_REF]SPECpower ssj2008 results for[END_REF]. Power consumption (Watts) at CPU utilization levels of 0%, 10%, ..., 100%:

IBM x3250 | 41.6 | 46.7 | 52.3 | 57.9 | 65.4 | 73.0 | 80.7 | 89.5 | 99.6 | 105.0 | 113.0
Dell R620 | 56.1 | 79.3 | 89.6 | 102.0 | 121.0 | 132.0 | 149.0 | 171.0 | 195.0 | 225.0 | 263.0
Table 3. Total energy consumption (kWh) of running (i) earliest start time first with best-fit decreasing (BFD) and (ii) the GAPA algorithms. These GAPA runs use a mutation probability of 0.01 and a population size of 10. N/A means not available.

Algorithm | VMs | Hosts | GA's Generations | GA's Prob. of Crossover | Energy (kWh) | BFD/GAPA
BFD | 211 | 100 | N/A | N/A | 16.858 | 1
GAPA P10 G500 C25 | 211 | 100 | 500 | 0.25 | 13.007 | 1.296
GAPA P10 G500 C50 | 211 | 100 | 500 | 0.50 | 13.007 | 1.296
GAPA P10 G500 C75 | 211 | 100 | 500 | 0.75 | 13.007 | 1.296
GAPA P10 G1000 C25 | 211 | 100 | 1000 | 0.25 | 13.007 | 1.296
GAPA P10 G1000 C50 | 211 | 100 | 1000 | 0.50 | 13.007 | 1.296
GAPA P10 G1000 C75 | 211 | 100 | 1000 | 0.75 | 13.007 | 1.296
"993414",
"1003143",
"1003144",
"994956",
"913141"
] | [
"491086",
"491086",
"491086",
"491086",
"491086"
] |
01480235 | en | ["shs", "info"] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480235/file/978-3-642-36818-9_2_Chapter.pdf
Ai Thao Nguyen
email: [email protected]
Tran Khanh Dang
A Practical Solution Against Corrupted Parties and Coercers in Electronic Voting Protocol over the Network
Keywords: Electronic voting, blind signature, dynamic ballot, uncoercibility, receipt-freeness
In this paper, we introduce a novel electronic voting protocol that is resistant to more powerful corrupted parties and coercers than any previous work. These can be voting authorities inside the system who steal voters' information and the content of their votes, or adversaries outside who try to buy votes and force voters to follow their wishes. The worst case is that the outside adversaries collude with all inside voting authorities to destroy the whole system. In previous works, authors suggested many complicated cryptographic techniques for fulfilling all security requirements of an electronic voting protocol. However, they cannot prevent sophisticated inside and outside collusion. Our proposal prevents these threats through the combination of blind signatures, dynamic ballots and other techniques. Moreover, the improvement of the blind signature scheme, together with the elimination of physical assumptions, makes the newly proposed protocol faster and more efficient. These enhancements make some progress towards a practical security solution for electronic voting systems.
Introduction
Along with the rapid growth of modern technologies, most traditional services have been transformed into remote services over the internet. Voting is among them. Remote electronic voting (also called e-voting) makes voting more efficient, more convenient, and more attractive. Therefore, many researchers have studied this field and tried to put it into practice as soon as possible. However, that has never been an easy task. It is true that e-voting brings many benefits not only for voters but also for voting authorities. Nevertheless, benefits always come along with challenges. The biggest challenge of e-voting relates to security. In previous works, authors proposed several electronic voting protocols trying to satisfy as many security requirements as possible, such as eligibility, uniqueness, privacy, accuracy, fairness, receipt-freeness, uncoercibility, individual verifiability, and universal verifiability. However, security leaks cannot be ruled out completely in recent electronic voting protocols when voting authorities collude with each other. In the protocol of Cetinkaya et al. [START_REF] Cetinkaya | A practical verifiable e-voting protocol for large scale elections over a network[END_REF], for example, although the authors announced that their protocol fulfilled the requirement of uncoercibility, once the adversaries corrupt the voter and collude with the voting authorities responsible for holding ballots and the voter's cast, they can easily find out whether that voter followed their instruction or not. In addition, in the voting protocol of Spycher et al. [START_REF] Spycher | A new approach towards coercionresistant remote e-voting in linear time[END_REF] and the JCJ protocol [START_REF] Juels | Coercion-resistant electronic elections[END_REF], if the coercer can communicate with the registrars, voters can no longer lie about their credentials. Therefore, uncoercibility cannot be satisfied. Moreover, in order to satisfy receipt-freeness, some protocols employ physical assumptions such as untappable channels, which are not suitable for services over the internet.
Most previous electronic voting protocols apply three main cryptographic techniques to solve the security problems. Thus, we classify these protocols into three types: protocols using mix-nets, blind signatures, and homomorphic encryption. The concept of mix-nets was first introduced by Chaum in [START_REF] Chaum | Untraceable Electronic M ail, Return Addresses, and Digital Pseudonyms[END_REF]. Since then, some voting protocols such as [START_REF] Camenisch | A formal treatment of onion routing[END_REF] have been proposed. However, these protocols meet great difficulties because of the huge computation and communication costs that the mix-net requires. Moreover, the final result of the voting process depends on each linked server in the mix-net: if any linked server is corrupted or broken, the final result will be incorrect. So far, no election system based on mix-nets has been implemented [START_REF] Brumester | Towards secure and practical e-Elections in the new era[END_REF]. Besides mix-nets, homomorphic encryption is another way to preserve privacy in electronic voting systems. Although homomorphic encryption protocols like [START_REF] Baudron | Practical M ulti-Candidate Election System[END_REF] [START_REF] Acquisti | Receipt-free homomorphic elections and write-in voter verified ballots[END_REF] are more popular than mix-nets, they are still inefficient for large-scale elections because the computation and communication costs for the proof and verification of vote validity are quite large. In addition, homomorphic encryption protocols cannot be employed on multi-choice voting forms. As for blind signature protocols, they provide anonymity without requiring complex computational operations or high communication costs. Until now, many protocols based on blind signatures have been proposed, such as [START_REF] Cetinkaya | A practical verifiable e-voting protocol for large scale elections over a network[END_REF][8][10] [START_REF] Juang | A verifiable multi-authority secret election allowing abstention from voting[END_REF]. Some of them employ the blind signature to conceal the content of votes, others to conceal the identities of voters. Protocol [START_REF] Fujioka | A practical secret voting scheme for large scale elections[END_REF], for example, conceals the content of votes; then, at the end of the voting process, voters have to send the decryption key to the voting authority. This action might break security if adversaries conspire with these voting authorities. Therefore, our proposal applies the blind signature technique to hide the real identity of a voter. Besides that, in order to protect the content of votes, we apply dynamic ballots along with a recasting mechanism without sacrificing uniqueness, to enhance security in the electronic voting protocol and remedy the shortcomings of previous protocols.
In this paper, we propose an electronic voting protocol free of inside and outside collusion, which guarantees all security requirements. The remarkable contribution is that our proposal is able to defeat more powerful adversaries which can collude with most of the voting authorities. Another improvement is the enhancement of the blind signature scheme, which makes our protocol faster and more efficient.

The structure of this paper is organized as follows. In Section 2, we summarize the background knowledge of electronic voting. We describe the details of our proposed protocol in Section 3. Then, in Section 4, the security of the protocol is discussed. Finally, the conclusion and future work are presented in Section 5.
Background
Security Requirements
According to [START_REF] Liaw | A secure electronic voting protocol for general elections[END_REF], the security requirements of an electronic voting system are as follows: (1) privacy: no one can know the link between a vote and the voter who cast it; (2) eligibility: only eligible and authorized voters can carry out their voting process; (3) uniqueness: each voter has only one valid vote; (4) accuracy: the content of a vote cannot be modified or deleted; (5) fairness: no one, including the voting authorities, can obtain an intermediate result of the voting process before the final result is publicized; (6) receipt-freeness: the voting system should not give the voter a receipt that he can use to prove which candidate he voted for; (7) uncoercibility: an adversary cannot force any voter to vote according to the adversary's intention or to reveal their vote; (8) individual verifiability: every voter is able to check whether their vote is counted correctly or not; (9) universal verifiability: everyone who is interested in the tally result can verify that it is correctly computed from all the ballots cast by eligible voters.
2.2 Cryptography Building Blocks

Bulletin Boards. In [START_REF] Araujo | Towards practical and secure coercion-resistant electronic election[END_REF], the bulletin board is a communication model which publishes the information posted on it, so that everybody can verify this information. Electronic voting systems apply this model to fulfill the requirement of verifiability. In protocols using a bulletin board, voters and voting authorities can post information on the board; nevertheless, no one can delete or alter it.

Blind Signature. The concept of the blind signature was first introduced by Chaum in 1982. It stemmed from the need to verify the validity of a document without revealing anything about its content. A simple method to implement the blind signature scheme is to apply the asymmetric cryptosystem RSA. We use the following notation: (1) m: the document to be signed; (2) d: the private key of the authority (signer); (3) (e, N): the public key of the authority; (4) s: the signature on m.
The RSA blind signature scheme is implemented as follows:

The owner generates a random number r which satisfies gcd(r, N) = 1. He blinds m with the blinding factor r^e (mod N). After that, he sends the blinded document m' = m · r^e (mod N) to the authority. Upon receiving m', the authority computes a blinded signature s', as illustrated in Eq. (1), and then sends it back to the owner.

s' = (m')^d mod N = (m · r^e)^d mod N = (m^d · r^(ed)) mod N = (m^d · r) mod N    (1)

According to Eq. (1), the owner easily obtains the signature s, as in Eq. (2):

s = s' · r^(-1) mod N = m^d mod N    (2)
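A toy sketch of this exchange (Eqs. 1–2); the RSA parameters below are tiny demonstration values, not secure ones, and in the protocol the message m would be the voter's uid rather than the arbitrary number used here.

```python
# Toy RSA blind signature round trip (demo-sized key, no padding/hashing).
import math
import secrets

p, q = 1009, 1013                       # demo primes (assumption)
N = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))       # authority's private exponent

def blind(m, e, N):
    """Owner side: blind m with a random factor r, gcd(r, N) = 1."""
    while True:
        r = secrets.randbelow(N - 2) + 2
        if math.gcd(r, N) == 1:
            return (m * pow(r, e, N)) % N, r

def sign_blinded(m_blind, d, N):
    """Authority side: sign without seeing m (Eq. 1)."""
    return pow(m_blind, d, N)

def unblind(s_blind, r, N):
    """Owner side: remove the blinding factor to get s = m^d mod N (Eq. 2)."""
    return (s_blind * pow(r, -1, N)) % N

m = 4242                                # stands in for the voter's uid
m_blind, r = blind(m, e, N)
s = unblind(sign_blinded(m_blind, d, N), r, N)
assert s == pow(m, d, N)                # a valid signature on m was obtained
assert pow(s, e, N) == m                # anyone can verify it with the public key
```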
Dynamic Ballot. The concept of the dynamic ballot was introduced in [START_REF] Cetinkaya | A practical verifiable e-voting protocol for large scale elections over a network[END_REF]. This is a mechanism that helps a voting protocol fulfill the requirement of fairness. In most e-voting protocols, authors have used usual ballots in which the order of candidates is pre-determined. Therefore, when someone gets a voter's cast, they instantly know the actual vote of that voter. Alternatively, the candidate order in a dynamic ballot changes randomly for each ballot. Hence, adversaries need the voter's cast as well as the corresponding dynamic ballot in order to obtain the real choice of a voter.

In the voting process, each voter randomly takes one of these ballots. He chooses his favorite candidate, then casts the order of this candidate in his ballot (not the name of the candidate) to one voting authority and his ballot to another voting authority.
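A small sketch of the dynamic-ballot idea: the voter gets a randomly permuted candidate list and casts only the position of the chosen candidate. The candidate names and the split between the two authorities are illustrative assumptions.

```python
# Dynamic ballot sketch: cast the *position* in a shuffled ballot, not the name.
import random

candidates = ["Alice", "Bob", "Carol", "Dave"]      # hypothetical candidate list

def issue_dynamic_ballot():
    ballot = candidates[:]                          # candidate order differs per ballot
    random.shuffle(ballot)
    return ballot

def cast(ballot, favourite):
    return ballot.index(favourite)                  # V': the order, not the name

ballot = issue_dynamic_ballot()                     # sent to one authority (e.g. BC)
v_prime = cast(ballot, "Carol")                     # sent to another authority (e.g. CS)
# Only by combining both pieces can the actual vote be recovered at tallying time:
assert ballot[v_prime] == "Carol"
```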
Plaintext Equality Test (PET). The notion of PET was proposed by Jakobsson and Juels [START_REF] Eng | A critical review of receipt-freeness and coercion-resistance[END_REF]. The purpose of the PET protocol is to compare two ciphertexts without decrypting them. It is based on the ElGamal cryptosystem [START_REF] Kohel | Public key cryptography[END_REF].

Let (r_1, s_1) = (a^(y_1), m_1 · a^(x·y_1)) and (r_2, s_2) = (a^(y_2), m_2 · a^(x·y_2)) be ElGamal ciphertexts of two plaintexts m_1 and m_2, respectively. The input I of the PET protocol is the quotient of the ciphertexts (r_1, s_1) and (r_2, s_2), and the output R is a single bit such that R = 1 means m_1 = m_2, and R = 0 otherwise.

According to the ElGamal cryptosystem, I is a ciphertext of the plaintext m_1/m_2. Therefore, someone who owns the decryption key x can obtain the quotient of m_1 and m_2 without gaining any information about the two plaintexts m_1 and m_2 themselves.
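A minimal sketch of this quotient-based equality test using textbook ElGamal over a toy group; the group parameters are demonstration values only, and a full PET additionally blinds the quotient with a secret random exponent before decryption so that unequal plaintexts reveal nothing beyond "not equal".

```python
# Quotient-based plaintext equality test over the toy group Z_467* (demo parameters).
import secrets

p, g = 467, 2                           # assumed toy ElGamal parameters
x = secrets.randbelow(p - 2) + 1        # decryption key held by the PET server
h = pow(g, x, p)                        # public key

def enc(m):
    y = secrets.randbelow(p - 2) + 1
    return pow(g, y, p), (m * pow(h, y, p)) % p

def quotient(c1, c2):
    """Component-wise division of ciphertexts: an encryption of m1/m2."""
    (r1, s1), (r2, s2) = c1, c2
    return (r1 * pow(r2, -1, p)) % p, (s1 * pow(s2, -1, p)) % p

def pet(c1, c2):
    """R = 1 iff the two ciphertexts hide the same plaintext."""
    r, s = quotient(c1, c2)
    return 1 if (s * pow(pow(r, x, p), -1, p)) % p == 1 else 0

uid = 123
assert pet(enc(uid), enc(uid)) == 1     # same uid encrypted twice -> equal
assert pet(enc(uid), enc(321)) == 0     # different plaintexts -> not equal
```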
3 The Proposed Electronic Voting Protocol
Threats in Electronic Voting Protocol
Vote Buying and Coercion. In a traditional voting system, to ensure that a voter is not coerced and does not try to sell his ballot to another, voting authorities build election precincts or kiosks in order to separate voters from coercers and vote buyers. Therefore, voters can vote based on their own intentions. When an electronic voting system is brought into reality, there are no election precincts or voting kiosks, only voters and their devices which can connect to the internet. Hence, the threats from coercers and vote buyers quickly become the center of attention of the voting system.

Corrupted Registration. Registration is always the first phase of a voting process, where voting authorities check voters' eligibility and give voters the certificates to step into the casting phase. However, in case a voter abstains from voting after registration, corrupted registrars can take advantage of those certificates to legalize false votes by casting an extra vote on behalf of the abstaining voter. Sometimes, corrupted registrars can issue false certificates to deceive other voting authorities.
Corrupted Ballot Center. Some protocols have a ballot center that provides voters with ballots. Others, as in [START_REF] Cetinkaya | A practical verifiable e-voting protocol for large scale elections over a network[END_REF], utilize it for holding the choices that voters made until the casting phase completes. If the ballot center becomes a corrupted party, it can modify the content of the votes or sell them to vote buyers and coercers who want to check whether the coerced voters cast the candidate they expect. Hence, a feasible electronic voting protocol has to possess a mechanism to protect the system against this threat.

Corrupted Tallier. The tallier takes responsibility for counting up all the votes to get the final result of the voting process. If the tallier becomes a corrupted party, it will be able to do that job even though the voting process has not come to an end. In this case, it can release an intermediate voting result, which influences the psychology of the voters who have not cast their ballots yet. This threat makes fairness fail.
The Proposed Electronic Voting Protocol
Before explaining each step of the protocol, we introduce some notation: (1) (e_x, d_x): a public-private key pair of user X; (2) E_x(m): an encryption of m with the public key e_x; (3) D_x(m): a decryption/signing of m with the private key d_x; (4) H(m): a one-way hash function with input m; (5) E_PET(m): an encryption of m using the ElGamal cryptosystem; (6) PET(x, y): a PET function applying the PET protocol to the two inputs x and y.

Registration Phase. In this phase, the blind signature technique is applied to conceal the real identity of a voter by creating an anonymous identity for communicating with the other voting authorities. The following paragraphs show how voters get their anonymous identification from the Privacy of Voter server (hereafter called PVer).
Firstly, the voter sends his real ID to the Registration server (hereafter called RS) to start the registration process. Based on the real ID, RS checks whether that user has already registered. If he has done so before, RS terminates his session; otherwise, RS asks the CA to check the current rules of the voting process in order to find out whether this person can become an eligible voter or not. Then, RS creates a certificate and sends it to the voter. This certificate includes a serial number, a digital stamp, a session key, and a signature of RS.
Upon receiving the certificate, the voter generates his unique identification number: uid = Hash(D_V(digital stamp)). To get the signature of the voting committee on uid, the voter applies the blind signature technique introduced in Section 2.2. He uses a random blinding factor to blind uid, and then sends it together with the certificate to PVer, which takes responsibility for preserving the privacy of voters. PVer saves the serial number in the certificate in order to ensure that each certificate asks for the blind signature just one time. After checking the validity of the certificate, PVer blindly signs the uid, then sends the result s' to the voter. He then unblinds s' to get the signature s of the voting committee on his uid. From then on, the voter sends uid and the corresponding s to the other voting authorities for authentication. The detailed steps are illustrated in Fig. 1.
To avoid man-in-the-middle attacks, the asymmetric cryptosystem is used in the 1st, 6th, and 8th steps. However, in the 10th step, asymmetric key pairs are not a good choice because they would be used only one time for encrypting a message, not for authenticating. Therefore, a symmetric-key cryptosystem with the Triple DES algorithm is proposed in this blind signature scheme because it has some significant benefits: (1) it does not consume too much computing power, so we can shorten the encryption time and simplify the encryption of the certificate as well; (2) although symmetric encryption is not as safe as asymmetric encryption, a high level of security is still guaranteed because Triple DES has high complexity, the session key randomly generated by the system is long enough to resist brute-force and dictionary attacks, and the period of using a session key is limited to one step within a short time.
Another improvement of this blind signature scheme is that a voter generates a list of anonymous identifications, including uid, uid_1, and uid_2, instead of just one. The purpose of uid is to communicate with the other voting servers, while uid_1 and uid_2 are used to ensure that the dynamic ballot of the voter is not modified by any adversary.
Authentication and Casting Phase.

To protect the privacy of votes from coercers, vote buyers, or sometimes adversaries who stay inside the system, we propose the scheme shown in Fig. 2, which applies dynamic ballots, the plaintext equality test, and bulletin boards as introduced in Section 2.2.
Fig.2. Casting scheme
In previous works, to avoid coercion as well as vote buying, some authors apply a fake identification mechanism to deceive coercers (as in the JCJ protocol [START_REF] Juels | Coercion-resistant electronic elections[END_REF]), while others utilize a recasting mechanism, without sacrificing uniqueness, for voters' final decisions [START_REF] Cetinkaya | A practical verifiable e-voting protocol for large scale elections over a network[END_REF]. Fake identification requires that at least one voting authority knows the real identification of a voter in order to determine what the real votes are. Therefore, if this voting authority becomes an adversary, the requirements of uncoercibility and resistance to vote buying can be violated. As a result, our proposal uses the recasting mechanism to achieve a higher level of security.
Firstly, an eligible voter receives the list of candidates from the Ballot Center (called BC). He then mixes the order of the candidates randomly, sends this dynamic ballot B to BC, and casts his cloaked vote V' by picking the order of the candidate in B that he favors and sending it to the Casting server (called CS). To ensure that B and V' cannot be modified by others, the voter encrypts them with uid_1 and uid_2 using the pair of public keys e_k(1) and e_k(2) generated by the Key Generator server (called KG). KG also saves the private key d_k along with the corresponding e_k in the KG-List for decrypting in the next phase. After that, the voter sends (E_PET(uid), E_k(1)(B, uid_1), time, e_k(1)) to BC, and (uid, E_k(2)(V', uid_2), e_k(2)) to CS, as illustrated in Fig. 2. For each message received from a voter, CS checks the uid of this voter. If the uid is invalid, CS discards the message; otherwise, it hashes the whole message and publishes the result on the bulletin board BB2 for individual verifiability. It also stores the message in List2 for matching with B in the tallying phase. As for BC, it does the same with every message it receives, except for authenticating the eligibility of voters.
Voters are allowed to recast. Because the actual vote V of a voter consists of B and V', a voter just needs to change one of the two components to modify the value of V. In this protocol, voters are able to change the order of candidates in their dynamic ballot B. In order to recast, a voter sends another message (E_PET*(uid), E_k(1)(B*, uid_1), time*, e_k(1)) to BC, in which E_PET*(uid), B* and time* are, respectively, a new ElGamal encryption of uid, the new dynamic ballot, and the time when he sends the message.

Tallying Phase. At the end of the casting phase, the PET server applies the PET protocol of Section 2.2 to each E_PET(uid_i) in List1. The purpose is to find which pairs of encryptions of uid_i and uid_j are equivalent, without decryption. After that, the PET server removes the record holding the earlier time parameter. Concretely, consider two records R_i and R_j of List1: R_i = (E_PET(uid_i), E_Ki(1)(B_i, uid_1i), time_i, e_Ki(1)) and R_j = (E_PET(uid_j), E_Kj(1)(B_j, uid_1j), time_j, e_Kj(1)). If PET(E_PET(uid_i), E_PET(uid_j)) = 1 and time_i > time_j, then the system removes R_j. The purpose of this process is to remove duplicated votes and keep the latest choices of all voters. After that, the PET server compares each E_PET(uid_i) in List2 to each E_PET(uid_j) in List1 to find out which B in List1 corresponds to which V' in List2. If there exists a record in List1 which does not match any record in List2, this record must have come from an invalid voter, so it is discarded at once. The purpose of this process is to remove invalid dynamic ballots B from List1.
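An illustrative sketch of the duplicate-removal rule just described, assuming an equality test pet(a, b) like the one sketched in Section 2.2 and records simplified to (encrypted uid, payload, time) triples; field names and ordering by time are assumptions made for readability.

```python
# Keep only the latest record per voter among PET-equivalent encrypted uids.
def keep_latest(list1, pet):
    kept = []
    for rec in sorted(list1, key=lambda r: r[2], reverse=True):   # newest first
        if not any(pet(rec[0], other[0]) == 1 for other in kept):
            kept.append(rec)          # first (i.e. latest) record for this voter
        # older records with a PET-equivalent uid are dropped (recast handling)
    return kept
```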
After determining the pairs of records, KG publishes the list of session keys (e_K, d_K) for List1 and List2, so that the d_K related to each e_K attached to every record in List1 and List2 can be found. With the corresponding d_K, E_K(1)(B, uid_1) and E_K(2)(V', uid_2) are decrypted. The Tallying server (called TS) checks the validity of uid_1 and uid_2 to ensure that B and V' have not been modified by any party, then combines the valid values of B and V' to find out the actual vote V of the voter. Finally, TS counts up all the actual votes and publishes the result of the voting process.
In our protocol, a voter employs the blind signature technique to get the voting authority's signature on his self-created identity. Therefore, RS and PVer do not know anything about the anonymous identity that voters use to authenticate themselves. Hence, if these voting authorities become corrupted, they cannot take advantage of abstention to attack the system. The same holds for the protocols of Hasan [START_REF] Hasan | E-Voting Scheme over Internet[END_REF] and Cetinkaya [START_REF] Cetinkaya | A practical verifiable e-voting protocol for large scale elections over a network[END_REF]. In the JCJ protocol [START_REF] Juels | Coercion-resistant electronic elections[END_REF], the RS establishes the credentials and passes them to voters through an untappable channel. In the worst case, if the RS is a corrupted party, it can give voters fake credentials and use the valid ones to vote for other candidates. Thus, a corrupted RS becomes a security flaw of the JCJ protocol. Using a physical assumption, i.e. an untappable channel, is another weak point of the JCJ protocol in comparison with previously proposed protocols.
Although the voting protocols of Hasan [START_REF] Hasan | E-Voting Scheme over Internet[END_REF] and Cetinkaya [START_REF] Cetinkaya | A practical verifiable e-voting protocol for large scale elections over a network[END_REF] eliminate the abstention attack from a corrupted RS, these protocols are not strong enough to defeat sophisticated attacks. The voting protocol of Hasan is quite simple; it has no mechanism to protect the content of votes against being modified. Thus, if CS or TS collude with attackers, the system will collapse. As a result, the accuracy and fairness properties cannot be guaranteed. Even in the ideal case in which every server is trusted, the protocol cannot avoid vote buying and coercion if voters reveal their anonymous identities to a vote buyer or coercer. As for the protocol of Cetinkaya, it guarantees some security requirements (as illustrated in Table 1). However, the weak point of this protocol is that voters can still be coerced if the servers holding ballots connive with the coercer. In the worst case, voters are also able to sell their ballots by providing buyers with their anonymous identities; if the buyers then collude with the Ballot Generator, Counter, and Key Generator, they can find out whether these anonymous identities are attached to the candidate they expect or not. In other words, a corrupted BC is a security flaw that Cetinkaya has not yet fixed. Our protocol remedies the shortcomings of Cetinkaya's protocol by encrypting uid with the ElGamal cryptosystem before sending it to BC. Therefore, when a voter recasts, BC itself cannot recognize his uid. Only the Casting server has the responsibility to authenticate the eligibility of a uid. However, since the recasting process does not take place at CS, coercers cannot collect any information from this server.
If TS becomes corrupted, our protocol cannot be broken even if TS colludes with other voting authorities in the protocol. In previous protocols using dynamic ballots, a corrupted TS just needs to bribe BC and CS to get the intermediate result. However, in our protocol, B and V' are encrypted with the session keys generated by KG, so BC and CS cannot provide the values of B and V' to TS without the decryption keys. Even if KG is also corrupted, the intermediate result of our protocol is still safe because the uids of the voters are encrypted using the ElGamal cryptosystem. Attackers have no way to combine B and V' or to remove invalid and duplicated votes. Therefore, a corrupted tallier is no longer a threat to our protocol. In contrast, when it comes to sophisticated attacks in which many voting authorities conspire together, Hasan [START_REF] Hasan | E-Voting Scheme over Internet[END_REF] and Cetinkaya [START_REF] Cetinkaya | A practical verifiable e-voting protocol for large scale elections over a network[END_REF] are not strong enough to defeat these kinds of attacks.
Thanks to the blind signature technique, no voting authority knows the link between a voter's real ID and his uid, and no one can find out the link between a vote and the voter who cast it. This means that the privacy requirement is guaranteed.
This protocol has multiple layers of protection. For instance, RS checks the validity of requesters by CRAM; then, PVer checks the eligibility of voters by their certificates. Another interesting point of our protocol is that the voter's signature d_V is embedded in the uid of a voter, so the RS cannot create a fake uid to cheat other voting authorities without being detected. In brief, our protocol achieves eligibility.
Recasting is allowed in our protocol. If an adversary coerces voters to cast according to his intention, the voters can send another vote to replace the previous one. According to the analysis above, this process cannot be discovered by coercers even if they connive with many voting authorities. Therefore, the uncoercibility requirement is guaranteed.
Receipt-freeness is also fulfilled, since voters cannot prove their latest cast to a vote buyer. If an adversary penetrated List1 and obtained voters' uids through bribery, and the uids were not encrypted, the adversary could easily find out whether a certain uid has recast or not. Consequently, he could threaten the voter or discover what the latest cast of the voter is. Nevertheless, this situation cannot occur in our protocol, according to the analysis at the beginning of this section.
The requirement of individual verifiability is guaranteed by applying bulletin boards. BC publishes Hash(E_PET(uid), E_K(1)(B, uid_1), time) on BB1, and Hash(uid, E_K(2)(V', uid_2)) is published on BB2. Thus, voters just have to hash the necessary information, which they already know, and compare their results to all records on the bulletin boards to check whether the system counted their vote correctly. At the end of the election, all voting authorities publish their lists. Any participant or passive observer can check the soundness of the final result based on the information in these lists and on the bulletin boards as well. Hence, universal verifiability is fulfilled.
Conclusion
In this paper, we have proposed an electronic voting protocol that is not susceptible to most sophisticated attacks. The proposed protocol protects the privacy of voters and the content of votes from both inside and outside adversaries, even when more and more adversaries collude. Furthermore, the fact that neither physical assumptions nor complex cryptographic techniques need to be used makes our proposal more practical. In the future, we intend to formalize the electronic voting protocol using process calculi, such as the pi-calculus, for describing concurrent processes and their interactions.
Fig. 1. Registration scheme.
| 29,236 | ["1001346", "993459"] | ["491086", "491086"] |
01480241 | en | ["info"] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480241/file/978-3-642-36818-9_38_Chapter.pdf
Jostein Jensen
email: [email protected]
Identity Management Lifecycle -Exemplifying the Need for Holistic Identity Assurance Frameworks
Keywords: CAPEC, Common Attack Pattern Enumeration and Classification, //capec.mitre.org/
Introduction
A (digital) identity is the information used to represent an entity in an ICT system [START_REF]Information technology -security techniques -a framework for identity management -part 1: Terminology and concepts[END_REF]. In the context of this paper we think of entity as a human being, meaning that we think of identity as a digital representation of a physical person. A digital identity consist of three key elements [START_REF] Bertino | Identity Management -Concepts, Technologies and Systems[END_REF]: 1) an identifier used to identify the owner of the identity 2) attributes, which describes different characteristics of, or related to, the identity owner 3) credentials which is evidence/data that is used by the identity owner to establish confidence that the person using the identity in the digital world corresponds to the claimed person. There must be processes in place to create, use, update, and revoke digital identities, and policies must exist to govern each of these activities. This is called Identity Management (IdM), and the IdM lifecycle is illustrated in Figure 1. The rigor and quality of all steps of the IdM process can vary substantially between different organizations, and this affects the level of trust that can be associated with a digital identity. Dishonest individuals can exploit weaknesses in any of the identity management lifecycle steps to gain unauthorized access to resources, and as such threaten confidentiality, integrity and availability of assets.
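A digital identity as described here can be pictured as a small record holding the three key elements; the concrete field names and values below are illustrative assumptions, not taken from the cited standard.

```python
# Illustrative record for the three key elements of a digital identity.
from dataclasses import dataclass, field

@dataclass
class DigitalIdentity:
    identifier: str                                    # identifies the owner
    attributes: dict = field(default_factory=dict)     # characteristics of/related to the owner
    credentials: dict = field(default_factory=dict)    # evidence binding the person to the identity

alice = DigitalIdentity(
    identifier="[email protected]",
    attributes={"role": "employee", "department": "R&D"},
    credentials={"password_hash": "…", "certificate": "…"},
)
```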
Security requirements can be specified for each phase and each activity in the IdM lifecycle to mitigate threats towards it. The purpose of defining security requirements in relation to identity management is to increase the confidence in the identity establishment phase, and increase the confidence that the individual who uses a digital identity is the individual to whom it was issued [START_REF] Burr | Electronic authentication guideline[END_REF].
Fig. 1. Identity Management Lifecycle. Adapted from [START_REF] Bertino | Identity Management -Concepts, Technologies and Systems[END_REF] Requirements for each lifecycle activity can be bundled to form identity assurance levels, where a low assurance level specifies IdM requirements for systems with limited risk levels and high assurance levels define IdM protection strategies in high-risk environments. Examples of assurance levels with associated requirements targeted at the activities in the IdM lifecycle can be found in Identity Assurance Frameworks, such as those defined by the Norwegian government [START_REF]Framework for autehntication and non-repudiation in electronic communication with and within the public sector (norwegian title: Rammeverk for autentisering og uavviselighet i elektronisk kommunikasjon med og i offentlig sektor[END_REF], the Australian government [START_REF]National e-authentication framework[END_REF], and the US government [START_REF] Burr | Electronic authentication guideline[END_REF].
In this paper we will look at each step of the identity management lifecycle, and identify relevant threats to each lifecycle phase (section 2). The government frameworks mentioned above [START_REF]Framework for autehntication and non-repudiation in electronic communication with and within the public sector (norwegian title: Rammeverk for autentisering og uavviselighet i elektronisk kommunikasjon med og i offentlig sektor[END_REF] [2] [START_REF] Burr | Electronic authentication guideline[END_REF] are examined to determine whether they specify security requirements that can mitigate the identified threats, and they are used in this paper to illustrate the need for holistic identity assurance frameworks that cover all phases of the IdM lifecycle (section 3). Then we provide a discussion of our findings in section 4, and conclude the paper in section 5.
Identity management lifecycle and threats towards it
Identity management life cycles have been presented in different shapes, for instance in by International Standards Organization [START_REF]Information technology -security techniques -a framework for identity management -part 1: Terminology and concepts[END_REF], Baldwin et. al [START_REF] Baldwin | On identity assurance in the presence of federated identity management systems[END_REF] and Bertino and Takahashi [START_REF] Bertino | Identity Management -Concepts, Technologies and Systems[END_REF]. Even though the lifecycle presentations vary between these, they treat the same concepts. The following structure, which is illustrated in Figure 1 is inspired by Bertino and Takahashi. More information about threats towards IdM can be found in [START_REF] Burr | Electronic authentication guideline[END_REF] and [START_REF] Baldwin | On identity assurance in the presence of federated identity management systems[END_REF], while more technical insight to most threats can be found in the CAPEC database 1 .
Creation
The first phase in the IdM lifecycle is identity creation. Identity attributes will be collected and registered, credentials will be defined, and finally issued to the user during this process. Identity proofing including screening and vetting of users [START_REF]Information technology -security techniques -a framework for identity management -part 1: Terminology and concepts[END_REF] can be part of these activities. The creation process is the foundation for all subsequent use of digital identities, and as such rigor in this phase is of utmost importance for systems that require a high to moderate security level.
Threats to the creation process There are numerous motives for attackers to somehow manipulate the identity creation process, and where one example is to assume the identity of another person during the establishment of a digital identity. This can e.g. be done by presenting forged identity information (e.g. false passport) during the identity proofing process, or exploit the fact that identity proofing is not operationalized in the creation process. University enrollment under a fake alias, establishment of credit cards or establishment of phone subscriptions in another persons name are examples of this threat. The consequence of this is that the attacker obtains full access to resources by means of a valid user identity. Further, invalid attributes can be inserted in the user database, attributes can be modified by unauthorized entities or valid, and false attributes can be registered during the attribute registration if proper countermeasures against these threats are not in place. These threats can have serious consequences knowing that attributes can be used to determine access level e.g. based on group memberships/roles in role based access control (RBAC) schemes or possibly any other attribute in attribute-based access control (ABAC) schemes. Also the credential registration process must be protected so that attackers cannot steal or copy credentials, such as username password pairs. If attackers get access to valid credentials, they can impersonate valid users to obtain protected information. These challenges also exist during delivery of digital identities. Attackers can obtain access to digital identities, which can be used in subsequent malign activities by intercepting the communication channel used to deliver the credentials, such as mail or e-mail.
Usage
Once a digital identity is created and issued, it is time to start using it in electronic transactions. Digital identities are often being associated with the authentication process. The issued credentials are being used for this purpose. It is also becoming more and more common that electronic services provide personalized content based on identity attributes, and even to base access control decisions on certain identity attributes. The use of digital identities can vary from use on one separate service, to use on multiple services. Single-sign-on (SSO) is a concept where users obtain a security assertion after a successful authentication, and where this assertion is used as authentication evidence towards the subsequent services the user visits. SSO is commonly used in enterprise networks where employees' authentication provides them a security assertion (e.g. Kerberos tickets in Microsoft environments) that can be used to access their e-mail, file shares, intranet and so on. Federated single-sign-on is an extension of the SSO concept, where organizations can cooperate on technology, processes and policies for identity management. Federated SSO allows security tokens to be used to achieve single-sign-on across organizational borders.
Threats to the use phase There are many threats towards the use of digital identities. Access credentials can be lost, stolen or cracked so that attackers can authenticate, and thereby impersonate, valid users. There are many attack vectors used to obtain valid credentials. Communication lines can be intercepted to copy plaintext data, password files can be stolen and decrypted, social engineering can be used to trick users into giving away their credentials, and so on. The introduction of SSO and federated SSO has added to this complexity in that security assertions are issued based upon a successful authentication. This security assertion is stored by the client and used as proof of identity in subsequent service request. This means that an attacker can copy assertions and add them to malign service requests, or replay previously sent messages. If the receiving service trusts the assertions it will provide information as requested. Since authentication data (assertions) are shared across services in SSO and across services within different trust domains in federated SSO, the attack surface in weakly designed systems is highly increased compared to having separate systems. As already mentioned, RBAC-and ABAC-models allow taking access control decisions based on identity attributes. If attackers can modify attributes during transmission, they can be allowed to elevate their privileges by manipulating attributes. Another scenario is that attackers modify e.g. shipping address so that one user orders and pays the goods, which are then sent to the attacker's destination. The disclosure of identity attributes may also violate users privacy, or reveal company internal information.
Update
While some identity attributes are static, such as date of birth, eye color and hight, others can change over time. Employees' role in a company can change, people can move and change address, and credit cards, digital certificates and so on can expire. The identity management process must therefore include good procedures to keep identity attributes up to date to ensure their correctness. Identity adjustment, reactivation, maintenance, archive and restore are activities part of the identity update process [START_REF]Information technology -security techniques -a framework for identity management -part 1: Terminology and concepts[END_REF].
Threats to the update phase The threats related to the update phase are similar to those presented in the creation phase. Credentials can be copied or stolen and false attributes can be provided. In operative environments one can experience that the responsibility for identity creation and identity update are placed at different levels in the organization. While the human resource department may be responsible for creation of user identities e.g. in relation with a new employment, the responsibility for updating user profiles may lie at the IT-support department. Consequently, attackers can approach different parts of an organization to achieve the same goals. Attackers can also exploit weaknesses specific to the update procedures. Delays in the update procedure can allow users to access content based on old but still valid access credentials and attributes, and attacks towards update management interfaces can allow unauthorized reactivation of user accounts.
Revocation
Identities, including credentials should be revoked if they become obsolete and/or invalid [START_REF] Bertino | Identity Management -Concepts, Technologies and Systems[END_REF]. Revocation can be separated into identity attribute suspension and identity deletion [START_REF]Information technology -security techniques -a framework for identity management -part 1: Terminology and concepts[END_REF]. The former means that some or all identity attributes are made unavailable so that access rights associated with these attributes are made temporarily unavailable to the user. An example of this can be that the association between a user and a certain group membership is removed to reduce a user's access rights. Another is the deactivation of all access rights associated with a user. Identity deletion means the complete removal of registered identity information. Information about revocation should be distributed to all relevant stakeholders to ensure that access is not given based on invalid credentials.
Threats to the revocation phase Suspension and deletion of identity information can primarily be misused to block authorized users from accessing resources. Additionally, insufficient distribution of revocation lists to distributed services can allow attackers to use stolen identities even after the access rights have been formally revoked.
Governance
There is a need to have policies in place and govern all steps of the identity management lifecycle. Regarding creation of identities, for instance, there should be policies in place that regulate e.g. who can create identities, how they are created, how the quality of attributes can be assured, how credentials are issued and so on. Identity management governance is closely related to identity assurance and identity assurance levels, where requirements for all phases are specified.
Threats to identity management governance Password policies are among the policies that affect all phases of the identity management lifecycle, so we continue to use this as an example to illustrate the lack of, or weak, policies. Password policies should include requirements for password length, complexity and validity period. Non-existent or weak policies will allow users to associate their digital identities with insecure passwords. Weak passwords are easily being hacked e.g. through brute force attacks or guessing attacks. Insufficient password policies therefore lead to concerns whether an identity can be trusted or not. Non-existent or poor requirements for password change (update) and revocation also affect the trustworthiness of credentials. With infinite password lifetime, attackers can exploit compromised credentials as long as the user account is active. Policy incompliance means that policies exist, but that they are not being followed to e.g. due to lack of policy enforcement. It does not help to have password length and complexity requirements if the technical platform still allows users to select shorter and weaker passwords. Further, many users will continue to reuse their passwords after expiry, despite a policy stating that passwords are valid for 90 days and that reuse is not allowed. Lack of policies in other IdM areas will similarly lead to weaknesses that can be exploited.
Identity assurance frameworks
The previous section introduced the steps of the IdM lifecycle and threats that are relevant to each of these. Baldwin et al. [START_REF] Baldwin | On identity assurance in the presence of federated identity management systems[END_REF] state that identity assurance is concerned with the proper management of risks associated with identity management. Identity assurance contributes to ensure confidence in the vetting process used to establish the identity of the individual to whom the credential was issued, and confidence that the individual who uses the credential is the individual to whom the credential was issued [START_REF] Burr | Electronic authentication guideline[END_REF]. Identity assurance frameworks consider the threats associated with each IdM lifecycle phase, and specify security requirements to mitigate them.
Many governments around the world, including the Norwegian, the Australian and the US, have developed government strategies to provide online services to their citizens, and to make electronic communication between citizens and the public services a primary choice. There are several legal requirements that regulate such communication, and proper identity management and proper management of identity assurance levels are essential to fulfill them. Consequently, each of these governments have developed identity assurance frameworks: The Norwegian Framework for Authentication and Non-repudiation with and within the Public Sector (FANR) [START_REF]Framework for autehntication and non-repudiation in electronic communication with and within the public sector (norwegian title: Rammeverk for autentisering og uavviselighet i elektronisk kommunikasjon med og i offentlig sektor[END_REF], the Australian National e-Authentication Framework (NeAF) [START_REF]National e-authentication framework[END_REF] and the US National Institute of Standards and Technology (NIST) Electronic Authentication Guideline [START_REF] Burr | Electronic authentication guideline[END_REF].
Security requirements for each IdM lifecycle phase are bundled to form identity assurance levels; the higher the assurance level, the stricter requirements. Assurance levels can be seen as the levels of trust associated with a credential [START_REF] Soutar | Identity assurance framework: Overview[END_REF], and information about the assurance level of a digital identity can be used by service providers to determine whether they trust the identity presented to them or not. The US government, for instance, defines four identity assurance levels [START_REF]E-authentication guidance for federal agencies[END_REF]:
-Level 1: Little or no confidence in the asserted identity's validity -Level 2: Some confidence in the asserted identity's validity -Level 3: High confidence in the asserted identity's validity -Level 4: Very high confidence in the asserted identity's validity Identities that fulfill requirements at level 1 can be used to access content that has limited concerns regarding confidentiality, integrity and availability, while identities fulfilling level 4 requirements can be used to access assets at the highest classification level. This will balance needs for usability and security.
In Table 1 we provide a summary of the IdM lifecycle phases and activities we presented in section 2, and a third column to illustrate which of the lifecycle phases and activities each assurance framework cover2 . Our claim is that identity assurance frameworks should cover all phases, and all important activities of the IdM lifecycle to establish trustworthy IdM. Non-existence of requirements may lead to situations where identity risks are not being properly managed.
Discussion
As Table 1 illustrates, the most extensive assurance framework of the three we have investigated is the NIST Electronic Authentication Guideline. Both the Australian and the Norwegian frameworks have shortage of requirements for several of the IdM lifecycle activities. Madsen and Itoh [START_REF] Madsen | Challenges to supporting federated assurance[END_REF] state that if there are factors in one lifecycle activity causing low assurance, then this will determine the total assurance level, even if other areas are fully covered at higher assurance levels. In practice this means that even if services offered e.g. by the Norwegian government use authentication mechanisms that satisfy assurance level 4, the full service should be considered to satisfy assurance level 1 at best, since there are no requirements for use of SSO assertions (online services offered by the Norwegian government use federated single-sign-on). We will primarily use the Norwegian framework [START_REF]Framework for autehntication and non-repudiation in electronic communication with and within the public sector (norwegian title: Rammeverk for autentisering og uavviselighet i elektronisk kommunikasjon med og i offentlig sektor[END_REF] as example in the following discussion.
The Norwegian framework specifies requirements for the creation phase only targeted at credential delivery. Consequently, threats towards the other activities in the creation phase will not be mitigated unless the identity providers implement security controls specified outside the scope of the framework. There are theoretical possibilities that false identity attributes can be registered for a person, and that identities are created for persons with false aliases and so on since there are no common rules for identity proofing and attribute registration. One can also question the quality of created credentials if there are no further specifications regarding credential generation, including key lengths, password strengths and the like.
For the use phase there are requirements targeted at authentication activity. In isolation, the authentication requirements in the Norwegian framework seems to be sufficient in that the quality of the authentication mechanisms shall improve with increasing assurance levels. However, since the identity proofing and other credential quality requirements during the creation phase are not in place there is still a risk that credentials in use are of low quality, and therefore exposed to guessing attacks or brute force attacks. Further, the framework does not specify any protection requirements for use of assertions. If the assertions in SSO and federated SSO environments are not properly protected, an attacker can intercept the communication between a user and a public service, copy the assertion, and craft his own service requests with valid assertions included. In this way an attacker can impersonate a user without a need to know access credentials. None of the three investigated identity assurance frameworks specify requirements for the attribute sharing activity. Thomas and Meinel [START_REF] Thomas | An attribute assurance framework to define and match trust in identity attributes[END_REF] claim that a verification of an attribute might not be desired as long as a user is not involved in transactions that require it. As such, the lack of attribute sharing requirements may indicate that there is only a very limited set of attributes being shared in the government systems and that attributes are not being used as source for authorization decisions. If this is not true, however, Thomas and Meinel's advice to implement mechanisms to verify the quality and integrity of shared identity attributes should be followed [START_REF] Thomas | An attribute assurance framework to define and match trust in identity attributes[END_REF].
Both the Australian (NeAF) and US (NIST) frameworks cover important aspects of the identity update and revocation phases, except that they do not specify requirements on updating and suspending attributes. The reason for omitting such requirements may be similar to what we identified for attribute sharing in the use phase. The Norwegian framework, on the other hand, fails to target the update and revocation phases at large. Users of the Norwegian framework must therefore on an individual basis define security controls to mitigate the threats against the update and revocation phases.
All the government frameworks are developed to facilitate common identity management practices throughout government agencies, and reuse of authentication services or access credentials across online services offered by the governments. Based on the discussion above one can argue that this goal can be fulfilled by following NeAF and NIST guidelines. The Norwegian identity assurance framework [START_REF]Framework for autehntication and non-repudiation in electronic communication with and within the public sector (norwegian title: Rammeverk for autentisering og uavviselighet i elektronisk kommunikasjon med og i offentlig sektor[END_REF], on the other hand, has considerable limitations. The Norwegian framework states that "the factors used to separate between security levels [read: assurance levels] are not exhaustive.". This understatement is consistent with our analysis that shows there are many factors that are not considered at all. The consequence is that service providers independently need to fill in the gaps where the framework is incomplete. The probability that two independent organizations solves this task completely different is high. There are at least two challenges related to this:
-Specifications, policies and technical solutions will likely be inconsistent. This will result in lack of interoperability between systems, and thus prevent reuse of solutions.
-Each organization will specify different requirements and policies for each assurance level. It will be difficult to assess the assurance level against trustworthiness of the digital identities if there are no common definitions of what each assurance level include. Madsen and Itoh [START_REF] Madsen | Challenges to supporting federated assurance[END_REF] took at technical view to explain challenges related to identity assurance, and related technical interoperability issues. Our results show that challenges with identity assurance can be elevated to a higher level if identity assurance frameworks are not developed with an holistic view on the identity management lifecycle, i.e. it must be developed to include security requirements that mitigate current threats towards each lifecycle phase. The trust an entity will associate with a digital identity will depend on all the processes, technologies, and protections followed by the identity provider and on which the digital identity were based [START_REF] Madsen | Challenges to supporting federated assurance[END_REF]. That being said, the Norwegian Government and public administrations have had success with implementation of a common authentication service for the public sector. The main reason for this is that one common entity, the Agency for Public Management and eGovernment (Difi)3 , has been responsible for realization of a public authentication service (MinID/ID-porten). Norwegian public administrations can integrate their online services with this common authentication portal. The chance of having interoperable, federated SSO enabled, authentication services without this model would have been low without considerable efforts to improve the common Norwegian identity assurance framework, or without substantial coordination activities between the public services.
Conlcusion
The essence of information security is to protect confidentiality, integrity and availability of assets. To achieve this we need to know whether the entity re-questing an asset is authorized or not, and consequently we need to determine the identity of the requestor. Identity management defines the processes and policies to create, use, update and revoke digital identities. IdM is as such essential to ensure information security. Identity assurance frameworks specify requirements targeting the different phases of the identity management lifecycle, and are intended to specify and determine the trustworthiness of digital identities.
In this paper we studied the Norwegian Framework for Authentication and Non-repudiation in Electronic Communication with and within the Public sector, the Australian National e-Authentication framework, and the US Electronic Authentication Guideline as examples of existing identity assurance frameworks. We saw that these frameworks have considerable deviations in coverage when it comes to targeting security requirements towards the identity management lifecycle phases and activities. The paper illustrates the importance of specifying assurance frameworks that takes a holistic view of the identity management lifecycle and related threats.
Table 1 .
1 IdM Lifecycle and assurance framework coverage
IdM Lifecycle Lifecycle activity Framework coverage
Phase
FANR NeAF NIST
Credential delivery x x x
Create Identity proofing x x
Attribute registration x
Authentication x x x
Use Use of assertions (SSO/federated x
SSO)
Attribute sharing
Renew credential x x
Update Update attributes
Reactivate user account x x
Suspend attributes x
Revoke Delete identity x x
Distribute revocation lists x x
An x indicates that the framework includes requirements for the given activity, however, the completeness and quality of the requirements are not considered.
www.difi.no | 28,986 | [
"1003150"
] | [
"50794"
] |
01480255 | en | [
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480255/file/978-3-642-36818-9_52_Chapter.pdf | Alessandro Armando
Aniello Castiglione
email: [email protected]
Gabriele Costa
Ugo Fiore
email: [email protected]
Alessio Merlo
email: [email protected]
Luca Verderame
Ilsun You
email: [email protected]
Trustworthy Opportunistic Access to the Internet of Services
de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.
Introduction
The evolution of Web 2.0 as well as the spread of cloud computing platforms have pushed customers to use always more remote services (hosted in a cloud or a server farm) rather than local ones (installed on personal devices). Such paradigm shift has basically improved the role of the network connectivity. Indeed, the access to remote services as well as the user experience, strongly depends on the network availability and the related performances (i.e., QoS).
To get evidence of this, let us consider a set of cloud users travelling through an airport and needing to access remote services from their device (e.g. smartphone, tablets, laptop) to complete their job. Presently, telecommunications companies sell internet connection for fixed time slots inside the airport, by means of 3G or wireless connections. Thus, each of these cloud users is compelled to subscribe, individually, to such internet connections, thereby getting extra charge to access the remote service. Moreover, users often do not get through the purchased connection, using less bandwidth or disconnecting before the end of the time slot. Then, such scenario leads to a non-negligible waste of purchased resources and money that may be reduced whether proper architectural or software solutions would allow, for instance, cooperation and resource sharing among the cloud users.
In this paper, we cope with such problem by investigating the adoption of the Software Defined Networking (SDN) paradigm as potential solution to build and manage QoS-constrained and on demand connection among mobile devices. In particular, we describe the main issues arising when trying to orchestrate a group of mobile devices that participate in an opportunistic network. Besides the difficulty of finding valid orchestration, e.g., in terms of QoS, we also present the security concerns at both network and device level. Finally we introduce a case study illustrating how our assumptions apply to a real life web service.
This paper is structured as follows. In Section 2 we state the problem of orchestrating opportunistic, service-oriented networks. Then, Section 3 describes the main security issues arising in this context and how to deal with them. In Section 4 we present our case study and its features. Finally, in Section 5 we survey on some related works and Section 6 concludes the paper.
Problem Statement
A provider P of a service S relies on a network infrastructure implementing S. The implementation of S is designed to meet both functional, e.g., accessibility, QoS and responsiveness, and non functional, e.g., security and fault tolerance, requirements. Moreover, through proper testing procedures, evidences that the implementation of S complies with these requirements have been produced and collected by P . In order to access S, customers need a network enabled device, e.g., laptops, tablets and smartphones, that can connect to and interact with S (typically by means of a client application). This scenario is depicted in Figure 1a.
Clearly, when a suitable connection is not available, the customer has no access to S. In order to access S, customers might enable a new connection, e.g., by buying a (costly and slow) 3G or a (local) wifi connection from a connectivity provider. This approach requires an existing infrastructure to be present and, definitely, charges extra costs on customers that, possibly, already pay for S.
Recent technological trends have highlighted that mobile devices can share their connectivity by playing the role of an access point. This technology, known as tethering, exploits multiple connection paradigms, e.g., wifi, bluetooth and IR, to create local networks. For instance, a mobile device can use wifi tethering to share its 3G connection with a group of neighbors. Although a single device has serious limitations, e.g., battery consumption, computational overhead and bandwidth exhaustion, populated areas are typically characterized by the presence of many devices. Thus, a proper orchestration of (a group of) these devices can lead to more powerful and reliable networks.
Local area configuration
A customer C having no network connectivity is logically isolated from service S. However, the customer's device is physically surrounded by networked devices. These devices connect to one or more networks by means of different channels. Schematically, an instance of such a configuration is reported in Figure 1b.
Mobile agents. Mobile devices have heterogeneous hardware profiles, e.g., memory, computational power, presence/absence of bluetooth, etc. Also, their configuration can change over time, e.g., on device battery charge and position.
In general, we can consider each device to be a computational platform that can install and run (at least small) applications. Moreover, we assume that all the (enabled) devices run software supporting basic orchestration steps.
Communication protocols. Connected devices use different channels, e.g., bluetooth and wifi, to establish connections. These channels have different features and, in general, have been designed for different purposes. We call a pit a device having direct access to the internet. Hence, the devices must create a network where one or more pits are present. Other resources can be present in the network, e.g., computation and memory nodes, and they can be exploited for the service delivery.
Device contracts. Each device holds a precise description of its features and requirements, namely a contract. Contracts describe which kind of activities can be carried out by the device. Examples of entries of a contract are:
-Available disk space, i.e., the amount of memory that the device can offer.
-Available connections, i.e., channel types, bandwidth, etc. -Computational capacity, i.e., whether the device can carry out computation.
Each feature can be associated to a precise cost that must be paid to access/use it. Informally, we can see a contract as a list of predicates like: For instance, the first rule says that the device can connect to the internet in two different ways (i.e., 3G and WiFi) and describes the differences in costs and bandwidth. Instead, the meaning of the third clause is that the device can offer up to 2 GB of memory space at the given cost per MB. Also, the contract says that after 60 minutes stored data will be deleted.
Other devices can retrieve a contract and compare it against their requirements. Moreover, when a contract does not satisfy certain requirements, a negotiation process is started. Negotiation consists in proposing modifications to the original contract clauses. If the contract owner accepts the proposed corrections, a new, suitable contract is used in place of the previous, unfitting ones.
Network orchestration
Network orchestration plays a central role. Indeed devices must cooperate in a proper way in order to achieve the network goals. Among the recent proposals for the organization of networks, Software Defined Networking (SDN) is receiving major attention.
Software defined networking
The main feature of SDN is a clear distinction between control plane and data plane in network choreographies. Mainly this approach allows for exploiting centralised service logic for the network orchestration. Typically, network nodes take responsibility for both data transfer and network organization activities promiscuously. This behavior is acceptable when networks are composed by dedicated nodes, i.e., platforms (hardware and software) dedicated to the network management. However, under our settings we cannot expect to have homogeneity in nodes configurations. Indeed, nodes configurations may differ for many reasons, e.g., hardware, battery state, user's activities and security policies. Hence, we must expect that the network management is carried out by some dedicated devices in a partially distributed way.
Nodes offering advanced computational capabilities can take responsibility for the control activities. These activities include node orchestration, network monitoring and reaction to changes. Since nodes do not have any pre-installed orchestration software, mobile code must be generated and deployed dynamically. Figure 2 represents this process. Solid lines represent data links, i.e., channels used for service-dependent traffic. Instead, dashed lines denotes control channels, i.e., used for control activities. Control instructions are generated by the service provider being the entity holding the service logic. A control node receives a piece of software (jigsaw piece) that is responsible for managing the behavior of (part of) the network.
Security issues
Many security threats can arise during the recruiting, negotiation and execution phases. All the security aspects can appear at either (i) network level or (ii) service level. Below, we list and describe the main security issues showing whether they affect the first or second of these layers.
Network security
Devices in opportunistic networks build transient and goal-driven networks, thus behaving as peers. Hence, most of the security issues at network level resemble those of P2P networks. By joining an opportunistic network, each device gets potentially unknown neighbours and exchange data with them. In this context, confidentiality and integrity are major guarantees to provide to the final user, since information sent to the service S may be corrupted or intercepted by malicious devices. Device authentication is also required in order to recognize and isolate malicious devices.
Authenticity. Usually devices are uniquely characterized, e.g., by the MAC address or IMEI code. However, a strong authentication relating a device with a physical user is hard to achieve at this level. Also a global authentication in a network is hardly achievable, due to the lack of a central authority and the heterogeneity of device platforms. However, mutual and pairwise authentication between devices may be easily carried out. In this context, from the singledevice point of view the authentication is aimed at 1) allowing honest devices to recognize and isolate malicious ones, and 2) building temporary trust relationship between trusted and authenticated devices in order to share bandwidth, memory and disk resources. To meet these targets, the adoption of gossiping algorithms [START_REF] Boyd | Gossip algorithms: Design, analysis and applications[END_REF], combined with cooperative access control mechanisms [START_REF] Merlo | Secure cooperative access control on grid[END_REF] can be adopted.
Confidentiality. Message confidentiality is a main concern since a device reaches the service by sending data through unknown and untrusted devices, without the possibility to trace its own traffic. However, confidentiality at this layer can be granted by the use of secure channels built at higher layer. For instance, HTTPS channels established between the source device and the service are suffice to provide the required confidentiality through traffic encryption. Secrecy provided by HTTPS channels is not easily breakable, in particular for single devices in the networks.
Integrity. Ciphering data grants secrecy but does not prevent devices from tampering the traffic they receive. Thus, the use of integrity scheme can be envisaged in opportunistic networks. There exist integrity schemes based on shared keys and private/public key. The choice between shared key (e.g., MAC schemes [START_REF] Preneel | On the security of two mac algorithms[END_REF]) and public/private key schemes (e.g., DS schemes [START_REF] Bellare | The exact security of digital signatures-how to sign with rsa and rabin[END_REF] and Batch verification schemes [START_REF] Gasti | On the integrity of network codingbased anonymous p2p file sharing networks[END_REF]) depends on the contingency of the opportunistic network. Besides, the use of such schemes requires, at most, the installation of simple libraries or programs on the device.
Service level security
Here we can identify two groups of entities aiming at different security goals: (i) the service-customer coalitions and (ii) the control-data nodes.
Service-customer security. The service provider and its customers share a common goal, i.e., enabling the customer to access the service according to a given SLA. Among the clauses of the agreement, security policies prescribes how the service handles the customer's data and resources. In general, the provider can rely on a trusted service infrastructure. However, in our scenario the service is delivered by a group of, potentially untrusted, devices which extend the trusted infrastructure. Intruders could join the network and perform some disrupting attack, e.g., denial of service, also depending on the service typology.
On the other hand, the service can include security instructions in the code deployed on control nodes. In this way, control nodes can monitor the behavior of (a portion of) the network. Monitoring allows control nodes to detect intruders and, possibly, cut them off. Even more dangerously, the intruder could be a control node. In this case, the service can detect the misbehaving control node by directly monitoring its behavior. Control node monitoring can rely on other control nodes, including the (trusted) customer. Hence, a group of control nodes can isolate a malicious peer when detected. Still, control nodes collusion represent a major threat and mitigation techniques could be in order.
Nodes security. Data and control nodes have a different security goal. Since they join an orchestration upon contract acceptance, their main concern is about avoiding contract violations. Being only responsible for packets transmission, data nodes can directly enforce their contract via resources usage constraints.
Control nodes require more attention. As a matter of fact, they receive orchestration code from the server and execute it. The execution of malicious software can lead to disruptive access and usage of the resources of the device. Thus, a control node must have strong, possibly formal, guarantees that the mobile code is harmless, i.e., it respects the given contract.
A possible solution consists in running the received code together with a security monitor. Security monitors follow the execution of a program and, when they observe an illegal behavior, run a reaction procedure, e.g., they can throw a security exception. A security monitor comparing the mobile code execution against a given contract is an effective way to ensure that no violations take place. Although monitoring requires extra computational effort, lightweight implementations causing small overheads have been proposed, e.g., see [START_REF] Costa | Runtime monitoring for next generation java me platform[END_REF].
Another approach exploits static verification on to avoid run-time checks. Briefly, the service provider can run formal verification tools, e.g., a model checker, before delivering the mobile code. The model checker verifies whether the code satisfies a given specification, i.e., the contract, and, if it is not the case, returns an example of a contract violation. Instead, if the process succeeds, a proof of compliance is generated. Using proof-carrying code [START_REF] Necula | Proof-carrying code[END_REF] the proof is then attached to the code and, then, efficiently verified by the control node. A valid proof ensure that the code execution cannot violate the contract of the node.
Meeting at the airport: a case study
We consider the following scenario. A e-meeting service offers to its customers the possibility to organise and attend to virtual meetings. A meeting consists of a group of customers that use (i) a VoIP system for many-to-many conversations and (ii) file sharing for concurrently reading and writing documents.
Private companies buy annual licenses. Then, employees install a free client application on their devices and access the service using proper, company-provided credentials. Nevertheless, company employees use to travel frequently and, often, need to buy wireless access in airports and train stations. This practice causes extra, variable charges on the service usage.
Service requirements The two service components, i.e., VoIP and file sharing, have different features. Mainly, the VoIP service has precise constraints on transmission times in order to make the communication effective. In order to respect this constraint, the service can reduce the voice encoding (up to a minimal threshold) quality whenever slow connections risk to cause delays.
Instead, the file sharing system must guarantee that documents are managed properly. Roughly, users can acquire the ownership of a document and modify it until they decide to release the control. Each time a document is retrieved (submitted) it is downloaded from (uploaded to) a network repository database. Document loading and saving are not time critical operations, i.e., they can be delayed, but data consistency must be guaranteed.
Network structure In order to set up a suitable network, the client starts a recruiting procedure. Briefly, it floods with a request message its neighbours (up to a fixed hops number) and collects their contracts. If the set of received contracts satisfies preliminary conditions, e.g., sufficient nodes density ad existence of internet-enabled nodes, the negotiation process starts.
Negotiation requires interaction with the web service. To do this, at least one of the nodes having internet access must take responsibility for sending the negotiation information to the orchestration service. This information includes nodes contracts and topology description, i.e., nodes neighbours tables. The orchestrator check whether the network configuration satisfies minimal requirements and returns contract proposals for the control nodes. The nodes receive the negotiated contracts and decide whether to accept or reject it. If a contract is rejected, the process can be repeated 6 . When all the control nodes accept the proposed contracts the service send them a piece of software implementing part of the distributed orchestration algorithm. Each node verifies the validity of the received code and starts the orchestration procedure. The resulting network organization is depicted in the figure below.
Intuitively, each control node is responsible for coordinating the activities of a group of data nodes (rounded areas). Data nodes are responsible for transmitting network traffic and they are recruited and managed by control nodes. Also, control nodes must react to a plethora of possible events, e.g., topology changes, data and control node fall or performances decay.
Related Work
Many technologies are related to our model. Here we briefly describe those that, in our view, better apply to this context.
Just recently, software defined networking received major attention. Among the others, OpenFlow [START_REF] Mckeown | OpenFlow: enabling innovation in campus networks[END_REF][START_REF]OpenFlow Switch Specification[END_REF][START_REF] Tootoonchian | Hyperflow: a distributed control plane for openflow[END_REF] is the leading proposal of an implementation of SDN. Basically, OpenFlow allows network managers to programmatically modify the behavior of networks. Although it is currently applied to standard network infrastructure, i.e., standard routers and switches, this technology seems to be also suitable for mobile devices. Hence, we consider it to be a promising candidate for the implementation of orchestration tools.
Formal verification plays a central role under our assumptions and it appears at several stages. Mainly, contract-based agreements require formal analysis techniques for granting that implementations satisfy a contract. A standard method for this is model checking [START_REF] Clarke | Model checking[END_REF][START_REF] Baier | Principles of Model Checking (Representation and Mind Series)[END_REF]. However, also proof verification is crucial for allowing network nodes to check the proof validity when the source in an untrusted service. This step can be implemented by using proof-carrying code [START_REF] Necula | Proof-carrying code[END_REF].
Being a main concern, code mobility and composition environment must include proper security support. In particular, policies composition techniques must be included in any proposed framework. Local policies [START_REF] Bartoletti | Local policies for resource usage analysis[END_REF] represent a viable direction for allowing several actors to define their own security policies, apply them to a local scope and compose global security rules efficiently. Also, since our proposal is based on mobile devices technology, specific security solutions for mobile OSes must be considered. In this sense, in [START_REF] Armando | Formal modeling and reasoning about the Android security framework[END_REF] the problem of securing the Android platform against malicious applications has been studied.
Finally, also dynamic monitoring appear to be necessary for managing and re-organizing the network in case of necessity, e.g., upon failure discovery. A possible solution consists in using the approach presented by Bertolino et al. [START_REF] Bertolino | Towards a model-driven infrastructure for runtime monitoring[END_REF] for retrieving and collecting information about nodes behavior. Instead, for what concerns security monitoring, a possible approach is presented in [START_REF] Costa | Runtime monitoring for next generation java me platform[END_REF]. Since this proposal has been tested on resource limited devices, it seems a good candidate for avoiding computational loads on network nodes.
Conclusion
In this paper, we described the possibility of applying Software Defined Networking (SDN) paradigm as potential solution to build and manage opportunistic connection among mobile devices and web services. In particular, we described the main issues arising when trying to orchestrate devices that share the goal of implementing a QoS compliant network. Also, we considered the security issues deriving from such a model and possible approaches and countermeasures. Finally, we presented a case study that highlights the main aspects that must be considered under our assumptions.
(a) Standard access configuration. (b) Service unreachable to customer.
Fig. 1 .
1 Fig. 1. Mobile access to a web service.
NET.
Internet: 3G (Bandwidth: 3.2 MB/sec; Cost: 0.05 €/MB) + WiFi (Bandwidth: 14.4 MB/sec; Cost: 0.01 €/MB); LINK. Bluetooth: (Bandwidth: 720 Kb/sec; Cost: 0 €/sec); DISK. Space: 2 GB; Cost: 0.01 €/MB; Expiration time: 60'; CPU. Speed: 800 MHz; Cost: 0.02 €/sec;
Fig. 2 .
2 Fig. 2. Control and data nodes participating in an orchestration.
Fig. 3 .
3 Fig. 3. Orchestration providing opportunistic access to e-meeting.
We assume that nodes cannot reject a contract respecting its own original clauses. For instance a node offering 2 GB of disk space can reject a request for 3 GB but not one for 1 GB. | 23,560 | [
"1003157",
"1003158",
"1003159",
"1003160",
"1003161",
"1001090",
"993476"
] | [
"302889",
"302831",
"302889",
"544964",
"302889",
"487207",
"302889",
"487208"
] |
01480261 | en | [
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480261/file/978-3-642-36818-9_63_Chapter.pdf | Leandro Marin
email: [email protected]
Antonio J Jara
email: [email protected]
Antonio Skarmeta
email: [email protected]
Shifting Primes on OpenRISC Processors with Hardware Multiplier
Shifting primes have proved its efficiency in CPUs without hardware multiplier such as the located at the MSP430 from Texas Instruments. This work analyzes and presents the advantages of the shifting primes for CPUs with hardware multiplier such as the JN5139 from NX-P/Jennic based on an OpenRISC architecture. This analysis is motivated because Internet of Things is presenting several solutions and use cases where the integrated sensors and actuators are sometimes enabled with higher capabilities. This work has concluded that shifting primes are offering advantages with respect to other kind of primes for both with and without hardware multiplier. Thereby, offering a suitable cryptography primitives based on Elliptic Curve Cryptography (ECC) for the different families of chips used in the Internet of Things solutions. Specifically, this presents the guidelines to optimize the implementation of ECC when it is presented a limited number of registers.
Introduction
Internet of Things proposes an ecosystem where all the embedded systems and consumer devices are powered with Internet connectivity, distributed intelligence, higher lifetime and higher autonomy. This evolution of the consumer devices to more connected and intelligent devices is defining the new generation of devices commonly called "smart objects".
Smart objects are enabled with the existing transceivers and CPUs from the Wireless Sensor Networks (WSNs), i.e. CPUs highly constrained of 8 and 16 bits such as ATMega 128, Intel 8051, and MSP430 [START_REF] Davies | MSP430 Microcontroller Basics[END_REF]. But, since the level of intelligence and required functionality is being increased, some vendors are powering the consumer devices with CPUs not so constrained such as ARM 5 used in the SunSpot nodes [START_REF] Castro | Architecture for Improving Terrestrial Logistics Based on the Web of Things[END_REF] from Oracle Lab or the NXP/Jennic JN5139 used in the imote and recently in the first smart light from the market based on 6LoWPAN presented by GreenWave [START_REF] Hoffman | GreenWave Reality Announces Partnership with NXP, GreenWave Reality[END_REF].
These smart objects require a suitable security primitives to make feasible the usage of scalable security protocols for the application layer such as DTLS, which has been considered the security to be applied over the Constrained Application Protocol (CoAP) [START_REF] Shelby | Constrained Application Protocol (CoAP), Internet-Draft[END_REF] over IPv6 network layer [START_REF] Jara | GLoWBAL IPv6: An adaptive and transparent IPv6 integration in the Internet of Things[END_REF].
Specifically, CoAP and the Smart Energy profile for ZigBee alliance (SE 2.0) are considering DTLS 1.2 described in the RFC6347 [START_REF] Rescorla | RFC6347 -Datagram Transport Layer Security Version 1.2[END_REF]. This extends the ciphersuites to include the supported by hardware in the majority of the Wireless Sensor Networks transceivers, i.e. AES-128 in CCM mode. In addition, this includes Elliptic Curve Cryptography (ECC) for establishing the session.
Therefore, the challenge is in offering a suitable ECC implementation for the authentication and establishment of the sessions through algorithms such as DTLS.
ECC implementations have been optimized in several works of the state of the art. For example, it has been optimized for constrained devices based on MSP430 in our previous works. But, such as described, the market is not limited to these highly constrained devices therefore it needs to be evaluated how the special primes considered for very specific CPU architectures and conditions are performing for other CPUs.
This work presents the shifting primes and describes how they were used for the MSP430, then it is described the new architecture from the JN5139 CPU, and finally how the shifting primes continue being interesting for the implementation over this higher capabilities, in particular shifting primes offer a feature to carry out the reduction modulo p at the same time that it is carried out the partial multiplication in order to optimize the usage of the registers and consequently reach an implementation which is presenting the best performance from the state of the art for the JN5139 CPU.
Shifting Primes
Shifting primes are a family of pseudo-mersenne primes that were designed, in [START_REF] Marin | Shifting primes: Extension of pseudo-mersenne primes to optimize ecc for msp430-based future internet of things devices[END_REF], to optimize the ECC cryptography primitives when the CPU is not supporting a hardware multiplication. This type of constrained CPUs is the commonly used in sensors and actuators for home automation and mobile health products. For example, the low category of the MSP430 family from Texas Instrument [START_REF] Davies | MSP430 Microcontroller Basics[END_REF].
Similar primes to the shifting primes have been previously mentioned in [START_REF] Imai | A practical implementation of elliptic curve cryptosystems over GF(p) on a 16-bit microcomputer[END_REF], but they did not exploited its properties and applications. These new properties which are the used to optimize the implementation for constrained devices without hardware multiplier were described in [START_REF] Marin | Shifting primes: Extension of pseudo-mersenne primes to optimize ecc for msp430-based future internet of things devices[END_REF]. In addition, this work presents new features for its optimization in CPUs with hardware multiplication support.
A shifting prime p is a prime number that can be written as follows: p = u • 2 α -1, for a small u. In particular we are using for the implementations p = 200 • 2 8•19 -1. There are more than 200 shifting primes that are 160-bit long. The details about this definition can be seen in [START_REF] Marin | Shifting primes: Extension of pseudo-mersenne primes to optimize ecc for msp430-based future internet of things devices[END_REF] and [START_REF] Marin | Shifting primes: Optimizing elliptic curve cryptography for smart things[END_REF].
For the implementation of the ECC primitives is used the Montgomery representation for modular numbers. Thereby, computing x → x/2(p) is very fast even without a hardware multiplier when the shifting primes are used.
Operations using shifting primes can be optimized computing x → x/2 16 (p) instead of shifting one by one each step during the multiplication. By using this technique, MSP430 can make a single scalar multiplication within 5.4 million clock cycles in [START_REF] Marin | Shifting primes: Optimizing elliptic curve cryptography for 16-bit devices without hardware multiplier[END_REF].
But, the situation is rather different when the CPU supports hardware multiplication. For this situations, the use of the hardware multiplier through the offered instructions set performs better, since blocks of several bits can be multiplied within a few cycles, for example blocks of 16 bits for a CPU of 32 bits with a 16 bits multiplier such as the located at the JN5139 CPU from Jennic/NXP.
The following sections present as the implementation of the ECC primitives can be optimized for CPUs with hardware multiplier and the advantages that the shifting primes are offering for these high capability CPUs yet.
C and Assembler in JN5139
The ECC primitives implementation has been optimized for Jennic/NXP JN5139 microcontroller. The implementation is mainly developed in C, but there are critical parts of the code that require a more precise and low level control, and they have required the use of assembler. In particular, assembler has been used for the basic arithmetic (additions, subtractions and multiplications modulo p).
The target architecture of this chip is based on the OpenRISC 1200 instruction set, and it has been named "Beyond Architecture" or "ba". In particular, the basic instruction set for JN5139 is called "ba1" and the one for JN5148 is called "ba2". Some of the characteristics that are important in our implementation are:
1. 32 general purpose registers (GPRs) labeled r0-r31. They are 32 bits wide. Some of them are used for specific functions (r0 is constantly 0, r1 is the stack pointer, r3-r8 are used for function parameters and r9 is the link register This multiplies two registers of 32 bits, but this only stores the result in a single register of 32 bits. Therefore, in order to now loss information, it is only used the least significant part of the registers, i.e. 16 effective bits from the 32 bits, to make sure that the result fit into a single register of 32 bits.
5. The clock frequency of the JN5139 CPU is 16MHz.
Since the RISC principles from the CPU used, the instruction set is reduced, but this offers a very fast multiplication (within 3 clock cycles) with respect to the 16 bits emulated multiplication available in the MSP430 CPUs, which requires more than 150 cycles. Therefore, the support for the multiplication is highly worth for the modular multiplication performance. Such as mentioned, the multiplication offered by the JN5139 CPU is limited to the least significant 32 bits of the result, therefore this requires to limit the multiplication for its usage in the modular multiplications. This is important because the implementation carries out 16x16 multiplications to avoid information loss.
Another important characteristic is that there are available a high number of registers. Therefore, this allows to keep all the information in registers during the multiplication process, and consequently reduce the number of memory operations.
The following sections presents how the multiplication modular is implemented over the JN5139 CPU and how the shifting primes are optimized this implementation thanks to its suitability for the reduction modulo p, in order to reduce the total number of required registers.
Multiplication Algorithm
There are different options to compute the product a • b modulo p. The choice of one of them depends on the instruction set and the number of registers available. There are a lot of C implementations that could be optimal for some architectures, but they are rather inconvenient for other architectures.
The decisions considered for this implementation are dependent on this particular architecture, in terms such as the mentioned issues with the multiplication instruction, l .mul, which offers a multiplication of two 32-bit numbers, but only offers the least significant 32 bits as a result.
Let x and y, the two operands for the multiplication stored in 16 bits blocks with big endian memory. Then, x = 10 i=0 x i 2 16(10-i) and y = 10 i=0 y i 2 16 (10-i) . The basic multiplication algorithm requires to multiplicate each x i y j and add the partial result from each partial multiplication to the accumulator.
The result from the multiplication of the x i y j blocks is a number of 32 bits, which needs to me added to the accumulator. This addition to the accumulator requires previously the shifting of the result to the proper position regarding the index from the x i y j blocks, i.e., m ij = x i y j 2 16(10-i) 2 16(10-j) = x i y j 2 16(10-i-j) requires a shifting of 2 16(10-i-j) to be added to the accumulator.
Since the result from the multiplication of the x i y j blocks is a number of 32 bits, the result can be directly added only when i and j presents the same parity. This is mainly caused because m ij will be divisible by 2 32 , and consequently it will be aligned to a word (32 bits). Otherwise, when i and j are not presenting the same parity, m ij will be divisible by 2 16 and not by 2 32 , consequently the memory is not aligned and it cannot be operated.
A solution to this alignment problem is to realign the results m_ij when i + j is odd, but this requires two shift operations and the addition of both halves to the accumulator. This means 4 instructions instead of the single instruction needed when the result is aligned. Note that 50% of the multiplications are not aligned.
The solution proposed in this work to avoid these extra costs and their impact on performance is to define a second accumulator. One accumulator is used for the aligned additions, i.e. at positions of the form k·2^32, and the other one for the unaligned additions, i.e. at positions k·2^32 + 2^16. Realignment is thus only required at the end, when both accumulators are combined.
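To illustrate the idea outside the register-constrained setting, the following C sketch (our own code and naming, not the OpenRISC implementation, and without the modular reduction described below) routes word-aligned partial products to one accumulator and the 16-bit-offset ones to the other, merging them only once at the end.

    #include <stdint.h>
    #include <string.h>

    /* Add a 64-bit value into a little-endian byte array at a byte offset, propagating the carry. */
    static void add_at(uint8_t *r, size_t rlen, size_t byte_off, uint64_t v)
    {
        unsigned carry = 0;
        for (size_t k = byte_off; k < rlen && (v || carry); k++) {
            unsigned s = r[k] + (unsigned)(v & 0xFF) + carry;
            r[k]  = (uint8_t)s;
            carry = s >> 8;
            v   >>= 8;
        }
    }

    /* Schoolbook 160x160-bit multiplication with two accumulators.
     * x[], y[]: ten 16-bit blocks, big-endian (index 0 most significant).
     * out: 320-bit product as little-endian bytes. */
    static void mul_two_accumulators(const uint16_t x[10], const uint16_t y[10], uint8_t out[40])
    {
        uint64_t A[10] = {0};   /* A[w] collects products aligned at bit 32*w           */
        uint64_t B[10] = {0};   /* B[w] collects products offset by 16 bits (32*w + 16) */

        for (int i = 0; i < 10; i++) {
            for (int j = 0; j < 10; j++) {
                uint32_t p = (uint32_t)x[i] * y[j];   /* 16x16 -> 32 bits, never overflows    */
                int off = 16 * (18 - i - j);          /* bit position of this partial product */
                if (off % 32 == 0) A[off / 32] += p;            /* aligned   -> accumulator A */
                else               B[(off - 16) / 32] += p;     /* unaligned -> accumulator B */
            }
        }

        memset(out, 0, 40);                           /* single final merge: out = A + (B << 16) */
        for (int w = 0; w < 10; w++) {
            add_at(out, 40, 4 * (size_t)w,     A[w]);
            add_at(out, 40, 4 * (size_t)w + 2, B[w]);
        }
    }

In the actual implementation the same routing is obtained with plain l.add/l.addc sequences on registers, and the realignment is likewise deferred to the very end.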
The proposed solution has the inconvenience of requiring a high number of registers to store the second accumulator. In the particular case where the numbers are 160 bits long, each accumulator needs 320 bits in order to store the result of a 160x160 multiplication. Therefore 10 registers are required for a single accumulator, and consequently 20 registers for the two accumulators. In addition, one of the operands also needs to be stored in registers, i.e. 160 bits in 10 registers, 16 bits per register; note that only 16 bits are stored per register, even though 32 bits would fit, because of the previously described limitation of the hardware multiplication. In summary, 30 registers would be needed to keep the accumulators and one operand in registers, which is not feasible because additional registers are required for temporary values.
However, thanks to the features of the shifting primes, it is feasible to keep both accumulators in registers while still leaving the additionally required registers free.
Shifting primes allow the reduction modulo p to be carried out while each partial result is added to the accumulator. Thereby, the accumulated result stays within 160 bits (when p is a 160-bit number). This makes it possible to keep two accumulators of 160 bits, instead of the two 320-bit accumulators mentioned above. Therefore only 5 registers are required for each accumulator, i.e. 10 registers for both accumulators, which is feasible.
Let A and B be two accumulators of five 32-bit words each: A = A_0·0x10000^8 + A_1·0x10000^6 + A_2·0x10000^4 + A_3·0x10000^2 + A_4, and similarly B = B_0·0x10000^8 + B_1·0x10000^6 + B_2·0x10000^4 + B_3·0x10000^2 + B_4.
A is used for the aligned operations and B for the unaligned ones. Consider the operands x and y defined above. The first iteration multiplies the operand x by the block y9 and writes each partial product directly into the appropriate accumulator register, as follows:
        x0      x1      x2      x3      x4      x5      x6      x7      x8      x9
 × y9
  A:   x0y9            x2y9            x4y9            x6y9            x8y9
  B:           x1y9            x3y9            x5y9            x7y9            x9y9
The operand x is held in registers r12,...,r21, the accumulator A in r22,...,r26 and the accumulator B in r27,...,r31. The assembly code for this partial multiplication consists of 10 l.mul instructions, each requiring 3 cycles:

    l.mul r31, r20, r3
    l.mul r26, r21, r3
    l.mul r30, r18, r3
    l.mul r25, r19, r3
    l.mul r29, r16, r3
    l.mul r24, r17, r3
    l.mul r28, r14, r3
    l.mul r23, r15, r3
    l.mul r27, r12, r3
    l.mul r22, r13, r3
The next step is to multiply by y8, whose partial products are offset by 16 bits with respect to the previous ones. In principle this would require shifting the accumulator by 16 bits; instead, these partial results are simply accumulated into the other accumulator, so that the 16-bit offset between A and B only has to be resolved at the end. For the following multiplication, the accumulator A must be shifted by one block, i.e. 32 bits. For this operation, note that p = 0xc800·0x10000^9 - 1 and consequently 0xc800·0x10000^9 ≡ 1 modulo p; the least significant word A_4 can therefore be rewritten, giving:
A = A_0·0x10000^8 + A_1·0x10000^6 + A_2·0x10000^4 + A_3·0x10000^2 + A_4
  = A_0·0x10000^8 + A_1·0x10000^6 + A_2·0x10000^4 + A_3·0x10000^2 + A_4·0xc800·0x10000^9
A_4 can be split into two 16-bit blocks, A_4 = A_4^H·0x10000 + A_4^L, and A can then be written as:
A = (0xc800·A_4^H·0x10000^8 + A_0·0x10000^6 + A_1·0x10000^4 + A_2·0x10000^2 + A_3)·0x10000^2 + 0xc800·A_4^L·0x10000^9
The value 0xc800·A_4^L·0x10000^9 is moved to B, since it now has the adequate alignment to be combined with the other accumulator. The accumulator A can then be shifted by 32 bits simply by changing the roles of its registers. This change of register roles does not require any explicit instruction; it is only a programming matter, handled when the multiplication loop is unrolled. The 10 iterations of the loop are therefore unrolled (one iteration for each 16-bit block of the 160-bit operand).
Therefore, the register rotation is programmed directly in the code. Note that register r6 holds the value 0xc800 during the whole operation. The code is as follows:

    l.andi r8, r31, 0xffff
    l.mul  r8, r8, r6
    l.srli r31, r31, 16
    l.mul  r31, r31, r6
    l.add  r26, r26, r8
    l.addc r8, r7, r0
    l.slli r8, r8, 16
    l.add  r31, r31, r8
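At the level of the arithmetic, the fragment above can be summarized by the following C sketch (a simplified view with our own names; it does not reproduce the exact register assignment, which rotates from one unrolled iteration to the next):

    #include <stdint.h>

    /* Fold of one 32-bit accumulator word w for the shifting prime
     * p = 0xC800 * 0x10000^9 - 1, using 0xC800 * 0x10000^9 == 1 (mod p).
     * With w = wh*2^16 + wl:  w == 0xC800*wh*0x10000^10 + 0xC800*wl*0x10000^9 (mod p).
     * The first term is word-aligned and feeds the aligned accumulator; the second
     * is offset by 16 bits and feeds the other one. Both products fit in 32 bits. */
    typedef struct { uint32_t aligned_part; uint32_t offset_part; } fold_t;

    static fold_t fold_word(uint32_t w)
    {
        const uint32_t c = 0xC800u;           /* p = c * 2^144 - 1 */
        fold_t f;
        f.offset_part  = c * (w & 0xFFFFu);   /* 0xC800 * wl, re-injected at the 0x10000^9 position  */
        f.aligned_part = c * (w >> 16);       /* 0xC800 * wh, re-injected at the 0x10000^10 position */
        return f;
    }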
After this change of the accumulator, the multiplication by the block y8 is carried out and each partial product is added to the appropriate accumulator. The scheme is as follows (where + represents a plain addition and ⊕ an addition with carry):
        x0       x1       x2       x3       x4       x5       x6       x7       x8       x9
 × y8
  A:            ⊕x1y8             ⊕x3y8             ⊕x5y8             ⊕x7y8             +x9y8
  B:   ⊕x0y8             ⊕x2y8             ⊕x4y8             ⊕x6y8             +x8y8
Finally, in terms of assembly code this is:

    l.mul  r7, r20, r3
    l.add  r25, r25, r7
    l.mul  r7, r18, r3
    l.addc r24, r24, r7
    l.mul  r7, r16, r3
    l.addc r23, r23, r7
    l.mul  r7, r14, r3
    l.addc r22, r22, r7
    l.mul  r7, r12, r3
    l.addc r26, r26, r7
    l.addc r7, r0, r0
    l.mul  r8, r21, r3
    l.add  r31, r31, r8
    l.mul  r8, r19, r3
    l.addc r30, r30, r8
    l.mul  r8, r17, r3
    l.addc r29, r29, r8
    l.mul  r8, r15, r3
    l.addc r28, r28, r8
    l.mul  r8, r13, r3
    l.addc r27, r27, r8
The last carry values of both accumulators need to be added. The carry of A is shifted by 16 bits and added to the most significant part of B; the final carry of this operation is added to the carry of B:

    l.addc r8, r0, r0
    l.slli r8, r8, 16
    l.add  r26, r26, r8
    l.addc r7, r7, r0

Now the roles of the accumulators A and B are interchanged, together with a 32-bit shift of A. The pending carry is added to the most significant part of A.
This process is repeated 9 more times, until all the blocks y_i of the operand have been processed. At the end, the two accumulators are combined in order to obtain the final result.
Results and evaluation
The evaluation focuses on the multiplication modulo p, since it is the most critical part of the ECC algorithms. There is a very rich literature on how to implement ECC primitives on top of the basic modular arithmetic; for this purpose, a curve needs to be fixed and the point arithmetic implemented. A wide variety of curves, point representations and formulas for point addition and point doubling can be found in [START_REF] Bernstein | Explicit-formulas database[END_REF].
For the prime p = 200·256^19 - 1 we have chosen the Weierstrass curve y^2 = x^3 - 3x - 251. The number of points of this curve is p + 1257662074940094999762760, which is a prime number. We have chosen the parameter a = -3 in order to use the formulas for point addition and point doubling in Jacobian coordinates given in [START_REF] Bernstein | Explicit-formulas database[END_REF].
The time that we have considered as a reference is the time required for a key generation. This requires the selection of a random number s_K (the private key) and the computation of the scalar multiplication [s_K]G, where G is a generator of the group of points on the curve. The generator has been set up with the following coordinates: x_G = 0x9866708fe3845ce1d4c1c78e765c4b3ea99538ee and y_G = 0x58f3926e015460e5c7353e56b03dd17968bfa328
The time required for the scalar multiplication is usually expressed in terms of the time M required for a single modular multiplication. A standard reference is [START_REF] Cohen | Efficient elliptic curve exponentiation using mixed coordinates[END_REF], which gives 1610M for a 160-bit scalar multiplication. This result requires some precomputations and considers that computing a square is somewhat faster than a standard multiplication, 0.8M. We have written a rather optimized multiplication whose code requires around 2 KB. A separate squaring function would require roughly the same amount again, and the other precomputations would also increase the size of the program too much. We have therefore used an implementation that requires 2100M (1245 multiplications and 855 squares).
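As a simple cross-check of these figures (our own arithmetic, derived only from the numbers quoted above): 1245 + 855 = 2100, i.e. squarings are counted here at the full cost M; if they were counted at 0.8M, the same operation count would amount to 1245 + 0.8 × 855 ≈ 1929M, the remaining gap to the 1610M reference being plausibly due to the precomputations that are avoided here.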
Standard literature ignores the time required for operations other than the modular multiplication. In our case we have measured that the key generation spends 83.13% of its time doing modular multiplications; this is a large share, but not all of the time. Our biggest optimization effort has of course been devoted to this operation.
The following table presents the real time required for the key generation, for a single modular multiplication, and for 2100 of them. Since the CPU clock of the JN5139 is 16 MHz, 54.9 µs correspond to 878 clock cycles of real time. The theoretical number of cycles for executing the code is around 750; the real time is slightly higher than the theoretical one because external interrupts, cache misses and the pipeline can require some extra cycles.
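For orientation, and as an estimate derived only from the figures quoted above rather than a measured value: 54.9 µs × 16 MHz ≈ 878 cycles per modular multiplication, so the 2100 modular multiplications of one key generation take about 2100 × 54.9 µs ≈ 115 ms; if they account for 83.13% of the total time, the complete key generation takes roughly 115 ms / 0.8313 ≈ 139 ms.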
Conclusions and Future Work
The first conclusion is that the multiplication algorithm is highly dependent on the CPU architecture on which it is evaluated. From our previous experience, we have experimented with the MSP430 architecture, a 16-bit microcontroller without hardware multiplication support, with 16 registers and a limited instruction set. In this work we have evaluated an architecture with higher capabilities, specifically an OpenRISC-based architecture with 32-bit operations, hardware multiplication support, 32 registers with a very low cost per instruction, and an extended instruction set. These large differences make it very difficult to compare different implementations, unless they have been tested on the same architecture and exploit the same features. For example, an implementation of ECC for the MSP430 and the JN5139 can be found in [START_REF] Chatzigiannakis | Elliptic Curve Based Zero Knowledge Proofs and their Applicability on Resource Constrained Devices, Mobile Adhoc and Sensor Systems[END_REF], which presents a very low performance since it is implemented on top of the WiseBed operating system. Even though the same architecture is used, the results are very different because that implementation is not able to exploit the main benefits of the architecture.
The second conclusion is that shifting primes have proven to be very useful primes, offering an interesting set of properties for optimizing implementations on constrained devices. First, they were exploited for the MSP430 CPU thanks to their low number of bits set to 1, which simplifies some iterations of the multiplication when it is implemented through additions and shifts, as a consequence of the lack of hardware multiplication. Second, we have also shown how to exploit shifting primes on architectures that support hardware multiplication, such as the OpenRISC-based JN5139. The optimization for the JN5139 relies on their suitability for performing the reduction modulo p while each partial result is added to the accumulator, which makes it feasible to exploit the hardware multiplication with the available registers.
Finally, hybrid scenarios need to be considered, since in the different Internet of Things use cases it will be common to find a single solution with multiple CPUs in the sensors, actuators and controllers. For example, a JN5139 module could be found in the controller, since it offers higher memory and processing capabilities for managing requests from, and maintenance of, multiple nodes. However, the most common CPU for the sensors and actuators will be the MSP430, since it has a lower cost yet enough capabilities for their required functionality. It is therefore very relevant to have this kind of primes and implementations, such as the one presented in this work, which are feasible for devices with different capabilities. For that reason, our future work will focus on demonstrating a scenario where MSP430 and JN5139 devices are integrated into the same solution and both implement high-level security algorithms such as DTLS for CoAP, using certificates built with shifting-prime-based keys, thereby making them interoperable while exploiting the described optimizations.
Acknowledgment
This work has been carried out by the excellence research group "Intelligent Systems and Telematics" granted by the Foundation Seneca (04552/GERM/06). The authors would also like to thank the Spanish Ministry of Science and Education for the FPU program grant (AP2009-3981), the Ministry of Science and Innovation through the Walkie-Talkie project (TIN2011-27543-C03-02), the STREP European Projects "Universal Integration of the Internet of Things through an IPv6-based Service Oriented Architecture enabling heterogeneous components interoperability (IoT6)" from the FP7 with grant agreement no. 288445 and "IPv6 ITS Station Stack (ITSSv6)" from the FP7 with grant agreement no. 210519, and the GEN6 EU Project.
P-T-time-isotopic evolution of coesite-bearing eclogites: implications for exhumation processes in SW Tianshan

Zhou Tan, Philippe Agard, Jun Gao (email: [email protected]), Timm John, Jilei Li, Tuo Jiang, Léa Bayet, Xinshui Wang, Xi Zhang
Keywords: UHP metamorphism, eclogite, P-T path, pseudosection modeling, thermobarometry, zircon U-Pb, zircon oxygen isotope, exhumation pattern
The Chinese Southwestern Tianshan high-to ultra-high pressure low temperature (HP-UHP/LT) metamorphic belt exhibits well-preserved mafic layers, tectonic blocks/slices and boudins of different sizes and lithology embedded within dominant meta-volcanosedimentary rocks. Despite a wealth of previous studies on UHP relicts, P-T paths estimates and age constraints for metamorphism, controversies still exist on P-T-t assessments and regional exhumation patterns (i.e., tectonic mélange versus internally coherent "sub-belt" model). This study focuses on a group of coesite-bearing eclogite samples from a thick (~ 5 meters) layered metabasalt outcrop in order to unravel its detailed tectono-metamorphic evolution through space and time (both prograde, peak and exhumation). Using SIMS zircon U-Pb and oxygen isotope analyses, TIMS Sm-Nd multi-point isochron dating, in situ laser-ICP-MS trace-element analyses, classical thermobarometry and thermodynamic modeling, we link the multistage zircon growth to garnet growth and reconstruct a detailed P-T-time-isotopic evolution history for this UHP tectonic slice: from UHP peak burial ~ 2.95 ± 0.2 GPa, 510 ± 20 ℃ around 318.0 ± 2.3 Ma to HP peak metamorphism ~ 2.45 ± 0.2 GPa, 540 ± 20 ℃ at 316.8 ± 0.8 Ma, then, with eclogite-facies deformation ~ 2.0 ± 0.15 GPa, 525 ± 25 ℃ at 312 ± 2.5 Ma, exhumed to near surface within ca. 303 to ca. 280 Ma. Our P-T-time-isotopic results combined to the compilation of regional radiometric data and P-T estimates notably point to the existence of a short-lived period of rock detachment and exhumation (< 10 Ma, i.e. at ca. 315 ± 5 Ma) with respect to subduction duration.
Introduction
Mechanisms and processes responsible for the occasional recovery of negatively buoyant, ocean-derived high-to ultra-high pressure low temperature (HP-UHP/LT) eclogites equilibrated along the subduction plate interface and for their juxtaposition as tectonic slices or blocks during exhumation remain a matter of debate [START_REF] Agard | Exhumation of oceanic blueschists and eclogites in subduction zones: timing and mechanisms[END_REF][START_REF] Burov | Mechanisms of continental subduction and exhumation of HP and UHP rocks[END_REF][START_REF] Chen | Exhumation of oceanic eclogites: thermodynamic constraints on pressure, temperature, bulk composition and density[END_REF][START_REF] Federico | 39Ar/40Ar dating of high-pressure rocks from the Ligurian Alps: Evidence for a continuous subduction-exhumation cycle[END_REF][START_REF] Gerya | Exhumation of highpressure metamorphic rocks in a subduction channel: A numerical simulation[END_REF][START_REF] Guillot | Exhumation processes in oceanic and continental subduction contexts: a review[END_REF][START_REF] Klemd | Changes in dip of subducted slabs at depth: Petrological and geochronological evidence from HP-UHP rocks (Tianshan[END_REF]Lü et al., 2012;[START_REF] Warren | Exhumation of (ultra-)high-pressure terranes: concepts and mechanisms: Solid Earth[END_REF][START_REF] Warren | Modelling tectonic styles and ultra-high pressure (UHP) rock exhumation during the transition from oceanic subduction to continental collision[END_REF]. The Southwestern Tianshan Akeyazi HP-UHP/LT metamorphic belt potentially provides an interesting test example, with well-preserved mafic horizons or tectonic blocks/slices/boudins of different sizes (from the cm-to km-scale) embedded in volumetrically dominant meta-volcanosedimentary rocks (Gao and Klemd, 2003;[START_REF] Gao | PT path of high-pressure/low-temperature rocks and tectonic implications in the western Tianshan Mountains[END_REF][START_REF] Meyer | An (in-) coherent metamorphic evolution of high-pressure eclogites and their host rocks in the Chinese southwest Tianshan?[END_REF].
However, despite numerous previous works (on UHP relicts, detailed petrology, P-T estimates on isolated blocks/slices and time constraints on the timing of metamorphism; section 2), the area is still a matter of controversy as to (i) the exact P-T evolution and age of metamorphism, (ii) whether the metamorphic belt may be composed of two distinct HP and UHP slices and (iii) the regional exhumation pattern (tectonic mélange versus internally coherent "sub-belt" model).

The Chinese Southwestern Tianshan high- to ultrahigh-pressure low-temperature metamorphic complex extends for about 200 km along the Southwestern Central Tianshan Suture Zone (SCTSZ; Fig. 1). It is correlated with the Atbashi metamorphic complex in the Southwestern Tianshan Accretionary Complex [START_REF] Hegner | Mineral ages and PT conditions of Late Paleozoic high-pressure eclogite and provenance of mélange sediments from Atbashi in the south Tianshan orogen of Kyrgyzstan[END_REF] and the Fan-Karategin metamorphic belt [START_REF] Volkova | Geochemical discrimination of metabasalt rocks of the Fan-Karategin transitional blueschist/greenschist belt, South Tianshan, Tajikistan: seamount volcanism and accretionary tectonics[END_REF]. [START_REF] Gao | Paleozoic tectonic evolution of the Tianshan Orogen[END_REF] proposed that this HP-UHP/LT complex formed from the northward subduction of the South Tianshan ocean and subsequent collision between the already amalgamated Kazakhstan-Yili-Central Tianshan terrane, in the north, and the Tarim-Karakum plates in the south. Subduction polarity is still debated, however, with alternative suggestions of southward subduction (e.g. [START_REF] Lin | Palaeozoic tectonics of the south-western Chinese Tianshan: new insights from a structural study of the high-pressure/low-temperature metamorphic belt[END_REF].
The Southwestern Central Tianshan Suture Zone bounds to the north the Chinese section of the HP-UHP/LT metamorphic complex, known as the Akeyazi metamorphic complex. This contact, now a ~0.5 km wide sinistral strike-slip shear zone, was active from the late Permian to early Triassic [START_REF] Gao | The mineralogy, petrology, metamorphic PTDt trajectory and exhumation mechanism of blueschists, south Tianshan[END_REF]Gao andKlemd, 2000, 2003). To the north lies a LP-HT Palaeozoic active continental margin [START_REF] Allen | Palaeozoic collisional tectonics and magmatism of the Chinese Tien Shan[END_REF][START_REF] Gao | Paleozoic tectonic evolution of the Tianshan Orogen[END_REF]Klemd et al.,
2014), mainly made of amphibolite- and granulite-facies rocks, along with Late Silurian and Early Carboniferous continental arc-type volcanic and volcaniclastic rocks and granitoids (Gao and Klemd, 2003;[START_REF] Gao | Tectonic evolution of the South Tianshan orogen and adjacent regions, NW China: geochemical and age constraints of granitoid rocks[END_REF][START_REF] Xia | Zircon U-Pb ages and Hf isotopic analyses of migmatite from the "paired metamorphic belt[END_REF]. The Akeyazi metamorphic complex (AMC) is overlain to the southwest by unmetamorphosed Palaeozoic sedimentary strata representing the northern, passive continental margin of the Tarim plate [START_REF] Allen | Palaeozoic collisional tectonics and magmatism of the Chinese Tien Shan[END_REF][START_REF] Carroll | Late Paleozoic tectonic amalgamation of northwestern China: sedimentary record of the northern Tarim, northwestern Turpan, and southern Junggar basins[END_REF].
The Akeyazi metamorphic complex is predominantly composed of strongly schistosed meta-volcanosedimentary rocks hosting mafic metavolcanics, marbles and rare ultramafic rocks. Mafic metavolcanics are eclogites and/or blueschists showing gradual transitions or interlayering [START_REF] Li | Coexisting carbonate-bearing eclogite and blueschist in SW Tianshan, China: petrology and phase equilibria[END_REF]. They are distributed as pods, boudins, thin layers or massive blocks in the host rocks (Gao and Klemd, 2003). The AMC was interpreted by some as a tectonic mélange and thought to have formed in a subduction accretionary wedge during subduction of the Southwestern Tianshan ocean (Gao and Klemd, 2003;[START_REF] Gao | PT path of high-pressure/low-temperature rocks and tectonic implications in the western Tianshan Mountains[END_REF]). Both the metavolcanics and the matrix meta-volcanosedimentary rocks were variably retrogressed under blueschist and/or greenschist facies conditions.
Whole-rock geochemical data for the mafic metavolcanics suggest ocean basalt or arc-related affinities (Gao and Klemd, 2003;[START_REF] John | Trace-element mobilization in slabs due to non steady-state fluid-rock interaction: constraints from an eclogite-facies transport vein in blueschist[END_REF][START_REF] Klemd | New age constraints on the metamorphic evolution of the highpressure/low-temperature belt in the western Tianshan Mountains[END_REF]. A recent study indicated that some eclogite boudins also have a
continental arc affinity protolith, possibly originating from the basement of a Palaeozoic continental arc setting (Liu et al., 2014a).
Previous age constraints on the P-T evolution of tectonic slices/blocks
Most peak metamorphic estimates for eclogites and prograde blueschists (differences are largely controlled by lithology) yield eclogite-facies HP-LT conditions within the range 480-580℃ and 1.5-3.0 GPa [START_REF] Beinlich | Trace-element mobilization during Ca-metasomatism along a major fluid conduit: Eclogitization of blueschist as a consequence of fluid-rock interaction[END_REF][START_REF] Gao | PT path of high-pressure/low-temperature rocks and tectonic implications in the western Tianshan Mountains[END_REF][START_REF] John | Trace-element mobilization in slabs due to non steady-state fluid-rock interaction: constraints from an eclogite-facies transport vein in blueschist[END_REF][START_REF] Klemd | P -T evolution of glaucophane-omphacite bearing HP-LT rocks in the western Tianshan Orogen, NW China:new evidence for 'Alpine-type' tectonics[END_REF][START_REF] Li | Poly-cyclic Metamorphic Evolution of Eclogite: Evidence for Multistage Burial-Exhumation Cycling in a Subduction Channel[END_REF][START_REF] Meyer | An (in-) coherent metamorphic evolution of high-pressure eclogites and their host rocks in the Chinese southwest Tianshan?[END_REF][START_REF] Soldner | Metamorphic P-T-t-d evolution of (U)HP metabasites from the South Tianshan accretionary complex (NW China) -Implications for rock deformation during exhumation in a subduction channel: Gondwana Research[END_REF][START_REF] Wei | Eclogites from the south Tianshan, NW China: petrological characteristic and calculated mineral equilibria in the Na2O-CaO-FeO-MgO-Al2O3-SiO2-H2O system[END_REF]. A range of P-T conditions of 570-630℃ at 2.7-3.3 GPa was obtained for eclogite-facies micaschists [START_REF] Wei | Metamorphism of high/ultrahigh-pressure pelitic-felsic schist in the South Tianshan Orogen, NW China: phase equilibria and P-T path[END_REF][START_REF] Xin | Petrology and U-Pb zircon dating of coesite-bearing metapelite from the Kebuerte Valley, western Tianshan[END_REF], and 470-510℃ at 2.4-2.7 GPa for eclogites (e.g. Meyer et al., 2016, see Table. S2 for references). Evidence for UHP metamorphism comes from both relict coesite inclusions in garnet in several localities (stars in Fig. 1b) and thermodynamic pseudosection modeling [START_REF] Lü | Coesite inclusions in garnet from eclogitic rocks in western Tianshan, northwest China: convincing proof of UHP metamorphism[END_REF](Lü et al., , 2009;;Lü et al., 2012;[START_REF] Tian | Metamorphism of ultrahigh-pressure eclogites from the Kebuerte Valley, South Tianshan, NW China: phase equilibria and P-T path[END_REF][START_REF] Wei | Metamorphism of high/ultrahigh-pressure pelitic-felsic schist in the South Tianshan Orogen, NW China: phase equilibria and P-T path[END_REF]. 
The spread of P-T estimates [START_REF] Du | Lawsonite-bearing chloritoid-glaucophane schist from SW Tianshan, China: Phase equilibria and P-T path[END_REF][START_REF] Gao | The mineralogy, petrology, metamorphic PTDt trajectory and exhumation mechanism of blueschists, south Tianshan[END_REF][START_REF] Gao | PT path of high-pressure/low-temperature rocks and tectonic implications in the western Tianshan Mountains[END_REF][START_REF] Klemd | P -T evolution of glaucophane-omphacite bearing HP-LT rocks in the western Tianshan Orogen, NW China:new evidence for 'Alpine-type' tectonics[END_REF][START_REF] Li | A common high-pressure metamorphic evolution of interlayered eclogites and metasediments from the "ultrahigh-pressure unit"of the Tianshan metamorphic belt in China[END_REF][START_REF] Li | Poly-cyclic Metamorphic Evolution of Eclogite: Evidence for Multistage Burial-Exhumation Cycling in a Subduction Channel[END_REF][START_REF] Li | Coexisting carbonate-bearing eclogite and blueschist in SW Tianshan, China: petrology and phase equilibria[END_REF]Lü et al., 2009;Lü et al., 2012;[START_REF] Tian | Metamorphism of ultrahigh-pressure eclogites from the Kebuerte Valley, South Tianshan, NW China: phase equilibria and P-T path[END_REF][START_REF] Wei | Eclogites from the south Tianshan, NW China: petrological characteristic and calculated mineral equilibria in the Na2O-CaO-FeO-MgO-Al2O3-SiO2-H2O system[END_REF] may a priori arise from contrasting assumptions for thermodynamic modeling (and/or difficulties in determining Fe 3+ content and assessing H 2 O activity) or from the complexity of
metamorphic evolutions in individual HP-UHP tectonic slices.
The exhaustive compilation of age data (location on Fig. 1b), and the comparison of previous age data versus their assessed P max estimates (figures 2a and 2b), evidence a considerable spread in ages too, with an overlap between eclogite and blueschist ages, whatever the protolith (Fig. 2a). The timing of peak metamorphic conditions falls in the range 325-305 Ma. Garnet growth by multi-point Lu-Hf isochron was dated at ca. 315 Ma [START_REF] Klemd | Changes in dip of subducted slabs at depth: Petrological and geochronological evidence from HP-UHP rocks (Tianshan[END_REF], for both eclogites and blueschists from a variety of valleys within AMC.
U-Pb SIMS ages from metamorphic zircon rims in eclogites are indistinguishable within error, at 319 ± 3 Ma and 321 ± 2 Ma (Liu et al., 2014a;[START_REF] Su | U-Pb zircon geochronology of Tianshan eclogites in NW China: implication for the collision between the Yili and Tarim blocks of the southwestern Altaids[END_REF] and similar to a U-Pb age of 318 ± 7 Ma obtained for eclogite-facies rutile [START_REF] Li | SIMS U-Pb rutile age of low-temperature eclogites from southwestern Chinese Tianshan[END_REF]. [START_REF] Du | A new PTt path of eclogites from Chinese southwestern Tianshan: constraints from PT pseudosections and Sm-Nd isochron dating[END_REF] also reported a suite of relative consistent Sm-Nd isochron ages of 309 ± 4.6 M, 306 ± 15 Ma and 305 ± 11 Ma for eclogites from the Habutengsu river (Fig. 1b). An age of 317 ± 5 Ma was obtained on high-pressure veins crosscutting a blueschist wall-rock, interpreted as the prograde dehydration-related transformation of blueschist to eclogite (Rb-Sr multi-point isochron, [START_REF] John | Volcanic arcs fed by rapid pulsed fluid flow through subducting slabs[END_REF]. Recent Sm-Nd and Lu-Hf isochron ages of 318.4 ± 3.9 Ma and 326.9 ± 2.9 Ma on blueschists [START_REF] Soldner | Metamorphic P-T-t-d evolution of (U)HP metabasites from the South Tianshan accretionary complex (NW China) -Implications for rock deformation during exhumation in a subduction channel: Gondwana Research[END_REF] were interpreted as peak eclogite-facies and prograde metamorphism, respectively. Post-peak cooling was constrained by white mica K-Ar, Ar-Ar and Rb-Sr ages at around 310 Ma [START_REF] Klemd | New age constraints on the metamorphic evolution of the highpressure/low-temperature belt in the western Tianshan Mountains[END_REF]. Ages <
280 Ma or > ~325 Ma were considered by most authors as resulting from limitations of isotopic dating (e.g. excess Ar in the Ar-Ar system; Nd disequilibrium in the Sm-Nd system; difficulties in relating zircon U-Pb ages to metamorphic stages)
or taken as evidence for distinct HP-UHP episodes.
Controversy on regional exhumation
The Akeyazi metamorphic complex is either interpreted as a tectonic mélange or as made of two main units. In the first interpretation, mafic slices/blocks derived from different depths (UHP and HP conditions) were juxtaposed and mixed during exhumation in a meta-volcano-sedimentary subduction channel-like setting [START_REF] Klemd | Changes in dip of subducted slabs at depth: Petrological and geochronological evidence from HP-UHP rocks (Tianshan[END_REF][START_REF] Li | Poly-cyclic Metamorphic Evolution of Eclogite: Evidence for Multistage Burial-Exhumation Cycling in a Subduction Channel[END_REF][START_REF] Meyer | An (in-) coherent metamorphic evolution of high-pressure eclogites and their host rocks in the Chinese southwest Tianshan?[END_REF]. Despite indications that rocks may have partly re-equilibrated with fluids in equilibrium with serpentinites (van der Straaten et al. [START_REF] Van Der Straaten | Blueschist-facies rehydration of eclogites (Tian Shan, NW-China): Implications for fluid-rock interaction in the subduction channel[END_REF], 2012), serpentinites, which can act as buoyant material during exhumation processes [START_REF] Guillot | Tectonic significance of serpentinites[END_REF], are extremely rare [START_REF] Shen | UHP Metamorphism Documented in Ti-chondroditeand Ti-clinohumite-bearing Serpentinized Ultramafic Rocks from Chinese Southwestern Tianshan[END_REF].
Meta-volcanosedimentary rocks could also act as buoyant material and promote the exhumation of denser, negatively buoyant oceanic HP-UHP/LT rocks in a subduction channel [START_REF] Gerya | Exhumation of highpressure metamorphic rocks in a subduction channel: A numerical simulation[END_REF].
In contrast, the coherent sub-belt model (Lü et al., 2012), based on several individual UHP occurrences in the northern part of the AMC and the prevalence of blueschist facies rocks without UHP "signal" in the south, considers that the
AMC consists of two internally coherent metamorphic "sub-belts": a UHP belt in the north and a HP belt in the south, separated by a major fault contact (only inferred at present) and later juxtaposed during exhumation.
Sample selection and whole-rock composition
Samples were collected from the second eastern tributary up the Atantayi valley (Fig. 1b) in a ~six-meter-thick, layered eclogitic mafic slice. In order to reconstruct its tectono-metamorphic evolution (pro- and retrograde) in detail, three fresh eclogite samples were chosen from the core of the thick-layered eclogite outcrop (Fig. 3; oriented samples: GJ01-6, 11AT06-2; for geochemistry: 11AT06-1) and one garnet-clinopyroxene-quartz bearing micaschist from the meta-volcanosedimentary host-rock (GJ01-1). Considering the relatively low Zr content (only ca. 50 ug/g) of the metabasalts (Table 2), about 100 kilograms of eclogite were sampled for the zircon study and approximately 500 zircon grains were collected and mounted (detailed processing is given in the Analytical methods in the Appendix).
Bulk- and trace-element geochemical data are shown in Table 2 and Figure 4. Loss on ignition ranges between 1.8 and 3.3 wt.%, in agreement with modal abundances of hydrous minerals such as amphibole (5 to 8 vol.%).
Chondrite- and primitive mantle-normalized [START_REF] Sun | Chemical and isotopic systematics of oceanic basalts: implications for mantle composition and processes[END_REF] rare earth element (REE) and trace-element patterns of the eclogites are shown in Fig. 4.
Petrology
Mineral occurrences and modal abundances are given in Table 1. Mafic samples are true eclogites with > 70 vol.% of garnet and omphacite [START_REF] Carswell | Eclogite facies rocks[END_REF]. Mineral constituents of the eclogite samples are garnet, omphacite, epidote-group minerals, paragonite, blue amphibole (rimmed in places by
blue-green amphibole), quartz, as well as rutile/titanite, calcite and phengite (Figs. 5a-f) in subordinate amounts. The fraction of hydrous minerals (especially amphibole) increases in the vicinity of the host-rock.
Garnet in sample 11AT06 occurs as idioblastic porphyroblasts (~0.4 to 2.5 mm in diameter; Figs. 5d, 5e) in a medium-grained omphacite matrix. The inclusion-rich cores (Fig. 5e) host omphacite, epidote, chlorite, paragonite, quartz and glaucophane/barroisite (Fig. S2). Box-shaped inclusions of aggregates of paragonite and epidote could represent pseudomorphs after lawsonite (Fig. S2). Coesite inclusions (Figs. 7b,c) were found in the garnet mantle (Fig. 7d), where inclusions are much less abundant than in the core.
Omphacite occurs as small subhedral-anhedral grains (~0.05 to 0.1 mm across), either as the main matrix phase or as inclusions in garnet, epidote and paragonite (Figs. 5b,5d,5f,5i). Epidote is subhedral and contains inclusions of omphacite, quartz and rutile (Fig. 5d). Blue amphibole is found as inclusions in some garnets (Fig. 5h) and in the core of late-stage blue-green amphibole (Fig. 5g). Rutile is present both in garnet porphyroblasts (Fig. 5e), paragonite and amphibole grains as armored relicts, and in the matrix (Fig. 5g), where it is replaced by helicitic rims of titanite (Fig. S1). Retrograde albite is rare and xenoblastic. Chlorite replaces or cuts across garnet (Fig. 5e).
In oriented eclogite samples (11AT06-2 and GJ01-6), omphacite defines a weak to moderate foliation, and aggregates of epidote-group minerals and paragonite are aligned along the matrix foliation (Figs. 5f, 6a, 6b).
Mineral chemistry
A selection of representative EPMA analyses is provided in Table . 3.
Mineral abbreviations are after [START_REF] Whitney | Abbreviations for names of rock-forming minerals[END_REF].
Garnet
Garnet porphyroblasts in eclogite sample 11AT06-1 exhibit systematic compositional zoning (e.g., Fig. 7e) with a core-mantle increase in pyrope (X prp )
and a slight decrease in spessartine and grossular content (X sps and X grs ). The almandine (X alm ) profile is fairly constant from core to mantle then decreases steadily towards the outermost part of the rim (Fig. 7e). Second-order fluctuations in X grs and X sps but also in X alm can be observed (Fig. 7e). Garnet profiles may show a rim-outer rim coeval increase in X alm and decrease in X prp , which is generally attributed to retrograde reequilibration by diffusion [START_REF] Wei | Metamorphism of high/ultrahigh-pressure pelitic-felsic schist in the South Tianshan Orogen, NW China: phase equilibria and P-T path[END_REF].
Garnet zoning in eclogite is highlighted by seven spots along the EMPA profile (Fig. 7e), which are used in pseudosection modeling (section 8.2).
Overall, garnet composition changes from core to mantle and rim from Alm 67 Py 10 Grs 20 Sps 4 to Alm 70 Py 12 Grs 18 Sps 1 to Alm 57 Py 21 Grs 22 Sps 0.3 (Table . 3).
These average values correspond to the Grt-C, Grt-M1 to M2 and Grt-M5 to R zones defined in Fig. 7d, and to the approximate location of the LA-ICPMS trace element analyses of the garnet core, mantle and rim.
Omphacite
Clinopyroxene is always omphacitic, but inclusions in garnet and matrix clinopyroxene show distinct compositional variations (Fig. 7f). Inclusions have a lower jadeite content (31-45 mol.%, average 38 mol.%) and a higher total FeO content (4.3-12.9 wt.%, average 7.7 wt.%) than matrix omphacite (40-51 mol.% Jd and 2.5-6.5 wt.% FeO, respectively). Both have similar values of
Na N /(Na N +Ca N ) (~0.48; Table . 3). Omphacite inclusions in paragonite lie on the upper bound for jadeite content and Na N /(Na N +Ca N ) but on the lower bound for the total FeO content.
Amphibole
Amphibole in the layered eclogite is either glaucophane or barroisite (Fig. 7g). Glaucophane occurs as euhedral porphyroblasts (~0.5 mm across) with barroisite rims (Fig. 5g). Both glaucophane and barroisite appear as inclusions in garnet, with a wide range of compositions for the latter, across the magnesio-katophorite field [START_REF] Leake | Nomenclature of amphiboles[END_REF] and a systematic decrease in (Na) M4 (Fig. 5h). Glaucophane inclusions have higher Fe 2+ /(Fe 2+ +Mg) and Fe 3+ /(Fe 3+ +Al) ratios. Some amphibole grains have a clear zonation, with glaucophane in the core and barroisite in the rim, similar to the Tianshan garnet-omphacite blueschist samples described by [START_REF] Klemd | P -T evolution of glaucophane-omphacite bearing HP-LT rocks in the western Tianshan Orogen, NW China:new evidence for 'Alpine-type' tectonics[END_REF].
White mica
Paragonite and minor phengite appear as matrix grains (~0.02 mm across) or as inclusions in garnet porphyroblasts (Fig. S2). Both of them exhibit a similar Si content (~3.43 p.f.u.).
Epidote-group minerals
Epidote-group minerals occur as 0.5-0.7 mm large porphyroblasts or as subhedral grains close to garnet porphyroblasts (Figs. 5c,5d,5e).
Porphyroblasts are randomly oriented in the matrix with mineral inclusions of omphacite and rutile (Fig. 5c). The pistacite (=Fe 3+ /(Fe 3+ +Al)) contents of epidote range from 0.09 to 0.17 (Table . 3).
Chlorite
Chlorite appears as a matrix mineral, as a filling of fractures in garnet porphyroblasts, or as inclusions in garnet (Table 3).
Rutile/Titanite
Rutile appears as acicular or irregular crystal inclusions in paragonite (Fig. 5g), glaucophane/barroisite (Fig. 5g), garnet (Fig. S2) and matrix minerals (Fig. S1). A titanite armor around rutile is observed in both matrix omphacite (Figs. S1a, S1b) and quartz (Figs. S1c, S1d). The TiO2 and Al2O3 contents of titanite are respectively about 37.9 wt% and 2.11 wt% (Table 3).
Trace-element pattern of zircon and garnet
Zircon in the layered eclogite occurs as 30 to 50 µm grains in the matrix (Fig. 7a; see also Figs. 8a, 8e and S2). A light-luminescent outer rim can also be distinguished in many grains (e.g., Fig. 8 and Fig. S3). A metamorphic origin may be attributed to these zircon domains when considering the < 0.01 Th/U ratios and the presence of diagnostic HP minerals, such as omphacite and rutile, detected via Raman spectroscopy (with peaks at 682 cm-1 and 609 cm-1, respectively; Figs. S3m, S3n). Quartz was observed but never coesite.
Trace-element pattern and oxygen isotopes of zircon
Based on differences in cathodoluminescence (CL) images and chondrite-normalized REE patterns (and U-Pb chronology, which will be discussed in section 7.1), four zircon domains (or growth stages) have been defined from core to rim: (1) the zircon core, (2) a zone enriched in HREE, (3) the zircon mantle and (4) the zircon rim (Table 4).
The REE patterns of zircon rims differ markedly from those of the former three zones by a steeper MREE to HREE distribution (with (Lu/Gd)N ~ 10 and (Lu/Tb)N ~ 34, Table 4) and the absence of a HREE plateau. Their Y, P and Th contents (~48, 21 and 0.17, respectively), as well as their U content, are also much lower than in the former three zircon domains.
Trace-element pattern of garnet
Laser spots for the analysis of garnet core, mantle and rim were placed so as to correspond to the EPMA analyses of the C, M1 to M2 and M5 to R areas (shown in Fig. 7d).
Chondrite-normalized REE patterns of garnet core, mantle and rim (Fig. 9b) are broadly similar (and comparable to the REE distributions within zircon), with relatively high and flat HREE content and relative depletion in MREE and LREE.
Garnet cores show the highest HREE enrichment with (Lu/Sm) N ~ 12, compared to the mantle domain ((Lu/Sm)N ~ 2). Garnet mantle shows the highest LREE and a slightly negative slope for HREE. Garnet rims show the strongest depletion in MREE and a slightly lower HREE content compared to that of mantle.
Geochronology and oxygen isotope
U-Pb dating and isotopes of zircon
Fifty spots on twenty-seven zircon grains from eclogite sample 11AT06 have been analyzed to unravel the oxygen isotope composition and date the four growth domains. Common Pb was corrected following [START_REF] Williams | U-Th-Pb geochronology by ion microprobe[END_REF], with the terrestrial Pb isotope composition from [START_REF] Stacey | Approximation of terrestrial lead isotope evolution by a two-stage model[END_REF]; the corrected ages are consistent with the lower intercept age within errors.
By contrast, oxygen isotope δ 18 O compositions measured in the four zircon domains are very similar (Fig. 10b; Table . 6), with most values ranging between 8.9 ‰ and 9.5 ‰ (Fig. 10b; mean value = 9.13 ± 0.09 ‰, MSWD = 5.6, n = 56;
Table. 6).
Sm-Nd isotopic chronology
Whole rock, omphacite and garnet (including core and rim) Sm-Nd isotopic data of the oriented eclogite sample 11AT06-2 are given in Table 7.

P-T estimates of 495 ± 20 ℃ and 2.75 ± 0.05 GPa were derived assuming equilibrium between garnet cores and inclusions of omphacite and amphibole (i.e., red square #1 in Fig. 7d). Equilibrating omphacite inclusions and the garnet mantle (red square #2 in Fig. 7d), in the vicinity of a coesite inclusion, yields higher P-T estimates of 520 ± 15 ℃ and 3.0 ± 0.75 GPa but with considerable pressure uncertainties. Lower P but higher T values, around 570 ± 35 ℃ and 2.4 ± 0.3 GPa, are obtained for garnet rims, considering an omp-grt-ph-amp equilibrium assemblage (i.e., red square #3: Fig. 7d; Table S1). Retrograde conditions were estimated from the mineral assemblage amphibole + paragonite + epidote + chlorite + feldspar (albite) at 380 ± 50 ℃ and 1.0 ± 0.2 GPa, and 420 ± 25 ℃ and 0.9 ± 0.1 GPa.
Phase equilibrium modeling
Pseudosection modeling for sample 11AT06-1 was performed in the system MnNCKFMASHTO (Tables 1, 2 and 3), with excess SiO2 (i.e., quartz or coesite). TiO2 must be considered due to the presence of rutile and/or titanite in the matrix or as inclusions in porphyroblasts. The fluid phase is assumed to be pure H2O and was set in excess. CO2 was neglected as only small amounts of carbonate occur as thin secondary veins. Fe2O3 was set at 22.5 mol% of total FeO according to XRF data (Table 2).
In order to take into account the sequestration of elements induced by the growth zoning of garnet porphyroblast, effective bulk compositions were
adjusted from XRF compositions by removing part of the garnet modal abundance, following the method of [START_REF] Carson | Calculated mineral equilibria for eclogites in CaO-Na 2 O-FeO-MgO-Al 2 O 3-SiO 2-H 2 O: application to the Pouébo Terrane, Pam Peninsula[END_REF]. For the modelling of prograde and peak conditions (EBC-1), half of the modal abundance of zoned garnet [START_REF] Warren | Oxidized eclogites and garnet-blueschists from Oman: P-T path modelling in the NCFMASHO system[END_REF][START_REF] Wei | Metamorphism of high/ultrahigh-pressure pelitic-felsic schist in the South Tianshan Orogen, NW China: phase equilibria and P-T path[END_REF] was removed from the XRF composition. For the retrograde path (EBC-2), garnet porphyroblasts were subtracted from the bulk-rock composition (see Table . 2 for XRF, EBC-1 and EBC-2 compositions).
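A minimal sketch of the mass balance behind this correction (our notation; the exact procedure is that of Carson et al., 1999): if a fraction f of garnet with average composition C_i^grt is removed from a measured bulk composition C_i^bulk, the effective concentration of each component i becomes C_i^EBC = (C_i^bulk − f·C_i^grt) / (1 − f), with f set to half of the garnet mode for EBC-1 and to the full garnet mode for EBC-2.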
The P-T pseudosections were calculated using the software Perple_X 6.68 [START_REF] Connolly | Multivariable phase diagrams; an algorithm based on generalized thermodynamics[END_REF](Connolly, , 2005) ) and an internally consistent thermodynamic dataset (hp02ver.dat, [START_REF] Connolly | Metamorphic controls on seismic velocity of subducted oceanic crust at 100-250 km depth[END_REF][START_REF] Holland | Activity-composition relations for phases in petrological calculations: an asymmetric multicomponent formulation[END_REF] based on the effective recalculated bulk rock composition (EBC-1 and EBC-2; Table . 2).
Mineral solid-solution models are Gt(HP) for garnet [START_REF] Holland | An internally consistent thermodynamic data set for phases of petrological interest[END_REF],
Omph(GHP) for omphacite [START_REF] Green | An order-disorder model for omphacitic pyroxenes in the system jadeite-diopsidehedenbergite-acmite, with applications to eclogitic rocks[END_REF], Amph (DP) for amphibole [START_REF] Diener | A new thermodynamic model for clino-and orthoamphiboles in the system Na2O-CaO-FeO-MgO-Al2O3-SiO2-H2O-O[END_REF], Mica(CHA) for white mica [START_REF] Coggon | Mixing properties of phengitic micas and revised garnet-phengite thermobarometers[END_REF], Chl(HP) for chlorite [START_REF] Holland | An internally consistent thermodynamic data set for phases of petrological interest[END_REF], Ep(HP) for epidote/clinozoisite [START_REF] Holland | An internally consistent thermodynamic data set for phases of petrological interest[END_REF], and H 2 O-CO 2 fluid solution model is from [START_REF] Connolly | Petrogenetic grids for metacarbonate rocks: pressure-temperature phase-diagram projection for mixed-volatile systems[END_REF].
The pseudosection for sample 11AT06-1 is shown in Fig. 12b. It is dominated by tri- and quadrivariant fields with a few di- and quinivariant fields.
P-T conditions were further constrained by comparing predicted garnet isopleths with measured garnet compositions (boxes on Fig. 12b incorporate
typical uncertainties on EMPA analyses, ca. 3% to 5%; [START_REF] Lifshin | Minimizing errors in electron microprobe analysis[END_REF][START_REF] Williams | The Assessment of Error in Electron Microprobe Trace Element Analysis[END_REF]).
Stage I, as defined by core-mantle zoning (i.e., garnet zoning from Grt-C to Grt-M3 in Fig. 7e; see compositions in Table . 3), is marked by an increase in both T and P, from 2.55~2.70 GPa and 495~505℃ to 2.95~3.30 GPa and 500~520℃. Peak pressure (P max ) is constrained by using the garnet mantle compositions (corresponding to Grt-M2 to Grt-M4) which have the lowest X grs content, and coincide with the location of coesite inclusions (Figs. 7a-d).
Stage II is constrained by the mantle-rim zoning (corresponding to Grt-M4, Grt-M5 and Grt-R in Fig. 7d, Table . 3) and the mineral assemblage omp-amp 1 (gln)-mica 1 (ph)-lws-grt-rt. It is marked by a slight increase in T at 550~560℃ and a pressure decrease at 2.35~2.60 GPa, further constrained by the Si content of phengite included in garnet (3.43 p.f.u.; Table . 3). Later retrograde re-equilibration, based on the EBC-2 whole-rock composition, is estimated from measured (Na) M4 contents in amphibole and (Na) M4 isopleths modeled from pseudosection (Fig. 12c) at ca. 1.20 GPa and 548℃.
Discussion
Nature of the protolith and P-T-(fluid) constraints
The studied mafic eclogites show LREE-depleted and HREE-flat N-MORB
patterns [START_REF] Sun | Chemical and isotopic systematics of oceanic basalts: implications for mantle composition and processes[END_REF], with enrichments in Rb, Ba, U, Pb, and Sr. δ18O values for multistage zircons (~ 9.13 ± 0.09 ‰; Fig. 10b) with modelled bulk-rock δ18O at peak P of ca. 8.47 ‰ (Table 6, oxygen fractionation factors from [START_REF] Zheng | Calculation of oxygen isotope fractionation in metal oxides[END_REF]Zheng, , 1993) ) are similar to those of typical altered ocean crust (AOC, ~ 7.9 to 9.5 ‰ from [START_REF] Cocker | Oxygen and carbon isotope evidence for seawater-hydrothermal alteration of the Macquarie Island ophiolite[END_REF][START_REF] Miller | Distinguishing between seafloor alteration and fluid flow during subduction using stable isotope geochemistry: examples from Tethyan ophiolites in the Western Alps[END_REF], suggesting that the metamorphic zircons may have inherited the oxygen isotope signature of the (altered oceanic crust) protolith. We therefore hypothesize that external fluid infiltration was limited and/or δ18O was internally buffered [START_REF] Martin | The isotopic composition of zircon and garnet: A record of the metamorphic history of Naxos[END_REF][START_REF] Martin | Mobility of trace elements and oxygen in zircon during metamorphism: Consequences for geochemical tracing[END_REF].
The P-T evolution followed by mafic eclogites (Fig. 12) is characterized by burial along a typical subduction gradient (~7°C/km) with a moderate heating (up to 50-80℃) during decompression. This is consistent with recent results of thermodynamic modeling [START_REF] Li | A common high-pressure metamorphic evolution of interlayered eclogites and metasediments from the "ultrahigh-pressure unit"of the Tianshan metamorphic belt in China[END_REF][START_REF] Li | Poly-cyclic Metamorphic Evolution of Eclogite: Evidence for Multistage Burial-Exhumation Cycling in a Subduction Channel[END_REF]Lü et al., 2009;[START_REF] Tian | Metamorphism of ultrahigh-pressure eclogites from the Kebuerte Valley, South Tianshan, NW China: phase equilibria and P-T path[END_REF][START_REF] Wei | Eclogites from the south Tianshan, NW China: petrological characteristic and calculated mineral equilibria in the Na2O-CaO-FeO-MgO-Al2O3-SiO2-H2O system[END_REF], though in contrast to some early claims of counterclockwise [START_REF] Lin | Prograde pressure-temperature path of jadeite-bearing eclogites and associated high-pressure/low-temperature rocks from western Tianshan[END_REF] or hairpin-shaped P-T trajectories [START_REF] Gao | PT path of high-pressure/low-temperature rocks and tectonic implications in the western Tianshan Mountains[END_REF].
Thermodynamic modelling of successive re-equilibration stages yield P-T estimates within ~ 0.2 GPa and 20 ℃ (1ζ-error, see also discussion by [START_REF] Li | Poly-cyclic Metamorphic Evolution of Eclogite: Evidence for Multistage Burial-Exhumation Cycling in a Subduction Channel[END_REF], considering uncertainties on solid solution models and thermodynamic properties [START_REF] Dachs | The uncertainty of the standard entropy and its effect on petrological calculations[END_REF][START_REF] Worley | High-precision relative thermobarometry: theory and a worked example[END_REF], effective bulk composition [START_REF] Carson | Calculated mineral equilibria for eclogites in CaO-Na 2 O-FeO-MgO-Al 2 O 3-SiO 2-H 2 O: application to the Pouébo Terrane, Pam Peninsula[END_REF][START_REF] Evans | A method for calculating effective bulk composition modification due to crystal fractionation in garnetbearing schist: implications for isopleth thermobarometry[END_REF] and microprobe analyses [START_REF] Lifshin | Minimizing errors in electron microprobe analysis[END_REF][START_REF] Williams | The Assessment of Error in Electron Microprobe Trace Element Analysis[END_REF]. Conventional thermobarometry give consistent and robust P-T results (Fig. 13a) but with larger uncertainties for the pressure peak (Fig. 12a, Table S1). When considering these uncertainties, most previous published P-T paths in fact will overlap (Fig. 13b, except for [START_REF] Li | A common high-pressure metamorphic evolution of interlayered eclogites and metasediments from the "ultrahigh-pressure unit"of the Tianshan metamorphic belt in China[END_REF][START_REF] Li | Poly-cyclic Metamorphic Evolution of Eclogite: Evidence for Multistage Burial-Exhumation Cycling in a Subduction Channel[END_REF]; van der Straaten et [START_REF] Pearce | Geochemical fingerprinting of oceanic basalts with applications to ophiolite classification and the search for Archean oceanic crust[END_REF][START_REF] Wei | Eclogites from the south Tianshan, NW China: petrological characteristic and calculated mineral equilibria in the Na2O-CaO-FeO-MgO-Al2O3-SiO2-H2O system[END_REF]. Most studies concord on T max conditions for peak burial in the range 510-570 ℃ (Fig. 13b). Our results (520 ± 30℃ for UHP peak burial and 550 ± 30℃ for the temperature peak) are consistent with the recent independent results obtained by Raman spectroscopy of carbonaceous matter (ca. 540 ± 30℃, [START_REF] Meyer | An (in-) coherent metamorphic evolution of high-pressure eclogites and their host rocks in the Chinese southwest Tianshan?[END_REF].
Future studies should help clarify their contrasts in P max estimates, which cluster in three distinct groups (dashed boxes; Fig. 13b): 1) a UHP group ranging from ca. 2.7 to 3.3 GPa (e.g., [START_REF] Tian | Metamorphism of ultrahigh-pressure eclogites from the Kebuerte Valley, South Tianshan, NW China: phase equilibria and P-T path[END_REF][START_REF] Wei | Metamorphism of high/ultrahigh-pressure pelitic-felsic schist in the South Tianshan Orogen, NW China: phase equilibria and P-T path[END_REF][START_REF] Xin | Petrology and U-Pb zircon dating of coesite-bearing metapelite from the Kebuerte Valley, western Tianshan[END_REF] (and this study), 2) a HP group in the range of ca. 1.8-2.5 GPa (e.g., [START_REF] Beinlich | Trace-element mobilization during Ca-metasomatism along a major fluid conduit: Eclogitization of blueschist as a consequence of fluid-rock interaction[END_REF][START_REF] Du | A new PTt path of eclogites from Chinese southwestern Tianshan: constraints from PT pseudosections and Sm-Nd isochron dating[END_REF][START_REF] Li | A common high-pressure metamorphic evolution of interlayered eclogites and metasediments from the "ultrahigh-pressure unit"of the Tianshan metamorphic belt in China[END_REF][START_REF] Li | Poly-cyclic Metamorphic Evolution of Eclogite: Evidence for Multistage Burial-Exhumation Cycling in a Subduction Channel[END_REF][START_REF] Liu | Paleozoic subduction erosion involving accretionary wedge sediments in the South Tianshan Orogen: Evidence from geochronological and geochemical studies on eclogites and their host metasediments[END_REF][START_REF] Meyer | An (in-) coherent metamorphic evolution of high-pressure eclogites and their host rocks in the Chinese southwest Tianshan?[END_REF][START_REF] Soldner | Metamorphic P-T-t-d evolution of (U)HP metabasites from the South Tianshan accretionary complex (NW China) -Implications for rock deformation during exhumation in a subduction channel: Gondwana Research[END_REF] and 3) a blueschist-facies group ranging from ca. 1.3 to 1.9 GPa (e.g., [START_REF] Gao | PT path of high-pressure/low-temperature rocks and tectonic implications in the western Tianshan Mountains[END_REF][START_REF] Klemd | P -T evolution of glaucophane-omphacite bearing HP-LT rocks in the western Tianshan Orogen, NW China:new evidence for 'Alpine-type' tectonics[END_REF][START_REF] Van Der Straaten | Blueschist-facies rehydration of eclogites (Tian Shan, NW-China): Implications for fluid-rock interaction in the subduction channel[END_REF][START_REF] Wei | Eclogites from the south Tianshan, NW China: petrological characteristic and calculated mineral equilibria in the Na2O-CaO-FeO-MgO-Al2O3-SiO2-H2O system[END_REF].
Linking garnet with zircon growth
This section attempts to link the potential of garnet as thermobarometer (e.g., [START_REF] Konrad-Schmolke | Combined thermodynamic and rare earth element modelling of garnet growth during subduction: Examples from ultrahigh-pressure eclogite of the Western Gneiss Region[END_REF] with that of zircon as geochronometer (e.g., [START_REF] Rubatto | Zircon trace element geochemistry: partitioning with garnet and the link between U-Pb ages and metamorphism[END_REF] to closely tie U-Pb ages to metamorphic conditions and finally derive a precise P-T-time path.
Figure 9b shows that REE patterns for successive garnet and zircon metamorphic growth are almost parallel, except for the zircon rim. Partitioning data for REE between zircon and garnet [START_REF] Rubatto | Zircon trace element geochemistry: partitioning with garnet and the link between U-Pb ages and metamorphism[END_REF][START_REF] Rubatto | Zircon formation during fluid circulation in eclogites (Monviso, Western Alps): implications for Zr and Hf budget in subduction zones[END_REF] suggest that zircon preferentially sequesters Y and HREE over garnet, with D-values of ~2-4 (Fig. 14a). Data show that only the zircon high HREE core and/or core domains (and not the zircon mantle and rim) have higher HREE contents than garnet (Fig. 14b, Table . 4) and may therefore have equilibrated with it.
Zircon high-HREE cores may have co-crystallized either with the garnet core or mantle domains (Fig. 14b), but the D Y+HREEs zrn/grt pattern of the second option is more similar to the one reported in eclogite vein by [START_REF] Rubatto | Zircon formation during fluid circulation in eclogites (Monviso, Western Alps): implications for Zr and Hf budget in subduction zones[END_REF]. For the zircon core domain, the most likely candidate would be the garnet rim. We thus tentatively propose (Figs. 14c, 15) that zircon high-HREE core grew in equilibrium with the garnet mantle at ~ ca. 318 Ma at UHP conditions (~ ca. 2.6-3.1 GPa; Grt-M1 to M2 stages with coesite inclusions; Fig. 7), while zircon core grew in equilibrium with the garnet rim at ~ ca. 316 Ma (~ ca. 2.3-2.7 GPa; Grt-M5 to R stage).
The zircon mantle domain (~ ca. 303 Ma, Fig. 10f), with similar but lower HREE patterns (Fig. 9a), could have inherited its HREE content from the partial resorption of garnet (and/or zircon cores, i.e., [START_REF] Degeling | Zr budgets for metamorphic reactions, and the formation of zircon from garnet breakdown[END_REF], while garnet abundance was still buffering the HREE budget. By contrast, the
different REE pattern of the last zircon rim overgrowth (ca. 280 Ma, Fig. 10g) hints at the breakdown of another Zr-bearing mineral during greenschist/blueschist-facies exhumation, possibly rutile [START_REF] Kohn | The fall and rise of metamorphic zircon[END_REF][START_REF] Lucassen | Redistribution of HFSE elements during rutile replacement by titanite[END_REF]. Similar multi-stage zircon growth in the Dabie Mountains (but with zircon from separate eclogite and quartz vein samples, Liu et al., 2014b;[START_REF] Zheng | Fluid flow during exhumation of deeply subducted continental crust: zircon[END_REF] was interpreted as evidence of channelized fluid flow during exhumation.
Age constraints and regional-scale tectonic implication
Based on garnet-zircon equilibrium, ages of ca. 318 and ca. 316 Ma can be ascribed to the peak burial (UHP) and peak temperature (HP) metamorphic stages, respectively (Figs. 14c and 15). Whole rock-grt-omp and grt-omp Sm-Nd isochrons give additional, consistent age constraints at ~312 ± 2.5 Ma (Fig. 11; Table 7). Although experimentally determined closure temperatures for Nd diffusion differ in garnet (~500-850 ℃, [START_REF] Li | Sm-Nd and Rb-Sr isotopic chronology and cooling history of ultrahigh pressure metamorphic rocks and their country rocks at Shuanghe in the Dabie Mountains, Central China[END_REF] and omphacite (~1050 ℃, [START_REF] Sneeringer | Strontium and samarium diffusion in diopside[END_REF], questioning Nd equilibrium between these phases at a peak temperature of 530 ± 30 °C, consistent mineral-pair and whole-rock Sm-Nd isochrons were reported for similar fine-grained eclogites from distinct localities in the Dabie Mountains [START_REF] Li | Sm-Nd and Rb-Sr isotopic chronology and cooling history of ultrahigh pressure metamorphic rocks and their country rocks at Shuanghe in the Dabie Mountains, Central China[END_REF].
This WR-garnet-omphacite Sm-Nd age of ca. 312 Ma (Fig. 11, Table . 7) can therefore be treated as constraining the post-peak eclogite-facies
deformation event coeval with omphacite growth in the matrix (Fig. 6a, 6b), slightly after or coeval with the growth of zircon cores in equilibrium with garnet rims (i.e., ~ ca. 316 Ma). These age constraints are broadly consistent with a recent Sm-Nd WR-grt-omp age of ~ 307 ± 11 Ma interpreted as dating the timing of HP metamorphism [START_REF] Du | A new PTt path of eclogites from Chinese southwestern Tianshan: constraints from PT pseudosections and Sm-Nd isochron dating[END_REF] and a Sm-Nd WR-grt-gln age of 318.4 ± 3.9 Ma for a blueschist equilibrated close to HP peak metamorphism [START_REF] Soldner | Metamorphic P-T-t-d evolution of (U)HP metabasites from the South Tianshan accretionary complex (NW China) -Implications for rock deformation during exhumation in a subduction channel: Gondwana Research[END_REF]. Ages between 318 and 312 Ma are also consistent with the 320.4 ± 3.7 Ma zircon-rim U-Pb age obtained on coesite-bearing meta-volcanosedimentary rocks [START_REF] Xin | Petrology and U-Pb zircon dating of coesite-bearing metapelite from the Kebuerte Valley, western Tianshan[END_REF] and with the 315.2 ± 1.6 Ma garnet-multi-point Lu-Hf age obtained for eclogites [START_REF] Klemd | Changes in dip of subducted slabs at depth: Petrological and geochronological evidence from HP-UHP rocks (Tianshan[END_REF].
Overall, the present study allows us to conclude that 315 ± 5 Ma can be taken as a reliable value for the attainment of HP to UHP conditions in the Chinese Southwestern Tianshan UHP-HP/LT metamorphic complex. The dispersion of Ar/Ar ages (Fig. 16b, [START_REF] Gao | The formation of blueschist and eclogite in West tianshan mountains, China and its uplift history insight from the Ar/Ar ages[END_REF][START_REF] Klemd | New age constraints on the metamorphic evolution of the highpressure/low-temperature belt in the western Tianshan Mountains[END_REF][START_REF] Wang | Structural and geochronological study of high-pressure metamorphic rocks in the Kekesu section (northwestern China): Implications for the late Paleozoic tectonics of the Southern Tianshan[END_REF][START_REF] Xia | Late Palaeozoic40Ar/39Ar ages of the HP-LT metamorphic rocks from the Kekesu Valley, Chinese southwestern Tianshan: new constraints on exhumation tectonics[END_REF] may therefore be due to a problem of excess argon, at least in some eclogite/blueschist samples. Indeed, the same outcrop yielded very distinct Ar-Ar plateau ages (e.g., 401 ± 1 and 364 ± 1 Ma by Gao and Zhang, 2000 versus ca. 320 Ma by [START_REF] Klemd | New age constraints on the metamorphic evolution of the highpressure/low-temperature belt in the western Tianshan Mountains[END_REF], and similar Rb/Sr and Ar/Ar ages were obtained by [START_REF] Klemd | New age constraints on the metamorphic evolution of the highpressure/low-temperature belt in the western Tianshan Mountains[END_REF], for which white mica closure temperatures (assuming purely diffusive behaviour, i.e., in the absence of fluid-assisted recrystallization) differ by 100 ℃ or more [START_REF] Li | Excess argon in phengite from eclogite: Evidence from dating of eclogite minerals by Sm-Nd, Rb-Sr and 40Ar-39Ar methods[END_REF][START_REF] Ruffet | Rb-Sr and 40Ar-39Ar laser probe dating of high-pressure phengites from the Sesia zone (Western Alps): underscoring of excess argon and new age constraints on the high-pressure metamorphism[END_REF]; Villa, 1998; [START_REF] Villa | Geochronology of the Larderello geothermal field: new data and the "closure temperature[END_REF].
The 303 Ma age derived from zircon mantle recrystallization (Fig. 10f) is here taken as a minimum age for the garnet-free or garnet resorption greenschist-facies metamorphism. This conclusion is strengthened by a greenschist-facies Ar-Ar plateau age of ~ 293.1 ± 1.7 Ma reported by [START_REF] Xia | Late Palaeozoic40Ar/39Ar ages of the HP-LT metamorphic rocks from the Kekesu Valley, Chinese southwestern Tianshan: new constraints on exhumation tectonics[END_REF]. Later recrystallization of zircon rims at ~280 Ma, though of poor quality (Fig. 10g) could correspond to the late exhumation movements, as similar ages were reported for the extensive ductile strike-slip deformation of the Southwestern Central Tianshan Suture zone (i.e., [START_REF] De Jong | New 40Ar/39Ar age constraints on the Late Palaeozoic tectonic evolution of the western Tianshan (Xinjiang, northwestern China), with emphasis on Permian fluid ingress[END_REF][START_REF] Laurent-Charvet | Late Paleozoic strikeslip shear zones in eastern Central Asia (NW China): New structural and geochronological data: Tectonics[END_REF]. Note that late, post-exhumation rodingitization was constrained at 291 ± 15.0 Ma (Li et al., 2010c) and that crosscutting, undeformed leucogranites were dated at ~ 284.9 ± 2.0 Ma [START_REF] Gao | The collision between the Yili and Tarim blocks of the Southwestern Altaids: Geochemical and age constraints of a leucogranite dike crosscutting the HP-LT metamorphic belt in the Chinese Tianshan Orogen[END_REF].
These age constraints point to decreasing exhumation velocities, from ca.
12.0 ± 6.5 mm/yr for early UHP to HP exhumation (if < 2 Ma) to 3.6 ± 2.0 mm/yr for blueschist-facies exhumation (within ca. 13 Ma) and 0.8 ± 0.4 mm/yr for greenschist-facies exhumation (within ca. 22 Ma). The average exhumation rate, ca. 2.6 mm/yr, falls within the range of 1-5 mm/yr for exhumed oceanic rocks worldwide [START_REF] Agard | Exhumation of oceanic blueschists and eclogites in subduction zones: timing and mechanisms[END_REF]. Importantly, the timing of HP-UHP metamorphism appears very restricted (at 315 ± 5 Ma; i.e., < 10 Ma) compared to the likely duration of active subduction (> 50-100 My, since an oceanic
domain existed in the region from at least ~422 ± 10 Ma to ca. 280-300 Ma, Li et al., 2010c).
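The exhumation rates quoted above follow from simple depth/time arithmetic. The sketch below is an illustration only, not the authors' calculation: it assumes a lithostatic pressure-depth conversion with an average rock density of 3.0 g/cm³, and the stage pressures and durations passed to it are placeholders loosely based on the ranges discussed in the text.

# Rough exhumation-rate arithmetic (illustrative only).
RHO = 3000.0   # assumed average rock density, kg/m^3
G = 9.81       # gravity, m/s^2

def depth_km(p_gpa):
    """Approximate depth (km) corresponding to a lithostatic pressure (GPa)."""
    return p_gpa * 1e9 / (RHO * G) / 1e3

def rate_mm_per_yr(p_start_gpa, p_end_gpa, duration_myr):
    """Average vertical exhumation rate; km/Myr is numerically equal to mm/yr."""
    return (depth_km(p_start_gpa) - depth_km(p_end_gpa)) / duration_myr

# Placeholder stages (UHP -> HP in < 2 Myr, blueschist in ~13 Myr, greenschist in ~22 Myr):
print(rate_mm_per_yr(2.95, 2.0, 2.0))   # early UHP to HP exhumation
print(rate_mm_per_yr(2.0, 1.0, 13.0))   # blueschist-facies exhumation
print(rate_mm_per_yr(1.0, 0.5, 22.0))   # greenschist-facies exhumation

With these assumptions the three stages come out at roughly 16, 2.6 and 0.8 mm/yr, i.e. within the uncertainty ranges quoted in the text.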
Although this study focuses on a single key exposure, a simple geodynamic reconstruction is provided (Figs. 17a, 17b) to place our results within a tentative evolution of the Chinese Southwestern Tianshan metamorphic complex. HP-UHP/LT rocks were detached or exhumed after their peak burial only at about 315 ± 5 Ma (Figs. 16a, 16b) and brought back toward the surface, reaching greenschist-facies conditions between ca. 310 and 300 Ma (Fig. 17b).
Pervasive north-trending shear senses deduced from eclogite-, blueschist-and greenschist-facies recrystallizations (this study, Fig. 6, see also in [START_REF] Lin | Palaeozoic tectonics of the south-western Chinese Tianshan: new insights from a structural study of the high-pressure/low-temperature metamorphic belt[END_REF][START_REF] Wang | Structural and geochronological study of high-pressure metamorphic rocks in the Kekesu section (northwestern China): Implications for the late Paleozoic tectonics of the Southern Tianshan[END_REF] are interpreted to result from southwest-trending extrusion of the AMC (Figs. 17a,17b).
At present, thorough large-scale structural studies and/or extensive P-T(-t) mapping across the whole HP/UHP complex are still missing to assess whether the punctuated exhumation at 315 ± 5 Ma is associated with chaotic block-in-matrix mixing, with heterogeneous P-T and/or fluid conditions (e.g., extremely complex zoning of both garnet and amphibole, [START_REF] Li | Poly-cyclic Metamorphic Evolution of Eclogite: Evidence for Multistage Burial-Exhumation Cycling in a Subduction Channel[END_REF] or with the accretion of tectonic slices of volcanosedimentary material hosting stripped pieces from the seafloor (i.e., mafic meta-volcanics).
Conclusions
This study provides P-T-radiometric-isotopic constraints on several eclogitic samples from a key coesite-bearing location in the Chinese Southwestern Tianshan UHP-HP/LT metamorphic complex. Together with an exhaustive review of available P-T (-time) constraints for the area, the following conclusions can be drawn from these new data:
(1) Mafic eclogites have slightly LILE-enriched, N-MORB-like features with zircon δ 18 O values of ca. 9.0 ‰, typical of an AOC protolith. The constant δ 18 O values of the successive zircon growth stages suggest a metamorphic system that remained closed (or nearly closed) to external fluid infiltration.
(2) Thermodynamic modeling permits a complete recovery of their P-T trajectory from pre-peak burial to UHP conditions (~2.95 ± 0.2 GPa and 510 ± 20 °C) and later exhumation. REE partitioning between multistage zircon and garnet growth domains allows for a critical assessment of the P-T-time path: the ca. 318 Ma and 316 Ma zircon ages are tied to UHP and HP metamorphism, respectively, along with a consistent ca. 312 Ma Sm-Nd age for HP eclogite-facies deformation. Thus, peak burial can be constrained at about 315 ± 5 Ma. These P-T-t constraints point to decreasing exhumation velocities, from ca. 12.0 ± 6.5 mm/yr for early UHP to HP exhumation to 3.6 ± 2.0 mm/yr for blueschist-facies exhumation and 0.8 ± 0.4 mm/yr for greenschist-facies exhumation.
Mineral major elements
In situ major element compositions of garnet and inclusion minerals were obtained from polished thin sections by electron microprobe analyses at the IGGCAS with the use of JEOL JXA 8100. Quantitative analyses were performed using wavelength dispersive spectrometers with an acceleration voltage of 15 kV, a beam current of 15 nA, a 3 μm beam size and 30 s counting time. Natural minerals and synthetic oxides were used as standards, and a program based on the ZAF procedure was used for data correction.
Representative microprobe analyses for pseudosection modeling and for Thermocalc averagePT calculations are presented in Table . 3 and Table . 8, respectively.
Mineral trace elements
In situ trace element analyses of zoned garnet and zircon were performed by LA-ICP-MS [START_REF] Van Achterbergh | GLITTER: On-line Interactive Data Reduction for the Laser Ablation Inductively Coupled Plasma Mass Spectrometry Microprobe[END_REF]. Representative average trace element data of the relevant garnet and zircon domains are given in Table 4.
Raman analyses
In order to identify coesite inclusions in garnet or small high-pressure mineral inclusions in zircon (e.g. omphacite or rutile), Raman spectroscopy was performed at IGGCAS using a Renishaw Raman MKI-1000 system equipped with a CCD detector and an Ar ion laser. The laser beam with a wavelength of 514.5 nm was focused on the coesite inclusion through 50× and 100×
objectives of a light microscope. The laser spot was focused to 1 μm. The reproducibility of spectra for the same spot is better than 0.2 cm -1 .
Sm-Nd isotope chronology analyses
U-Pb isotope chronology and Oxygen isotope of zircon
For the zircon study, and considering the relatively low Zr content (only ca. 50 µg/g) of this N-MORB-type meta-basalt (Table 2), about 100 kg of eclogite was sampled (see below). U-Th-Pb ratios and absolute abundances were determined relative to the standard zircon 91500 [START_REF] Wiedenbeck | Three natural zircon standards for U -Th-Pb, Lu-Hf, trace element and REE analyses[END_REF], analyses of which were interspersed with those of unknown grains, using operating and data processing procedures similar to those described by [START_REF] Li | Precise determination of Phanerozoic zircon Pb/Pb age by multicollector SIMS without external standardization[END_REF]. A long-term uncertainty of 1.5% (1 RSD) for 206 Pb/ 238 U measurements of the standard zircons was propagated to the unknowns (Li et al., 2010a), even though the measured 206 Pb/ 238 U error in a specific session is generally around 1% (1 RSD) or less. Measured compositions were corrected for common Pb using non-radiogenic 204 Pb. Further details on instrument
parameters, analytical method, calibration and correction procedures can be found in (Li et al., 2010b). Results of U-Pb isotopic chronology of zircon for sample 11AT06 are listed in Table . 5.
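For readers unfamiliar with the downstream data reduction, the sketch below illustrates how an apparent 206Pb/238U age and an error-weighted mean age with MSWD are obtained from SIMS spot data. It uses the standard 238U decay constant; the spot values are hypothetical and are not taken from Table 5.

import math

LAMBDA_238 = 1.55125e-10  # 238U decay constant (1/yr), standard value

def pb206_u238_age_ma(ratio_206_238):
    """Apparent 206Pb/238U age (Ma) from a common-Pb-corrected isotope ratio."""
    return math.log(1.0 + ratio_206_238) / LAMBDA_238 / 1e6

def weighted_mean_mswd(ages, sigmas):
    """Error-weighted mean age, its 1-sigma uncertainty, and MSWD."""
    w = [1.0 / s**2 for s in sigmas]
    mean = sum(wi * a for wi, a in zip(w, ages)) / sum(w)
    mean_sigma = math.sqrt(1.0 / sum(w))
    mswd = sum(((a - mean) / s)**2 for a, s in zip(ages, sigmas)) / (len(ages) - 1)
    return mean, mean_sigma, mswd

# Hypothetical spot ages (Ma) and 1-sigma errors, for illustration only:
ages = [316.1, 317.0, 315.4, 316.8]
sigmas = [1.8, 2.0, 1.9, 2.1]
print(weighted_mean_mswd(ages, sigmas))
print(pb206_u238_age_ma(0.0503))   # about 316 Ma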
Oxygen isotope analyses at the exact locations of the zircon U-Pb spots were also performed using the Cameca IMS-1280 SIMS at IGGCAS.
FIGURE AND TABLE CAPTIONS
The results of SIMS U-Pb age and δ 18 O composition of ,respectively, (d, number "1" with square in Fig. 10b) zircon core domain with high HREEs content (corresponding to green contour in Fig. 9a), (e, number "2" with square in Fig. 10b) zircon core domain (corresponding to red contour in Fig. 9a), (f, number "3" with square in Fig. 10b) zircon mantle domain (corresponding to blue contour in Fig. 9a) and (g, number "4" with square in Fig. 10b) zircon rim domain (corresponding to yellow contour in Fig. 9a) are listed in Tables . 5 and 6.
Fig. 11: WR-garnet-omphacite Sm-Nd isochron age for the oriented sample (11AT06-2; Table 7).
Fig. 12 (caption, continued): Seven spots across the core-rim EMPA profile (Fig. 7e; Table 3) were used to estimate P-T conditions based on calculated isopleths. Blue boxes also consider ca.
Fig. 16 (caption, continued): ... and results from this study place constraints on the main tectonic stages for the area (i.e., detachment, exhumation, etc.), as well as on exhumation rates.
Table 1 (notes): Mineral modes were determined from the thin section by point counting on the basis of petrographic observations. The uncertainty of modal abundances of minerals was estimated to be less than 10% by repeating the same operations. Modeled abundances of the mineral assemblage for the peak-T condition are calculated by pseudosection modeling with EBC-1.
in garnet. Paragonite flakes occur as subhedral fine grains (~0.05 mm across) or occasionally as porphyroblasts in the matrix (~0.5 to 2 mm across, Figs. 5f, 5g, 5i). They are parallel or subparallel to the foliation (Figs. 5f, 5g), contain omphacite and rutile inclusions (Figs. 5g, 5i), and are interpreted to result from post-peak retrograde metamorphism. Phengite mainly occurs as subhedral fine grains.
the same CL characteristics as the core, but more rarely preserved), (3) a mantle zone and (4) the zircon rim. Chondrite-normalized REE patterns of zircon cores (Fig. 9a) show a typically positive slope from LREE to MREE, a relative enrichment in HREE ((Lu/Sm) N ~ 22.94) and flat HREE patterns ((Lu/Tb) N ~ 1.53). This core is also characterized by low Th/U ratios (< 0.01), medium U (~980 ppm), low Th (~4 ppm) and moderate Y, Ti, P and Nb contents (with average values of 89, 2670, 27 and 6 ppm, respectively; Table. 4). Some of the zircon cores show HREE contents 2-3 times higher ((Lu/Sm) N ~ 51.72; (Lu/Tb) N ~ 2.71; Y ~ 230 ppm; Figs. 9a, S3i-l; Table. 4). They are also distinct in terms of U-Pb isotopic composition (Fig. 10), but have similar Th/U ratios (< 0.01), U and Th contents. The zircon mantle domain (Fig. 9b) shows a more gentle positive slope from LREE to MREE and a flat HREE distribution with lower HREE and Y absolute contents ((Lu/Eu) N ~ 3.17; (Lu/Tb) N ~ 0.94; Y content ~ 45 ppm; Table.
S1). Quartz (or coesite) was assumed to be present in all calculations and water activity was set to 1.0. Lowering the water activity (e.g. a H2O = 0.8) only results in a minor decrease in temperature (≤ 15 ℃) and pressure (≤ 0.05 GPa). P-T estimates obtained for the garnet rim, omphacite-amphibole-epidote ± phengite assemblage marking the foliation of both sheared eclogitic samples (Figs. 6a, 6b; GJ01-6 and 11AT06-2) are 525 ± 25 ℃ and 2.0 ± 0.15 GPa. P-T conditions for the retrograde, blueschist- to (or) greenschist-facies equilibration stages associated with top-to-NNE shear senses (Fig. 6c, GJ01-1)
composition of the bulk (e.g.,[START_REF] Rubatto | Oxygen isotope record of oceanic and high-pressure metasomatism: a P-T-time-fluid path for the Monviso eclogites (Italy)[END_REF]. Some characteristic element patterns (e.g., in Rb vs K, Nb/U vs U, K/Th vs Ba/Th and Ba/Nb vs U/Nb diagrams; [START_REF] Bebout | Metamorphic chemical geodynamics of subduction zones[END_REF] Bebout, 2013) suggest that the N-MORB protolith was later slightly enriched in LILE during metamorphism. By contrast, the meta-volcanosedimentary host rock yields an Upper Continental Crust-like trace-element and REE-distribution pattern, with notable Nb and Ta anomalies (Fig. 4). The δ 18 O values of the various zircon domains are remarkably constant despite considerable multistage growth from ca. 318 to 280 Ma. This suggests that the rock system remained essentially closed over ca. 40 Ma, during metamorphic re-equilibration, with respect to potential external fluid infiltration, at both the cm- and m-scale (zircon grains were collected from rock fragments from different parts of the outcrop), and/or that fluids were derived from (or equilibrated with) a similar source. External fluids derived from subduction-related lithologies would indeed have likely shifted the δ 18 O signature (unaltered slab mantle and oceanic crust ~ 5.7 ‰, serpentinized slab mantle ~ 1-8 ‰, metasediment ~ 12-29 ‰ from van der Straaten et al., 2012).
Analyses were carried out by LA-ICP-MS at the IGGCAS with a single-collector quadrupole Agilent 7500a ICP-MS, equipped with an UP193Fx argon fluoride New Wave Research Excimer laser ablation system. The glass reference materials NIST SRM 610 and NIST SRM 612 were used as standards for external calibration. LA-ICP-MS measurements were conducted using a spot size diameter of 40 to [...] and a laser fluence of [...] J/cm2. Acquisition time was 20 s for the background and 120 s for the mineral analyses. The Ca-content of garnet determined by EMP analyses and the Si-content of zircon constrained by standard zircon 91500 were used as internal standards. Reproducibility and accuracy, which were determined for NIST SRM 610 and NIST SRM 612, are usually < 8 % and < 6 %. Trace element concentrations were then calculated using GLITTER Version 3 (Van Achterbergh et al., 2001).
Sm-Nd isotopic analyses were obtained at the Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing. 100 mg of sample was first mixed with a 149 Sm- 150 Nd diluent, dissolved afterwards in a purified HF-HClO 4 -HNO 3 mixture, and finally heated on an electric hot plate for a week. Separation and purification of bulk REE were conducted with a silica column with an AG50W-X12 exchange resin (200-400 mesh, 2 ml), and those of Sm and Nd with a silica column with Teflon powder (1 ml) as exchange medium. Isotopic ratios of Sm and Nd were measured using a Finnigan MAT-262 thermal ionization mass spectrometer. Nd isotopic data of unknowns were normalized to 146 Nd/ 144 Nd of 0.7219, and corrected using Ames ( 143 Nd/ 144 Nd = 0.512138) as external standard. Errors in element concentrations and 147 Sm/ 144 Nd ratios are less than 0.5% (2σ). Sm-Nd data for the studied sample are shown in Table 7.
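As an illustration of how the WR-grt-omp ages discussed in this study are derived from such measurements, the sketch below fits a straight line through hypothetical 147Sm/144Nd vs. 143Nd/144Nd data and converts the slope to an age using the standard 147Sm decay constant. It is not the regression code used for this study, and the data points are invented for illustration.

import math
import numpy as np

LAMBDA_SM147 = 6.54e-12  # 147Sm decay constant (1/yr), standard value

def isochron_age_ma(sm_nd, nd_nd):
    """Least-squares isochron: returns the age (Ma) and the initial 143Nd/144Nd."""
    slope, intercept = np.polyfit(sm_nd, nd_nd, 1)
    age_ma = math.log(1.0 + slope) / LAMBDA_SM147 / 1e6
    return age_ma, intercept

# Hypothetical whole-rock / omphacite / garnet points, chosen to lie near a ~312 Ma line:
sm_nd = np.array([0.15, 0.25, 0.80])               # 147Sm/144Nd
nd_nd = np.array([0.51285, 0.51305, 0.51418])      # 143Nd/144Nd
print(isochron_age_ma(sm_nd, nd_nd))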
Approximately 100 kilograms of eclogite were sampled from the outcrop (detailed sampling locations are shown in Figs. 3a, 3c; 11AT06-1, ca. 40 kg, sampled in 2011; 11AT06-2, ca. 60 kg, sampled in 2013), and approximately 500 zircon grains were collected and mounted in three different epoxy mounts. Fifty point analyses on different zircon domains from grains selected via CL images were chosen for investigating the U-Pb isotopic chronology and oxygen isotopic composition of the potential multistage growth of zircon. Zircon grains of sample 11AT06 were prepared by conventional crushing techniques and were hand-picked, mounted onto epoxy resin disks and polished with 0.25 μm diamond paste. The zircon grains and zircon standards (Plesovice, Peng Lai and Qing Hu zircon were used here as standards) were mounted in epoxy mounts and then polished to section the crystals in half. Assessment of zircon grains and the choice of analytical sites were based on transmitted and reflected light microscopy and cathodoluminescence (CL) images. CL imaging was processed on the LEO145VP scanning electron microscope with a Mini detector at the Institute of Geology and Geophysics, Chinese Academy of Sciences in Beijing (IGGCAS). The mount was vacuum-coated with high-purity gold prior to secondary ion mass spectrometry (SIMS) analyses. Measurements of U, Th and Pb were conducted using the Cameca IMS-1280 SIMS.
After U-Pb dating, the mount was carefully repolished for the O isotope analyses. A Gaussian-focused Cs+ primary beam is used to sputter the zircon for O-isotope analyses. The primary beam size is ~10 μm in diameter, and 2.5-3 nA in intensity. The 16 O and 18 O ions are detected simultaneously by two Faraday cups, and the currents are amplified by 10 10 ohm and 10 11 ohm resistors, respectively. Each spot analysis consists of pre-sputtering, beam centering in apertures, and a signal collecting process. A single spot analysis lasts 3 min, including 2 min for pre-sputtering and centering the secondary beam, and 1 min to collect 16 cycles of 16 O and 18 O signals. Oxygen isotopes were measured using the multi-collection mode. The instrumental mass fractionation (IMF) was corrected using the in-house zircon standard Penglai with δ 18 O VSMOW = 5.31 ± 0.10 ‰ (Li et al., 2010d). The measured 18 O/ 16 O ratios were normalized to the VSMOW composition, then corrected for IMF as described in (Li et al., 2010b): IMF = δ 18 O M − δ 18 O Standard, and δ 18 O Sample = δ 18 O M + IMF, where δ 18 O M = [( 18 O/ 16 O) M /0.0020052 − 1] × 1000 (‰) and δ 18 O Standard is the recommended δ 18 O value for the zircon standard on the VSMOW scale. Corrected δ 18 O values are reported in the standard per mil notation with 2σ errors. Analytical conditions, instrumentation and operation conditions are similar to Li et al. (2010d) and Tang et al. (2015), and the results of oxygen isotope analyses for zircon in sample 11AT06 are listed in Table 6.
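The correction described above can be written compactly as in the sketch below, which simply restates the formulas given in the text (VSMOW reference 18O/16O = 0.0020052, recommended Penglai δ18O = 5.31 ‰). The measured ratios are made up for illustration, and the IMF sign convention is written here in the bias-removal sense (IMF = recommended minus measured on the standard, added back to the unknowns), which is equivalent to the correction described in the text.

R_VSMOW = 0.0020052      # reference 18O/16O of VSMOW
D18O_PENGLAI = 5.31      # recommended d18O of the Penglai standard (per mil)

def delta18o_measured(r_measured):
    """Raw (uncorrected) d18O relative to VSMOW, in per mil."""
    return (r_measured / R_VSMOW - 1.0) * 1000.0

def correct_sample(r_sample, r_standard_measured):
    """Apply the instrumental mass fractionation (IMF) correction."""
    imf = D18O_PENGLAI - delta18o_measured(r_standard_measured)
    return delta18o_measured(r_sample) + imf

# Hypothetical measured ratios for one standard spot and one unknown spot:
print(correct_sample(r_sample=0.0020248, r_standard_measured=0.0020175))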
Fig. 1: Simplified geological map of (a) the Chinese Western Tianshan and (b)
Fig. 2: Compilation of age data (a) as a function of estimated P max and (b). See Fig. 1 and Table S2. "1": P max range for previous "blueschist" ages lacking precise P-T constraint; "2": same for eclogites; "3": range of zircon U-Pb data from SP leucogranite dikes [START_REF] Gao | The collision between the Yili and Tarim blocks of the Southwestern Altaids: Geochemical and age constraints of a leucogranite dike crosscutting the HP-LT metamorphic belt in the Chinese Tianshan Orogen[END_REF]; "4": range of zircon (inherited core) U-Pb data from regional rodingite (Li et al., 2010c).
Fig. 3: (a) Schematic outline of the studied outcrop; (b) field view of the
Fig. 4: Whole-rock discrimination diagrams for the samples: (a)
Fig. 5: Mineral assemblages and microstructures of the studied eclogites. (a,b)
Fig. 6: Thin-section scale shear senses in eclogite and greenschist facies for
Fig. 7: Garnet porphyroblast and mineral inclusions: (a) BSE imaging and
Fig. 8: Cathodoluminescence imaging and sketches of analyzed zircon grains.
Fig. 9: Chondrite-normalized (Sun and McDonough, 1989) REE abundances of
Fig. 10: Results of SIMS U-Pb isotopic dating and δ 18 O compositions for
Fig. 12: P-T estimates for the studied UHP eclogites, inferred from Perple_X
Fig. 13: P-T trajectory estimates from this study (a) and previous works (b). See
Fig. 14: (a-b) Patterns of trace element distribution coefficients between
Fig. 15: Sketch depicting the petrologic evolution of mineral growth within
Fig. 16: Previous radiometric data (see Table S2) and results from this study
Fig. 17: Simplified geodynamic cartoon setting back the studied samples and meta-volcano-sedimentary host rock. Estimated effective bulk rock compositions for pseudosection modelling are indicated (EBC1 and EBC2).
similar to b), as further supported by a Th/Yb vs. Nb/Yb plot (Fig.4c) and by the εNd(t) value of+8.5 (t = ca. 312 Ma, Table. 7). They show a
positive slope in LREEs to MREEs and flat in HREEs, with (La/Yb) N = 0.542 -
1.097, (La/Sm) N = 0.524 -0.666, (Gd/Yb) N = 0.999 -1.536 and a ΣREE
concentration of 49.9 to 58.2 ppm (Table. 2). Primitive mantle-normalized
trace-element variations show moderate Rb, Ba, U and Pb enrichments (Fig.
4b) for these three eclogite samples. As expected for MORB type samples,
there is no Nb-Ta depletion recorded.
The meta-volcanosedimentary rock is characterized by relatively high SiO 2
(~ 54.3 wt.%) and Al 2 O 3 (~ 14.7 wt.%) contents. LREEs are enriched with La contents ~100 times higher than for chondrite, while HREE patterns are flat.
REE and trace-element patterns for this meta-volcanosedimentary sample are close to that of the average upper continental crust (Fig. 4b, [START_REF] Rudnick | Composition of the continental crust: Treatise on geochemistry[END_REF], except for Rb.
(Table 5; Th/U ratios vary between 0.001 and 0.005). Core domains give apparent 206 Pb/ 238 U ages from 310.0 to 324.3 Ma, a mean Concordia age of 316.8 ± 0.83 Ma (MSWD = 1.01, n = 31) and a weighted mean age of 316.3 ± 1.6 Ma (MSWD = 0.71, n = 31) (Fig. 10e). Similar results are found for the HREE-enriched cores, with apparent 206 Pb/ 238 U ages varying from 315.3 to 320.4 Ma, a mean Concordia age of 318.0 ± 2.3 Ma (MSWD = 2.7, n = 4) and a weighted mean age of 317.4 ± 4.6 Ma (MSWD = 0.25, n = 4) (Fig. 10d).
The mantle domain gives younger apparent 206 Pb/ 238 U ages, from 297.2 to 307.1 Ma, a mean Concordia age of 303.1 ± 1.7 Ma (MSWD = 0.49, n = 7) and a weighted mean age of 303.1 ± 3.3 Ma (MSWD = 0.58, n = 7) (Fig. 10f). Owing to the low U contents of the outer rims (5 of 8 analyzed spots have U contents mostly ranging from 1 to 49 ppm; Table 5), U-Pb results for this domain are presented on the Tera-Wasserburg plot and are of poorer quality than the other domains. Common lead contents of the 8 analyzed spots are highly variable, with values of f 206 between 0.09 % and 34.21 % (Table 5). The linear regression of the data points (MSWD = 0.81, n = 8) gives a lower intercept age of 280.8 ± 4.9 Ma and an upper intercept with 207 Pb/ 206 Pb = 0.381 for the common Pb composition (Fig. 10g). The weighted mean 206 Pb/ 238 U age of this rim domain is 280.5 ± 4.3 Ma (MSWD = 0.64, n = 8) using the 207 Pb-based common-lead correction.
Table 1: Mineral modes in the studied samples (vol.%).
Table 2: Bulk rock major- and trace-element compositions of layered eclogite
Table 3: Representative microprobe analyses.
Table 4: Average Laser-ICP-MS trace-element compositions of various garnet and zircon domains.
Table 5: SIMS zircon U-Pb isotopic data.
Table 6: SIMS zircon oxygen isotopic data and modeled mineral & bulk-rock oxygen isotopic data at peak pressure conditions.
Table 7: TIMS Sm-Nd isotopic data.
Table 1. Modal abundances of mineral assemblages (vol.%)
Sample: Grt, Omp, Amp, Ph, Pg, Ep/Zo, Qz, Ap, Rt, Ttn, Lws, Fe-Ti oxide
Eclogite
11AT06-1: 18, 58, 5, 0.5, 5.5, 8, 1, 1, 1.5, 0.5, -, 1
11AT06-2: 19, 55, 7, 1, 4, 6, 3, 2, 1, 1, -, 1
GJ01-6: 22, 56, 6, 0.5, 3.5, 7, 2, 1, 1, 0.5, -, 0.5
Modeled abundances for the peak-T condition (ca. 550 ℃, 2.5 GPa): 18.4, 71, -, 0.8, -, -, 1, -, 0.9, -, 7.4, -
Wall-rock mica schist
GJ01-1: 20, 29, 2, 5, 15, 1, 22, 5, 0.5, 0.5, -, 0.1
NOTES (Table 6): The δ 18 O values have been corrected for instrumental mass fractionation; modeled results were calculated based on methods and relevant factors from [START_REF] Zheng | Calculation of oxygen isotope fractionation in metal oxides[END_REF] Zheng (1991, 1993); P peak is set at 530 ℃.
Table 6 (data summary): spot-by-spot SIMS δ 18 O values (‰, with 2σ uncertainties) for the zircon core, high-HREE core, mantle and rim domains of sample 11AT06-1 (individual spots range from ca. 8.1 to 9.8 ‰), together with modeled δ 18 O values at peak-pressure conditions for zircon, garnet, omphacite, coesite and the bulk rock, with their assumed end-member compositions and modal proportions (original multi-column layout not recoverable from the extracted text).
ACKNOWLEDGMENTS
This study was essentially funded by the National Natural Science Foundation of China (41390440, 41390445, 41025008). Additional support was provided by project "Zooming in between plates" (Marie Curie International Training Network) to Prof. Philippe Agard. We would like to thank the editor Prof.
Marco Scambelluri and anonymous reviewers for their constructive comments that greatly helped in improving the article. We further thank He Li and Bingyu Gao for help with the major and trace element analyses, Qian Mao and Reiner Klemd for help and data processing during microprobe analyses, Qiuli Li, Yu Liu, Guoqiang Tang and Jiao Li for help during SIMS zircon U-Pb dating and oxygen isotope analyses, Yueheng Yang for help during zircon and garnet in situ trace element analyses, and Zhuyin Chu for help during Sm-Nd isotope analyses.
(3) The comparison of P-T (-time) estimates between this study and the compilation of previous works outlines the existence of a short-lived detachment of subducted eclogites (< 10 Ma) with respect to the "long-term" subduction duration (> 50-100 Ma).
APPENDIX Analytical methods
Bulk-rock analyses
Four bulk-rock chemical analyses were performed at the Institute of Geology and Geophysics, Chinese Academy of Sciences (IGGCAS) on three samples from one eclogite outcrop (see location in Fig. 4; 11AT06-1, 11AT06-2 and GJ01-6), and one sample from the wall-rock mica schist ~3 meters away (GJ01-1). Major oxides were determined by a PHILLIPS PW1480 X-ray fluorescence spectrometer (XRF) on fused glass discs. Loss on ignition (LOI)
was measured after heating to 1,000 °C. Uncertainties for most major oxides are ca. 2%, for MnO and P2O5 ca. 5%, and totals are within 100 ± 1 wt.%.
Whole rock Fe 2 O 3 content is constrained by potassium permanganate titration.
Trace element concentrations were analyzed by inductively coupled plasma mass spectrometry (ICP-MS) using a Finnigan MAT ELEMENT spectrometer at the IGGCAS. The detailed analytical procedure is identical to that used by [START_REF] Qian | Petrogenesis and tectonic settings of Carboniferous volcanic rocks from north Zhaosu, western Tianshan Mountains: constraints from petrology and geochemistry[END_REF]. Relative standard deviations (RSD) are within ±10% for most trace elements but reach ±20% for V, Cr, Co, Ni, Th and U according to analyses of rock standards. Detailed major- and trace-element analyses are presented in Table 2. Exhaustively compiled regional data outline a short-lived detachment and exhumation. | 81,707 | [
"1127475"
] | [
"469160",
"485997",
"90040",
"476611",
"485997",
"469160",
"115414",
"485997",
"487899",
"115414",
"485997",
"469160",
"486000"
] |
01480553 | en | [
"phys"
] | 2024/03/04 23:41:46 | 2017 | https://hal.science/hal-01480553/file/02_MaxSpeed.pdf | Luca Roatta
email: [email protected]
Discretization of space and time: determining the speed limit
Introduction
Let's assume, as a working hypothesis, the existence of both discrete space and discrete time, namely spatial and temporal intervals that cannot be further divided; this assumption leads to some interesting consequences. Here we find the upper limit for the speed that a particle or a body can reach.
So, if we suppose that neither space nor time is continuous, but that instead both are discrete, then, following the terminology used in a previous document [START_REF] Roatta | Discretization of space and time in wave mechanics: the validity limit[END_REF], we call l 0 the fundamental length and t 0 the fundamental time.
Determining the maximum speed
The existence of a minimum time below which it is impossible to descend necessarily implies the existence of a maximum speed: in fact, if, ad absurdum, it were not so, and the speed could take arbitrarily large values, it would always be possible to find a speed υ at which the time taken to travel a certain space would be smaller than t 0 . But this is not possible, because it contradicts the initial hypothesis that t 0 is the smallest time interval.
So there is a maximum speed; let's call it υ max . Directly from the definition of constant speed, υ = s/t, we have t = s/υ. The minimum value for t in a discrete context is of course t 0 , which can be obtained by minimizing the numerator s and maximizing the denominator υ.
The minimum value for s is by hypothesis l 0 and the maximum value for υ is υ max as shown above. So we have:
t_0 = l_0 / υ_max    (1)
or
υ_max = l_0 / t_0    (2)
We have already obtained [START_REF] Roatta | Discretization of space and time in wave mechanics: the validity limit[END_REF] that l_0 / t_0 = c (3), so we must conclude that υ_max = c (4).
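As a purely numerical illustration (not part of the author's argument), if one chooses to identify l_0 and t_0 with the Planck length and Planck time, an identification that is an assumption made here only for the example, their ratio indeed reproduces c:

import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

l_planck = math.sqrt(hbar * G / c**3)   # ~1.616e-35 m
t_planck = math.sqrt(hbar * G / c**5)   # ~5.391e-44 s

print(l_planck / t_planck)   # equals c up to floating-point rounding
print(c)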
Conclusion
The assumption that both space and time are discrete has led to the conclusion that there exists a maximum speed that cannot be exceeded. This upper limit turns out to coincide with the speed of light c. | 2,043 | [
"1002559"
] | [
"302889"
] |
01389807 | en | [
"spi"
] | 2024/03/04 23:41:46 | 2015 | https://hal.science/hal-01389807v2/file/Paper.pdf | Marzieh Fatemi
email: [email protected]
Reza Sameni
email: [email protected].
An Online Subspace Denoising Algorithm for Maternal ECG Removal from Fetal ECG Signals
Keywords: Online subspace denoising, semi-blind source separation, maternal ECG cancellation, noninvasive fetal ECG extraction, online generalized eigenvalue decomposition
Noninvasive extraction of the fetal electrocardiogram (fECG) from multichannel maternal abdomen recordings is an emerging technology used for fetal cardiac monitoring and diagnosis. The strongest interference for the fECG is the maternal ECG (mECG), which is not always removed through conventional methods including blind source separation (BSS), especially for low-rank abdominal recordings.
In this work, we address the problem of maternal cardiac signal removal and introduce an online subspace denoising procedure customized for mECG cancellation. The proposed method is a general online denoising framework, which can be used for the extraction of a signal subspace from noisy multichannel observations in low signal-to-noise ratios, using suitable prior information of the signal and/or noise. The method is fairly generic and may also be useful for the separation of other signals and noises. The performance of the proposed technique is evaluated on both real and synthetic data and benchmarked versus state-of-the-art methods.
Introduction
The fetal electrocardiogram (fECG) provides vital information about the fetal cardiac status. Recent measurement and processing technologies have enabled the noninvasive extraction of the fECG, from an array of sensors placed on the maternal abdomen [START_REF] Sameni | A Review of Fetal ECG Signal Processing; Issues and Promising Directions[END_REF]. One of the most challenging issues in this context is to remove maternal cardiac (mECG) interferences, without affecting the fECG. The mECG can be up to two orders of magnitude stronger than the fECG [START_REF] Sameni | A Review of Fetal ECG Signal Processing; Issues and Promising Directions[END_REF].
To date, various methods have been developed for mECG removal, including spatial filtering [START_REF] Bergveld | A New Technique for the Suppression of the MECG[END_REF], adaptive filtering [START_REF] Widrow | Adaptive Noise Cancelling: Principles and Applications[END_REF][START_REF] Strobach | Eventsynchronous cancellation of the heart interference in biomedical signals[END_REF][START_REF] Swarnalath | A Novel Technique for Extraction of FECG using Multi Stage Adaptive Filtering[END_REF], template subtraction techniques [START_REF] Ungureanu | Basic aspects concerning the event-synchronous interference canceller[END_REF][START_REF] Martens | A robust fetal ECG detection method for abdominal recordings[END_REF] and Kalman filtering [START_REF] Sameni | Extraction of Fetal Cardiac Signals from an Array of Maternal Abdominal Recordings[END_REF][START_REF] Sameni | Modelbased Bayesian filtering of cardiac contaminants from biomedical recordings[END_REF][START_REF] Sameni | A Nonlinear Bayesian Filtering Framework for ECG Denoising[END_REF].
Although adaptive and Kalman filters have been very effective for single channel ECG denoising, they have two major limitations for fECG extraction: [START_REF] Sameni | A Review of Fetal ECG Signal Processing; Issues and Promising Directions[END_REF] the inter-channel correlation of the ECG is not used, (2) the fECG is removed with the mECG during periods of mECG and fECG temporal overlap [START_REF] Sameni | Extraction of Fetal Cardiac Signals from an Array of Maternal Abdominal Recordings[END_REF]. Both issues can be avoided by using multiple channels.
A well-known multichannel technique for extraction of fECG is blind source separation (BSS) using independent component analysis (ICA), which has been shown to be more accurate and robust as compared to similar approaches [START_REF] Zarzoso | Noninvasive fetal electrocardiogram extraction: blind separation versus adaptive noise cancellation[END_REF]. However, a basic limitation in conventional ICA is that the performance highly degrades in presence of full-rank Gaussian noise [START_REF] Graupe | Extracting fetal from maternal ecg for early diagnosis: theoretical problems and solutions -baf and ica[END_REF], resulting in residual mECG within the fECG. It is therefore more effective to remove the mECG before applying ICA techniques [START_REF] Sameni | A Deflation Procedure for Subspace Decomposition[END_REF].
More recently, a deflation subspace decomposition procedure, which we call denoising by deflation (DEFL), was proposed for signal subspace separation from full-rank noisy multichannel observations [START_REF] Sameni | Extraction of Fetal Cardiac Signals from an Array of Maternal Abdominal Recordings[END_REF][START_REF] Sameni | A Deflation Procedure for Subspace Decomposition[END_REF][START_REF] Sameni | Extraction of Fetal Cardiac Signals[END_REF][START_REF] Fatemi | A robust framework for noninvasive extraction of fetal electrocardiogram signals[END_REF]]. An interesting application of this framework is for mECG removal from maternal abdominal recordings [START_REF] Sameni | A Deflation Procedure for Subspace Decomposition[END_REF]. The method has resulted in very good fECG separation, especially in low signal-to-noise ratios (SNR). Yet, a limiting factor of DEFL is the offline block-wise procedure required for generalized eigenvalue decomposition (GEVD), as the core of this algorithm. This issue has been the major obstacle in using DEFL for real-time online fECG extraction.
In this work, using recent developments in online GEVD [START_REF] Zhao | Incremental Common Spatial Pattern algorithm for BCI[END_REF], an online extension of DEFL-called online denoising by deflation (ODEFL)-is introduced for eliminating the mECG from noninvasive maternal abdominal recordings . As with the offline version, the proposed method is fairly general and applicable to various scenarios depending on the prior knowledge regarding the signal and noise subspaces.
Problem definition
Electrical signals recorded from the abdomen of a pregnant woman consist of mixtures of various signals including the mECG, fECG, baseline wanders and muscle contractions considered as noise. Bio-potentials recorded at the body surface are low frequency signals compared with the high propagation velocity of the electrical signals and the sensor distances [START_REF] Geselowitz | On the Theory of the Electrocardiogram[END_REF]. Therefore, the following linear instantaneous data model has been shown to be rather realistic for modeling multichannel maternal abdominal signals [START_REF] Sameni | A Deflation Procedure for Subspace Decomposition[END_REF]:
x(t) = H m (t)s m (t) + H f (t)s f (t) + H η (t)v(t) + n(t) ∆ = x m (t) + x f (t) + η(t) + n(t) [START_REF] Sameni | A Review of Fetal ECG Signal Processing; Issues and Promising Directions[END_REF] where s m (t) is the maternal ECG source, s f (t) is the fetal ECG source and v(t) represents structured noises, such as electrode movements and muscle contractions. n(t) is full-rank measurement noise and H m (t), H f (t) and H η (t) are the transfer functions that model the propagation media from the corresponding source signals onto the body surface [START_REF] Sameni | Multichannel ECG and Noise Modeling: Application to Maternal and Fetal ECG Signals[END_REF]. In a realistic model, the cardium (of the mother and fetus) should be considered as a distributed source. Therefore, s m (t) and s f (t) are generally full-rank [START_REF] Sameni | Extraction of Fetal Cardiac Signals from an Array of Maternal Abdominal Recordings[END_REF]; but the effective number of dimensions can be relatively small (typically below six [START_REF] Sameni | What ICA Provides for ECG Processing: Application to Noninvasive Fetal ECG Extraction[END_REF]), depending on the sensor positioning and SNR.
The overall objective of noninvasive fECG extraction is to extract x f (t) from this mixture. Among the different interferences and noises, the mECG is the dominant interference, which cannot be fully separated from the fECG through conventional ICA, due to
# '">
Fig. 1 Block-wise deflation scheme, adapted from [START_REF] Sameni | A Deflation Procedure for Subspace Decomposition[END_REF] its full-rank nature, high amplitude, and background noise. This results in residual components within the extracted fECG. The DEFL algorithm was proposed to overcome this issue [START_REF] Sameni | A Deflation Procedure for Subspace Decomposition[END_REF]. Before introducing its online version, DEFL is reviewed in the following section.
Background
Denoising by deflation
The DEFL algorithm is a subspace denoising method, which removes the undesired parts of a multichannel noisy data using a sequence of linear decomposition, denoising and linear re-composition, in a block-wise manner. As shown in Fig. 1, a block of multichannel noisy data X = [x(1), . . . , x(T )] ∈ R N×T is given as input to the DEFL algorithm and a denoised block of the same dimension, namely Y = [y(1), . . . , y(T )] ∈ R N×T is obtained.
The first stage of DEFL consists of finding a suitable invertible spatial filter W ∈ R N ×N , which works as a feature enhancer for transforming X to a space in which the data is ranked from the most to least resemblance to the "desired property". In other words, in the transformed space, the SNR is improved within the first few channels, allowing better signal/noise separability for the first few channels. At the second stage, the signal and noise contents of the first L channels are separated using a suitable denoising method, which is customized per-application, according to the nature of the signals and noises. In the last stage, the residual signals and the N -L unchanged channels are back-projected to the original space. These three stages make the first iteration of the DEFL algorithm. This procedure is repeated in multiple iterations, each time over the output of the previous iteration, until all the undesired components within the data are eliminated. The number of iterations can be selected using a termination criterion that is application-dependent and measures the quality of the signal according to a desired characteristics. For instance, the periodicity measure (PM) defined in Section 6.2 can be used to indicate the portion of the maternal ECG that is removed (or retained) after each iteration, in each channel.
Each iteration of DEFL can be summarized as follows:
Y = W^{-T} G(W^T X, L)    (2)
where X is the input data block, Y is output data block, G(•, •) is the denoising operator applied to the first L channels of the input, and W is the spatial filter, as defined above.
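A minimal sketch of one such iteration is given below. It is an illustration, not the authors' implementation: W is assumed to come from a GEVD routine such as the one sketched in the next subsection, and the denoise argument stands for an arbitrary per-channel denoising operator G (here a trivial placeholder that simply zeroes the extracted components).

import numpy as np

def defl_iteration(X, W, denoise, L):
    """One DEFL iteration: project, denoise the first L channels, back-project (Eq. 2)."""
    S = W.T @ X                       # linear decomposition; channels ranked by the criterion
    S_d = S.copy()
    S_d[:L, :] = denoise(S[:L, :])    # remove the undesired (e.g., maternal ECG) content
    return np.linalg.inv(W).T @ S_d   # back-projection to the sensor space

# Toy usage with a placeholder spatial filter and a trivial "denoiser":
X = np.random.randn(8, 1000)
W = np.linalg.qr(np.random.randn(8, 8))[0]
Y = defl_iteration(X, W, denoise=lambda s: np.zeros_like(s), L=2)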
The matrix W is application-dependent . As proposed in [START_REF] Sameni | A Deflation Procedure for Subspace Decomposition[END_REF], it can be obtained by maximizing a Rayleigh quotient in a GEVD procedure. For the application of interest, periodic component analysis (πCA) [START_REF]Multichannel Electrocardiogram Decomposition using Periodic Component Analysis[END_REF] is used for estimating W.
For multichannel ECG observations x(t) ∈ R N , πCA consists of finding w ∈ R N in s(t) = w T x(t), such that the following objective function is maximized.
w^* = \arg\max_w \frac{E_t\{s(t)\, s(t+\tau_t)\}}{E_t\{s(t)^2\}} = \arg\max_w \frac{w^T C_\tau w}{w^T C w}    (3)
E_t{·} denotes averaging over time; C ≜ E_t{x(t) x^T(t)} and C_τ ≜ E_t{x(t) x^T(t + τ_t)} are the covariance and lagged covariance matrices, respectively; τ_t is a variable period calculated using the reference (here the maternal) ECG R-wave peaks, as defined in [START_REF]Multichannel Electrocardiogram Decomposition using Periodic Component Analysis[END_REF]. Estimating the matrix W in equation (3) is equivalent to solving the following GEVD problem for W ∈ R^{N×N}:
W^H C_τ W = Λ,   W^H C W = I_N    (4)
where W = [w 1 , . . . , w N ] is a matrix of generalized eigenvectors, I N is an N × N identity matrix and Λ = diag(λ 1 , . . . , λ N ) is a diagonal matrix containing the generalized eigenvalues on its diagonal. It can be shown that w * = w 1 , i.e., the eigenvector corresponding to the largest generalized eigenvalue λ 1 maximizes (3). Moreover, if C and C τ are symmetric matrices,
λ 1 ≥ λ 2 ≥ • • • ≥ λ N are
real and the components of s(t) = W T x(t) are ranked according to their resemblance with the desired (the maternal) ECG [START_REF]Multichannel Electrocardiogram Decomposition using Periodic Component Analysis[END_REF].
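In practice, W can be obtained with any generalized symmetric eigensolver. The sketch below is an illustration rather than the authors' code: it builds C and C_τ from a block of data and an array of beat-wise lags τ_t (assumed here to be given; in the paper they are derived from the maternal R-peaks), solves the GEVD with SciPy, and sorts the eigenvectors by decreasing eigenvalue so that the first component is the most maternal-ECG-like one.

import numpy as np
from scipy.linalg import eigh

def pica_weights(X, tau):
    """Blockwise periodic component analysis; X is N x T, tau[t] is the lag in samples."""
    N, T = X.shape
    t = np.arange(T)
    t_shift = np.clip(t + tau, 0, T - 1)    # crude boundary handling for the sketch
    C = (X @ X.T) / T
    C_tau = (X @ X[:, t_shift].T) / T
    C_tau = 0.5 * (C_tau + C_tau.T)         # symmetrize so eigenvalues are real
    evals, W = eigh(C_tau, C)               # solves C_tau w = lambda C w
    order = np.argsort(evals)[::-1]         # rank components by periodicity
    return W[:, order], evals[order]

# Example with a fixed (hypothetical) lag of 350 samples for every time instant:
X = np.random.randn(8, 5000)
W, lam = pica_weights(X, tau=np.full(5000, 350, dtype=int))
s = W.T @ X   # components ranked from most to least maternal-ECG-like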
An interesting property of the DEFL algorithm is that unlike most PCA and ICA denoising schemes, the data dimensionality is preserved. Moreover, due to the denoising block between the linear projection stages, it overall performs as a nonlinear filtering scheme, which can deal with full-rank and even non-additive mixtures. Apparently, the method is only applicable when prior information about the signal/noise subspaces is available and the maternal ECG is normal (pseudoperiodic). In previous studies, this algorithm has been used for various applications [START_REF] Sameni | A Deflation Procedure for Subspace Decomposition[END_REF][START_REF] Amini | MR Artifact Reduction in the Simultaneous Acquisition of EEG and fMRI of Epileptic Patients[END_REF][START_REF] Gouy-Pailler | Iterative Subspace Decomposition for Ocular Artifact Removal from EEG Recordings[END_REF][START_REF] Sameni | An Iterative Subspace Denoising Algorithm for Removing Electroencephalogram Ocular Artifacts[END_REF]. Despite its vast range of applications, the block-wise nature of the algorithm has limited its application to batch processing. In this work, an online extension of DEFL is presented.
Incremental common spatial pattern
Common spatial pattern (CSP) has found vast applications in machine learning and signal processing in the recent decade. It has been widely used in biomedical applications such as brain computer interface [START_REF] Ramoser | Optimal spatial filtering of single trial EEG during imagined hand movement[END_REF]. From an algebraic viewpoint, CSP consists of finding a matrix W, which jointly diagonalizes two matrices (R l and R c ) using GEVD.
An online extension of CSP, known as incremental common spatial pattern (ICSP), has also been developed for time-varying matrices R l (t) and R c (t) [START_REF] Zhao | Incremental Common Spatial Pattern algorithm for BCI[END_REF]. In ICSP, the sample-wise update of the first spatial pattern is as follows:
w_1(t) = \frac{w_1^T(t-1)\, R_c(t)\, w_1(t-1)}{w_1^T(t-1)\, R_l(t)\, w_1(t-1)}\; R_c^{-1}(t)\, R_l(t)\, w_1(t-1)    (5)
The minor patterns are found by repeating (5), after applying a deflation procedure on R l :
R_l ← \left( I_N - \frac{R_l\, w_1 w_1^T}{w_1^T R_l\, w_1} \right) R_l    (6)
In Section 4, this recursive update algorithm is integrated in the πCA algorithm to develop an online extension of DEFL.
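A compact sketch of this sample-wise update with deflation, following equations (5) and (6), is shown below. It is illustrative only: the unit-norm rescaling is added here for numerical stability and is not part of the equations, and R_c is assumed invertible.

import numpy as np

def icsp_update(W, R_l, R_c, n_components):
    """One incremental update of the leading generalized eigenvectors (Eqs. 5-6).
    W: N x n_components matrix of current estimates; R_l, R_c: current matrix pair."""
    R = R_l.copy()
    W_new = W.copy()
    for k in range(n_components):
        w = W_new[:, k]
        scale = (w @ R_c @ w) / (w @ R @ w)
        w = scale * np.linalg.solve(R_c, R @ w)          # Eq. (5), with R_c^{-1} applied implicitly
        W_new[:, k] = w / np.linalg.norm(w)              # rescaling added for stability in this sketch
        # Deflate R so the next vector converges to a minor component (Eq. 6):
        R = (np.eye(len(w)) - np.outer(R @ w, w) / (w @ R @ w)) @ R
    return W_new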
Method
Herein, an online extension of DEFL, which we coin as online denoising by deflation (ODEFL) is proposed for mECG cancellation. The overall block-diagram of ODEFL is summarized in Algorithm 1. In this algorithm, x(t) is the input multi-channel data, y i (t) (1 ≤ i ≤ K) is the output of each iteration, K is the number of iterations, T is the number of samples, and G i (•, L) is the denoising function for removing the undesired parts1 , applied to the first L channels in iteration i.
In Algorithm 1, unlike DEFL, which works on a block of data, ODEFL proceeds sample-by-sample in parallel units corresponding to the successive iterations of the deflation algorithm. In ODEFL, the matrix W is recursively updated from one sample to another and all stages of DEFL are repeated on a sample-wise basis in each iteration. The major stages of Algorithm 1 are detailed below.
Online estimation of covariance matrices for πCA
For an online formulation, the signal statistics contained in C and C τ , should be tracked in time. In order to re-estimate them as the signal evolves, the temporal averaging in the definitions of C and C τ can be replaced with a weighted sum as follows [START_REF] Yang | Projection approximation subspace tracking[END_REF]:
C(t) = \sum_{i=0}^{t-1} \beta^i\, x(t-i)\, x^T(t-i),   C_\tau(t) = \sum_{i=0}^{t-1} \gamma^i\, x(t-i)\, x^T(t-i+\tau_{t-i})    (7)
where β ∈ [0, 1] and γ ∈ [0, 1] are forgetting factors. This is an infinite impulse response (IIR) formulation, in which all samples in the range 1 ≤ i ≤ t contribute in estimating the covariance matrices; but with smaller weights to the older samples 2 .
The weighted sum in ( 7) can be replaced with the following recursion formulas, in favor of computational and memory efficiency:
C(t) = \beta\, C(t-1) + x(t)\,x^T(t), \qquad C_\tau(t) = \gamma\, C_\tau(t-1) + x(t)\,x^T(t+\tau_t) \qquad (8)
The forgetting factors enable the adaptation of the algorithm in stationary and non-stationary environments.
For stationary data, selecting β = γ = 1 incorporates all the samples with identical weights. For nonstationary data, the values are chosen to be less than 1, which for t ≫ 1 is similar to using a sliding window with an effective window length of 1/(1 - β) [START_REF] Yang | Projection approximation subspace tracking[END_REF].
In order to guarantee the symmetry of C and C τ (to have real generalized eigenvalues extracted by GEVD), the following update is applied after re-estimation of the second order statistics.
C(t) \leftarrow \frac{C(t) + C^T(t)}{2}, \qquad C_\tau(t) \leftarrow \frac{C_\tau(t) + C_\tau^T(t)}{2} \qquad (9)
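As a concrete illustration of (8) and (9), a minimal NumPy routine for the per-sample covariance tracking could look as follows; the lagged sample x(t + τ_t) is assumed to be provided by a one-beat buffer, as discussed later.

```python
import numpy as np

def update_covariances(C, C_tau, x_t, x_lag, beta=1.0, gamma=1.0):
    """Recursive covariance updates with forgetting factors (Eq. 8),
    followed by symmetrization (Eq. 9). x_t is x(t); x_lag is x(t + tau_t)."""
    C     = beta  * C     + np.outer(x_t, x_t)
    C_tau = gamma * C_tau + np.outer(x_t, x_lag)
    # Symmetrize so that the GEVD yields real generalized eigenvalues
    C     = 0.5 * (C + C.T)
    C_tau = 0.5 * (C_tau + C_tau.T)
    return C, C_tau
```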
Online demixing matrix update
In order to obtain an online solution for the GEVD problem in (3) and (4), the time-varying covariance matrix updates are integrated into the online update formula (5) as follows.
w_1(t) = \frac{w_1^T(t-1)\,C_\tau(t)\,w_1(t-1)}{w_1^T(t-1)\,C(t)\,w_1(t-1)}\; C_\tau^{-1}(t)\,C(t)\,w_1(t-1) \qquad (10)
where w_1(t) is the first generalized eigenvector corresponding to the largest generalized eigenvalue at time index t. As noted in Section 3.2, the other minor eigenvectors are computed in a sequential (deflation) manner [START_REF] Zhao | Incremental Common Spatial Pattern algorithm for BCI[END_REF].
As shown in Step 11 of Algorithm 1, in order to reduce the computational complexity of the matrix inversion required in (10), C_τ^{-1}(t) is recursively calculated by applying the matrix inversion lemma to the sample-wise covariance matrix update in (8).
It should be noted that since the online πCA algorithm requires the R-peak locations for calculating C_τ(t), the update of this matrix has a minimum delay of one ECG beat, which can be fixed to the longest expected mECG beat gap (e.g., 1.2 s). Therefore, the ODEFL output has a fixed delay with respect to its input (of the order of a second), which is acceptable for fECG extraction.
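The two updates of this subsection can be sketched in NumPy as follows. This is an illustrative rendering of (10) and of the rank-one (Sherman-Morrison) form of the matrix inversion lemma used in Step 11 of Algorithm 1, not the authors' reference implementation.

```python
import numpy as np

def update_principal_vector(w, C, C_tau, C_tau_inv):
    """One online GEVD step for the principal generalized eigenvector, Eq. (10)."""
    gain = (w @ C_tau @ w) / (w @ C @ w)
    return gain * (C_tau_inv @ C @ w)

def update_lagged_cov_inverse(P, u, v, gamma=1.0):
    """Recursive update of C_tau^{-1} after the rank-one update
    C_tau(t) = gamma * C_tau(t-1) + u v^T, with u = x(t) and v = x(t + tau_t).
    P is C_tau^{-1}(t-1)."""
    Pu = P @ u
    vP = v @ P
    return (P - np.outer(Pu, vP) / (gamma + v @ Pu)) / gamma
```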
Real-time implementation
The parallel structure of Algorithm 1 is specifically appealing for real-time applications. The algorithm can be efficiently implemented using reconfigurable hardware architectures, such as field-programmable gate arrays (FPGA), or using real-time processors, embedded systems or graphics processing units (GPU). For FPGA implementations, the iteration over K is converted into K parallel units (known as modules). For software implementations (e.g., using GPU), parallelization techniques such as loop unrolling can be used to implement the algorithm concurrently on K parallel processors. In either case, the iteration over time (t) is performed sample-by-sample as the data flows into the processor in real time, with a single-sample dependency on sample t - 1.
As later noted in Section 7.4, for real-time implementations (either on FPGA, embedded systems or GPU), the number of iterations K and the number of denoised channels L can be fixed to predefined values to obtain clock-wise accuracy and a constant processing load over time and processing units.
Algorithm 1 Online denoising by deflation (ODEFL)
 1: x_1(t) ← x(t)    ▷ Initialize with the input data
 2: for i = 1, . . . , K do    ▷ In each of the parallel stages of ODEFL
 3:   C_i(0) ← I_N    ▷ Initialize with identity (or random unitary) matrices
 4:   C_{τ,i}(0) ← I_N
 5:   W_i(0) = [w_{i1}(0), . . . , w_{iN}(0)] ← I_N
 6:   for t = 1, . . . , T do    ▷ Repeat for all samples of the data
 7:     C_i(t) ← β C_i(t-1) + x_i(t) x_i(t)^T    ▷ Covariance matrix update
 8:     C_{τ,i}(t) ← γ C_{τ,i}(t-1) + x_i(t) x_i(t+τ_t)^T    ▷ Lagged covariance matrix update
 9:     C_i(t) ← [C_i(t) + C_i(t)^T]/2    ▷ Covariance matrix symmetrization
10:     C_{τ,i}(t) ← [C_{τ,i}(t) + C_{τ,i}(t)^T]/2    ▷ Lagged covariance matrix symmetrization
11:     C_{τ,i}^{-1}(t) ← γ^{-1} C_{τ,i}^{-1}(t-1) - [γ^{-1} C_{τ,i}^{-1}(t-1) x_i(t) x_i(t+τ_t)^T C_{τ,i}^{-1}(t-1)] / [γ + x_i(t)^T C_{τ,i}^{-1}(t-1) x_i(t+τ_t)]    ▷ Matrix inversion lemma
12:     C̃ ← C_i(t)
13:     for j = 1, . . . , N do    ▷ Perform online GEVD of (C̃, C_{τ,i}(t)) over all channels
14:       w_{ij}(t) ← [w_{ij}^T(t-1) C_{τ,i}(t) w_{ij}(t-1)] / [w_{ij}^T(t-1) C̃ w_{ij}(t-1)] · C_{τ,i}^{-1}(t) C̃ w_{ij}(t-1)
⋮
22:     y_i(t) ← W_i^{-T}(t) s(t)    ▷ Return to original space
23:     x_{i+1}(t) ← y_i(t)    ▷ Use output as input for next stage
24:   end for
25: end for
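The following NumPy sketch shows how the pieces of Algorithm 1 can be assembled into one deflation stage advanced sample by sample. It is a schematic reading of the listing rather than the authors' code: the per-pattern normalization and the deflation of the covariance copy C̃ (steps 15-21, not shown above) are filled in by analogy with (5)-(6), and the denoiser g is assumed to be a causal, sample-wise operator (e.g., one step of a Kalman filter) acting on the first L transformed channels.

```python
import numpy as np

class ODEFLStage:
    """One deflation stage of ODEFL (illustrative sketch)."""

    def __init__(self, n_channels, L, g, beta=1.0, gamma=1.0):
        self.N, self.L, self.g = n_channels, L, g
        self.beta, self.gamma = beta, gamma
        self.C = np.eye(n_channels)
        self.C_tau = np.eye(n_channels)
        self.C_tau_inv = np.eye(n_channels)
        self.W = np.eye(n_channels)                    # columns w_1 ... w_N

    def step(self, x_t, x_lag):
        """Process one sample x(t); x_lag is x(t + tau_t) from a one-beat buffer."""
        N, L = self.N, self.L
        # Steps 7-10: covariance updates and symmetrization
        self.C = self.beta * self.C + np.outer(x_t, x_t)
        self.C_tau = self.gamma * self.C_tau + np.outer(x_t, x_lag)
        self.C = 0.5 * (self.C + self.C.T)
        self.C_tau = 0.5 * (self.C_tau + self.C_tau.T)
        # Step 11: recursive inverse of the lagged covariance (matrix inversion lemma)
        P = self.C_tau_inv
        Pu, vP = P @ x_t, x_lag @ P
        self.C_tau_inv = (P - np.outer(Pu, vP) / (self.gamma + x_lag @ Pu)) / self.gamma
        # Steps 12-14 plus assumed deflation/normalization: online GEVD of (C, C_tau)
        C_defl = self.C.copy()
        for j in range(N):
            w = self.W[:, j]
            w = (w @ self.C_tau @ w) / (w @ C_defl @ w) * (self.C_tau_inv @ C_defl @ w)
            w /= np.linalg.norm(w)                     # assumed normalization
            self.W[:, j] = w
            C_defl = (np.eye(N) - C_defl @ np.outer(w, w) / (w @ C_defl @ w)) @ C_defl
        # Transform, denoise the first L channels, and return to the original space
        s = self.W.T @ x_t
        s[:L] = self.g(s[:L])
        return np.linalg.solve(self.W.T, s)            # y(t) = W^{-T} s(t)
```

K such stages would then be chained by feeding each stage's output sample into the next one, as in steps 1 and 23 of the listing.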
Benchmark algorithms
The proposed algorithm has been evaluated on both real and synthetic data and compared with the blockwise DEFL [START_REF] Sameni | A Deflation Procedure for Subspace Decomposition[END_REF], single-channel ECG Kalman Filter [START_REF] Sameni | Extraction of Fetal Cardiac Signals from an Array of Maternal Abdominal Recordings[END_REF][START_REF] Sameni | Modelbased Bayesian filtering of cardiac contaminants from biomedical recordings[END_REF][START_REF] Sameni | A Nonlinear Bayesian Filtering Framework for ECG Denoising[END_REF], standard ANC [START_REF] Widrow | Adaptive Noise Cancelling: Principles and Applications[END_REF], a modified multistage ANC [START_REF] Swarnalath | A Novel Technique for Extraction of FECG using Multi Stage Adaptive Filtering[END_REF], template subtraction [START_REF] Martens | A robust fetal ECG detection method for abdominal recordings[END_REF], ICA denoising [START_REF] Zarzoso | Noninvasive fetal electrocardiogram extraction: blind separation versus adaptive noise cancellation[END_REF] and a single-channel wavelet denoiser. In this section, the details of the benchmark methods used for evaluation are reviewed.
Kalman filter
The Kalman filter (KF) and its nonlinear version, the extended Kalman filter (EKF), are methods for estimating the hidden states of a system, given its dynamics and a set of observations. In the past decade, this filter has been adapted for estimating ECG signals from noisy measurements, among other applications [START_REF] Sameni | Extraction of Fetal Cardiac Signals from an Array of Maternal Abdominal Recordings[END_REF][START_REF] Sameni | Modelbased Bayesian filtering of cardiac contaminants from biomedical recordings[END_REF][START_REF] Sameni | A Nonlinear Bayesian Filtering Framework for ECG Denoising[END_REF]. In summary, using a polar extension of the morphological ECG model proposed by McSharry et al. [START_REF] Mcsharry | A Dynamic Model for Generating Synthetic Electrocardiogram Signals[END_REF], the following state-space and observation models have been used as the ECG dynamic model [START_REF] Sameni | Extraction of Fetal Cardiac Signals from an Array of Maternal Abdominal Recordings[END_REF][START_REF] Sameni | Modelbased Bayesian filtering of cardiac contaminants from biomedical recordings[END_REF][START_REF] Sameni | A Nonlinear Bayesian Filtering Framework for ECG Denoising[END_REF]:
\theta_{k+1} = (\theta_k + \omega\delta) \bmod 2\pi, \qquad z_{k+1} = z_k - \sum_i \delta\,\frac{\alpha_i\,\omega}{b_i^2}\,\Delta\theta_i \exp\!\left[-\frac{\Delta\theta_i^2}{2 b_i^2}\right] + \eta \qquad (11)

s_k = z_k + v_k, \qquad \phi_k = \theta_k + u_k \qquad (12)
where Δθ_i = (θ_k - θ_i) mod 2π, δ is the sampling period, η is additive noise, and the summation is taken over a finite number of Gaussian waveforms used for modeling the P, Q, R, S and T waves, with amplitude, center and width parameters α_i, θ_i and b_i, respectively. The variable z_k, the amplitude of the noiseless ECG at time instant k, and θ_k (known as the cardiac phase) are the state variables of this model. The parameters θ_i, ω, α_i, b_i and η are i.i.d. Gaussian random variables considered as process noise. In the observation equations, s_k and φ_k are the amplitude and phase of the noisy observed ECG, and v_k and u_k are the observation noises of the ECG and its phase. Using an EKF, the ECG signal z_k can be estimated from the background noise v_k [START_REF] Sameni | Modelbased Bayesian filtering of cardiac contaminants from biomedical recordings[END_REF][START_REF] Sameni | A Nonlinear Bayesian Filtering Framework for ECG Denoising[END_REF]. For our application of interest, z_k is the maternal ECG, which should be estimated and removed from the maternal abdominal sensors. Further details can be found in [START_REF] Sameni | Modelbased Bayesian filtering of cardiac contaminants from biomedical recordings[END_REF][START_REF] Sameni | A Nonlinear Bayesian Filtering Framework for ECG Denoising[END_REF]. The required source code is available online at [START_REF] Sameni | The Open-Source Electrophysiological Toolbox (OSET)[END_REF].
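As an illustration of the state equation in (11), one discrete update of the phase and amplitude states can be written as below; the parameter naming and the wrapping of the phase differences to [-π, π) are our conventions.

```python
import numpy as np

def ecg_state_update(theta, z, delta, omega, alpha, b, theta_i, eta=0.0):
    """One step of the polar ECG dynamic model, Eq. (11).
    alpha, b, theta_i: arrays of Gaussian-kernel amplitudes, widths and centers
    (P, Q, R, S, T waves); omega: angular velocity of the cardiac phase."""
    dtheta = np.mod(theta - theta_i + np.pi, 2 * np.pi) - np.pi
    z_next = z - np.sum(delta * alpha * omega / b**2 * dtheta
                        * np.exp(-dtheta**2 / (2 * b**2))) + eta
    theta_next = np.mod(theta + omega * delta, 2 * np.pi)
    return theta_next, z_next
```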
Adaptive noise cancellation
Adaptive noise cancellation (ANC) is a well-known method for online signal denoising developed by Widrow et al. [START_REF] Widrow | Adaptive Noise Cancelling: Principles and Applications[END_REF]. Standard ANC has a primary input, which is the corrupted signal, and a reference input containing noise that is correlated with the primary noise. The filter weights adapt over time to retrieve an estimate of the noise; the weight update rule depends on the chosen cost function. By subtracting the filter output (the noise estimate) from the primary input, the primary signal is estimated and the corrupted signal is denoised.
For mECG cancellation, the reference input is obtained from an mECG channel recorded directly from the maternal chest. The primary input is obtained from maternal abdominal recordings containing both maternal and fetal ECG.
For multichannel recordings, the ANC is applied to each channel separately. As discussed in [START_REF] Sameni | A Nonlinear Bayesian Filtering Framework for ECG Denoising[END_REF], the drawback of conventional ANC for ECG denoising is that the reference ECG should be morphologically similar to the contaminating ECG. However, since the ECG morphology highly depends on the lead position, the mECG contaminating the maternal abdominal leads does not necessarily resemble the chest-lead ECG morphology. As a result, the performance of this method differs widely from one channel to another, which leads to a weak overall performance across all channels compared to the other methods. Nonetheless, the method remains a well-known benchmark for mECG cancellation.
More rigorously, considering n(t) as the mECG, s(t) as the non-mECG part (fECG plus background signals), d(t) = s(t) + n(t) as the noisy observation, x(t) as the reference mECG, and w = [w_0, . . . , w_{p-1}]^T as the weight vector of length p, and using a least mean squares (LMS) algorithm, the output of the ANC is obtained from Algorithm 2.
Algorithm 2 Adaptive noise cancellation (ANC) algorithm
1: for t = 1, . . . , T do
2:   x(t) = [x(t), x(t-1), . . . , x(t-p+1)]^T
3:   n̂(t) = w^T x(t)
4:   ŝ(t) = d(t) - n̂(t)
5:   w(t+1) = w(t) + 2µ ŝ(t) x(t)
6: end for
In Algorithm 2, T is the number of data samples, and n̂(t) and ŝ(t) are estimates of the primary noise and primary signal, respectively. The parameter µ is a step size that controls the filter stability and convergence rate and should lie in the range (0, 1/λ_max), where λ_max is the largest eigenvalue of the covariance matrix R = E{x(t)x(t)^T} [28, Ch. 9].
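A compact NumPy version of Algorithm 2 is given below for reference; the boundary handling at the start of the record (zero-padding of the regressor) is our choice.

```python
import numpy as np

def lms_anc(d, x_ref, p=5, mu=1e-6):
    """LMS adaptive noise canceller (Algorithm 2).
    d: primary input (abdominal mECG + fECG + noise); x_ref: reference mECG.
    Returns the estimate of the primary signal, s_hat(t) = d(t) - n_hat(t)."""
    T = len(d)
    w = np.zeros(p)
    s_hat = np.zeros(T)
    for t in range(T):
        x_vec = x_ref[max(0, t - p + 1):t + 1][::-1]    # [x(t), x(t-1), ..., x(t-p+1)]
        x_vec = np.pad(x_vec, (0, p - len(x_vec)))      # zero-pad before the first sample
        n_hat = w @ x_vec
        s_hat[t] = d[t] - n_hat
        w = w + 2 * mu * s_hat[t] * x_vec
    return s_hat
```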
More recently, other extensions of the ANC have also been introduced for fECG extraction. One extension, used in this study for comparison, is a multistage ANC [START_REF] Swarnalath | A Novel Technique for Extraction of FECG using Multi Stage Adaptive Filtering[END_REF]. The modified ANC consists of two sequential adaptive filters, which enables the use of different adaptive algorithms, such as LMS, recursive least squares (RLS) and normalized least mean squares (NLMS), within a single filter. Another aspect of this method is that the primary and reference inputs are applied to the algorithm after a sequence of operations such as squaring and/or rescaling, to increase the robustness of the algorithm in situations where the maternal ECG in the primary input is not very similar to the reference input. Further details regarding this filtering scheme can be found in [START_REF] Swarnalath | A Novel Technique for Extraction of FECG using Multi Stage Adaptive Filtering[END_REF].
ICA-based BSS and denoising
ICA-based BSS was first used in [START_REF] Zarzoso | Noninvasive fetal electrocardiogram extraction: blind separation versus adaptive noise cancellation[END_REF] for fECG extraction from maternal abdominal sensors. This method exploits the statistical independence and spatial diversity of the sources (here the maternal and fetal heart signals plus noises), for separating fECG from other signals. In classical ICA, it is assumed that the observed signals x(t) ∈ R N are linear mixtures of N independent sources s(t) ∈ R N :
x(t) = A(t) s(t) \qquad (13)
in which the mixing matrix A(t) ∈ R^{N×N} models the propagation media and s(t) contains the source signals.
ICA methods are used to find the separating matrix B(t) such that ŝ(t) = B(t)x(t) is an estimate of the sources and Â(t) = B^{-1}(t) is an estimate of the mixing matrix. Among the different ICA algorithms, the joint diagonalization of eigenmatrices (JADE) [START_REF] Cardoso | Multidimensional independent component analysis[END_REF] is used in this work as a benchmark.
In fECG applications, due to the multidimensional nature of the sources, the source signals are categorized into sets of multichannel components, including mECG, fECG and noise subspaces, as described in the multidimensional ICA (MICA) [START_REF] Cardoso | Multidimensional independent component analysis[END_REF] and blind source subspace separation (BSSS) [START_REF] De Lathauwer | Fetal electrocardiogram extraction by blind source subspace separation[END_REF] schemes. Suppose that ŝ_f(t) = [ŝ_{f1}(t), . . . , ŝ_{fM}(t)] represents the M-dimensional fetal components and the remaining components of ŝ(t) include the mECG and noises. Accordingly, the corresponding columns of the mixing matrix are stored in Â_f(t) = [â_{f1}, . . . , â_{fM}] ∈ R^{N×M}. As a result, the contribution of the fetal signals to the observations is obtained as follows:
x̂_f(t) = Â_f(t) ŝ_f(t) \qquad (14)
in which x̂_f(t) is the extracted fECG signal in the original domain. A known drawback of conventional ICA methods is that they cannot preserve the order, sign and amplitude of the sources [START_REF] Hyvärinen | Independent Component Analysis[END_REF]. Therefore, for automatic applications, reliable source-type detection and block-wise sign/amplitude correction are required to identify and correct the fECG sources among the other extracted components. In practice, due to the rather structured morphology of the ECG, the significant amplitude of the mECG compared to the fECG, and the availability of prior information about the mECG (from maternal chest leads), the mECG signals can be systematically identified in the transformed space. In this work, we detect the mECG signals using a channel assessment criterion based on the maternal R-peaks.
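The identification and removal of the mECG subspace after ICA, as described above and in Section 7.5, can be sketched as follows; the ICA step itself (e.g., JADE) is assumed to be provided by an external routine, and the correlation-based ranking mirrors the similarity criterion used in this work.

```python
import numpy as np

def ica_mecg_removal(X, B, m_ref, L):
    """Cancel the mECG subspace after ICA (cf. Eqs. 13-14).
    X: observations (N x T); B: separating matrix from ICA; m_ref: chest-lead
    mECG reference of length T; L: number of mECG dimensions to remove."""
    S = B @ X                                   # estimated sources, s_hat(t) = B x(t)
    A = np.linalg.inv(B)                        # estimated mixing matrix
    corr = np.array([abs(np.corrcoef(s, m_ref)[0, 1]) for s in S])
    m_idx = np.argsort(corr)[::-1][:L]          # top-L mECG-like components
    S_clean = S.copy()
    S_clean[m_idx, :] = 0.0                     # discard the mECG components
    return A @ S_clean                          # back-project the remaining components
```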
Evaluation
Both synthetic and real data are used for qualitative and quantitative evaluation of the proposed method. The details of both datasets are presented in this section.
Real data
The widely used DaISy fECG dataset, shown in Fig. 6(a), is used for evaluation [START_REF] Moor | Database for the Identification of Systems (DaISy)[END_REF]. This dataset consists of five abdominal and three thoracic channels, recorded from the abdomen and chest of a pregnant woman at a sampling rate of 250 Hz.
Synthetic ECG generation
Synthetic maternal and fetal ECG mixtures are generated using a realistic model adopted from the open-source electrophysiological toolbox (OSET) [START_REF] Sameni | Multichannel ECG and Noise Modeling: Application to Maternal and Fetal ECG Signals[END_REF][START_REF] Sameni | The Open-Source Electrophysiological Toolbox (OSET)[END_REF]:
x(t) = H_m(t)\, s_m(t) + H_f(t)\, s_f(t) + H_\eta(t)\, v(t) + n(t) \triangleq x_m(t) + x_f(t) + \eta(t) + n(t) \qquad (15)
This model is based on the single-dipole model of the heart, which assumes three geometrically orthogonal lead pairs, known as the Frank lead electrodes or the vectorcardiogram (VCG), and a linear propagation medium for the body volume conductor that maps the three dimensions to body-surface potentials using a Dower-like transformation [START_REF] Edenbrandt | Vectorcardiogram synthesized from a 12-lead ECG: Superiority of the inverse Dower matrix[END_REF]. Although the single-dipole model is only an approximation of the true cardiac activity [START_REF] Sameni | What ICA Provides for ECG Processing: Application to Noninvasive Fetal ECG Extraction[END_REF], it was found to be accurate enough for the present study, as it has all the required spatiotemporal features of the ECG.
Based on this model, we generate three-dimensional s_m(t) and s_f(t), representing the ECG signals of the maternal and fetal hearts, respectively, using a three-dimensional VCG. The ECG sources are then mapped to twelve body-surface channels using the H_m(t) and H_f(t) matrices, which model the propagation media. As a result, both maternal and fetal ECG are distributed over all body-surface ECG channels, but with only three underlying dimensions each. Realistic full-rank noise with a desired SNR is also added to the signal using the idea proposed in [START_REF] Sameni | Multichannel ECG and Noise Modeling: Application to Maternal and Fetal ECG Signals[END_REF]. Using this model, 10000 samples (20 s) of twelve-lead synthetic maternal/fetal ECG mixtures were generated at a sampling rate of 500 Hz for evaluation.
Quantitative measures
After applying the denoising procedure, various measures can be used to evaluate the effectiveness of the mECG cancellation algorithm; these are detailed below.
Signal-to-noise and signal-to-interference ratios
Following (1), consider x(t) as the noisy input observations, x f (t) as the fECG signal, x m (t) as maternal interference and η(t) + n(t) as noise for the fECG. The total interference plus noise for the fECG is
I(t) = x m (t) + η(t) + n(t) (16)
and the overall fetal signal-to-interference-plus-noise ratio (SINR) is defined [START_REF] Sameni | A Deflation Procedure for Subspace Decomposition[END_REF]:
\mathrm{SINR} \triangleq 10 \log \frac{\mathrm{tr}\left(E\{x_f(t)\,x_f^T(t)\}\right)}{\mathrm{tr}\left(E\{I(t)\,I^T(t)\}\right)} \qquad (17)
SINR can be used to quantify the data quality before denoising. For synthetic data, the SINR can be set to arbitrary ratios by scaling the mixing matrices H m (t), H f (t), H η (t) and the noise variances in (1) by appropriate factors (cf. [START_REF] Sameni | A Deflation Procedure for Subspace Decomposition[END_REF] for further details).
In order to assess the mECG cancellation quality, we additionally define the signal-plus-noise-to-interference ratio (SIR)
\mathrm{SIR} \triangleq 10 \log \frac{\mathrm{tr}\left(E\{x_s(t)\,x_s^T(t)\}\right)}{\mathrm{tr}\left(E\{\tilde{x}_m(t)\,\tilde{x}_m^T(t)\}\right)} \qquad (18)
where x_s(t) \triangleq x_f(t) + \eta(t) + n(t) is the sum of all non-mECG components, which we call the mECG complement, and \tilde{x}_m(t) is the mECG (noise) residue in the mECG canceller's output:
\tilde{x}_m(t) = y(t) - x_s(t) \qquad (19)
and y(t) denotes the denoised signal. Since the objective of the proposed method is to remove the mECG, an ideal mECG canceller would yield y(t) = x_s(t). In the results presented later, the SIR improvement is defined as the output SIR minus the input SIR, in dB; it is therefore a measure of mECG cancellation in dB.
Periodicity measure
The most dominant characteristic of the ECG is its pseudo-periodicity. We define the ECG periodicity measure (PM) as follows
\mathrm{PM} \triangleq \frac{\mathrm{tr}\left(E\{y(t)\,y^T(t+\tau_t)\}\right)}{\mathrm{tr}\left(E\{y(t)\,y^T(t)\}\right)} \times 100 \qquad (20)
The PM measures the amount of periodicity of the denoised data with respect to the period of a reference ECG. By definition, 0 ≤ PM ≤ 100 (PM = 0 for fully aperiodic signals and PM = 100 for a fully periodic signal). When computed with respect to the maternal beats, the PM indicates the amount of mECG components that still exist in the output of the denoiser. It should be noted that a reduction of the PM is only a necessary, but not sufficient, indicator of the algorithm's success, since the PM might also decrease due to increased noise or at the cost of losing the fECG. Therefore, a complementary measure is required to ensure the fidelity of the remaining components. This measure is proposed next.
Similarity measure
The similarity measure (SM) is defined as a complement for the PM:
\mathrm{SM} \triangleq \frac{\mathrm{tr}\left(E\{y(t)\,x_s^T(t)\}\right)}{\sqrt{\mathrm{tr}\left(E\{y(t)\,y^T(t)\}\right)\,\mathrm{tr}\left(E\{x_s(t)\,x_s^T(t)\}\right)}} \qquad (21)
SM is the correlation coefficient between the denoised data and the original signal components, x s (t). By definition 0 ≤ SM ≤ 1. A SM value close to 1 indicates that the algorithm has preserved the non-mECG components (including the fECG) in its output.
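For reference, the three measures can be estimated from finite records as follows, replacing expectations by sample averages over channel-by-time arrays; the boundary handling of the lag index is a simplification on our part.

```python
import numpy as np

def sir_db(signal, interference):
    """Generic trace-ratio in dB, usable for the SINR (Eq. 17) and SIR (Eq. 18)."""
    return 10 * np.log10(np.sum(signal**2) / np.sum(interference**2))

def periodicity_measure(y, tau):
    """Eq. (20): beat-synchronous lag correlation of y, in percent.
    tau: per-sample lags (in samples) to the corresponding point of the next beat."""
    T = y.shape[1]
    idx = np.arange(T)
    lag_idx = np.clip(idx + tau, 0, T - 1)
    return 100.0 * np.sum(y[:, idx] * y[:, lag_idx]) / np.sum(y**2)

def similarity_measure(y, x_s):
    """Eq. (21): normalized correlation between the denoised output y and the
    mECG complement x_s."""
    return np.sum(y * x_s) / np.sqrt(np.sum(y**2) * np.sum(x_s**2))
```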
Parameter selection
All the algorithms used for comparison have parameters that require optimization. The details of the parameter selection are presented in this section.
Extended Kalman filter parameters
For estimating the parameters of the Gaussian kernels used in the extended Kalman filter, the ensemble average of the mECG is extracted as a single-beat average template. Next, the parameters are estimated by fitting the ECG template with a nonlinear least-squares algorithm, using the open-source packages available in OSET [START_REF] Sameni | The Open-Source Electrophysiological Toolbox (OSET)[END_REF]. The other parameters and covariance matrices are initialized following the methods developed in [START_REF] Sameni | A Nonlinear Bayesian Filtering Framework for ECG Denoising[END_REF].
ANC parameters
The standard ANC and the modified multistage ANC are implemented using a 5-tap FIR filter (20 ms window length at a 250 Hz sampling frequency) with a step size of µ = 10^{-6}. Both parameters were selected as the optimal values found by searching over a grid of possible values at varying SINRs. The maternal ECG reference required for the ANC is selected directly from x_m(t) in (15) during the generation of the synthetic data. Since x_m(t) is a pure mECG without other noise and interferences, each of its channels can play the role of the chest-lead ECG required as the reference.
Wavelet parameters
In [START_REF] Sameni | A Nonlinear Bayesian Filtering Framework for ECG Denoising[END_REF][START_REF] Sameni | Online filtering using piecewise smoothness priors: Application to normal and abnormal electrocardiogram denoising[END_REF], a comprehensive study of more than 7000 combinations of wavelet parameters for ECG denoising has been reported. Herein, based on these studies, the Coiflet-3 mother wavelet with six levels of decomposition, the Stein's unbiased risk estimate (SURE) shrinkage rule, single-level rescaling and a soft thresholding strategy is used as the optimal setup for the wavelet-based ECG denoiser (cf. [START_REF] Sameni | A Nonlinear Bayesian Filtering Framework for ECG Denoising[END_REF][START_REF] Sameni | Online filtering using piecewise smoothness priors: Application to normal and abnormal electrocardiogram denoising[END_REF] for a detailed discussion).
DEFL and ODEFL parameters
The optimum number of iterations K, the number of channels to be denoised in each iteration L, and the strategy used for denoising are critical (and application-dependent) choices that highly influence the performance of DEFL and ODEFL. The parameter K provides the capability of eliminating full-rank and possibly nonlinearly superposed noise, which is beyond the capabilities of conventional ICA techniques. The parameter L may be considered as the effective number of dimensions of the signal and noise subspaces.
For typical software-based implementations, the parameters K and L can be dynamically optimized using signal-dependent measures calculated online. This results in variable values for these parameters depending on the signal quality and the ECG channels used during data collection. On the other hand, for clock-wise accurate software implementations (e.g. real-time embedded systems) or parallel hardware implementations (e.g. using FPGA), fixed values of K and L are preferred.
The denoising function G(•, •), used for signal and noise subspace separation, also influences the overall performance of both DEFL and ODEFL. In practice, all of these parameters should be tuned according to the application.
Herein, a Monte Carlo simulation was carried out to investigate the sensitivity of the DEFL and ODEFL algorithms with respect to the denoising function and the values of L and K. The performance was investigated using 700 simulated records, generated according to the scheme in Section 6.2, at different input SINRs ranging from -35 dB to -5 dB in 5 dB steps. Fig. 2 shows the average SIR improvements versus K and L using four denoising strategies G(•, •). In the first strategy, which we call blanking DEFL, the first L channels of s(t) are simply set to zero (similar to hard thresholding in wavelet denoising). In the second strategy, wavelet denoising with the optimal parameters explained in Section 7.3 is used as the denoiser. In the third strategy, the single-channel extended Kalman filtering scheme proposed in [START_REF] Sameni | Modelbased Bayesian filtering of cardiac contaminants from biomedical recordings[END_REF][START_REF] Sameni | A Nonlinear Bayesian Filtering Framework for ECG Denoising[END_REF] is used as the denoiser. In the fourth strategy, the single-channel template subtraction technique proposed in [START_REF] Martens | A robust fetal ECG detection method for abdominal recordings[END_REF] is used as the denoiser.
The results of optimizing the parameters of all methods are shown in Fig. 3. In Fig. 3(a), the SIR improvement versus the input SINR is shown for the best values of the K and L parameters. In Fig. 3(b), the SIR improvement averaged over all values of K and L in the studied range is shown versus the input SINR.
According to Figs. 2 and 3(a), by setting appropriate values for L and K, blanking DEFL performs better than the wavelet, template subtraction and Kalman denoising strategies. This is due to the fact that once the signal-space dimensions are identified, the algorithm completely removes all the noise-space dimensions while leaving the signal unchanged. In practice, the appropriate value of K can be estimated using a termination criterion such as the PM criterion. The optimal value of L can also be calculated using related methods for estimating the signal/noise dimensionality [START_REF] Nadakuditi | Sample eigenvalue based detection of high-dimensional signals in white noise using relatively few samples[END_REF][START_REF] Lee | Nonlinear Dimensionality Reduction[END_REF]. For non-stationary data, K and L can also be updated in time 3 . According to Fig. 3, although blanking DEFL performs best with suitable parameters, it is sensitive to the proper choice of K and L, and its performance degrades strongly for inappropriate parameters. On the other hand, the wavelet denoiser, template subtraction and Kalman denoising strategies are more robust to the choice of parameters, since increasing K and L beyond their optimal values does not significantly degrade the SIR improvements. As a result, using denoising methods such as wavelets, template subtraction or the Kalman filter within DEFL, instead of blanking DEFL, is more appropriate in practice.
From Fig. 3 it is also seen that the Kalman filter outperforms the template subtraction and wavelet denoisers.

The other parameters of ODEFL are the forgetting factors β and γ. These factors should be chosen within the range [0, 1] according to the degree of (non-)stationarity of the data. In the studied database, the ECG signal and noise were both stationary. Hence, we chose β = γ = 1, i.e., the algorithm does not forget the old samples.
ICA denoising parameters
The free parameter in ICA denoising is the number of mECG components (the effective number of mECG dimensions) that should be removed after the source separation stage. For synthetic data, according to our prior knowledge, s_m(t) is three-dimensional; therefore, we set L = 3. For real data, this choice was also empirically found to be optimal for the studied dataset, in order to eliminate the most dominant components of the mECG. In general, the number of mECG channels can be adaptively obtained during the denoising process by morphological similarity (the PM measure defined in (20)), or by using the notion of the effective number of dimensions [START_REF] Sameni | An Iterative Subspace Denoising Algorithm for Removing Electroencephalogram Ocular Artifacts[END_REF]. In this work, the mECG identification for both real and synthetic data is accomplished by computing the similarity measure defined in (21) between the maternal reference signal (chest-lead ECG) and the different source channels extracted by ICA. The top L channels with the highest correlations are selected as the mECG components. These channels are set to zero and the remaining channels are back-projected to the original subspace. This strategy is rather similar to a single stage of the DEFL algorithm.
Results
Simulated data
The simulated data generation procedure was discussed in Section 6.2. For visual inspection, a typical 20 s synthetic ECG with an SINR of -20 dB, along with the corresponding denoised output (after mECG removal), is shown in Fig. 4. It can be seen that the mECG is distributed over all the simulated channels. The denoised output indicates that the maternal ECG is removed in almost all channels without affecting the fetal ECG. The first 500 samples (1 s) of the denoised data show the transient effect of the filter; the filter reaches steady state after this period.
For a quantitative evaluation, the proposed algorithm was compared with the benchmark methods using 1000 different ensembles of simulated data and noise at different input SINRs. The average and standard deviation of the SIR improvement, PM and SM are shown in Fig. 5. Accordingly, DEFL performs best, but is only slightly better than ODEFL. The advantage of DEFL over ODEFL is expected, due to the offline (non-causal) and exact calculation of the covariance matrices in DEFL. However, the difference is negligible compared to the advantages of ODEFL for online and nonstationary applications. As shown, DEFL and ODEFL, which are based on prior knowledge of the ECG periodicity, outperform ICA. This is because DEFL and ODEFL can deal with situations in which the ICA assumptions are not satisfied. In fact, ICA algorithms, despite their broad and effective applications, have some intrinsic ambiguities due to their simplifying assumptions. Typically, it is assumed that the number of independent sources is fixed and equal to the number of sensors, and that the mixture is instantaneous and time-invariant. However, these assumptions are not necessarily satisfied in practice. As a result, the performance of ICA degrades in the presence of full-rank Gaussian noise and correlated/distributed sources [START_REF] Fatemi | A robust framework for noninvasive extraction of fetal electrocardiogram signals[END_REF], resulting in residual mECG within the fECG. Moreover, the ranking property of DEFL and ODEFL (in contrast to the permutation ambiguity of ICA) helps the reliable and automatic detection of fECG/mECG signals in long recordings [START_REF] Fatemi | A robust framework for noninvasive extraction of fetal electrocardiogram signals[END_REF]; for ICA, robust source identification methods are necessary to identify the mECG among the other components.
Overall, DEFL, ODEFL and ICA denoising outperform the other benchmarks, in both low and high SNR scenarios. This can be due to the fact that the ANC, wavelet, template subtraction and Kalman filtering schemes are all single-channel, while DEFL, ODEFL and ICA benefit from the spatial information within multiple channels to obtain higher SNR.
Among the single-channel methods, the performance of the Kalman filter and template subtraction is similar in low SNR and better than the other single-channel methods, while in high SNR the Kalman filter is superior. The reason is that, depending on the signal quality, the Kalman filter dynamically tends towards either the observations or the system's prior dynamics; i.e., when the data is too noisy, the Kalman filter tracks the prior dynamic model rather than relying on the observations. Therefore, in low SNR, the Kalman filter performance is identical to template subtraction, while in high SNR the Kalman filter benefits from the information within the observations, making it better than template subtraction.

For the real data, an overall periodicity measure (OPM) is computed from the maternal and fetal PM (mPM and fPM, respectively). Accordingly, -100 ≤ OPM ≤ 100, where higher values of OPM indicate the algorithm's success in simultaneously removing the mECG and preserving the fECG. The average and standard deviation of the mPM and OPM are shown in Fig. 8 for the proposed and benchmark methods. The results on real data follow the same trend and order as the synthetic data results. The only exception is the ICA denoiser, which gives inferior results for real data. This might be due to the fact that, for real noisy data, mECG identification and the estimation of L are difficult, resulting in degraded performance.
Conclusion
In this paper, an online version of the iterative subspace denoising procedure proposed in [START_REF] Sameni | A Deflation Procedure for Subspace Decomposition[END_REF] was presented for removing the maternal ECG from noninvasive signals recorded from the abdomen of a pregnant woman. The proposed method is rather generic and may be applied to other blind and semi-blind source separation applications in which the signal and noise mixtures are not separable using conventional source separation and denoising techniques. It was shown that the proposed method outperforms state-of-the-art single-channel denoising techniques, while performing almost as well as its offline version. It was further shown that the DEFL and ODEFL algorithms, which are based on the GEVD of only two second-order matrices, outperform classical ICA, which uses more than two matrices containing higher-order statistics of the observations. This advantage can be related to the fact that DEFL and ODEFL can deal with situations in which some of the underlying assumptions of ICA are not satisfied. Moreover, DEFL and ODEFL benefit from the ranking property of GEVD for mECG detection, while ICA suffers from permutation and sign ambiguities, which require a robust mECG identifier. As a result, the proposed method is less complicated and more reliable for long datasets compared with batch ICA techniques. The performance of ODEFL was investigated with different sets of parameters using different denoising strategies, including simple blanking, wavelet denoising, template subtraction and Kalman denoising.
According to the results presented here and the earlier experiments reported in [START_REF] Sameni | A Nonlinear Bayesian Filtering Framework for ECG Denoising[END_REF], we conclude that for single-channel data the Kalman filter outperforms the other ECG denoising schemes in different SINR scenarios, while the DEFL and ODEFL algorithms are better for multichannel data, as they use inter-channel correlations without requiring the mixing matrix of the data. Therefore, in future studies, the combination of the Kalman denoiser and ODEFL may yield superior results. Introducing an online method for the automatic calculation of the algorithm parameters L, K, β and γ is also an interesting extension of the current work, which was partially studied in [START_REF] Sameni | An Iterative Subspace Denoising Algorithm for Removing Electroencephalogram Ocular Artifacts[END_REF] but requires further investigation.
The performance of ODEFL is influenced by several parameters including the method used for online GEVD. In future studies, other online GEVD algorithms can be compared with incremental common spatial pattern, used in this work. Moreover, theoretical aspects of online GEVD and the convergence of DEFL and ODEFL should also be considered. A symmetric extension of the method for avoiding the problems of sequential source separation and error propagation is also interesting for practical applications.
In recent studies, the problem of fetal motion tracking using noninvasive ECG recordings has attracted significant interest [START_REF] Biglari | Fetal motion estimation from noninvasive cardiac signal recordings[END_REF]. In future studies, the techniques proposed here can be combined with these developments to obtain a unified fetal ECG extraction and motion tracking algorithm.
Fig. 2 Sensitivity of the SIR improvement versus the K and L parameters in four denoising schemes
Fig. 3 SIR improvement versus SINR using four denoising strategies
Fig. 5 (Top) SIR improvement, (middle) PM and (bottom) SM versus input SINR. The PM and SM curves of DEFL and ODEFL overlap.
Fig. 7 A typical data segment before (gray plots) and after (black plots) mECG cancellation. The mECG is completely removed by the DEFL and ODEFL methods with minimal effect on the fECG.
Fig. 8 Output maternal PM and overall PM versus SNR in the presence of additive noise
Note that for mixtures of signals with different origins and temporal characteristics, the projection (and back-projection) algorithms and the denoising scheme can generally be customized for each iteration, which is beyond the scope of the current study.
For other applications, one might prefer a finite impulse response (FIR) form, in which the samples do not have any effect beyond a finite window length.
According to our empirical results, for ECG signals, the update should be done over long temporal windows (tens of seconds and above) rather than short windows; otherwise the performance degrades.
The low performance of the ANC, as mentioned before, can be attributed to the fact that the reference signal used in the ANC (here the chest-lead mECG) does not necessarily resemble the morphology of the mECG superposed on the abdominal leads, which significantly degrades its performance.
Real data
The results of ODEFL on the DaISy dataset are shown in Fig. 6. It is seen that after about 40 samples (160 ms) the algorithm has converged and the mECG is almost completely removed in the first channel, but it takes up to 500 samples (2 s) for all channels to converge. This is due to the sequential nature of the proposed ODEFL algorithm. Fig. 7 shows a closer view of the results over two successive ECG beats. It is seen that DEFL and ODEFL outperform the ANC, template subtraction, Kalman filter and ICA denoising. While DEFL and ODEFL have effectively removed the mECG, the other methods have left some residual mECG or removed parts of the fECG.
For numerical evaluation of the proposed method on real data, we synthetically manipulate the real DaISy abdominal signals as follows [START_REF] Sameni | A Deflation Procedure for Subspace Decomposition[END_REF]:
where x_0(t) is the original real data in Fig. 6, v(t) is Gaussian white noise, Λ = diag(λ_1, . . . , λ_N) is a diagonal matrix which controls the per-channel SNR, G ∈ R^{N×N} is an arbitrary non-singular random matrix, and x(t) is the new noisy signal.

Nikolay Vereshchagin
Alexander Shen
Algorithmic statistics: forty years later *
Statistical models
Let us start with a (very rough) scheme. Imagine an experiment that produces some bit string x. We know nothing about the device that produced this data, and cannot repeat the experiment. Still we want to suggest some statistical model that fits the data ("explains" x in a plausible way). This model is a probability distribution on some finite set of binary strings containing x. What do we expect from a reasonable model?
There are, informally speaking, two main properties of a good model. First, the model should be "simple". If a model contains so many parameters that it is more complicated than the data itself, we would not consider it seriously. To make this requirement more formal, one can use the notion of Kolmogorov complexity. 2 Let us assume that measure P (used as a model) has finite support and rational values. Then P can be considered as a finite (constructive) object, so we can speak about Kolmogorov complexity of P . The requirement then says that complexity of P should be much smaller than the complexity of the data string x itself.
For example, if a data string x contains n bits, we may consider a model that corresponds to n independent fair coin tosses, i.e., the uniform distribution P on the set of all n-bit strings. Such a distribution is a constructive object that is completely determined by the value of n, so its complexity is O(log n), while the complexity of most n-bit strings is close to n (and therefore is much larger than the complexity of P , if n is large enough).
Still this simple model looks unacceptable if, for example, the sequence x consists of n zeros, or, more generally, if the frequency of ones in x deviates significantly from 1/2, or if zeros and ones alternate. This feeling was one of the motivations for the development of algorithmic randomness notions: why some bit sequences of length n look plausible as outcomes of n fair coin tosses while other do not, while all the n-bit sequences have the same probability 2 -n according to the model? This question does not have a clear answer in the classical probability theory, but the algorithmic approach to randomness says that plausible strings should be incompressible: the complexity of such a string (the minimal length of a program producing it) should be close to its length.
This answer works for a uniform distribution on n-bit strings; for arbitrary P it should be modified. It turns out that for arbitrary P we should compare the complexity of x not with its length but with the value (-log P (x)) (all logarithms are binary); if P is the uniform distribution on n-bit strings, the value of (-log P (x)) is n for all n-bit strings x. Namely, we consider the difference between (-log P (x)) and complexity of x as randomness deficiency of x with respect to P . We discuss the exact definition in the next section, but let us note here that this approach looks natural: different data strings require different models.
Disclaimer. The scheme above is oversimplified in many aspects. First, it rarely happens that we have no a priori information about the experiment that produced the data. Second, in many cases the experiment can be repeated (the same experimental device can be used again, or a similar device can be constructed). Also we often deal with a data stream: we are more interested, say, in a good prediction of oil prices for the next month than in a construction of model that fits well the prices in the past. All these aspects are ignored in our simplistic model; still it may serve as an example for more complicated cases. One should stress also that algorithmic statistics is more theoretical than practical: one of the reasons is that complexity is a non-computable function and is defined only asymptotically, up to a bounded additive term. Still the notions and results from this theory can be useful not only as philosophical foundations of statistics but as a guidelines when comparing statistical models in practice (see, for example, [START_REF] De Rooij | Approximating Rate-Distortion Graphs of Individual Data: Experiments in Lossy Compression and Denoising[END_REF]).
More practical approach to the same question is provided by machine learning that deals with the same problem (finding a good model for some data set) in the "real world". Unfortunately, currently there is a big gap between the algorithmic statistics and machine learning: the first one provides nice results about mathematical models that are quite far from practice (see the discussion about "standard models" below), while machine learning is a tool that sometimes works well without any theoretical reasons. There are some attempts to close this gap (by considering models from some class or resource-bounded versions of the notions), but much more remains to be done.
A historical remark. The principles of algorithmic statistics are often traced back to Occam's razor, often stated as "Don't multiply postulations beyond necessity" or in a similar way. Poincaré writes in his Science and Method (Chapter 1, The choice of facts) that "this economy of thought, this economy of effort, which is, according to Mach, the constant tendency of science, is at the same time a source of beauty and a practical advantage". Still, the mathematical analysis of these ideas became possible only after a definition of algorithmic complexity was given in the 1960s (by Solomonoff, Kolmogorov and then Chaitin): after that the connection between randomness and incompressibility (high complexity) became clear. The formal definition of (α, β)-stochasticity (see the next section) was given by Kolmogorov (the authors learned it from his talk given in 1981 [START_REF] Kolmogorov | Talk at the seminar at Moscow State University Mathematics Department (Logic Division)[END_REF], but most probably it was formulated earlier in the 1970s; the definition appeared in print in [START_REF] Шень | Понятие (α, β)-стохастичности по Колмогорову и его свойства[END_REF]). For other related approaches (the notions of logical depth and sophistication, the minimum description length principle) see the discussion in the corresponding sections (see also [START_REF] Li | An Introduction to Kolmogorov Complexity and its Applications[END_REF], Chapter 5).

2 (α, β)-stochasticity
Prefix complexity, a priori probability and randomness deficiency
Preparing for the precise definition of (α, β)-stochasticity, we need to fix the version of complexity used in this definition. There are several versions (plain and prefix complexities, different types of conditions), see [START_REF] Верещагин | Колмогоровская сложность и алгоритмическая случайность[END_REF], Chapter 6. For most of the results the choice between these versions is not important, since the difference between the different versions is small (at most O(log n) for strings of length n), and we usually allow errors of logarithmic size in the statements. We will use the notion of conditional prefix complexity, usually denoted by K(x|c). Here x and c are finite objects; we measure the complexity of x when c is given. This complexity is defined as the length of the minimal prefix-free program that, given c, computes x. 3 The advantage of this definition is that it has an equivalent formulation in terms of a priori probability [START_REF] Верещагин | Колмогоровская сложность и алгоритмическая случайность[END_REF], Chapter 4: if m(x|c) is the conditional a priori probability, i.e., the maximal lower semicomputable function of two arguments x and c such that \sum_x m(x|c) \le 1 for every c, then
K(x|c) = -log m(x|c) + O(1).
In particular, if a probability distribution P with finite support and rational values (we consider only distributions of this type) is considered as a condition, we may compare m with the function (x, P) ↦ P(x) and conclude that m(x|P) ≥ P(x) up to an O(1)-factor, so K(x|P) ≤ -log P(x). So if we define the randomness deficiency as

d(x|P) = -log P(x) - K(x|P),
we get a non-negative (up to an O(1) additive term) function. One may also explain in a different way why K(x|P) ≤ -log P(x): this inequality is a reformulation of a standard result from information theory (Shannon-Fano code, Kraft inequality). Why do we define the deficiency in this way? The following proposition provides some additional motivation.

Proposition 1. The function d(x|P) = -log P(x) - K(x|P) is, up to an O(1) additive term, the maximal lower semicomputable function of x and P such that

\sum_x 2^{d(x|P)}\, P(x) \le 1 \qquad (*)

for every P.
Here x is a binary string, and P is a probability distribution on binary strings with finite support and rational values. By lower semicomputable functions we mean functions that can be approximated from below by some algorithm (given x and P, the algorithm produces an increasing sequence of rational numbers that converges to d(x|P); no bounds for the convergence speed are required). Then, for a given P, the function x ↦ 2^{d(x|P)} can be considered as a random variable on the probability space with distribution P. The requirement (*) says that its expectation is at most 1. In this way we guarantee (by Markov's inequality) that only a P-small fraction of strings have large deficiency: the P-probability of the event d(x|P) > c is at most 2^{-c}. It turns out that there exists a maximal function d satisfying (*) up to an O(1) additive term, and our formula gives the expression for this function in terms of prefix complexity.
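For the reader's convenience, the Markov-inequality step behind this bound can be spelled out:

\[
P\{x : d(x|P) \ge c\} \;=\; \sum_{x\,:\;2^{d(x|P)} \ge 2^{c}} P(x) \;\le\; 2^{-c} \sum_x 2^{d(x|P)}\, P(x) \;\le\; 2^{-c}.
\]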
Proof. The proof uses standard arguments from Kolmogorov complexity theory. The function K(x|P) is upper semicomputable, so d(x|P) is lower semicomputable. We can also note that \sum_x 2^{d(x|P)} P(x) = \sum_x 2^{-K(x|P)} \le \sum_x m(x|P) \le 1, so d satisfies (*). On the other hand, if d' is any lower semicomputable function satisfying (*), then (x, P) ↦ 2^{d'(x|P)} P(x) is a lower semicomputable function whose sum over x is at most 1 for every P, so it is bounded by m(x|P) up to an O(1)-factor; taking logarithms, d'(x|P) ≤ -log P(x) - K(x|P) + O(1).

For the case where P is the uniform distribution on n-bit strings, using P as a condition is equivalent to using n as the condition, so
d(x|P ) = n -K(x|n)
in this case, and small deficiency means that complexity K(x|n) is close to the length n, so x is incompressible. 4
Definition of stochasticity
Definition 1. A string x is called (α, β)-stochastic if there exists some probability distribution P (with rational values and finite support) such that K(P) ≤ α and d(x|P) ≤ β.
By definition, every (α, β)-stochastic string is (α', β')-stochastic for every α' ≥ α and β' ≥ β. Sometimes we say informally that a string is "stochastic", meaning that it is (α, β)-stochastic for some reasonably small thresholds α and β (for example, one can consider α, β = O(log n) for n-bit strings).
Let us start with some simple remarks.
• Every simple string is stochastic. Indeed, if P is concentrated on x (singleton support), then K(P) = K(x) and d(x|P) = 0 (in both cases with O(1) precision), so x is always (K(x) + O(1), O(1))-stochastic.
• On the other end of the spectrum: if P is the uniform distribution on n-bit strings, then K(P) = O(log n), and most strings of length n have d(x|P) = O(1), so most strings of length n are (O(log n), O(1))-stochastic. The same distribution also witnesses that every n-bit string is (O(log n), n + O(1))-stochastic.

4 Initially Kolmogorov suggested considering n - C(x) as the "randomness deficiency" in this case, where C stands for the plain (not prefix) complexity. One may also consider n - C(x|n). But all three deficiency functions mentioned are close to each other for strings x of length n; one can show that the difference between them is bounded by O(log d), where d is any of these three functions. The proof works by comparing the expectation and probability-bounded characterizations, as explained in [START_REF] Bienvenu | Algorithmic tests and randomness with respect to a class of measures[END_REF].
• It is easy to construct stochastic strings that are between these two extreme cases. Let x be an incompressible string of length n. Consider the string x0^n (the first half is x, the second half is the all-zero string). It is (O(log n), O(1))-stochastic: let P be the uniform distribution on all strings of length 2n whose second half contains only zeros.
• For every distribution P (with finite support and rational values, as usual), random sampling according to P gives a (K(P), c)-stochastic string with probability at least 1 - 2^{-c}. Indeed, the probability of getting a string with deficiency greater than c is at most 2^{-c} (Markov's inequality, see above).
After these observations one may ask whether non-stochastic strings exist at all and how they can be constructed. A non-stochastic string should have non-negligible complexity (our first observation), but a standard way to get strings of high complexity, by coin tossing or another random experiment, can give only stochastic strings (our last observation).
We will see that non-stochastic strings do exist in the mathematical sense; however, the question of whether they appear in the "real world" is philosophical. We will discuss both questions soon, but let us start with some mathematical results.
First of all let us note that with logarithmic precision we may restrict ourselves to uniform distributions on finite sets.

Proposition 2. Let x be an (α, β)-stochastic string of length n. Then there exists a finite set A containing x such that K(A) ≤ α + O(log n) and d(x|U_A) ≤ β + O(log n), where U_A is the uniform distribution on A.
Since K(A) = K(U_A) (with O(1)-precision, as usual), this proposition means that we may consider only uniform distributions in the definition of stochasticity and get an equivalent (up to a logarithmic change in the parameters) definition. According to this modified definition, a string x is (α, β)-stochastic if there exists a finite set A such that K(A) ≤ α and d(x|A) ≤ β, where d(x|A) is now defined as log #A - K(x|A). Kolmogorov originally proposed the definition in this form (but used plain complexity).
Proof. Let P be the (finite) distribution that exists due to the definition of (α, β)-stochasticity of x. We may assume without loss of generality that β ≤ n (as we have seen, all strings of length n are (O(log n), n + O(1))-stochastic, so for β > n the statement is trivial). Consider the set A formed by all strings that have sufficiently large P-probability. Namely, let us choose the minimal k such that 2^{-k} ≤ P(x) and consider the set A of all strings y such that P(y) ≥ 2^{-k}. By construction A contains x. The size of A is at most 2^k, and -log P(x) = k with O(1)-precision. According to our assumption, d(x|P) = k - K(x|P) ≤ n, so k = d(x|P) + K(x|P) = O(n). Then

K(x|A) ≥ K(x|P, k) ≥ K(x|P) - O(log n),

K(A) ≤ K(P, k) ≤ K(P) + O(log n)
for the same reasons.
Remark 1. A similar argument can be applied if P is a computable distribution (possibly with infinite support) computed by some program p, and we require K(p) ≤ α and -log P(x) - K(x|p) ≤ β. So in this way we also get the same notion (with logarithmic precision). It is important, however, that the program p computes the distribution P (given some point x and some precision ε > 0, it computes the probability of x with error at most ε). It is not enough for P to be the output distribution of a randomized algorithm p (in this case P is called the semimeasure lower semicomputed by p; note that the sum of probabilities may be strictly less than 1, since the computation may diverge with positive probability). Similarly, it is very important in the version with finite sets A (and uniform distributions on them) that the set A is considered as a finite object: A is simple if there is a short program that prints the list of all elements of A. If we allowed the set A to be presented by an algorithm that enumerates A (but never says explicitly that no more elements will appear), then the situation would change drastically: for every string x of complexity k, the finite set S_k of strings that have complexity at most k would be a good explanation for x, so all objects would become stochastic.
Stochasticity conservation
We have defined stochasticity for binary strings. However, the same definition can be used for arbitrary finite (constructive) objects: pairs of strings, tuples of strings, finite sets of strings, graphs, etc. Indeed, complexity can be defined for all these objects as the complexity of their encodings; note that the difference in complexities for different encodings is at most O(1). The same can be done for finite sets of these objects (or probability distributions), so the definition of (α, β)-stochasticity makes sense.
One can also note that a computable bijection preserves stochasticity (up to a constant that depends on the bijection, but not on the object). In fact, a stronger statement is true: every total computable mapping preserves stochasticity. For example, consider a stochastic pair of strings (x, y). Does it imply that x (or y) is stochastic? It is indeed the case: if P is a distribution on pairs that is a reasonable model for (x, y), then its projection (the marginal distribution on the first components) should be a reasonable model for x. In fact, projection can be replaced by any total computable mapping.

Proposition 3. Let F be a total computable mapping whose arguments and values are strings. If
x is (α, β)-stochastic, then F (x) is (α + O(1), β + O(1))-stochastic.
Here the constant in O(1) depends on F but not on x, α, β.
Proof. Let P be the distribution such that K(P) ≤ α and d(x|P) ≤ β; it exists according to the definition of stochasticity. Let Q = F(P) be the image distribution. In other words, if ξ is a random variable with distribution P, then F(ξ) has distribution Q. It is easy to see that K(Q) ≤ K(P) + O(1), where the constant depends only on F. Indeed, Q is determined by P and F in a computable way. It remains to show that d(F(x)|Q) ≤ d(x|P) + O(1).
The easiest way to show this is to recall the characterization of deficiency as the maximal lower semicomputable function such that u 2 d(u|S) S(u) 1 for every distribution S. We may consider another function d defined as
d (u|S) = d(F (u)|F (S)) It is easy to see that u 2 d (u|S) S(u) = u 2 d(F (u)|F (S)) S(u) = v 2 d(v |F (S)) • [F (S)](v) 1
(in the second equality we group all the values of u with the same v = F (u)). Therefore the maximality of d guarantees that d (u|S) d(u|S) + O(1), so we get the required inequality.
This proof can be also rephrased using the definition of stochasticity with a priori probability. We need to show that for y = P (x) and Q = F (P ) we have
m(y |Q) Q(y) O(1) • m(x|P ) P (x) or m(F (x)|F (P )) • P (x) Q(F (x)) O(m(x|P )).
It remains to note that the left hand side is a lower semicomputable function of x and P whose sum over all x (for every P ) is at most 1. Indeed, if we group all terms with the same F (x), we get the sum y m(y |F (P )) 1, since the sum of P (x) over all x with F (x) = y equals Q(y).
Remark 2. In this proof it is important that we use the definition with distributions.
If we replace is with the definition with finite sets, the results remains true with logarithmic precision, but the argument becomes more complicated, since the image of the uniform distribution may not be a uniform distribution. So if a set A is a good model for x, we should not use F (A) as a model for F (x). Instead, we should look at the maximal k such that 2 k #F -1 (y), and consider the set of all y that have at least 2 k preimages in A.
Remark 3. It is important in Proposition 3 that F is a total function. If x is some non-stochastic object and x * is the shortest program for x, then x * is incompressible and therefore stochastic. Still the interpreter (decompressor) maps x * to x. We discuss the case of non-total F below, see Section 5.4. where O(1)-constant does not depend on F anymore.
Non-stochastic objects
Note that up to now we have not shown that non-stochastic objects exist at all. It is easy to show that they exist for rather large values of α and β (linearly growing with n).
Proposition 4 ([35]
). For some c and all n:
(1) if α + 2β < nc log n, then there exist n-bit strings that are not (α, β)stochastic;
(2) however, if α + β > n + c log n, then every n-bit string is (α, β)-stochastic.
Note that the term c log n allows us to use the definition with finite sets (i.e., uniform distributions on finite sets) instead of arbitrary finite distributions, since both versions are equivalent with O(log n)-precision.
Proof. The second part is obvious (and is added just for comparison): if α + β = n, then all n-bit strings can be split into 2 α groups of size 2 β each. Then the complexity of each group is α + O(log n), and the randomness deficiency of every string in the corresponding group is at most β + O(1). It is slightly bigger than the bounds we need, but we have reserve c log n, and α and β can be decreased, say, by (c/2) log n before using this argument.
The first part: Consider all finite sets A of strings that have complexity at most α and size at most 2 α+β . Since α + (α + β) < n, they cannot cover all n-bit strings. Consider then the first (say, in the lexicographical order) n-bit string u not covered by any of these sets. What is the complexity of u? To specify u, it is enough to give n, α, β and the program of size at most α (from the definition of Kolmogorov complexity) that has maximal running time among programs of that size. Then we can wait until this program terminates and look at the outputs of all programs of size at most α after the same number of steps, select sets of strings of size at most α + β, and take the first u not covered by these sets. So the complexity of u is at most α + O(log n) (the last term is needed to specify n, α, β). The same is true for conditional complexity with arbitrary condition, since it is bounded by the unconditional complexity. So the randomness deficiency of u in every set A of size 2 α+β is at least β -O(log n). We see that u is not (α, β -O(log n))-stochastic. Again the O(log n)-term can be compensated by O(log n)-change in β (we have c log n reserve for that).
Remark 5. There is a gap between lower and upper bounds provided by Proposition 4. As we will see later, the upper bound (2) is tight with O(log n)-precision, but we need more advanced technique (properties of two-part descriptions, Section 3) to prove this.
Proposition 4 shows that non-stochastic objects exist for rather large values of α and β (proportional to n). This, of course, is a mathematical existence result; it does not say anything about the possibility to observe non-stochastic objects in the "real world". As we have discussed, random sampling (from a simple distribution) may produce a non-stochastic object only with a negligible probability; total algorithmic transformations (defined by programs of small complexity) also cannot not create non-stochastic object from stochastic ones. What about non-total algorithmic transformations? As we have discussed in Remark 3, a non-total computable transformation may transform a stochastic object into a non-stochastic one, but does it happen with non-negligible probability?
Consider a randomized algorithm that outputs some string. It can be considered as a deterministic algorithm applied to random bit sequence (generated by the in-ternal coin of the algorithm). This deterministic algorithm may be non-total, so we cannot apply the previous result. Still, as the following result shows, randomized algorithms also generate non-stochastic objects only with small probability.
To make this statement formal, we consider the sum of m(x) over all nonstochastic x of length n. Since the a priori probability m(x) is the upper bound for the output distribution of any randomized algorithm, this implies the same bound (up to O(1)-factor) for every randomized algorithm. The following theorem gives an upper bound for this sum: Proposition 5 (see [START_REF] Muchnik | Mathematical metaphysics of randomness[END_REF], Section 10).
{ m(x) | x is a n-bit string that is not (α, α)-stochastic } 2 -α+O(log n)
for every n and α.
Proof. Consider the sum of m(x) over all strings of length n. This sum is some real number ω 1. Let ω be the number represented by first α bits in the binary representation of ω, minus 2 -α . We may assume that α O(n), otherwise all strings of length n are (α, α)-stochastic.
Now construct a probability distribution as follows. All terms in a sum for ω are lower semicomputable, so we can enumerate increasing lower bounds for them. When the sum of these lower bounds exceeds ω, we stop and get some measure P with finite support and rational values. Note that we have a measure, not a distribution, since the sum of P (x) for all x is less than 1 (it does not exceed ω). So we normalize P (by some factor) to get a distribution P proportional to P . The complexity of P is bounded by α + O(log n) (since P is determined by ω and n). Note that the difference between P (without normalization factor) and a priori probability m (the sum of differences over all strings of length n) is bounded by O(2 -α ). It remains to show that for m-most strings the distribution P is a good model.
Let us prove that the sum of a priori probabilities of all n-bit strings x that have d(x| P ) > α + c log n is bounded by O(2 -α ), if c is large enough. Indeed, for those strings we have
-log P (x) -K(x| P ) > α + c log n.
The complexity of P is bounded by α+O(log n) and therefore K(x) exceeds K(x| P ) at most by α+O(log n), solog P (x)-K(x) > 1 (or P (x) < m(x)/2) for those strings, if c is large enough (it should exceed the constants hidden in O(log n) notation). The difference 1 is enough for the estimate below, but we could have arbitrary constant or even logarithmic difference by choosing larger value of c.
Prefix complexity can be defined in terms of a priori probability, so we get log(m(x)/ P (x)) > 1 for all x that have deficiency exceeding α + c log n with respect to P . The same inequality is true for P instead of P , since P is smaller. So for all those x we have P (x) < m(x)/2, or (m(x)-P (x)) > m(x)/2. Recalling that the sum of m(x)-P (x) over all x of length n does not exceed O(2 -α ) by construction of ω, we conclude that the sum of m(x) over all strings of randomness deficiency (with respect to P ) exceeding α + c log n is at most O(2 -α ).
So we have shown that the sum of m(x) for all x of length n that are not
(α + O(log n), α + O(log n))-stochastic, does not exceed O(2 -α
). This differs from our claim only by O(log n)-change in α.
Bruno Bauwens noted that this argument can be modified to obtain a stronger result where (α, α)-stochasticity is replaced by (α + O(log n), O(log n))-stochasticity. Instead of one measure P , one should consider a family of measures. Let us approximate ω and look when the approximations cross the thresholds corresponding to k first bits of the binary expansion of ω. In this way we get P = P 1 + P 2 + . . . + P α , where P i has total weight at most 2 -i , and complexity at most i + O(log n). Let us show that all strings x where P (x) is close to m(x) (say, P (x) m(x)/2) are (α + O(log n), O(log n))-stochastic, namely, one of the measures P i multiplied by 2 i is a good explanations for them. Indeed, for such x and some i the value of P i (x) coincides with m(x) up to polynomial (in n) factor, since the sum of all P i is at least m(x)/2. On the other hand, m(x|2 i P i ) 2 i m(x) ≈ 2 i P i (x), since the complexity of 2 i P i is at most i + O(log n). Therefore the ratio m(x|P i )/(2 i P i (x)) is polynomially bounded, and the model 2 i P i has deficiency O(log n). This better bound also follows from the Levin's explanation, see below.
This result shows that non-stochastic objects rarely appear as outputs of randomized algorithms. There is an explanation of this phenomenon (that goes back to Levin): non-stochastic objects provide a lot of information about halting problem, and the probability of appearance of an object that has a lot of information about some sequence α, is small (for any fixed α). We discuss this argument below, see Section 4.6.
It is natural to ask the following general question. For a given string x, we may consider the set of all pairs (α, β) such that x is (α, β)-stochastic. By definition, this set is upwards-closed: a point in this set remains in it if we increase α or β, so there is some boundary curve that describes the trade-off between α and β. What curves could appear in this way? To get an answer (to characterizes all these curves with O(log n)-precision), we need some other technique, explained in the next section.
Two-part descriptions
Now we switch to another measure of the quality of a statistical model. It is important both for philosophical and technical reasons. The philosophical reason is that it corresponds to the so-called "minimal description length principle". The technical reason is that it is easier to deal with; in particular, we will use it to answer the question asked at the end of the previous section.
Optimality deficiency
Consider again some statistical model. Let P be a probability distribution (with finite support and rational values) on strings. Then we have K(x) K(P ) + K(x|P ) K(P ) + (-log P (x))
for arbitrary string x (with O(1)-precision). Here we use that (with O(1)-precision):
• K(x|P )log P (x), as we have mentioned;
• the complexity of the pair is bounded by the sum of complexities:
K(u, v) K(u) + K(v); • K(v) K(u, v) (in our case, K(x) KP (x, P )).
If P is a uniform distribution on some finite set A, this inequality can be explained as follows. We can specify x in two steps:
• first, we specify A;
• then we specify the ordinal number of x in A (in some natural ordering, say, the lexicographic one).
In this way we get K(x) K(A) + log #A for every element x of arbitrary finite set A. This inequality holds with O(1)-precision. If we replace the prefix complexity by the plain version, we can say that C(x) C(A) + log #A with precision O(log n) for every string x of length at most n: we may assume without loss of generality that both terms in the right hand side are at most n, otherwise the inequality is trivial.
The "quality" of a statistical model P for a string x can be measured by the difference between sides of this inequality: for a good model the "two-part description" should be almost minimal. We come to the following definition: Definition 2. The optimality deficiency of a distribution P considered as the model for a string x is the difference δ(x, P ) = (K(P ) + (-log P (x))) -K(x).
As we have seen, δ(x, P ) 0 with O(1)-precision. If P is a uniform distribution on a set A, the optimality deficiency δ(x, P ) will also be denoted by δ(x, A), and
δ(x, A) = (K(A) + log #A) -K(x).
The following proposition shows that we may restrict our attention to finite sets as models (with O(log n)-precision): Proposition 6. Let P be a distribution considered as a model for some string x of length n. Then there exists a finite set A such that
K(A) K(P ) + O(log n); log #A -log P (x) + O(1) ( * )
This proposition will be used in many arguments, since it is often easier to deal with sets as statistical models (instead of distributions). Note that the inequalities ( * ) evidently imply that δ(x, A) δ(x, P ) + O(log n), so arbitrary distribution P may be replaced by a uniform one (U A ) with a logarithmiconly change in the optimality deficiency.
Proof. We use the same construction as in Proposition 2. Let 2 -k be the maximal power of 2 such that 2 -k P (x), and let A = {x | P (x) 2 -k }. Then k = log P (x) + O(1). We may assume that k = O(n): if k is much bigger than n, then δ(x, P ) is also bigger than n (since the complexity of x is bounded by n + O(log n)), and in this case the statement is trivial (let A be the set of all n-bit strings). Now we see that that A is determined by P and k, so K(A) K(P ) + K(k) K(P ) + O(log n). Note also that #A 2 k , so log #Alog P (x) + O(1).
Let us note that in a more general setting [START_REF] Milovanov | Algorithmic statistic, prediction and machine learning[END_REF] where we consider several strings as outcomes of the repeated experiment (with independent trials) and look for a model that explains all of them, a similar result is not true: not every probability distribution can be transformed into a uniform one.
Optimality and randomness deficiencies
Now we have two "quality measures" for a statistical model P : the randomness deficiency d(x|P ) and the optimality deficiency δ(x, P ). They are related: Proof. By definition
d(x|P ) = -log P (x) -K(x|P ); δ(x, P ) = -log P (x) + K(P ) -K(x).
It remains to note that K(x) K(x, P ) K(P ) + K(x|P ) with O(1)-precision.
Could δ(x, P ) be significantly larger than d(x|P )? Look at the proof above: the second inequality K(x, P ) = K(P ) + K(x|P ) is an equality with logarithmic precision. Indeed, the exact formula (Levin-Gács formula for the complexity of a pair with O(1)-precision) is K(x, P ) = K(P ) + K(x|P, K(P )).
Here the term K(P ) in the condition changes the complexity by O(log K(P )), and we may ignore models P whose complexity is much greater than the complexity of x.
On the other hand, in the first inequality the difference between K(x, P ) and K(x) may be significant. This difference equals K(P |x) with logarithmic accuracy and, if it is large, then δ(x, P ) is much bigger than d(x|P ). The following example shows that this is possible. In this example we deal with sets as models.
Example 1. Consider an incompressible string x of length n, so K(x) = n (all equalities with logarithmic precision). A good model for this string is the set A of all n-bit strings. For this model we have #A = 2 n , K(A) = 0 and δ(x, A) = n+0-n = 0 (all equalities have logarithmic precision). So d(x|P ) = 0, too. Now we can change the model by excluding some other n-bit string. Consider a n-bit string y that is incompressible and independent of x: this means that K(x, y) = 2n. Let A be A \ {y}.
The set A contains x (since x and y are independent, y differs from x). Its complexity is n (since it determines y). The optimality deficiency is then n + nn = n, but the randomness deficiency is still small: d(x|A ) = log #A -K(x|A ) = nn = 0 (with logarithmic precision). To see why K(A |x) = n, note that x and y are independent, and the set A has the same information as (n, y).
One of the main results of this section (Theorem 3) clarifies the situation: it implies that if optimality deficiency of a model is significantly larger than its randomness deficiency, then this model can be improved and another model with better parameters can be found. More specifically, the complexity of the new model is smaller than the complexity of the original one while both the randomness deficiency and optimality deficiency of the new model are not worse than the randomness deficiency of the original one. This is one of the main results of algorithmic statistics, but first let us explore systematically the properties of two-part descriptions.
Trade-off between complexity and size of a model
It is convenient to consider only models that are sets (=uniform distribution on sets). We will call them descriptions. Note that by Propositions 2 and 6 this restriction does not matter much since we ignore logarithmic terms. For a given string x there are many different descriptions: we can have a simple large set containing x, and at the same time some more complicated, but smaller one. In this section we study the trade-off between these two parameters (complexity and size).
Definition 3. A finite set A is an (i * j)-description 5 of x if x ∈ A, complexity K(A)
is at most i, and log #A j. For a given x we consider the set P x of all pairs (i, j) such that x has some (i * j)-description; this set can be called the profile of x.
Informally speaking, an (i * j)-description for x consists of two parts: first we spend i bits to specify some finite set A and then j bits to specify x as an element of A.
What can be said about P x for a string x of length n and complexity k = K(x)? By definition, P x is closed upwards and contains the points (0, n) and (k, 0). Here we omit terms O(log n): more precisely, we have a (O(log n) * n)-description that consists of all strings of length n, and a ((k+O(1)) * 0)-description {x}. Moreover, the following proposition shows that we can move the information from the second part of the description into its first part (leaving the total length almost unchanged). In this way we make the set smaller (the price we pay is that its complexity increases).
Proposition 8 ([15, 13, 36]). Let x be a string and A be a finite set that contains x. Let s be a non-negative integer such that s log #A. Then there exists a finite set A containing x such that #A #A/2 s and K(A ) K(A) + s + O(log s).
Proof. List all the elements of A in some (say, lexicographic) order. Then we split the list into 2 s parts (first #A/2 s elements, next #A/2 s elements etc.; we omit evident precautions for the case when #A is not a multiple of 2 s ). Then let A be the part that contains x. It has the required size. To specify A , it is enough to specify A and the part number; the latter takes at most s bits. (The logarithmic term is needed to make the encoding of the part number self-delimiting.)
This statement can be illustrated graphically. As we have said, the set P x is "closed upwards" and contains with each point (i, j) all points on the right (with bigger i) and on the top (with bigger j). It contains points (0, n) and (K(x), 0); Proposition 8 says that we can also move down-right adding (s, -s) (with logarithmic precision). We will see that movement in the opposite direction is not always possible. So, having two-part descriptions with the same total length, we should prefer the one with bigger set (since it always can be converted into others, but not vice versa).
The boundary of P x is some curve connecting the points (0, n) and (k, 0). This curve (introduced by Kolmogorov in 1970s, see [START_REF] Kolmogorov | The complexity of algorithms and the objective definition of randomness. Summary of the talk presented April 16, 1974 at Moscow Mathematical Society. Успехи математических наук[END_REF]) never gets into the triangle i + j < K(x) and always goes down (when moving from left to right) with slope at least -1 or more. This picture raises a natural question: which boundary curves are possible and which are not? Is it possible, for example, that the boundary goes along the dotted line on Figure 1? The answer is positive: take a random string of desired complexity and add trailing zeros to achieve desired length. Then the point (0, K(x)) (the left end of the dotted line) corresponds to the set A of all strings of the same length having the same trailing zeros. We know that the boundary curve cannot go down slower than with slope -1 and that it lies above the line i + j = K(x), therefore it follows the dotted line (with logarithmic precision).
K(x) n K(x) P x complexity log-size
A more difficult question: is it possible that the boundary curve starts from (0, n), goes with the slope -1 to the very end and then goes down rapidly to (K(x), 0) (Figure 2, the solid line)? Such a string x, informally speaking, would have essentially only two types of statistical explanations: a set of all strings of length n (and its parts obtained by Proposition 8) and the exact description, the singleton {x}. It turns out that not only these two opposite cases are possible, but also all intermediate curves (provided they decrease with slope -1 or faster, and are simple enough), at least with logarithmic precision. More precisely, the following statement holds: Theorem 1 ( [START_REF] Vereshchagin | Kolmogorov's structure functions and model selection[END_REF]). Let k n be two integers and let t 0 > t 1 > . . . > t k be a strictly decreasing sequence of integers such that t 0 n and t k = 0; let m be the complexity of this sequence. Then there exists a string x of complexity k + O(log n) + O(m) and length n+O(log n)+O(m) for which the boundary curve of P x coincides with the line (0, t 0 )-(1, t 1 )-. . . -(k, t k ) with O(log n)+O(m) precision: the distance between the set P x and the set T = {(i, j)
K(x) n K(x) complexity log-size
| (i < k) ⇒ (j > t i )} is bounded by O(log n) + O(m).
(We say that the distance between two subsets P, Q ⊂ Z 2 is at most ε if P is contained in the ε-neighborhood of Q and vice versa.)
Proof. For every i in the range 0 . . . k we list all the sets of complexity at most i and size at most 2 t i . For a given i the union of all these sets is denoted by S i . It contains at most 2 i+t i elements. (Here and later we omit constant factors and factors polynomial in n when estimating cardinalities, since they correspond to O(log n) additive terms for lengths and complexities.) Since the sequence t i strictly decreases (this corresponds to slope -1 in the picture), the sums i+t i do not increase, therefore each S i has at most 2 t 0 2 n elements. The union of all S i therefore also has at most 2 n elements (up to a polynomial factor, see above). Therefore, we can find a string of length n (actually n + O(log n)) that does not belong to any S i . Let x be a first such string in some order (e.g., in the lexicographic order).
By construction, the set P x lies above the curve determined by t i . So we need to estimate the complexity of x and prove that P x follows the curve (i.e., that T is contained in the neighborhood of P x ).
Let us start with the upper bound for the complexity of x. The list of all objects of complexity at most k plus the full table of their complexities have complexity k + O(log k), since it is enough to know k and the number of terminating programs of length at most k. Except for this list, to specify x we need to know n and the sequence t 0 , . . . , t k , whose complexity is m.
The lower bound: the complexity of x cannot be less than k since all the singletons of this complexity were excluded (via S k ).
It remains to show that for every i k we can put x into a set A of complexity i (or slightly bigger) and size 2 t i (or slightly bigger). For this we enumerate a sequence of sets of correct size and show that one of the sets will have the required properties; if this sequence of sets is not very long, the complexity of its elements is bounded.
Here are the details.
We start by taking the first 2 t i strings of length n as our first set A. Then we start enumerating all finite sets of complexity at most j and of size at most 2 t j for all j = 0, . . . , k, and get an enumeration of all sets S j . Recall that all elements of all S j should be deleted (and the minimal remaining element should eventually be x). So, when a new set of complexity at most j and of size at most 2 t j appears, all its elements are included in S j and deleted. Until all elements of A are deleted, we have nothing to worry about, since A is covering the minimal remaining element. If (and when) all elements of A are deleted, we replace A by a new set that consists of first 2 t i undeleted (yet) strings of length n. Then we wait again until all the elements of this new A are deleted, if (and when) this happens, we take 2 t i first undeleted elements as new A, etc.
The construction guarantees the correct size of the sets and that one of them covers x (the minimal non-deleted element). It remains to estimate the complexity of the sets we construct in this way.
First, to start the process that generates these sets, we need to know the length n (actually something logarithmically close to n) and the sequence t 0 , . . . , t k . In total we need m + O(log n) bits. To specify each version of A, we need to add its version number. So we need to show that the number of different A's that appear in the process is at most 2 i or slightly bigger.
A new set A is created when all the elements of the old A are deleted. These changes can be split into two groups. Sometimes a new set of complexity j appears with j i. This can happen only O(2 i ) times since there are at most O(2 i ) sets of complexity at most i. So we may consider the other changes (excluding the first changes after each new large set was added). For those changes all the elements of A are gone due to elements of S j with j > i. We have at most 2 j+t j elements in S j . Since t j + j t i + i, the total number of deleted elements only slightly exceeds 2 t i +i , and each set A consists of 2 t i elements, so we get about 2 i changes of A. Remark 6. It is easy to modify the proof to get a string x of length exactly n. Indeed, we may consider slightly smaller bad sets: decreasing the logarithms of their sizes by O(log n), we can guarantee that the total number of elements in all bad sets is less than 2 n . Then there exists a string of length n that does not belong to bad sets. In this way the distance between T and P x may increase by O(log n), and this is acceptable.
Theorem 1 shows that the value of the complexity of x does not describe the properties of x fully; different strings of the same complexity x can have different boundary curves of P x . This curve can be considered as an "infinite-dimensional" characterization of x.
Strings x with minimal possible P x (Figure 2, the upper curve) may be called antistochastic. They have quite unexpected properties. For example, if we replace some bits of an antistochastic string x by stars (or some other symbols indicating erasures) leaving only K(x) non-erased bits, then the string x can be reconstructed from the resulting string x with logarithmic advice, i.e., K(x|x ) = O(log n). This and other properties of antistochastic strings were discovered in [START_REF] Milovanov | Some properties of antistochastic strings[END_REF].
Optimality and randomness deficiency
In this section we establish the connection between optimality and randomness deficiency. As we have seen, the optimality deficiency can be bigger than the randomness deficiency (for the same description), and the difference is δ(x, A)d(x|A) = K(A)+K(x|A)-K(x). The Levin-Gács formula for the complexity of pair (K(u, v) = K(u)+K(v |u) with logarithmic precision, for O(1)-precision one needs to add K(u) in the condition, but we ignore logarithmic size terms anyway) shows that the difference in question can be rewritten as
δ(x, A) -d(x|A) = K(A, x) -K(x) = K(A|x).
So if the difference between deficiencies for some (i * j)-description A of x is big, then K(A|x) is big. All the (i * j)-descriptions of x can be enumerated if x, i, and j are given. So the large value of K(A|x) for some (i * j)-description A means that there are many (i * j)-descriptions of x, otherwise A can be reconstructed from x by specifying i, j (requires O(log n) bits) and the ordinal number of A in the enumeration. We will prove that if there are many (i * j)-descriptions for some x, then there exist a description with better parameters. Now we explain this in more detail. Let us start with the following remark. Consider all strings that have (i * j)-descriptions for some fixed i and j. They can be enumerated in the following way: we enumerate all finite sets of complexity at most i, select those sets that have size at most 2 j , and include all elements of these sets into the enumeration. In this construction
• the complexity of the enumerating algorithm is logarithmic (it is enough to know i and j);
• we enumerate at most 2 i+j elements;
• the enumeration is divided into at most 2 i "portions" of size at most 2 j .
It is easy to see that any other enumeration process with these properties enumerates only objects that have (i * j)-descriptions (again with logarithmic precision). Indeed, each portion is a finite set that can be specified by its ordinal number and the enumeration algorithm, the first part requires i + O(log i) bits, the second is of logarithmic size according to our assumption.
Remark 7. The requirement about the portion size is redundant. Indeed, we can change the algorithm by splitting large portions into pieces of size 2 j (the last piece may be incomplete). This, of course, increases the number of portions, but if the total number of enumerated elements is at most 2 i+j , then this splitting adds at most 2 i pieces. This observation looks (and is) trivial, still it plays an important role in the proof of the following proposition.
Proposition 9. If a string x of length n has at least 2 k different (i, j)-descriptions, then x has some (i * (jk))-description and even some ((ik) * j)-description.
Again we omit logarithmic term: in fact one should write
((i + O(log n)) * (j - k + O(log n))
), etc. The word "even" in the statement refers to Proposition 8 that shows that indeed the second claim is stronger.
Proof. Consider the enumeration of all objects having (i * j)-descriptions in 2 i portions of size 2 j (we ignore logarithmic additive terms and respective polynomial factors) as explained above. After each portion (i.e., new (i * j)-description) appears, we count the number of descriptions for each enumerated object and select objects that have at least 2 k descriptions. Consider a new enumeration process that enumerates only these "rich" objects (rich = having many descriptions). We have at most 2 i+j-k rich objects (since they appear in the list of size 2 i+j with multiplicity 2 k ), enumerated in 2 i portions (new portion of rich objects may appear only when a new portion appears in the original enumeration). So we apply the observation above to conclude that all rich objects have (i * (jk))-descriptions.
To get the second (stronger) statement we need to decrease the number of portions (while not increasing too much the number of enumerated objects). This can be done using the following trick: when a new rich object (having 2 k descriptions) appears, we enumerate not only rich objects, but also "half-rich" objects, i.e., objects that currently have at least 2 k /2 descriptions. In this way we enumerate more objects but only twice more. At the same time, after we dumped all half-rich objects, we are sure that next 2 k /2 new (i * j)-descriptions will not create new rich objects, so the number of portions is divided by 2 k /2, as required.
Let us say more accurately how we deal with logarithmic terms. We may assume that i, j = O(n), otherwise the claim is trivial. Then we allow polynomial (in n) factors and O(log n) additive terms in all our considerations. Remark 8. If we unfold this construction, we see that new descriptions (of smaller complexity) are not selected from the original sequence of descriptions but constructed from scratch. In Section 6 we deal with much more complicated case where we restrict ourselves to descriptions from some class (say, Hamming balls). Then the proof given above does not work, since the description we construct is not a ball even if we start with ball descriptions. Still some other (much more ingenious) argument can be used to prove a similar result for the restricted case. Now we are ready to prove the promised results (see the discussion after Example 1).
Theorem 2. If a string x of length n is (α, β)-stochastic, then there exists some finite set B containing x such that K(B) α + O(log n) and δ(x, B) β + O(log n).
Proof. Since x is (α, β)-stochastic, there exists some finite set A such that K(A) α and d(x|A) β. Let i = K(A) and j = log #A, so A is an (i * j)-description of x. We may assume without loss of generality that both α and β (and therefore i and j) are O(n), otherwise the statement is trivial. The value δ(x, A) may exceed d(x|A), as we have discussed at the beginning of this section. So we assume that
k = δ(x, A) -d(x|A) > 0;
if not, we can let B = A. Then, as we have seen, K(A|x) k -O(log n), and there are at least 2 k-O(log n) different (i * j)-descriptions of x. According to Proposition 9, there exists some finite set B that is an
(i * (j -k + O(log n)))-description of x. Its optimality deficiency δ(x, B) is (k -O(log n))-smaller (compared to A) and therefore O(log n)-close to d(x|A).
In this argument we used the simple part of Proposition 9. Using the stronger statement about complexity decrease, we get the following result:
Theorem 3 ([45]). Let A be a finite set containing a string x of length n and let k = δ(x, A) -d(x|A). Then there is a finite set B containing x such that K(B) K(A) -k + O(log n) and δ(x, B) d(x|A) + O(log n). Proof. Indeed, if B is an ((i -k) * j)-description of x (up to logarithmic terms, as usual), then its optimality deficiency is again (k -O(log n))-smaller (compared to A) and therefore O(log n)-close to d(x|A).
Note that the statement of the theorem implies that d(x|B) d(x|A) + O(log n). Theorem 2 and Proposition 7 show that we can replace the randomness deficiency in the definition of (α, β)-stochastic strings by the optimality deficiency (with logarithmic precision). More specifically, for every string x of length n consider the sets
Q x = {(α, β) | x is (α, β)-stochastic}, and Qx = {(α, β) | there exists A x with K(A) α, δ(x, A) β)}.
Then these sets are at most O(log n) apart (each is contained in the O(log n)neighborhood of the other one). This remark, together with the existence of antistochastic strings of given complexity and length, allows us to improve the result about the existence of nonstochastic objects (Proposition 4).
Proposition 10 ([13, Theorem IV.2]). For some c and for all n: if α+β < n-c log n, there exist strings of length n that are not (α, β)-stochastic. Proof. Assume that integers n, α, β are given such that α + β < nc log n (where the constant c will be chosen later). Let x be an antistochastic string of length n that has complexity α + d where d is some positive number (see below about the choice of d). More precisely, for every given d there exists a string x whose complexity is α + d + O(log n), length is n + O(log n), and the set P x is O(log n)-close to the upper gray area (Figure 3).
Assume that x is (α, β)-stochastic. Then (Theorem 2) the string x has an (i * j)description with i α and i + j K(x) + β (with logarithmic precision). The set of pairs (i, j) satisfying these inequalities is shown as the lower gray area. We have to choose c in such a way that for some d these two gray are disjoint and even separated by a gap of logarithmic size (since they are known only with O(log n)-precision). Note first that for d = c log n with large enough c we guarantee the vertical gap (the vertical segments of the boundaries of two gray areas are far apart). Then we select c large enough to guarantee that the diagonal segments of the boundaries of two gray areas are far apart (α + β < n with logarithmic margin).
The transition from randomness deficiency to optimality deficiency (Theorem 2) has the following geometric interpretation. As usual, this statement is true with logarithmic accuracy: the distance between the image of the set Q x under this transformation and the set P x is claimed to be O(log n) for string x of length n.
Proof. As we have seen, we may use the optimality deficiency instead of randomness deficiency, i.e., use the set Qx in place of Q x . The preimage of the pair (i, j) under our affine transformation is the pair (i, i + j -K(x)). Hence we have to prove that a pair (i, j) is in P x if and only if the pair (i, i + j -K(x)) is in Qx . Note that K(A) = i and log #A = j is equivalent to K(A) = i and δ(x, A) = i + j -K(x) just by definition of δ(x, A). (See Figure 4: the optimality deficiency of a description A with K(A) = i and log #A = j is the vertical distance between (i, j) and the dotted line.)
But there is some technical problem: in the definition of P x we used inequalities K(A) i and log #A j, not the equalities K(A) = i and log #A = j. The same applies to the definition of Qx . So we have two sets that correspond to each other, but their -closures could be different. Obviously, K(A) i and log #A j imply K(A) i and K(A) + log #A -K(x) i + j -K(x), but not vice versa.
In other words, the set of pairs (K(A), log #A) satisfying the latter inequalities (see the right set on Figure 5) is bigger than the set of pairs (K(A), log #A) satisfying the former inequalities (see the left set on Figure 5). Now Proposition 8 helps: we may use it to convert any set with parameters from the right region into a set with with first component α.
K(A) log #A i j K(A) log #A i j Figure 5:
The left picture shows (for given i and j) the set of all pairs (K(A), log #A) such that K(A) i and log #A j; the right picture shows the pairs (K(A), log #A) such that K(A) i and δ(x, A) i + j -K(x).
parameters from the left region. β and δ(x, A) β are equivalent (with logarithmic accuracy). Indeed, the Example 1 shows that this is not true: the first inequality does not imply the second one in general case. However, Theorems 2 and 3 show that this can happen only for non-minimal descriptions (for which the description with smaller complexity and the same optimality deficiency) exists. Later we will see that all the minimal descriptions of the same (or almost the same) complexity have almost the same information. Moreover, if A and B are minimal descriptions and the complexity of A is less than that of B then C(A|B) is small. For the people with taste for philosophical speculations the meaning of Theorems 2 and 3 can be advertised as follows. Imagine several scientists that compete in providing a good explanation for some data x. Each explanation is a finite set A containing x together with a program p that computes A.
How should we compare different explanations? We want the randomness deficiency d(x|A) of x in A to be negligible (no features of x remain unexplained). Among these descriptions we want to find the simplest one (with the shortest p). That is, we look for a set A corresponding to the point where the bold dotted line on Fig. 4 touches the horizontal axis. (In fact, there is always some trade-off between the parameters, not the specific exact point where the curve touches the horizontal axis, but we want to keep the discussion simple though imprecise.)
However, this approach meets the following obstacle: we are unable to compute randomness deficiency d(x|A). Moreover, the inventor of the model A has no ways to convince us that the deficiency is indeed negligible if it is the case (the function d(x|A) is not even upper semicomputable). What could be done? Instead, we may look for an explanation with (almost) minimal sum log #A + |p| (minimum description length principle). Note that this quantity is known for competing explanation proposals. Theorems 2 and 3 provide the connection between these two approaches.
Returning to mathematical language, we have seen in this section that two approaches (based on (i * j)-descriptions and (α, β)-stochasticity) produce essentially the same curve, though in different coordinates. The other ways to get the same curve will be discussed in Sections 4 and 5.
Historical remarks
The idea to consider (i * j)-descriptions with optimal parameters can be traced back to Kolmogorov. There is a short record for his talk given in 1974 [START_REF] Kolmogorov | The complexity of algorithms and the objective definition of randomness. Summary of the talk presented April 16, 1974 at Moscow Mathematical Society. Успехи математических наук[END_REF]. Here is the (full) translation of this note:
For every constructive object x we may consider a function Φ x (k) of an integer argument k 0 defined as a logarithm of the minimal cardinality of a set of complexity at most k containing x. If x itself has a simple definition, then Φ x (1) is equal to one [a typo: cardinality equals 1, and logarithm equals 0] already for small k. If such a simple definition does not exist, x is "random" in the negative sense of the word "random". But x is positively "probabilistically random" only if the function Φ has a value Φ 0 for some relatively small k and then decreases approximately as
Φ(k) = Φ 0 -(k -k 0 ). [This corresponds to approximate (k 0 , 0)- stochasticity.]
Kolmogorov also gave a talk in 1974 [15]; the content of this talk was reported by Cover [10, Section 4, page 31]. Here l(p) stands for the length of a binary string p and |S| stands for the cardinality of a set S.
Kolmogorov's H k Function
Consider the function
H k : {0, 1} k → N , H k (x) = min p : l(p) k log |S|,
where the minimum is taken over all subsets S ⊆ {0, 1} n , such that x ∈ S, U (p) = S, l(p) k. This definition was introduces by Kolmogorov in a talk at the Information Symposium, Tallinn, Estonia, in 1974. Thus H k (x) is the log of the size of the smallest set containing x over all sets specifiable by a program of k or fewer bits. Of special interest is the value
k * (x) = min{k : H k (x) + k = K(x)}.
Note that log |S| is the maximal number of bits necessary to describe an arbitrary element x ∈ S. Thus a program for x can be written in two stages: "Use p to print the indicator function for S; the desired sequence is the ith sequence in a lexicographic ordering of the elements of this set". This program has length l(p) + log |S|, and k * (x) is the length of the shortest program p for which this 2-stage description is as short as the best 1-stage description p * . We observe that x must be maximally random with respect to S otherwise the 2-stage description could be improved, contradicting the minimality of K(x). Thus k * (x) and its associated program p constitute a minimal sufficient description for x.
. . .
Arguments can be provided to establish that k * (x) and its associated set S * describe all of the "structure" of x. The remaining details about x are conditionally maximally complex. Thus pp * * , the program for S * , plays the role of a sufficient statistic.
In both places Kolmogorov speaks about the place when the boundary curve of P x reaches its lower bound determined by the complexity of x.
Later the same ideas were rediscovered and popularized by many people. Koppel in [START_REF] Koppel | Complexity, Depth and Sophistication[END_REF] reformulates the definition using total algorithms. Instead of a finite set A he considered a total program P that terminates on all strings of some length. The two-part description of some x is then formed by this program P and the input D for this program that is mapped to x. In our terminology this corresponds to the set A of all values of P on the strings of the same length as D. He writes then [18, p. 1089]
Definition 3. The c-sophistication of a finite string S [is defined as] SOPH c (S) = min{|P | | ∃D s. t. (P, D) is a c-minimal description of α}.
There is a typo in this paper: S should be replaced by α (two times). Before in Definition 1 the description is called c-minimal if |P | + |D| H(α) + c (here P and D are the program and and its input, respectively, H stands for complexity).
Though this paper (as well as the subsequent papers [START_REF] Koppel | Structure, in The Universal Turing Machine: a Half-Century Survey[END_REF][START_REF] Koppel | An almost machine-independent theory of program-length complexity, sophistication, and induction[END_REF]) is not technically clear (e.g., it does not say what are the requirements for the algorithm U used in the definition, and in [START_REF] Koppel | Structure, in The Universal Turing Machine: a Half-Century Survey[END_REF][START_REF] Koppel | An almost machine-independent theory of program-length complexity, sophistication, and induction[END_REF] only universality is required, which is not enough: if U is not optimal, the definition does not make sense), the philosophic motivation for this notion is explained clearly [18, p. 1087]:
The total complexity of an object is defined as the size of its most concise description. The total complexity of an object can be large while its "meaningful" complexity is low; for example, a random object is by definition maximally complex but completely lacking in structure.
. . . The "static" approach to the formalization of meaningful complexity is "sophistication" defined and discussed by Koppel and Atlan [reference to unpublished paper "Program-length complexity, sophistication, and induction" is given, but later a paper of same authors [START_REF] Koppel | An almost machine-independent theory of program-length complexity, sophistication, and induction[END_REF] with a similar title appeared]. Sophistication is a generalization of the "H-function" or "minimal sufficient statistic" by Cover and Kolmogorov . . . The sophistication of an object in the size of that part of that object which describes its structure, i.e. the aggregate of its projectible properties.
One can also mention the formulation of "minimal description length" principle by Rissanen [START_REF] Rissanen | Modeling by shortest data description[END_REF]; the abstract of this paper says: "Estimates of both integer-valued structure parameters and real-valued system parameters may be obtained from a model based on the shortest data description principle"; here "integer-valued structure parameters" may correspond to the choice of a statistical hypothesis (description set) while "real-valued system parameters" may correspond to the choice of a specific element in this set. The author then says that "by finding the model which minimizes the description length one obtains estimates of both the integer-valued structure parameters and the real-valued system parameters".
We do not try here to follow the development of these and similar ideas. Let us mention only that the traces of the same ideas (though even more vague) could be found in 1960s in the classical papers of Solomonoff [START_REF] Solomonoff | A formal theory of inductive inference[END_REF][START_REF] Solomonoff | A formal theory of inductive inference. Part II. Applications of the Systems to Various Problems in Induction[END_REF] who tried to use shortest descriptions for inductive inference (and, as a side product, gave the definition of complexity later rediscovered by Kolmogorov [START_REF] Kolmogorov | Three Approaches to the Quantitative Definition of Information [Russian: Три подхода к определению понятия количество информации ] Problems of Information Transmission [Проблемы передачи информации[END_REF]). One may also mention a "minimum message length principle" that goes back to [START_REF] Wallace | An information measure for classification[END_REF]; the idea of two-part description is explained in [START_REF] Wallace | An information measure for classification[END_REF] as follows:
If the things are now classified then the measurements can be recorded by listing the following:
1. The class to which each thing belongs.
The average properties of each class.
3. The deviations of each thing from the average properties of its parent class.
If the things are found to be concentrated in a small area of the region of each class in the measurement space then the deviations will be small, and with reference to the average class properties most of the information about a thing is given by naming the class to which it belongs. In this case the information may be recorded much more briefly than if a classification had not been used. We suggest that the best classification is that which results in the briefest recording of all the attribute information.
Here the "class to which thing belongs" corresponds to a set (statistical model, description in our terminology); the authors say that if this set is small, then only few bits need to be added to the description of this set to get a full description of the thing in question.
The main technical results of this sections (Theorems 1, 2, and 3) are taken from [START_REF] Vereshchagin | Kolmogorov's structure functions and model selection[END_REF] (where some historical account is provided).
Bounded complexity lists
In this section we show one more classification of strings that turns out to be equivalent (up to coordinate change) to the previous ones: for a given string x and m C(x) we look how close x is to the end in the enumeration of all strings of complexity at most m. For technical reasons it is more convenient to use plain complexity C(x) instead of the prefix version K(x). As we have mentioned, the difference between them is only logarithmic, and we mainly ignore terms of that size.
Enumerating strings of complexity at most m
Consider some integer m, and all strings x of (plain) complexity at most m. Let Ω m be the number of those strings. The following properties of Ω m are well known and often used (see, e.g., [START_REF] Bienvenu | What Percentage of Programs Halt[END_REF]).
Proposition 11.
• Ω m = Θ(2 m ) (i.e., c 1 2 m Ω m c 2 2 m for some positive constants c 1 , c 2 and for all m;
• C(Ω m ) = m + O(1).
Proof. The number of strings of complexity at most m is bounded by the total number of programs of length at most m, which is O(2 m ). On the other hand, if Ω m is an (md)-bit number, we can specify a string of complexity greater than m using m Given m, we can enumerate all strings of complexity at most m. How many steps needs the enumeration algorithm to produce all of them? The answer is provided by the so-called busy beaver numbers; let us recall their definition in terms of Kolmogorov complexity (see [44, Note that B(m) can be undefined for small m (if there are no integers of complexity at most m) and that B(m + 1) B(m) for all m. For some m this inequality may not be strict. This happen, for example, if the optimal algorithm used to define Kolmogorov complexity is defined only on strings of, say, even lengths; this restriction does not prevent it from being optimal, but then B(2n) = B(2n + 1) for all n, since there are no objects of complexity exactly 2n + 1. However, for some constant c we have B(m + c) > B(m) for all m. Indeed, consider a program p of length at most m that prints B(m). Transform it to a program p that runs p and then adds 1 to the result. This program witnesses that C(B(m) + 1) m + c for some constant c. Hence B(m + c) B(m) + 1.
Now we define B (m) as follows. As we have said, the set of all strings of complexity at most m can be enumerated given m. Fix some enumeration algorithm A (with input m) and some computation model. Then let B (m) be the number of steps used by this algorithm to enumerate all the strings of complexity at most m. The next result says how many strings require long time to be enumerated.
Proposition 13. After B (ms) steps of the enumeration algorithm on input m there are 2 s+O(log m) strings that are not yet enumerated.
We assume that the algorithm enumerates strings (for every input m) without repetitions. Note also that here B can be replaced by B, since they differ at most by a constant change in the argument.
Proof. To make the notation simpler we omit O(1)-and O(log m)-terms in this argument. Given Ω m-s , we can determine B (ms). If we also know how many strings of complexity at most m appear after B (ms) steps, we can wait until that many strings appear and then find a string of complexity greater than m. If the number of remaining strings is smaller than 2 s-O(log m) , we get a prohibitively short description of this high complexity string.
On the other hand, let x be the last element that has been enumerated in B (m-s) steps. If there are significantly more than 2 s elements after x, say, at least 2 s+d for some d, we can split the enumeration in portions of size 2 s+d and wait until the portion containing x appears. By assumption this portion is full. The number N of steps needed to finish this portion is at least B (ms) . This number N and its successor N + 1 can be reconstructed from the portion number that contains about msd bits. Thus the complexity of
N + 1 is at most m -s -d + O(log m). Hence we have B(m -s -d + O(log m)) > N B (m -s).
By Proposition 12 we can replace B by B here:
B(m -s -d + O(log m)) > B(m -s).
(with some other constant in O-notation). Since B is a non-decreasing function, we get d = O(log m).
Ω-like numbers
G. Chaitin introduced the "Chaitin Ω-number" Ω = k m(k); it can also be defined as the probability of termination if the optimal prefix decompressor is applied to a random bit sequence (see [44, section 5.7]). 7 The numbers Ω n are finite versions of Chaitin's Ω-number. The information contained in Ω n increases as n increases; moreover, the following proposition is true. In this proposition we consider Ω n as a bit string (of length n + O(1)) identifying the number Ω n and its binary representation. Proof. This is essentially the reformulation of the previous statement (Proposition 13).
Run the algorithm that enumerates strings of complexity at most m. Knowing (Ω m ) k , we can wait until less than 2 m-k strings are left in the enumeration of strings of complexity at most m; we know that this happens after more than B(k) steps, and in this time we can enumerate all strings of complexity at most k and compute Ω k . (In this argument we ignore O(log m)-terms, as usual.)
Now the second inequality follows by the symmetry of information property. Indeed, since
C(Ω k ) = k+O(1) and C((Ω m ) k ) k+O(1), the inequality C(Ω k |(Ω m ) k ) = O(log m) implies the inequality C((Ω m ) k |Ω k ) = O(log m).
A direct argument is also easy. Knowing Ω k and k, we can find the list of all the strings of complexity at most k and the number B (k). Then we make B (k) steps in the enumeration of the list of strings of complexity at most m. Proposition 13 then guarantees that at that moment Ω m is known with error about 2 m-k , so the first k bits of Ω m can be reconstructed with small advice (of logarithmic size; we omit terms of that size in the argument).
There is a more direct connection with Chaitin's Ω-number: one can show that the number Ω m is O(log m)-equivalent to the m-bit prefix of Chaitin's Ω-number. Since in this survey we restrict ourselves to finite objects, we do not go into details of the proof here, see [44, section 5.7.7].
Position in the list is well defined
We discussed how much time is needed to enumerate all strings of complexity at most m and how many strings remain not enumerated before this time. Now we want to study which strings remain not enumerated.
More precisely, let x be some string of complexity at most m, so x appears in the enumeration of all strings of complexity at most m. How close x is to the end, that is, how many strings are enumerated after x? The answer depends on the enumeration, but only slightly, as the following proposition shows.
Proposition 15. Let A and B be algorithms that both for any given m enumerate (without repetitions) the set of strings of complexity at most m. Let x be some string and let a x and b x the number of strings that appear after x in A-and B-enumerations.
Then | log a x -log b x | = O(log m).
We may also assume that A and B are algorithms of complexity O(log m) without input that enumerate strings of complexity at most m.
Proof. Assume that a_x is small: log a_x ≤ k. Why can log b_x not be much larger than k? Given the first m - log b_x bits of Ω_m and B, we can compute a finite set of strings that contains x and consists only of strings of complexity at most m. Then we can wait until all strings from this set appear in the A-enumeration. After that at most 2^k strings are left, and we need k bits to count them. In this way we can describe Ω_m by m - log b_x + k + O(log m) bits; however, Proposition 11 says that C(Ω_m) = m + O(1). Hence log b_x ≤ k + O(log m).
The other inequality is proven by a symmetric argument.
In this proposition A and B enumerate exactly the same strings (though in a different order). However, the complexity function is essentially defined with O(1)-precision only: different optimal programming languages lead to different versions. Let C and C' be two (plain) complexity functions; then C'(x) ≤ C(x) + c for some c and for all x. Then the list of all x with C(x) ≤ m is contained in the list of all x with C'(x) ≤ m + c. The same argument shows that the number of elements after x in the first list cannot be much larger than the number of elements after x in the second list. The reverse inequality is not guaranteed, however, even for the same version of complexity (a small increase in the complexity bound may significantly increase the number of strings after x in the list). We will return to this question in Section 4.4, but let us note first that some increase is guaranteed.
Proposition 16. If for a string x there are at least 2^s elements after x in the enumeration of all strings of complexity at most m, then for every d ≥ 0 there are at least 2^{s+d-O(log m)} strings after x in the enumeration of all strings of complexity at most m + d.
Proof. Essentially the same argument works here: if there are much fewer than 2^{s+d} strings after x in the bigger list, then this bigger list can be determined by the m - s bits needed to cover x in the smaller list and fewer than s + d bits needed to count the elements in the bigger list that follow the last covered element.
The last proposition can be restated in the following way. Let us fix some complexity function and some algorithm that, given m, enumerates all strings of complexity at most m. Then, for a given string x, consider the function that maps every m ≥ C(x) to the logarithm of the number of strings after x in the enumeration with input m. Proposition 16 says that a d-increase in the argument leads at least to a (d - O(log m))-increase of this function (but the latter increase could be much bigger). As we will see, this function is closely related to the set P_x (and therefore Q_x): it is one more representation of the same boundary curve.
The relation to P x
To explain the relation, consider the following procedure for a given binary string x. For every m ≥ C(x) draw the line i + j = m in the (i, j)-plane. Then draw the point on this line with second coordinate s, where s is the logarithm of the number of elements after x in the enumeration of all strings of complexity at most m. Mark also all points on this line to the right of (that is, below) this point. Doing this for different m, we get a set (Figure 6). Proposition 16 guarantees that this set is upward closed with logarithmic precision: if some point (i, j) belongs to this set, then the point (i, j + d) is in the O(log(i + j))-neighborhood of this set. This implies that the point (i + d, j) is also in the neighborhood, since our set is closed by construction in the direction (1, -1).
Figure 6: For each m between K(x) and n (the length of x) we count the elements after x in the list of strings having complexity at most m; assuming there are about 2^s of them, we draw the point (m-s, s) and get a point on some curve. This curve turns out to be the boundary of P_x (with logarithmic precision).
It turns out that this set coincides with P_x (Definition 3) with O(log n)-precision for a string x of length n (this means, as usual, that each of the two sets is contained in the O(log n)-neighborhood of the other one):

Theorem 5. Let x be a string of length n. If x has an (i * j)-description, then x is at least 2^{j-O(log n)}-far from the end of the (i + j + O(log n))-list. Conversely, if there are at least 2^j elements that follow x in the (i + j)-list, then x has an ((i + O(log n)) * j)-description.
Proof. We need to verify two things. First, assuming that x has an (i * j)-description, we need to show that it is at least 2^j-far from the end of the (i + j)-list. (With error terms: in the (i + j + O(log n))-list there are at least 2^{j-O(log n)} elements after x.) Indeed, knowing some (i * j)-description A for x, we can wait until all the elements of A appear in the (i + j)-list (as usual, we omit the O(log n)-term: all elements of A have complexity at most i + j + O(log n), so we should consider the (i + j + O(log n))-list to be sure that it contains all elements of A). In particular, x has appeared at that moment. If there are (significantly) fewer than 2^j elements after x, then we can encode the number of remaining elements by (significantly) fewer than j bits, and together with the description of A we get fewer than i + j bits to describe Ω_{i+j}, which is impossible.
Second, assume that there are at least 2^j elements that follow x in the (i + j)-list. Then, splitting this list into portions of size 2^j, we get at most 2^i full portions, and x is covered by one of them. Each portion has complexity at most i and log-size at most j, so we get an (i * j)-description for x. (As usual, logarithmic terms are omitted.)

Now we can reformulate the properties of stochastic and antistochastic objects. Every object of complexity k appears in the list of objects of complexity at most k' for all k' > k. Each stochastic object is far from the end of these lists (except, maybe, for some k'-lists with k' very close to k). Each antistochastic object of length n is maximally close to the end of all k'-lists with k' < n (there are about 2^{k'-k} objects after x), except, maybe, for some k'-lists with k' very close to n. When k' becomes greater than n, then even antistochastic strings are far from the end of the k'-list. What we have said is just a description of the corresponding curves (Figure 2) using Theorem 5.
Standard descriptions
The lists of objects of bounded complexity provide a natural class of descriptions. Consider some m and the number Ω_m of strings of complexity at most m. This number can be represented in binary:

Ω_m = 2^a + 2^b + ...,

where a > b > .... The list itself can then be split into pieces of size 2^a, 2^b, ..., and these pieces can be considered as descriptions of the corresponding objects. In this way, for each string x and for each m ≥ C(x), we get some description of x, namely the piece that contains x. Descriptions obtained in this way will be called standard descriptions. Note that for a given x we have many standard descriptions (depending on the choice of m). One should also keep in mind that the class of standard descriptions depends on the choice of the complexity function and the enumeration algorithm; we assume in the sequel that they are fixed.
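The splitting into pieces of sizes 2^a, 2^b, ... is easy to state explicitly. The sketch below is only an illustration under the assumption that the enumeration is already given as an ordinary Python list in enumeration order (a stand-in for the real list, which of course cannot be computed); the helper names standard_pieces and standard_description are made up for this example.

def standard_pieces(enumeration):
    # Split a list of length Omega_m = 2^a + 2^b + ... (a > b > ...) into
    # consecutive pieces of sizes 2^a, 2^b, ...; each piece is one
    # "standard description".
    pieces, start, remaining = [], 0, len(enumeration)
    while remaining > 0:
        size = 1 << (remaining.bit_length() - 1)  # largest power of 2 <= remaining
        pieces.append(enumeration[start:start + size])
        start += size
        remaining -= size
    return pieces

def standard_description(enumeration, x):
    # The piece containing x is the standard description of x for this m.
    for piece in standard_pieces(enumeration):
        if x in piece:
            return piece
    raise ValueError("x does not occur in the enumeration")

demo = [format(i, "04b") for i in range(13)]      # a made-up 'enumeration' of 13 strings
print([len(p) for p in standard_pieces(demo)])    # [8, 4, 1]
print(standard_description(demo, "1010"))         # the piece of size 4 containing '1010'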
The following results show that standard descriptions are in a sense universal. First let us note that standard descriptions have parameters close to the boundary curve of P_x (more precisely, to the boundary curve of the set constructed in the previous section, which is close to P_x).

Proposition 17. Consider the standard description A of size 2^j obtained from the list of all strings of complexity at most m. Then C(A) = m - j + O(log m), and the number of elements in the list that follow the elements of A is 2^{j+O(log m)}.

This statement says that the parameters of A are close to the point on the line i + j = m considered in the previous section (Figure 6).

Proof. To specify A, it is enough to know the first m - j bits of Ω_m (and m itself). The complexity of A cannot be much smaller, since knowing A and the j least significant bits of Ω_m we can reconstruct Ω_m.
The number of elements that follow A cannot exceed 2^j (it is a sum of smaller powers of 2); it cannot be significantly less, since it determines Ω_m together with the first m - j bits of Ω_m. (In other words, since Ω_m is an incompressible string of length m, it cannot have more than O(log m) zeros in a row.)

This result does not imply that every point on the boundary of P_x is close to the parameters of some standard description. If some part of the boundary has slope -1, we cannot guarantee that there are standard descriptions along this part. For example, consider the list of strings of complexity at most m; the maximal complexity of strings in this list is m - c for some c = O(1); if we take the first string of this complexity, there are 2^{m+O(1)} strings after it, so the corresponding point is close to the vertical axis, and due to Proposition 16 all other standard descriptions of x are also close to the vertical axis. However, descriptions with parameters close to arbitrary points on the boundary of P_x can be obtained from standard descriptions by chopping them into smaller parts, as in Proposition 8. In this chopping it is natural to use the order in which the strings were enumerated. In other words, chop the list of strings of complexity at most m into portions of size 2^j. Consider all the full portions (of size exactly 2^j) obtained in this way (they are parts of standard descriptions of bigger size). Descriptions obtained in this way are "universal" in the following sense: if a pair (i, j) is on the boundary of P_x, then there is a set A ∋ x of this type of complexity i + O(log(i + j)) and log-cardinality j + O(log(i + j)).
The following result says more: for every description A for x there is a "better" standard description that is simple given A (note that d ≥ 0 in the following proposition and that the optimality deficiency of B does not exceed that of A up to a logarithmic term). Note, however, that descriptions with the same parameters may contain different information: let x be a random string of length n and consider two descriptions, where the first one consists of all n-bit strings that have the same left half as x, and the second one consists of all n-bit strings that have the same right half. Both have the same parameters: complexity n/2 and log-size n/2, so they both correspond to the same point on the boundary of P_x. Still the information in these two descriptions is different (the left and right halves of a random string are independent).
These results sound like good news. Let us recall our original goal: to formalize what a good statistical model is. It seems that we are making some progress. Indeed, for a given x we consider the boundary curve of P_x and look at the place where it first touches the lower bound i + j = C(x); after that it stays near this bound. In other terms, we consider models with negligible optimality deficiency and select among them the model with minimal complexity. To give a formal definition, we need to fix some threshold ε. Then we say that a set A is an ε-sufficient statistic if δ(x, A) < ε, and we may choose the simplest one among them and call it the minimal ε-sufficient statistic. If the curve goes down fast to the left of this point, we see that all the descriptions with parameters corresponding to a minimal sufficient statistic are equivalent to each other.
Trying to relate these notions to practice, we may consider the following example. Imagine that we have digitized some very old recording and got some bit string x. There is a lot of dust and scratches on the recording, so the originally recorded signal is distorted by some random noise. Then our string x has a two-part description: the first part specifies the original recording and the noise parameters (intensity, spectrum, etc.), and the second part specifies the noise exactly. Maybe the first part is the minimal sufficient statistic, and therefore sound restoration (and lossy compression in general) is a special case of the problem of finding a minimal sufficient statistic? The uniqueness result above (saying that all the minimal sufficient statistics contain the same information under some conditions) seems to support this view: different good models for the same object contain the same explanation.
Still, the following observation (which easily follows from what we know) destroys this impression completely.

Proposition 19. Let B be some standard description of complexity i obtained from the list of all strings of complexity at most m. Then B is O(log m)-equivalent to Ω_i.

This looks like a failure. Imagine that we wanted to understand the nature of some data string x; finally we succeeded and found a description for x of reasonable complexity and negligible randomness and optimality deficiencies (and all the good properties we dreamed of). But Proposition 19 says that the information contained in this description is more related to computability theory than to specific properties of x. Recalling the construction, we see that the corresponding standard description is determined by some prefix of some Ω-number, and is an interval in the enumeration of objects of bounded complexity. So if we start with two old recordings, we may get the same information, which is not what we expect from a restoration procedure. Of course, there is still a chance that some Ω-number was recorded and therefore the restoration process indeed should provide information about it, but this looks like a very special case that hardly should happen in any practical situation.
What could we do with this? First, we could just relax and be satisfied that we now understand the situation with possible descriptions for x much better. We know that every x is characterized by some curve that has several equivalent definitions (in terms of stochasticity, randomness deficiency, position in the enumeration, as well as time-bounded complexity, see Section 5 below). We know that standard descriptions cover the parts of the curve where it goes down fast, and to cover the parts where the slope is -1 one may use standard descriptions and their pieces; all these descriptions are simple given x. When the curve goes down fast, the description is essentially unique (all the descriptions with the same parameters contain the same information, equivalent to the corresponding Ω-number); this is not true on parts with slope -1. So, even if this curve is of no philosophical importance, we have a lot of technical information about possible models.
The other approach is to go further and consider only models from some class (Section 6), or to add some additional conditions and look for "strong models" (Section 7).
Non-stochastic objects revisited
Now we can explain in a different way why the probability of obtaining a non-stochastic object in a random process is negligible (Proposition 5). This explanation uses the notion of mutual information from algorithmic information theory. The mutual information of two strings x and y is defined as
I(x : y) = C(x) -C(x|y) = C(y) -C(y |x) = C(x) + C(y) -C(x, y);
all three expressions are O(log n)-close if x and y are strings of length n (see, e.g., [START_REF] Верещагин | Колмогоровская сложность и алгоритмическая случайность[END_REF]Chapter 2]).
Consider an arbitrary string x of length n; let k be the complexity of x. Consider the list of all objects of complexity at most k, and the standard description A for x obtained from this list. If A is large, then x is stochastic; if A is small, then x contains a lot of information about Ω_k and Ω_n.

More precisely, let us assume that A has size 2^{k-s} (i.e., is 2^s times smaller than it could be). Then (recall Proposition 17) the complexity of A is s + O(log k), since we can construct A knowing k and the first s bits of Ω_k (before the bit that corresponds to A). So we get an ((s + O(log k)) * (k - s))-description with optimality deficiency O(log k).

On the other hand, knowing x and k, we can find the ordinal number of x in the enumeration, so we know Ω_k with error at most 2^{k-s}; hence C(Ω_k | x) ≤ k - s + O(log k), and I(x : Ω_k) ≥ s - O(log k) (recall that C(Ω_k) = k + O(1)).
In the last statement we may replace Ω_k by Ω_n (where n is the length of x): we know from Proposition 14 that Ω_k is simple given Ω_n, so if the condition Ω_k decreases the complexity of x by almost s bits, the same is true for the condition Ω_n.
Comparing an arbitrary i ≤ n with this s (it can be larger than s or smaller than s), we get the following result:

Proposition 20. Let x be a string of length n. For every i ≤ n,
• either x is (i + O(log n), O(log n))-stochastic,
• or I(x : Ω_n) ≥ i - O(log n).
Now we may use the following (simple and general) observation: for every string u the probability to generate (by a randomized algorithm) an object that contains a lot of information about u is negligible:

Proposition 21. For every string u and for every number d, we have

Σ {m(x) | K(x) - K(x|u) ≥ d} ≤ 2^{-d}.
In this proposition the sum is taken over all strings x that have the given property (have a large mutual information with u). Note that we have chosen the representation of mutual information that makes the proposition easy (in particular, we have used prefix complexity). As we mentioned, other definitions differ only by O(log n) if we consider strings x and u of length at most n, and logarithmic accuracy is enough for our purposes.
Combining Propositions 20 and 21, we get the following strengthening of Proposition 5:

Σ { m(x) | x is an n-bit string that is not (α, O(log n))-stochastic } ≤ 2^{-α+O(log n)}

for every α. The improvement here is the better upper bound for the randomness deficiency: O(log n) instead of α + O(log n).
Historical comments
The relation between busy beaver numbers and Kolmogorov complexity was pointed out in [START_REF] Gács | On the relation between descriptional complexity and algorithmic probability[END_REF] (see Section 2.1). The enumerations of all objects of bounded complexity and their relation to stochasticity were studied in [START_REF] Gács | Algorithmic statistics[END_REF] (see Section III, E).
Computational and logical depth
In this section we reformulate the results of the previous one in terms of bounded-time Kolmogorov complexity and discuss the various notions of computational and logical depth that have appeared in the literature. (The impatient reader may skip this section; it is not technically used in the sequel.)
Bounded-time Kolmogorov complexity
The usual definition of Kolmogorov complexity of x as the minimal length l(p) of a program p that produces x does not take into account the running time of the program p: it may happen that the minimal program for x requires a lot of time to produce x while other programs produce x faster but are longer (for example, program "print x" is rather fast). To analyze this trade-off, the following definition is used. This definition was mentioned already in the first paper by Kolmogorov [START_REF] Kolmogorov | Three Approaches to the Quantitative Definition of Information [Russian: Три подхода к определению понятия количество информации ] Problems of Information Transmission [Проблемы передачи информации[END_REF]:
Our approach has one important drawback: it does not take into account the efforts needed to transform the program p and object x [the description and the condition] to the object y [whose complexity is defined]. With appropriate definitions, one may prove mathematical results that could be interpreted as the existence of an object x that has simple programs (has very small complexity K(x)) but all short programs that produce x require an unrealistically long computation. In another paper I plan to study the dependence of the program complexity K t (x) on the difficulty t of its transformation into x. Then the complexity K(x) (as defined earlier) reappears as the minimum value of K t (x) if we remove restrictions on t.
Kolmogorov never published the paper he speaks about, and this definition is less studied than the definition without time bounds, for several reasons. First, the definition is machine-dependent: we need to decide what computation model is used to count the number of steps. For example, we may consider one-tape Turing machines, or multi-tape Turing machines, or some other computational model. The computation time depends on this choice, though not drastically (e.g., a multi-tape machine can be replaced with a one-tape machine with a quadratic increase in time, and most popular models are polynomially related; this observation is used when we argue that the class P of polynomial-time computable functions is well defined).
Second, the basic result that makes the Kolmogorov complexity theory possible is the Solomonoff-Kolmogorov theorem saying that there exists an optimal algorithm D that makes the complexity function minimal up to O(1) additive term. Now we need to take into account the time bound, and get the following (not so nice) result.
Proposition 23. There exists an optimal algorithm D for time-bounded complexity in the following sense: for every other algorithm D' there exist a constant c and a polynomial q such that C^{q(t)}_D(x) ≤ C^t_{D'}(x) + c for all strings x and integers t.
In this result, by "algorithm" we may mean a k-tape Turing machine, where k is an arbitrary fixed number. However, the claim remains true even when k is not fixed, i.e., we may allow D' to have more tapes than D has.
The proof remains essentially the same: we choose some simple self-delimiting encoding p ↦ p̂ of binary strings and some universal algorithm U(·,·), and then let

D(p̂x) = U(p, x).

Then the proof follows the standard scheme; the only thing we need to note is that the decoding of p̂ runs in polynomial time (which is true for most natural ways of self-delimiting encoding) and that the simulation overhead of the universal algorithm is polynomial (which is also true for most natural constructions of universal algorithms).
A similar result is true for conditional decompressors, so the conditional time-bounded complexity can be defined as well.
For Turing machines with a fixed number of tapes the statement is true for some linear polynomial q(n) = O(n). For the proof we need to consider a universal machine U that simulates other machines efficiently: it should move the program along the tape, so the overhead is bounded by a factor that depends on the size of the program and not on the size of the input or the computation time.

Let t(n) be an arbitrary total computable function with integer arguments and values; then the function
x ↦ C^{t(l(x))}_D(x)
is a computable upper bound for the complexity C(x) (defined with the same D; recall that l(x) stands for the length of x). Replacing the function t(•) by a bigger function, we get a smaller computable upper bound. An easy observation: in this way we can match every computable upper bound for Kolmogorov complexity.
Proposition 24. Let C'(x) be some total computable upper bound for the Kolmogorov complexity function based on the optimal algorithm D from Proposition 23. Then there exists a computable function t such that C^{t(l(x))}_D(x) ≤ C'(x) for every x.

Proof. Given a number n, we wait until every string x of length at most n gets a program of length at most C'(x), and let t(n) be the maximal number of steps used by these programs.
So the choice of a computable time bound is essentially equivalent to the choice of a computable total upper bound for Kolmogorov complexity.
In the sequel we assume that some optimal (in the sense of Proposition 23) D is fixed and omit the subscript D in C^t_D(·). Similar notation C^t(·|·) is used for conditional time-bounded complexity.
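As a toy illustration of time-bounded complexity, the following sketch brute-forces C^t_D(x) for a made-up decompressor D with an explicit step counter. The decompressor, its cost model and all parameters are arbitrary choices for the example (not the optimal machine of Proposition 23); the point is only that C^t(x) is non-increasing in t and that a shorter program may need much more time than the literal one.

from itertools import product

def D(program, budget):
    # Toy decompressor with a step counter.
    #   '1' + w     : copy w, one output symbol per step (fast, long program)
    #   '0' + '1'*k : output 2**k zeros using (2**k)**2 steps (slow, short program)
    # Returns (output, finished_within_budget).
    if program.startswith("1"):
        w = program[1:]
        return (w, True) if len(w) <= budget else (None, False)
    if program.startswith("0") and set(program[1:]) <= {"1"}:
        k = len(program) - 1
        return ("0" * 2 ** k, True) if (2 ** k) ** 2 <= budget else (None, False)
    return None, True  # undefined program; it halts without producing anything

def C_t(x, budget, max_prog_len=18):
    # Brute-force time-bounded complexity: length of the shortest program
    # that produces x within the given step budget.
    for n in range(1, max_prog_len + 1):
        for bits in product("01", repeat=n):
            out, finished = D("".join(bits), budget)
            if finished and out == x:
                return n
    return None

x = "0" * 16
for t in (8, 16, 64, 256, 1024):
    print(t, C_t(x, t))   # None, 17, 17, 5, 5: the shorter program needs more time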
Trade-off between time and complexity
We use the extremely fast growing sequence B(0), B(1), ... as a scale for measuring time. This sequence grows faster than any computable function (since the complexity of t(n) for any computable t is at most log n + O(1), we have B(log n + O(1)) ≥ t(n)). In this scale it does not matter whether we use time or space as the resource measure: they differ at most by an exponential function, and 2^{B(n)} ≤ B(n + O(1)) (in general, f(B(n)) ≤ B(n + O(1)) for every computable f). So we are in the realm of general computability theory even if we technically speak about computational complexity, and the problems related to the unsolved P=NP question disappear.
Let x be a string of length n and complexity k. Consider the time-bounded complexity C^t(x) as a function of t. (The optimal algorithm from Proposition 23 is fixed, so we do not mention it in the notation.) It is a decreasing function of t. For small values of t the complexity C^t(x) is bounded by n + O(1), where n stands for the length of x. Indeed, the program that prints x has size n + O(1) and works rather fast. Formally speaking, C^t(x) ≤ n + O(1) for t = B(O(log n)). As t increases, the value of C^t(x) decreases and reaches k = C(x) as t → ∞. It is guaranteed to happen for t = B(k + O(1)), since the computation time for the shortest program for x is determined by this program.
We can draw a curve that reflects this trade-off using the B-scale for the time axis. Namely, consider the graph of the function

i ↦ C^{B(i)}(x) - C(x)

and the set of points above this graph, i.e., the set

D_x = {(i, j) | C^{B(i)}(x) - C(x) ≤ j}.
Theorem 6 ( [START_REF] Bauwens | Computability in statistical hypotheses testing, and characterizations of independence and directed influences in time series using Kolmogorov complexity[END_REF][START_REF] Antunes | Sophistication vs. Logical Depth, Theory of Computing Systems[END_REF]). The set D x coincides with the set Q x with O(log n)-precision for a string x of length n.
Recall that the set Q x consists of pairs (α, β) such that x is (α, β)-stochastic (see p. 25).
Proof. As we know from Theorem 4, the sets P x and Q x are related by an affine transformation (see Figure 4). Taking this transformation into account, we need to prove two statements:
• if there exists an (i * j)-description A for x, then C^{B(i+O(log n))}(x) ≤ i + j + O(log n);

• if C^{B(i)}(x) ≤ i + j, then there exists an ((i + O(log n)) * (j + O(log n)))-description for x.

Both statements are easy to prove using the tools from the previous section. Indeed, assume that x has an (i * j)-description A. All elements of A have complexity at most i + j + O(log n). Knowing A and this complexity, we can find the minimal t such that C^t(x') ≤ i + j + O(log n) for all x' from A. This t can be computed from A, which has complexity at most i, and an O(log n)-bit advice (the value of the complexity). Hence t ≤ B(i + O(log n)) and C^t(x) ≤ i + j + O(log n), as required.
The converse: assume that C^{B(i)}(x) ≤ i + j. Consider all the strings that satisfy this inequality. There are at most O(2^{i+j}) such strings. Thus we only need to show that, given i and j, we are able to enumerate all those strings in at most O(2^i) portions.

One can get a list of all those strings if B(i) is given, but we cannot compute B(i) given i. Recall that B(i) is the maximal integer that has complexity at most i; new candidates for B(i) may appear at most 2^i times. The candidates increase with time; when this happens, we get a new portion of strings that satisfy the inequality C^{B(i)}(·) ≤ i + j. So we have at most O(2^{i+j}) objects, including x, that are enumerated in at most 2^i portions, and this implies that x has an ((i + O(log n)) * j)-description. Indeed, we make all portions of size at most 2^j by splitting larger portions into pieces. The number of portions increases at most by O(2^i), so it remains O(2^i). Each portion (including the one that contains x) then has complexity at most i + O(log n), since it can be computed with logarithmic advice from its ordinal number.
This theorem shows that the results about the existence of non-stochastic objects can be considered as the "mathematical results that could be interpreted as the existence of an object x that has simple programs (has very small complexity K(x)) but all short programs that produce x require an unrealistically long computation" mentioned by Kolmogorov (see the quotation above), and the algorithmic statistics can be interpreted as an implementation of Kolmogorov's plan "to study the dependence of the program complexity K t (x) on the difficulty t of its transformation into x", at least for the simple case of (unrealistically) large values of t.
Historical comments
Section 5 has the title "logical and computational depth" but we have not defined these notions yet. The name "logical depth" was introduced by C. Bennett in [START_REF] Bennett | Logical Depth and Physical Complexity, in The Universal Turing Machine: a Half-Century Survey[END_REF]. He explains the motivation as follows: Some mathematical and natural objects (a random sequence, a sequence of zeros, a perfect crystal, a gas) are intuitively trivial, while others (e.g., the human body, the digits of π) contain internal evidence of a nontrivial causal history. . . . We propose depth as a formal measure of value. From the earliest days of information theory it has been appreciated that information per se is not a good measure of message value. For example, a typical sequence of coin tosses has high information content but little value; an ephemeris, giving the positions of the moon and the planets every day for a hundred years, has no more information than the equations of motion and initial conditions from which it was calculated, but saves its owner the effort of recalculating these positions. The value of a message thus appears to reside not in its information (its absolutely unpredictable parts), nor in its obvious redundancy (verbatim repetitions, unequal digit frequencies), but rather is what might be called its buried redundancy: parts predictable only with difficulty, things the receiver could in principle have figured out without being told, but only at considerable cost in money, time, or computation. In other words, the value of a message is the amount of mathematical or other work plausibly done by its originator, which its receiver is saved from having to repeat.
Trying to formalize this intuition, Bennett suggests the following possible definitions: Tentative Definition 0.1: A string's depth might be defined as the execution time of its minimal program. This notion is not robust (it depends on the specific choice of the optimal machine used in the definition of complexity). So Bennett considers another version: Tentative Definition 0.2: A string's depth at significance level s [might] be defined as the time required to compute the string by a program no more than s bits larger than the minimal program.
We see that Definition 0.2 considers the same trade-off as in Theorem 6, but in reversed coordinates (time as a function of the difference between time-bounded and limit complexities). Bennett is still not satisfied by this definition, for the following reason:
This proposed definition solves the stability problem, but is unsatisfactory in the way it treats multiple programs of the same length. Intuitively, 2^k distinct (n + k)-bit programs that compute the same output ought to be accorded the same weight as one n-bit program . . .
In other language, he suggests to consider a priori probability instead of complexity:
Tentative Definition 0.3: A string's depth at significance level s might be defined as the time t required for the string's time-bounded algorithmic probability P_t(x) to rise to within a factor 2^{-s} of its asymptotic time-unbounded value P(x).
Here P_t(x) is understood as the total weight of all self-delimiting programs that produce x in time at most t (each program of length s has weight 2^{-s}). For our case (when we consider busy beaver numbers as the time scale) the exponential time increase needed to switch from a priori probability to prefix complexity does not matter. Still Bennett is interested in more reasonable time bounds (recall that in his informal explanation a polynomially computable sequence of π-digits was an example of a deep sequence!), and prefers the a priori probability approach. Moreover, he finds a nice reformulation of this definition (an almost equivalent one) in terms of complexity: Although Definition 0.3 satisfactorily captures the informal notion of depth, we propose a slightly stronger definition for the technical reason that it appears to yield a stronger slow growth property . . . Definition 1 (Depth of Finite Strings): Let x and w be strings [probably w is a typo: it is not mentioned later] and s a significance parameter. A string's depth at significance level s, denoted D_s(x), will be defined as

min{T(p) : (|p| - |p*| < s) ∧ (U(p) = x)}.

Here p* is a shortest self-delimiting program for p, so its length |p*| equals K(p).
Actually, this Definition 1 has a different underlying intuition than all the previous ones: a string x is deep if all programs that compute x in a reasonable time are compressible. Note that before we required a different thing: that all programs that compute x in a reasonable time are much longer than the minimal one. This is a weaker requirement: one may imagine a long incompressible program that computes x fast. This intuition is explained in the abstract of the paper [START_REF] Bennett | Logical Depth and Physical Complexity, in The Universal Turing Machine: a Half-Century Survey[END_REF] as follows:
[We define] an object's "logical depth" as the time required by a standard universal Turing machine to generate it from an input that is algorithmically random.
Bennett then proves a statement (called Lemma 3 in his paper) that shows that his Definition 1 is almost equivalent to Tentative Definition 0.3 : the time remains exactly the same, while s changes at most logarithmically (in fact, at most by K(s)). So if we use Bennett's notion of depth (any of them, except for the first one mentioned) with busy beaver time scale, we get the same curve as in our definition.
A natural question arises: is there a direct proof that the output of an incompressible program with not too large running time is stochastic? In fact, yes, and one can prove a more general statement: the output of a stochastic program with reasonable running time is stochastic (see Section 5.4); note that stochasticity is a weaker condition than incompressibility.
Let us also mention the notion of computational depth introduced in [START_REF] Antunes | Computational depth[END_REF]. There are several versions mentioned in this paper; the first one exchanges coordinates in Bennett's tentative definition 0.2 (reproduced in [START_REF] Antunes | Computational depth[END_REF] as Definition 2.5). The authors write: "The first notion of computational depth we propose is the difference between a time-bounded Kolmogorov complexity and traditional Kolmogorov complexity" (Definition 3.1, where the time bound is some function of the input length). The other notions of computational depth are more subtle (they use distinguishing complexity or Levin complexity involving the logarithm of the computation time).
The connections between computational/logical depth and sophistication were anticipated for a long time; for example, Koppel writes in [START_REF] Koppel | Structure, in The Universal Turing Machine: a Half-Century Survey[END_REF]:
. . . The "dynamic" approach to the formalization of meaningful complexity is "depth" defined and discussed by Bennett [START_REF] Adleman | Time, space and randomness[END_REF]. [Reference to an unpublished paper "On the logical 'depth' of sequences and their reducibilities to incompressible sequences".] The depth of an object is the running-time of its most concise description. Since it is reasonable to assume that an object has been generated by its most concise description, the depth of an object can be thought of as a measure of its evolvedness.
Although sophistication is measured in integers [it is not clear what is meant here: the sophistication of S is also a function c ↦ SOPH_c(S)] and depth is measured in functions, it is not difficult to translate to a common range.
Strangely, the direct connection between the most basic versions of these notions (Theorem 6) seems to have been noticed only recently, in [6, Section 3] and [START_REF] Antunes | Sophistication vs. Logical Depth, Theory of Computing Systems[END_REF].
Why so many equivalent definitions?
We have shown several equivalent (with logarithmic precision and up to an affine transformation) ways to define the same curve:
• (α, β)-stochasticity (Section 2);
• two-part descriptions and optimality deficiency, the set P x (Section 3);
• position in the enumeration of objects of bounded complexity (Section 4);
• logical/computational depth (resource-bounded complexity, Section 5).
One can add to this list a characterization in terms of split enumeration (Section 3.4): the existence of (i * j)-description for x is equivalent (with logarithmic precision) to the existence of a simple enumeration of at most 2 i+j objects in at most 2 i portions (see Remark 7, p. 23, and the discussion before it).
Why do we need so many equivalent definitions of the same curve? First, this shows that this curve is really fundamental, almost as fundamental a characterization of an object x as its complexity. As Koppel writes in [START_REF] Koppel | Complexity, Depth and Sophistication[END_REF], speaking about (some versions of) sophistication and depth:
One way of demonstrating the naturalness of a concept is by proving the equivalence of a variety of prima facie different formalizations . . . . It is hoped that the proof of the equivalence of two approaches to meaningful complexity, one using static resources (program size) and the other using dynamic resources (time), will demonstrate not only the naturalness of the concept but also the correctness of the specifications used in each formalization to ensure robustness and generality.
Another, more technical reason: different results about stochasticity use different equivalent definitions, and a statement that looks quite mysterious for one of them may become almost obvious for another. Let us give two examples of this type (the first one is stochasticity conservation when random noise is added, the second one is a direct proof of Bennett's characterization mentioned above). The first example is the following proposition from [START_REF] Vereshchagin | Algorithmic Minimal Sufficient Statistics: a New Approach[END_REF] (though the proof there is different). Proposition 25. Let x be some binary string, and let y be another string ("noise") that is conditionally random with respect to x, i.e., C(y |x) ≈ l(y). Then the pair (x, y) has the same stochasticity profile as x: the sets Q x and Q (x,y) are logarithmically close to each other.
Before giving a proof sketch, let us mention that an interesting special case of this proposition is obtained if we consider a string u and its description X with small randomness deficiency: d(u|X) ≈ 0. Let y be the ordinal number of u in X. Then the small randomness deficiency guarantees that y is conditionally random with respect to X. Then the pair (X, y) has the same stochasticity profile as X. Since this pair is mapped to u by a simple total computable function, we conclude (Proposition 3) that the stochasticity profile of X is contained in the stochasticity profile of u (more precisely, in its O(log n + d(u|X))-neighborhood). (A simpler and more direct proof of this statement goes as follows: if U is a description for X that has small complexity and optimality deficiency, we can take the union of all elements of U that have approximately the same cardinality as X; one can easily verify that this union also has small complexity and optimality deficiency as a description for u.)
The full statement of Proposition 25 would introduce some bound for the difference l(y) -C(y |x) that is allowed to appear in the final estimate for the distance between sets. Recall also that we can speak about profiles of arbitrary finite objects, in particular, pairs of strings, using some natural encoding (Section 2.3).
Proof sketch. Using the depth characterization of the stochasticity profile, we need to show that

C^{B(i)}(x, y) - C(x, y) ≈ C^{B(i)}(x) - C(x).

Here "approximately" means that these two quantities may differ by a logarithmic term, and also we are allowed to add logarithmic terms to i (see below what this means). The natural idea is to rewrite this equality as

C^{B(i)}(x, y) - C^{B(i)}(x) ≈ C(x, y) - C(x).

The right-hand side is equal to C(y|x) (with logarithmic precision) due to the Kolmogorov-Levin formula for the complexity of a pair (see, e.g., [START_REF] Верещагин | Колмогоровская сложность и алгоритмическая случайность[END_REF]Chapter 2]), and C(y|x) equals l(y), as y is random and independent of x. Thus it suffices to show that the left-hand side also equals l(y). To this end we can prove a version of the Kolmogorov-Levin formula for bounded complexity and show that the left-hand side equals C^{B(i)}(y|x). Again, since y is random and independent of x, C^{B(i)}(y|x) equals l(y). This plan needs clarification. First of all, let us explain which version of the Kolmogorov-Levin formula for bounded complexity we need. (Essentially it was published by Longpré in [START_REF] Longpré | Resource bounded Kolmogorov complexity, a link between computational complexity and information theory[END_REF] though the statement was obscured by considering the time bound as a function of the input length.)
The equality C(x, y) = C(x) + C(y |x) should be considered as two inequalities, and each one should be treated separately.
Lemma. 1. There exist some constant c and some polynomial p(·,·) such that C^{p(n,t)}(x, y) ≤ C^t(x) + C^t(y|x) + c log n for all n and t and for all strings x and y of length at most n.

2. There exist some constant c and some polynomial p(·,·) such that C^{p(2^n,t)}(x) + C^{p(2^n,t)}(y|x) ≤ C^t(x, y) + c log n for all n and t and for all strings x and y of length at most n.
Proof of the lemma. The proof of this time-bounded version is obtained by a straightforward analysis of the time requirements in the standard proof. The first part says that if there is some program p that produces x in time t, and some program q that produces y from x in time t, then the pair (p, q) can be considered as a program that produces (x, y) in time poly(t, n) and has length l(p) + l(q) + O(log n) (we may assume without loss of generality that p and q have length O(n), otherwise we replace them by shorter fast programs).
The other direction is more complicated. Assume that C^t(x, y) = m. We have to count, for the given x, the number of strings y' such that C^t(x, y') ≤ m. These strings (y is one of them) can be enumerated in time poly(2^n, t), so if there are 2^s of them, then C^{poly(2^n,t)}(y|x) ≤ s + O(log n) (the program witnessing this inequality is the ordinal number of y in the enumeration plus O(log n) bits of auxiliary information). Note that we do not need to specify t in advance: we enumerate the strings y' in order of increasing time, and y is among the first 2^s enumerated strings.
On the other hand, there are at most 2^{m-s+O(1)} strings x' for which this number (of different y' such that C^t(x', y') ≤ m) is at least 2^{s-1}, and these strings can also be enumerated in time poly(2^n, t), so C^{poly(2^n,t)}(x) ≤ m - s + O(log n) (again we do not need to specify t, we just gradually increase the time bound). When these two inequalities are added, s disappears and we get the desired inequality.
Of course, the exponent in the lemma is disappointing (for the space bound it is not needed, by the way), but since we measure time in busy beaver units, it is not a problem for us: indeed, poly(2^n, B(i)) ≤ B(i + O(log n)), and we allow a logarithmic change in the argument anyway. Now we should apply this lemma, but first we need to give a full statement of what we want to prove. There are two parts (as in the lemma):
• for every i there exists j ≤ i + O(log n) such that

C^{B(j)}(x, y) - C(x, y) ≤ C^{B(i)}(x) - C(x) + ε + O(log n)

for all strings x and y of length at most n such that C(y|x) ≥ l(y) - ε;

• for every i there exists j ≤ i + O(log n) such that

C^{B(j)}(x) - C(x) ≤ C^{B(i)}(x, y) - C(x, y) + O(log n)

for all strings x and y of length at most n.
Both statements easily follow from the lemma. Let us start with the second statement, where the hard direction of the lemma is used. As planned, we rewrite the inequality as

C^{B(j)}(x) + C(y|x) ≤ C^{B(i)}(x, y) + O(log n)

using the unbounded formula. Our lemma guarantees that

C^{B(j)}(x) + C^{B(j)}(y|x) ≤ C^{B(i)}(x, y) + O(log n)

for some j ≤ i + O(log n), and it remains to note that C(y|x) ≤ C^{B(j)}(y|x). For the other direction the argument is similar: we rewrite the inequality as

C^{B(j)}(x, y) ≤ C(y|x) + C^{B(i)}(x) + O(log n)

and note that C(y|x) ≥ l(y) - ε ≥ C^{B(i)}(y|x) - ε, assuming that B(i) is greater than the time needed to print y from its literal description (otherwise the statement is trivial). So the lemma again can be used (in the simple direction).
This proof used the depth representation of the stochasticity curve; in other cases some other representations are more convenient. Our second example is the change in the stochasticity profile when a simple algorithmic transformation is applied. We have seen (Section 2.3) that a total mapping with a short program preserves stochasticity, and noted that for non-total mappings this is not the case (Remark 3, p. 11). However, if the time needed to perform the transformation is bounded, we can get some bound (first proven by A. Milovanov in a different way):

Proposition 26. Let F be a computable mapping whose arguments and values are strings. If some n-bit string x is (α, β)-stochastic, and F(x) is computed in time B(i) for some i, then F(x) is (max(α, i) + O(log n), β + O(log n))-stochastic. (The constant in the O(log n)-notation depends on F but not on n, x, α, β.)
Proof sketch. Let us denote F(x) by y. By assumption there exists an (α * (C(x) - α + β))-description of x (recall the definition with optimality deficiency; we omit logarithmic terms as usual). So there exists a simple enumeration of at most 2^{C(x)+β} objects in at most 2^α portions that includes x. Let us count the objects x' in this enumeration such that F(x') = y and the computation uses time at most B(i); assume there are 2^s of them. Then we can enumerate all y's that have at least 2^s such preimages, in 2^α + 2^i portions. Indeed, new portions appear in two cases: (1) a new portion appears in the original enumeration; (2) the candidate for B(i) increases. The first event happens at most 2^α times, the second at most 2^i times. The total number of y's enumerated is at most 2^{C(x)+β-s}; it remains to note that C(x) - s ≤ C(y). Indeed, C(x) ≤ C(y) + C(x|y), and C(x|y) ≤ s, since we can enumerate all the preimages of y in order of increasing time, and x is determined by its s-bit ordinal number in this enumeration.
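The combinatorial core of this argument (emitting the images that have accumulated many preimages, while keeping track of portions) can be illustrated by the following sketch. It is only a toy: the input arrives in abstract portions given as Python lists, the map F and the threshold 2^s are arbitrary, and the second source of new portions in the proof (increases of the candidate for B(i)) is ignored here.

from collections import Counter

def heavy_values(portions, F, s):
    # Emit, portion by portion, the values y that have reached 2**s preimages;
    # a new output portion can appear only when a new input portion arrives.
    seen = Counter()
    emitted = set()
    out_portions = []
    for portion in portions:
        new = []
        for x in portion:
            y = F(x)
            seen[y] += 1
            if seen[y] >= 2 ** s and y not in emitted:
                emitted.add(y)
                new.append(y)
        if new:
            out_portions.append(new)
    return out_portions

# Example: 4 input portions, F = integer division by 4, threshold 2**1 = 2.
portions = [[0, 1, 2, 3], [4, 5], [6, 7, 8], [9, 10, 11, 12]]
print(heavy_values(portions, lambda x: x // 4, 1))
# [[0], [1], [2]] -- no more output portions than input portions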
A special case of this proposition is Bennett's observation: if some d-incompressible program p produces x in time B(i), then p is (0, d)-stochastic, and p is mapped to x by the interpreter (decompressor) in time B(i), so x is (0 + i, d)-stochastic. (For simplicity we omit all the logarithmic terms in this argument, as well as in the previous proof sketch.)

Remark 10. One can combine Remark 4 (page 11) with Proposition 26 and show that if a program F of complexity at most j is applied to an (α, β)-stochastic string x of length n and the computation terminates in time B(i), then F(x) is (max(i, α) + j + O(log n), β + j + O(log n))-stochastic, where the constant in the O(log n)-notation is absolute (does not depend on F). To show this, one may consider the pair (x, F); it is easy to show (this can be done in different ways using different characterizations of the stochasticity curve) that this pair is (α + j + O(log n), β + j + O(log n))-stochastic.
Let us note also that there are some results in algorithmic information theory that are true for stochastic objects but are false or unknown without this assumption. We will discuss (without proofs) two examples of this type. The first is the Epstein-Levin theorem saying that for a stochastic set A its total a priori probability is close to the maximum a priori probability of A's elements; see [START_REF] Muchnik | Game arguments in computability theory and algorithmic information theory[END_REF] for details. Here the result is (obviously) false without the stochasticity assumption.
In the next example [START_REF] Muchnik | Stability of properties of Kolmogorov complexity under relativization[END_REF] the stochasticity assumption is used in the proof, and it is not known whether the statement remains true without it: for every triple of strings (x, y, z) of length at most n there exists a string z' such that
• C(x|z) = C(x|z') + O(log n),
• C(y|z) = C(y|z') + O(log n),
• C(x, y|z) = C(x, y|z') + O(log n),
• C(z') ≤ I((x, y) : z) + O(log n),
assuming that (x, y) is (O(log n), O(log n))-stochastic.
This proposition is related to the following open question on "irrelevant oracles": assume that the mutual information between (x, y) and some z is negligible. Can an oracle z (an "irrelevant oracle") substantially change natural properties of the pair (x, y) formulated in terms of Kolmogorov complexity? For instance, can such an oracle z allow us to extract some common information of x and y? In [START_REF] Muchnik | Stability of properties of Kolmogorov complexity under relativization[END_REF] a negative answer to the latter question is given, but only for stochastic pairs (x, y).
6 Descriptions of restricted type
Families of descriptions
In this section we consider the restricted case: the sets (considered as descriptions, or statistical hypotheses) are taken from some family A that is fixed in advance. (Elements of A are finite sets of binary strings.) Informally speaking, this means that we have some a priori information about the black box that produces a given string: this string is obtained by a random choice in one of the A-sets, but we do not know in which one.
Before we had no restrictions (the family A was the family of all finite sets). It turns out that the results obtained so far can be extended (sometimes with weaker bounds) to other families that satisfy some natural conditions. Let us formulate these conditions.
(1) The family A is enumerable. This means that there exists an algorithm that prints elements of A as lists, with some separators (saying where one element of A ends and another one begins).
(2) For every n the family A contains the set B n of all n-bit strings.
(3) There exists some polynomial p with the following property: for every A ∈ A, for every natural n and for every natural c < #A the set of all n-bit strings in A can be covered by at most p(n) • #A/c sets of cardinality at most c from A.
The last condition is a replacement for splitting: in general, we cannot split a set A ∈ A into pieces from A, but at least we can cover a set A ∈ A by smaller elements of A (of size at most c) with polynomial overhead in the number of pieces, compared to the required minimum #A/c (more precisely, we have to cover only n-bit elements of A).
We assume that some family A that has properties (1)-(3) is fixed. For a string x we denote by P^A_x the set of pairs (i, j) such that x has an (i * j)-description that belongs to A. The set P^A_x is a subset of the set P_x defined earlier; the bigger A is, the bigger P^A_x is. The full set P_x is P^A_x for the family A that contains all finite sets. For every string x the set P^A_x has properties close to the properties of P_x proved earlier.
Proposition 27. For every string x of length n the following is true:

1. The set P^A_x contains a pair that is O(log n)-close to (0, n).

2. The set P^A_x contains a pair that is O(1)-close to (C(x), 0).

3. The adaptation of Proposition 8 is true: if (i, j) ∈ P^A_x, then (i + k + O(log n), j - k) also belongs to P^A_x for every k ≤ j. (Recall that n is the length of x.)
Proof. 1. The property (2) guarantees that the family A contains the set B_n, which is an (O(log n) * n)-description of x.

2. The property (3) applied to c = 1 and A = B_n says that every singleton belongs to A, therefore each string has a ((C(x) + O(1)) * 0)-description.

3. Assume that x has an (i * j)-description A ∈ A. For a given k we enumerate the family A until we find a subfamily of p(n)·2^k sets of size 2^{-k}·#A (or less) in A that covers all strings of length n in A. Such a subfamily exists due to (3), and p is the polynomial from (3). The complexity of the set that covers x does not exceed i + k + O(log n + log k), since this set is determined by A, n, k and the ordinal number of the set in the cover. We may assume without loss of generality that k ≤ n, otherwise {x} can be used as an ((i + k + O(log n)) * (j - k))-description of x. So the term O(log k) can be omitted.
For example, we may consider the family that consists of all "cylinders": for every n and for every string u of length at most n we consider the set of all n-bit strings that have prefix u. Obviously the family of all such sets (for all n and u) satisfies the conditions (1)-(3).
We may also fix some bits of a string (not necessarily forming a prefix). That is, for every string z in the ternary alphabet {0, 1, *} we consider the set of all bit strings that can be obtained from z by replacing the stars with some bits. This set contains 2^k strings if z has k stars. The conditions (1)-(3) are fulfilled for this larger family, too.
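For this family of patterns, condition (3) can even be checked exactly: a pattern with k stars splits into 2^{k-j} sub-patterns with j stars each, so the bound #A/c is achieved with no polynomial overhead at all. The sketch below (with made-up helper names expand and split_pattern) illustrates this.

from itertools import product

def expand(pattern):
    # All bit strings matching a pattern over the alphabet {0, 1, *}.
    stars = [i for i, ch in enumerate(pattern) if ch == "*"]
    out = []
    for bits in product("01", repeat=len(stars)):
        s = list(pattern)
        for pos, b in zip(stars, bits):
            s[pos] = b
        out.append("".join(s))
    return out

def split_pattern(pattern, j):
    # Cover the subcube 'pattern' by sub-patterns with exactly j stars
    # (assumes j is at most the number of stars in 'pattern').
    stars = [i for i, ch in enumerate(pattern) if ch == "*"]
    fixed = stars[: len(stars) - j]          # star positions to instantiate
    pieces = []
    for bits in product("01", repeat=len(fixed)):
        s = list(pattern)
        for pos, b in zip(fixed, bits):
            s[pos] = b
        pieces.append("".join(s))
    return pieces

A = "0**1**"                                 # 4 stars: a subcube of size 16
pieces = split_pattern(A, 2)                 # cover by subcubes of size 4
assert sorted(sum((expand(p) for p in pieces), [])) == sorted(expand(A))
print(len(pieces), "pieces of size", len(expand(pieces[0])))   # 4 pieces of size 4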
A more interesting example is the family A formed by all balls in the Hamming sense, i.e., the sets B_{y,r} = {x | l(x) = l(y), d(x, y) ≤ r}. Here l(u) is the length of a binary string u, and d(x, y) is the Hamming distance between two strings x and y of the same length. The parameter r is called the radius of the ball, and y is its center. Informally speaking, this means that the experimental data were obtained by changing at most r bits in some string y (and all possible changes are equally probable). This assumption could be reasonable if some string y is sent via an unreliable channel. Both parameters y and r are not known to us in advance.
It turns out that the family of Hamming balls satisfies the conditions (1)-(3). This is not completely obvious. For example, these conditions imply that for every n and for every r ≤ n the set B_n of n-bit strings can be covered by p(n)·2^n/V Hamming balls of radius r, where V stands for the cardinality of such a ball (i.e., V = (n choose 0) + ... + (n choose r)) and p is some polynomial. This can be shown by a probabilistic argument: take N balls of radius r whose centers are randomly chosen in B_n. For a given x ∈ B_n the probability that x is not covered by any of these balls equals (1 - V/2^n)^N < e^{-VN/2^n}. For N = n ln 2 · 2^n/V this upper bound is 2^{-n}, so for this N the probability to leave some x uncovered is less than 1. A similar argument can be used to prove (1)-(3) in the general case.
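The probabilistic covering argument is easy to test empirically for tiny parameters. The sketch below is a Monte Carlo illustration only (the values of n, r, the seed and the number of trials are arbitrary choices): it picks N = ⌈n ln 2 · 2^n/V⌉ random centers and checks whether the whole cube is covered. The argument only needs this to happen with positive probability; doubling N already makes coverage almost certain.

import math, random
from itertools import combinations

def ball_size(n, r):
    return sum(math.comb(n, i) for i in range(r + 1))

def covers(n, r, centers):
    # Check whether the given centers (as integers) cover all n-bit strings.
    covered = set()
    for c in centers:
        covered.add(c)
        for k in range(1, r + 1):
            for flips in combinations(range(n), k):
                y = c
                for pos in flips:
                    y ^= 1 << pos
                covered.add(y)
    return len(covered) == 2 ** n

n, r = 10, 2
V = ball_size(n, r)
N = math.ceil(n * math.log(2) * 2 ** n / V)
random.seed(1)
trials = 20
for mult in (1, 2):
    hits = sum(covers(n, r, [random.randrange(2 ** n) for _ in range(mult * N)])
               for _ in range(trials))
    print(f"{mult * N} random centers: full cover in {hits}/{trials} trials")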
Proposition 28 ([46]). The family of all Hamming balls satisfies conditions (1)-(3) above.
Proof sketch. Let A be a ball of radius a and let c be a number less than #A. We need to cover A by balls of cardinality c or less, using an almost minimal number of balls, close to the lower bound #A/c up to a polynomial factor. Let us make some observations.
(1) The set of all n-bit strings can be covered by two balls of radius n/2. So we can assume without loss of generality that a ≤ n/2; otherwise we can apply the probabilistic argument above.
(2) Clearly the radius of the covering balls should be maximal possible (to keep the cardinality at most c); for this radius the cardinality of the ball equals c up to polynomial factors, since the size of the ball increases at most by a factor of n + 1 when its radius increases by 1.
(3) It is enough to cover spheres instead of balls (since every ball is a union of polynomially many spheres); it is also enough to consider the case when the radius of the sphere that we want to cover (a) is bigger than the radius of the covering ball (b), otherwise one ball is enough.
(4) We will cover the a-sphere by randomly chosen b-balls whose centers are uniformly taken at some distance f from the center of the a-sphere. (See below about the choice of f.) We use the same probabilistic argument as before (for the set of all strings). It is enough to show that for a b-ball whose center is at that distance, a polynomial fraction of its points belong to the a-sphere. Instead of b-balls we may consider b-spheres, the cardinality ratio is polynomial.
(5) It remains to choose some f with the following property: if the center of a b-sphere S is at distance f from the center of the a-sphere T, then a polynomial fraction of the points of S belong to T. One can compute a suitable f explicitly. In probabilistic terms, we first change an f/n-fraction of bits and then change a random b/n-fraction of bits. The expected fraction of twice-changed bits is, therefore, about (f/n)(b/n), and the total fraction of changed bits is about f/n + b/n - 2(f/n)(b/n). So we need to write an equation saying that this expression equals a/n and then find the solution f. (Then one can perform the required estimate for the binomial coefficients.) However, one can avoid computations with the following probabilistic argument: start with b changed bits, and then change all the bits one by one in a random order. At the end we have n - b changed bits, and a is somewhere in between, so there is a moment where the number of changed bits is exactly a. And if the union of n events covers the entire probability space, one of these events has probability at least 1/n.

When a family A is fixed, a natural question arises: does the restriction on models (when we consider only models in A) change the set P_x? Is it possible that a string has good models in general, but not in the restricted class? The answer is positive for the class of Hamming balls, as the following proposition shows.
Proposition 29. Consider the family A that consists of all Hamming balls. For some positive ε and for all sufficiently large n there exists a string x of length n such that the distance between P^A_x and P_x exceeds εn.
Proof sketch. Fix some α in (0, 1/2) and let V be the cardinality of the Hamming ball of radius αn. Find a set E of cardinality N = 2^n/V such that every Hamming ball of radius αn contains at most n points from E. This property is related to list decoding in coding theory. The existence of such a set can be proved by a probabilistic argument: N randomly chosen n-bit strings have this property with positive probability. Indeed, the probability for a random point to fall into a fixed ball of radius αn is V/2^n, the inverse of the number of chosen points, so the number of E-points in such a ball is close to a Poisson distribution with parameter 1, whose tails decrease much faster than the 2^{−n} needed for the union bound.
Since E with this property can be found by an exhaustive search, we can assume that C(E) = O(log n) and ignore the complexity of E (as well as other O(log n) terms) in the sequel. Let x be a random element in E, i.e., a string x ∈ E of complexity about log #E. The complexity of a ball A of radius αn that contains x is at least C(x), since knowing such a ball and the ordinal number of x in A ∩ E, we can find x. Therefore x does not have (log #E, log V)-descriptions in A. On the other hand, x does have a (0, log #E)-description if we do not require the description to be in A; the set E is such a description. The point (log #E, log V) is above the line C(A) + log #A = log #E, so P^A_x is significantly smaller than P_x. This construction gives a stochastic x (E is the corresponding model) that becomes maximally non-stochastic if we restrict ourselves to Hamming balls as descriptions (Figure 7).
Possible shapes of boundary curve
Our next goal is to extend some results proven for non-restricted descriptions to the restricted case. Let A be a family that has properties (1)–(3). We prove a version of Theorem 1 where the precision (unfortunately) is significantly worse: O(√(n log n)) instead of O(log n). Note that with this precision the term O(m) (proportional to the complexity of the curve) that appeared in Theorem 1 is not needed. Indeed, if we draw the curve on cell paper with cell size √n or larger, then it touches only O(√n) cells, so it is determined by O(√n) bits with O(√n)-precision, and we may assume without loss of generality that the complexity of the curve is O(√n).
Theorem 7 ([START_REF] Vereshchagin | Rate Distortion and Denoising of Individual Data Using Kolmogorov Complexity[END_REF]). Let k ≤ n be two integers and let t_0 > t_1 > … > t_k be a strictly decreasing sequence of integers such that t_0 ≤ n and t_k = 0. Then there exists a string x of complexity k + O(√(n log n)) and length n + O(log n) for which the distance between P^A_x and T = {(i, j) | (i ≤ k) ⇒ (j ≤ t_i)} is at most O(√(n log n)).
We will see later (Theorem 8) that for every x the boundary curve of P^A_x goes down at least with slope −1, as in the unrestricted case, so this theorem describes all possible shapes of the boundary curve.
Proof. The proof is similar to the proof of Theorem 1. Let us recall this proof first. We consider the string x that is the lexicographically first string (of suitable length n ) that is not covered by any "bad" set, i.e., by any set of complexity at most i and size at most 2 j , where the pair (i, j) is at the boundary of the set T . The length n is chosen in such a way that the total number of strings in all bad sets is strictly less than 2 n . On the other hand, we need "good sets" that cover x. For every boundary point (i, j) we construct a set A i,j that contains x, has complexity close to i and size 2 j . The set A i,j is constructed in several attempts. Initially A i,j is the set of lexicographically first 2 j strings of length n . Then we enumerate bad sets and delete all their elements from A i,j . At some step A i,j may become empty; then we refill it with 2 j lexicographically first strings that are not in the bad sets (at the moment). By construction the final A i,j contains the first x that is not in bad sets (since it is the case all the time). And the set A i,j can be described by the number of changes (plus some small information describing the process as a whole and the value of j). So it is crucial to have an upper bound for the number of changes. How do we get this bound? We note that when A i,j becomes empty, it is refilled again, and all the new elements should be covered by bad sets before the new change could happen. Two types of bad sets may appear: "small" ones (of size less than 2 j ) and "large ones" (of size at least 2 j ). The slope of the boundary line for T guarantees that the total number of elements in all small bad sets does not exceed 2 i+j (up to a poly(n)factor), so they may make A i,j empty only 2 i times. And the number of large bad sets is O(2 i ), since the complexity of each is bounded by i. (More precisely, we count separately the number of changes for A i,j that are first changes after a large bad set appears, and the number of other changes.)
Can we use the same argument in our new situation? We can generate bad sets as before and have the same bounds for their sizes and the total number of their elements. So the length n of x can be the same (in fact, almost the same, as we will need now that the union of all bad sets is less than half of all strings of length n , see below). Note that we now may enumerate only bad sets in A, since A is enumerable, but we do not even need this restriction. What we cannot do is to let A i,j to be the set of the first non-deleted elements: we need A i,j to be a set from A.
So we now go in the other direction. Instead of choosing x first and then finding suitable "good" A i,j that contain x, we construct the sets A i,j ∈ A that change in time in such a way that (1) their intersection always contains some non-deleted element (an element that is not yet covered by bad sets); (2) each A i,j has not too many versions. The non-deleted element in their intersection (in the final state) is then chosen as x.
Unfortunately, we cannot do this for all points (i, j) along the boundary curve. (This explains the loss of precision in the statement of the theorem.) Instead, we construct "good" sets only for some values of j. These values go down from n to 0 with step √(n log n). We select N = √(n/log n) points (i_1, j_1), …, (i_N, j_N) on the boundary of T; the first coordinates i_1, …, i_N form a non-decreasing sequence, and the second coordinates j_1, …, j_N split the range n … 0 into (almost) equal intervals (j_1 = n, j_N = 0). Then we construct good sets of sizes at most 2^{j_1}, …, 2^{j_N}, and denote them by A_1, …, A_N. All these sets belong to the family A. We also let A_0 be the set of all strings of length n′ = n + O(log n); the choice of the constant in O(log n) will be discussed later.
Let us first describe the construction of A_1, …, A_N assuming that the set of deleted elements is fixed. (Then we discuss what to do when more elements are deleted.) We construct A_s inductively (first A_1, then A_2, etc.). As we have said, #A_s ≤ 2^{j_s} (in particular, A_N is a singleton), and we keep track of the ratio
(the number of non-deleted strings in A_0 ∩ A_1 ∩ … ∩ A_s)/2^{j_s}.
For s = 0 this ratio is at least 1/2; this is obtained by a suitable choice of n′ (the union of all bad sets should cover at most half of all n′-bit strings). When constructing the next A_s, we ensure that this ratio decreases only by a poly(n)-factor. How? Assume that A_{s−1} is already constructed; its size is at most 2^{j_{s−1}}. The condition (3) for A guarantees that A_{s−1} can be covered by A-sets of size at most 2^{j_s}, and we need about 2^{j_{s−1}−j_s} covering sets (up to a poly(n)-factor). Now we let A_s be the covering set that contains the maximal number of non-deleted elements in A_0 ∩ … ∩ A_{s−1}. The ratio can decrease only by the same poly(n)-factor. In this way we get
(the number of non-deleted strings in A_0 ∩ A_1 ∩ … ∩ A_s) ≥ α^{−s} 2^{j_s}/2,
where α stands for the poly(n)-factor mentioned above. Up to now we assumed that the set of deleted elements is fixed. What happens when more strings are deleted? The number of non-deleted elements in A_0 ∩ … ∩ A_s can decrease, and at some point and for some s it can become less than the declared threshold ν_s = α^{−s} 2^{j_s}/2. Then we can find the minimal s where this happens, and rebuild all the sets A_s, A_{s+1}, … (for A_{s−1} the threshold is not crossed due to the minimality of s). In this way we update the sets A_s from time to time, replacing them (and all the subsequent ones) by new versions when needed.
The problem with this construction is that the number of updates (different versions of each A_s) can be too big. Imagine that after an update some element is deleted, and the threshold is crossed again. Then a new update is necessary, and after this update the next deletion can trigger a new update, etc. To keep the number of updates reasonable, we agree that after an update, for all the new sets A_l (starting from A_s) the number of non-deleted elements in A_0 ∩ … ∩ A_l is at least twice the threshold ν_l = α^{−l} 2^{j_l}/2. This can be achieved if we make the factor α twice bigger: since for A_{s−1} we have not crossed the threshold, for A_s we can guarantee the inequality with an additional factor 2. Now let us prove the bound for the number of updates of some A_s. These updates can be of two types: first, when A_s itself starts the update (being the minimal s where the threshold is crossed); second, when the update is induced by one of the previous sets. Let us estimate the number of updates of the first type. Such an update happens when the number of non-deleted elements (which was at least 2ν_s immediately after the previous update of any kind) becomes less than ν_s. This means that at least ν_s elements were deleted. How can this happen? One possibility is that a new bad set of complexity at most i_s (a "large bad set") appears after the last update. This can happen at most O(2^{i_s}) times, since there are at most O(2^i) objects of complexity at most i. The other possibility is the accumulation of elements deleted due to "small" bad sets, of complexity at least i_s and of size at most 2^{j_s}. The total number of such elements is bounded by n·O(2^{i_s+j_s}), since the sum i_l + j_l may only decrease as l increases. So the number of updates of A_s not caused by large bad sets is bounded by
n·O(2^{i_s+j_s})/ν_s = O(n 2^{i_s+j_s})/(α^{−s} 2^{j_s}) = O(n α^s 2^{i_s}) = 2^{i_s + N·O(log n)} = 2^{i_s + O(√(n log n))}
(recall that s ≤ N, α = poly(n), and N ≈ √(n/log n)). This bound remains valid if we take into account the induced updates (when the threshold is crossed for one of the preceding sets): there are at most N ≤ n such sets, and the additional factor n is absorbed by the O-notation.
We conclude that all the versions of A_s have complexity at most i_s + O(√(n log n)), since each of them can be described by its version number plus the parameters of the generating process (we need to know n and the boundary curve, whose complexity is O(√n) according to our assumption, see the discussion before the statement of the theorem). The same is true for the final version. It remains to take x in the intersection of the final sets A_s. (Recall that A_N is a singleton, so the final A_N is {x}.) Indeed, by construction this x has no bad (i * j)-descriptions where (i, j) is on the boundary of T. On the other hand, x has good descriptions that are O(√(n log n))-close to this boundary and whose vertical coordinates are √(n log n)-apart. (Recall that the slope of the boundary guarantees that the horizontal distance is less than the vertical distance.) Therefore the position of the boundary curve for P^A_x is determined with precision O(√(n log n)), as required.
Remark 11. In this proof we may use bad sets not only from A. Therefore, the set P_x is also close to T (and the same is true for every family B that contains A). It would be interesting to find out what are the possible combinations of P_x and P^A_x; as we have seen, it may happen that P_x is maximal and P^A_x is minimal, but this does not say anything about other possible combinations.
For the case of Hamming balls the statement of Theorem 7 has a natural interpretation. To find a simple ball of radius r that contains a given string x is the same as to find a simple string in the radius-r ball centered at x. So this theorem shows the possible behavior of the "approximation complexity" function
r → min{C(x′) | d(x, x′) ≤ r},
where d is the Hamming distance. One should only rescale the vertical axis, replacing the log-sizes of Hamming balls by their radii. The connection is described by the Shannon entropy function: a ball in B^n of radius r has log-size about nH(r/n) for r ≤ n/2, and has almost full size for r ≥ n/2. For example, error-correcting codes (in the classical sense, or with list decoding) give examples of strings for which this function is almost constant for small values of r: it is almost as easy to approximate a codeword as to give it precisely (due to the possibility of error correction).
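For reference, the rescaling uses the standard estimate of Hamming-ball sizes via the binary entropy function (a well-known bound, stated here without proof):
\[
H(p) = -p\log_2 p - (1-p)\log_2(1-p), \qquad
\frac{2^{nH(r/n)}}{n+1} \;\le\; \#B(r) \;\le\; 2^{nH(r/n)} \quad (r \le n/2),
\]
so the log-size of the radius-r ball equals nH(r/n) up to an O(log n) term.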
Randomness and optimality deficiencies: restricted case
Not all the results proved for unrestricted descriptions have natural counterparts in the restricted case. For example, one can hardly relate the set P^A_x to bounded-time complexity (it is completely unclear how A could enter the picture). Still, some results remain valid (but new and much more complicated proofs are needed). This is the case for Propositions 8 and 9.
Let again A be the class of descriptions that satisfies requirements (1)-(3).
Theorem 8 ([46]).
• If a string x of length n has an (i * j)-description in A, then it has an ((i + d + O(log n)) * (j − d + O(log n)))-description in A for every d ≤ j.
• Assume that x is a string of length n that has at least 2^k different (i * j)-descriptions in A. Then it has an ((i − k + O(log n)) * (j + O(log n)))-description in A.
below) or if there exists a string x that belongs to 2^k sets of the first player but does not belong to any marked set. Since this is a finite game with full information, one of the players has a winning strategy. We claim that the second player can win. If it is not the case, the first player has a winning strategy. We get a contradiction by showing that the second player has a probabilistic strategy that wins with positive probability against any strategy of the first player. So we assume that some (deterministic) strategy of the first player is fixed, and consider the following simple probabilistic strategy: every set A presented by the first player is marked with probability p = 2^{−k}(n + 1) ln 2.
The expected number of marked sets is p·2^i = 2^{i−k}(n + 1) ln 2. By Chebyshev's inequality, the number of marked sets exceeds the expectation by a factor of 2 with probability less than 1/2. So it is enough to show that the second bad case (after some move there exists x that belongs to 2^k sets of the first player but does not belong to any marked set) happens with probability at most 1/2.
For that, it is enough to show that for every fixed x the probability of this bad event is at most 2^{−(n+1)}, and then use the union bound. The intuitive explanation is simple: if x belongs to 2^k sets, the second player had (at least) 2^k chances to mark a set containing x (when these 2^k sets were presented by the first player), and the probability to miss all these chances is at most (1 − p)^{2^k}; the choice of p guarantees that this probability is less than 2^{−(n+1)}. Indeed, using the bound (1 − 1/x)^x < 1/e, it is easy to show that (1 − p)^{2^k} < e^{−(n+1) ln 2} = 2^{−(n+1)}.
The pedantic reader would say that this argument is not formally correct, since the behavior of the first player (and the moment when the next set containing x is produced) depends on the moves of the second player, so we do not have independent events with probability 1 − p each (as is assumed in the computation). The formal argument considers for each t the event R_t: "after some move of the second player the string x belongs to at least t sets provided by the first player, but does not belong to any marked set". Then we prove by induction (over t) that the probability of R_t does not exceed (1 − p)^t. Indeed, it is easy to see that R_t is a union of several disjoint subsets (depending on the events happening until the first player provides t + 1 sets containing x), and R_{t+1} is obtained by taking a (1 − p)-fraction in each of them.
Constructive proof. We consider the same game, but now allow more sets to be marked (replacing the bound 2^{i−k+1}(n + 1) ln 2 by a bigger bound 2^{i−k} i^2 n ln 2) and also allow the second player to mark sets that were produced earlier (not necessarily at the current move of the first player). The explicit winning strategy for the second player performs in parallel i − k + log i substrategies (indexed by the numbers log(2^k/i), …, i). The substrategy number s wakes up once in 2^s moves (when the number of moves made by the first player is a multiple of 2^s). It considers the family S that consists of the last 2^s sets produced by the first player, and the set T that consists of all strings x covered by at least 2^k/i sets from S. Then it selects and marks some elements of S in such a way that all x ∈ T are covered by one of the selected sets. This is done by a greedy algorithm: first take a set from S that covers the maximal part of T, then the set that covers the maximal number of non-covered elements, etc. How many steps do we need to cover the entire T? Let us show that (i/2^k)·n·2^s·ln 2 steps are enough. Indeed, every element of T is covered by at least 2^k/i sets from S. Therefore, some set from S covers at least #T·2^k/(i·2^s) elements, i.e., a 2^{k−s}/i-fraction of T. At the next step the non-covered part is multiplied by (1 − 2^{k−s}/i) again, and after i·n·2^{s−k}·ln 2 steps the number of non-covered elements is bounded by #T·(1 − 2^{k−s}/i)^{i·n·2^{s−k}·ln 2} < 2^n (1/e)^{n ln 2} = 1, therefore all elements of T are covered. (Instead of a greedy algorithm one may use a probabilistic argument and show that randomly chosen i·n·2^{s−k}·ln 2 sets from S cover T with positive probability; however, our goal is to construct an explicit strategy.)
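The greedy covering step used by substrategy number s can be sketched as follows (an illustrative sketch only; the function name and the representation of sets as Python sets are ours, not part of the original proof):

    def greedy_cover(T, S):
        """Greedily pick sets from the family S until every element of T is covered.

        At each step we take the set covering the largest number of still-uncovered
        elements, exactly as in the explicit substrategy described above.
        """
        uncovered = set(T)
        chosen = []
        while uncovered:
            best = max(S, key=lambda A: len(A & uncovered), default=None)
            if best is None or not (best & uncovered):
                break                      # T cannot be covered by sets from S
            chosen.append(best)
            uncovered -= best
        return chosen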
Anyway, the number of sets selected by the substrategy number s does not exceed i·n·2^{s−k}·(ln 2)·2^{i−s} = i·n·2^{i−k}·ln 2, and we get at most i^2·n·2^{i−k}·ln 2 for all substrategies. It remains to prove that after each move of the second player every string x that belongs to 2^k or more sets of the first player also belongs to some selected set. For the t-th move we consider the binary representation of t: t = 2^{s_1} + 2^{s_2} + …, where s_1 > s_2 > … Since x does not belong to the sets selected by the substrategies with numbers s_1, s_2, …, the multiplicity of x among the first 2^{s_1} sets is less than 2^k/i, the multiplicity of x among the next 2^{s_2} sets is also less than 2^k/i, etc. For those j with 2^{s_j} < 2^k/i the multiplicity of x among the respective portion of 2^{s_j} sets is obviously less than 2^k/i. Therefore, we conclude that the total multiplicity of x is less than i · 2^k/i = 2^k sets of the first player, and the second player does not need to care about x. This finishes the explicit construction of the winning strategy. Now we can assume without loss of generality that the winning strategy has complexity at most O(log(n + k + i + j)). (In the probabilistic argument we have proved the existence of a winning strategy, but then we can perform an exhaustive search until we find one; the first strategy found will have small complexity.) Then we use this simple strategy to play against the enumeration of all A-sets of complexity less than i and size 2^j (or less). The selected sets can be described by their ordinal number (among the selected sets), so their complexity is bounded by i − k (with logarithmic precision). Every string that has 2^k different (i * j)-descriptions in A will also have one among the selected sets, and that is what we need.
As before (for the unrestricted case), this result implies that descriptions with minimal parameters are simple with respect to the data string: This gives us the same corollaries as in the unrestricted case:
Corollary. Let A be a family of finite sets that satisfies the conditions (1)–(3). Then for every string x of length n the three statements
• there exists a set A ∈ A of complexity at most α with d(x|A) ≤ β;
• there exists a set A ∈ A of complexity at most α with δ(x, A) ≤ β;
• the point (α, C(x) − α + β) belongs to P^A_x
are equivalent with logarithmic precision (the constants before the logarithms depend on the choice of the family A).
If we are interested in the uniform statements true for every enumerable family A, the same arguments prove the following result:
7 Strong models
Information in minimal descriptions
A possible way to bring the theory in accordance to our intuition is to change the definition of "having the same information". Although we have not given that definition explicitly, we have adopted so far the following viewpoint: x and y have the same (or almost the same) information if both conditional complexities C(x|y), C(y |x) are small. If only one complexity, say C(x|y), is small, we said that all (or almost all) information contained in x is present in y. Now we will adopt a more restricted viewpoint and say that x and y have the same information if there are short total (everywhere defined) programs mapping x to y and vice versa. From this viewpoint we cannot say anymore that a string x and its shortest program x * have the same information: for example, x may be non-stochastic while x * is always stochastic, so there is no short total program that maps x * to x because of Proposition 3. 14 Let us mention that if x and y have the same information in this new sense, then there exists a simple computable bijection that maps x to y (so they have the same properties if the property is defined in the computability language), see [START_REF] Muchnik | Game interpretation of Kolmogorov complexity[END_REF] for the proof.
Formally, let us define the total conditional complexity with respect to a computable function D of two arguments as CT_D(x|y) = min{l(p) | D(p, y) = x and D(p, y′) is defined for all y′}. (Note that D is not required to be total, but we consider only p such that D(p, y′) is defined for all y′.)
There is a computable function D such that CT D is minimal up to an additive constant. Fixing any such D we obtain the total conditional complexity CT(x|y).
In other words, we may define CT(x|y) as the minimal plain complexity of a total program that maps y to x.
We will think that y has all (or almost all) the information from x if CT(x|y) is negligible. Formally, we write x →ε y if CT(y|x) ≤ ε, and we call x and y ε-equivalent, written x ↔ε y, if both CT(y|x) and CT(x|y) are at most ε. (We need p to be total, as otherwise we cannot produce the list of B-elements from the list of A-elements and p.)
7.2 An attempt to separate "good" models from "bad" ones
Now we have a more fine-grained classification of descriptions and can try to distinguish between descriptions that were equivalent in the former sense. For example, consider a string xy where y is random conditionally to x. Let A be a model for xy consisting of all extensions of x (of the same length). This model looks good (in particular, it has negligible optimality deficiency). On the other hand, we may consider a standard model B for xy of the same (or smaller) complexity. It also has negligible optimality deficiency but looks unnatural. In this section we are interested in the following question: how can we formally distinguish good models like A from bad models like B? We will see that at least for some strings u the value CT(A|u) can be used to distinguish between good and bad models for u. (Indeed, in our example CT(A|xy) is small, while CT(B|xy) can be large.)
Definition 5. A set A ∋ x is an ε-strong model (or statistic) for a string x if CT(A|x) ≤ ε.
For instance, the model A discussed above is an O(log n)-strong model for x. On the other hand, we will see later that, if y is chosen appropriately, then no standard description B of the same complexity and log-cardinality as A is an ε-strong model for x, even for ε = Ω(n).
Strong models satisfy an analog of Proposition 8 (the same proof works):
Proposition 32. Let x be a string and A be an ε-strong model for x. Let i be a non-negative integer such that i ≤ log #A. Then there exists an (ε + O(log i))-strong model A′ for x such that #A′ ≤ #A/2^i and C(A′) ≤ C(A) + i + O(log i).
To take into account the strength of models, we may consider the set P x (ε) = {(i, j) | x has an ε-strong (i * j)-description}.
Obviously, we have P x (ε) ⊂ P x = P x (n + O(1))
for all strings x of length n and for all ε.
If the set P x (ε) is not much smaller than P x for a reasonably small ε, we will say that x is a "normal" string and otherwise we call x "strange". More precisely, a string x is called (ε, δ)-normal if P x is in δ-neighborhood of P x (ε). Otherwise, x is called (ε, δ)-strange.
It turns out that there are (√(n log n), O(log n))-normal strings with any given set P_x that satisfies the conditions of Theorem 1. On the other hand, there are (Ω(n), Ω(n))-strange strings of length n. We are going to state these facts accurately.
Theorem 10 ([START_REF] Milovanov | Algorithmic Statistics: Normal Objects and Universal Models, Computer Science -Theory and Applications[END_REF]). Let k ≤ n be two integers and let t_0 > t_1 > … > t_k be a strictly decreasing sequence of integers such that t_0 ≤ n and t_k = 0. Then there exists a string x of complexity k + O(√(n log n)) and length n + O(log n) for which the distance between both sets P_x and P_x(O(log n)) and the set T = {(i, j) | (i ≤ k) ⇒ (j ≤ t_i)} is at most O(√(n log n)). P^A_x(O(log n)) as well. As the set P_x(O(log n)) includes the latter set and is included in P_x, all the three sets are close to the set P_x(O(log n)) as well.
The next theorem [START_REF] Vereshchagin | Algorithmic Minimal Sufficient Statistics: a New Approach[END_REF] shows that "strange" strings do exist. Theorem 11. Assume that natural numbers k, n, ε satisfy the inequalities O(1) ≤ k ≤ n. Then there is a string x of length n and complexity k + O(log n) such that the sets P_x and P_x(k) are O(log n)-close to the sets shown on Fig. 8.
Let k = n/2 in Theorem 11. Then the sets P x and P x (n/2) are almost n/2-apart, since the point (0, n/2) is in the O(log n)-neighborhood of P x while all points from P x (n/2) are (n/2 -O(log n))-apart from (0, n/2) (in l 1 -norm). Thus the string x is (n/2, n/2 -O(log n))-strange.
It turns out that for minimal models the converse is true as well. A model A for x is called (δ, κ)-minimal if there is no model B for x with C(B) ≤ C(A) − δ and δ(x, B) ≤ δ(x, A) + κ.
Theorem 13 ([26]). For some value κ = O(log n) the following holds. Assume that A is an ε-sufficient statistic for an (ε, ε)-normal string x of length n. Assume also that A is a (δ, ε + κ)-minimal model for x. Then A is (O((δ + ε + log n)√n), O((δ + ε + log n)√n))-normal.
The next theorem states that the total conditional complexity of any strong, sufficient and minimal statistic for x conditioned by any other sufficient statistic for x is negligible. Theorem 14 ([41]). For some value κ = O(log n) the following holds. Assume that A, B are ε-sufficient statistics for a string x of length n. Assume also that A is an ε-strong and a (δ, ε + κ)-minimal statistic for x. Then CT(A|B) = O(ε + δ + log n).
This theorem can be interpreted as follows: assume that we have removed some noise from a given data string x by finding its description B with negligible optimality deficiency. Let A be any "ultimately denoised" model for x, i.e., a minimal model for x with negligible optimality deficiency. Then C(A|B) is negligible, as we have seen before. Hence to obtain the "ultimately denoised" model for x we do not need x: any such model can be obtained from B by a short program. Theorem 14 shows that any such strong model A can be obtained from B by a short total program.
Open questions
1. Is the minimal strong sufficient statistic unique (up to ε-equivalence). More specifically, assume that A, B are ε-strong, ε-sufficient statistics for a string x of length n. Assume further that both A, B are δ, c(ε + δ + log n)-minimal models for x. Is it true that CT(A|B), CT (B |A) are small in this case? 2. A similar question, but this time we do not assume that B is minimal. Is it true that CT(A|B) is small? (An affirmative answer to this question obviously implies the affirmative answer to the previous one.)
Note that if, in these two questions, we replace total conditional complexity with the plain conditional complexity then the answers are positive and moreover, we do not need to assume that A, B are ε-strong (see Proposition 18 and the last two paragraphs on Page 41).
3. (Merging strong sufficient statistics.) Assume that A, B are strong sufficient statistics for x that have small intersection compared to the cardinality of at least one of them. Then it is natural to conjecture that there is a strong sufficient statistic D for x of larger cardinality (= of smaller complexity) that is simple given both A and B. Formally, is it true (for some constant c) that if A, B are ε-strong ε-sufficient statistics for x, then there is a cε-strong cε-sufficient statistic D for x with log #D ≥ log #A + log #B − log #(A ∩ B) − c(ε + log n) and CT(D|A), CT(D|B) at most c(ε + log n)? (A motivating example: let x be a random string of length n, let A consist of all strings of length n that have the same prefix of length n/2 as x, and let B consist of all strings of length n that have the same bits with numbers n/4 + 1, …, 3n/4 as x.
In this case it is natural to let D consist of all strings of length n that have the same bits n/4 + 1, …, n/2 as x, so that log #D = log #A + log #B − log #(A ∩ B).)
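For the record, the cardinalities in this motivating example work out as follows (our arithmetic, not part of the original question):
\[
\log\#A = \tfrac{n}{2},\quad \log\#B = \tfrac{n}{2},\quad \log\#(A\cap B) = \tfrac{n}{4},
\qquad\text{so}\qquad
\log\#D = \tfrac{n}{2}+\tfrac{n}{2}-\tfrac{n}{4} = \tfrac{3n}{4},
\]
which indeed equals the log-cardinality of the set of strings that agree with x on the n/4 bits with numbers n/4 + 1, …, n/2.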
Proposition 1. The function d(x|P) is (up to an O(1) additive term) the maximal lower semicomputable function of two arguments x and P such that Σ_x 2^{d(x|P)} · P(x) ≤ 1. (∗)
Indeed, Σ_x 2^{d(x|P)} · P(x) = Σ_x (m(x|P)/P(x)) · P(x) = Σ_x m(x|P) ≤ 1, so the deficiency function satisfies (∗). To prove the maximality, consider an arbitrary function d′(x|P) that is lower semicomputable and satisfies (∗). Then consider the function m′(x|P) = 2^{d′(x|P)} · P(x) (the function equals 0 if x is not in the support of P). Then m′ is lower semicomputable, Σ_x m′(x|P) ≤ 1 for every P, so m′(x|P) ≤ m(x|P) up to an O(1) factor; this implies that d′(x|P) ≤ d(x|P) + O(1).
since A is determined by P, k, and the additional information in k is O(log k) = O(log n) since k = O(n) by our assumption. So the deficiency may increase only by O(log n) when we replace P by U A , and
Remark 4. A similar argument shows that d(F(x)|F(P)) ≤ d(x|P) + K(F) + O(1) (for total F), so both O(1)-bounds in Proposition 3 may be replaced by K(F) + O(1)
Proposition 7. d(x|P) ≤ δ(x, P) with O(1)-precision.
Figure 1: The set P_x and its boundary curve
Figure 2: Two opposite possibilities for a boundary curve
Figure 3: Non-stochastic strings revisited. Left gray area corresponds to descriptions A with K(A) ≤ α and δ(x, A) ≤ β.
Figure 4: The set P_x and the boundary of the set Q_x (bold dotted line); on every vertical line two intervals have the same length.
Remark 9. Let us stress again that Theorem 2 claims only that the existence of a set A ∋ x with K(A) ≤ α and d(x|A) ≤ β is equivalent to the existence of a set B ∋ x with K(B) ≤ α and δ(x, B) ≤ β (with logarithmic accuracy). The theorem does not claim that for every set A ∋ x with complexity at most α the inequalities d(x|A)
d + O(log d) bits: first we specify d in a self-delimiting manner using O(log d) bits, and then append Ω_m in binary. This information allows us to reconstruct d, then m and Ω_m, then enumerate strings of complexity at most m until we have Ω_m of them (so all strings of complexity at most m are enumerated), and then take the first string x_m that has not been enumerated. As m < C(x_m) ≤ m − d + O(log d), the value of d is bounded by a constant, and hence Ω_m is an (m − O(1))-bit number. In this argument the binary representation of Ω_m can be replaced by its program, so C(Ω_m) ≥ m − O(1). The upper bound m + O(1) is obvious, since Ω_m = O(2^m).
section 1.2.2] for details). By definition, the number B(m) is the maximal integer of complexity at most m. It is not hard to see that C(B(m)) = m + O(1). Indeed, C(B(m)) ≤ m by definition. On the other hand, the complexity of the next number B(m) + 1 is greater than m and at the same time is bounded by C(B(m)) + O(1).
Proposition 12. The numbers B(m) and B′(m) coincide up to an O(1)-change in m. More precisely, we have B′(m) ≤ B(m + c) and B(m) ≤ B′(m + c) for some c and for all m.
Proof. To find B′(m), it is enough to know the m-bit binary string that represents Ω_m (this string also determines m). Therefore C(B′(m)) ≤ m + c for some constant c. As B(m + c) is the largest number of complexity m + c or less, we have B′(m) ≤ B(m + c). On the other hand, if some integer N exceeding both m and B′(m) is given, we can run the enumeration algorithm A within N steps for each input smaller than N. Consider the first string that has not been enumerated. Its complexity is greater than m, so C(N) > m − c for some constant c. Thus the complexity of every number N starting from max{m, B′(m)} is greater than m − c, which means that max{m, B′(m)} > B(m − c). It remains to note that for all large enough m we have m ≤ B(m − c), as the complexity of m is O(log m). Thus for all large enough m the number B′(m) (and not m) must be bigger than B(m − c). Replacing here m by m + c and increasing the constant c if needed, we conclude that B′(m + c) > B(m) for all m. A similar argument shows that B(m) coincides (up to an O(1)-change in the argument) with the maximal computation time of the universal decompressor (from the definition of plain Kolmogorov complexity) on inputs of size at most m, see [44, section 1.2.2]
Proposition 14. Assume that k ≤ m. Consider the string (Ω_m)_k consisting of the first k bits of Ω_m. It is O(log m)-equivalent to Ω_k: both conditional complexities C(Ω_k | (Ω_m)_k) and C((Ω_m)_k | Ω_k) are O(log m).
Proposition 18. Let A be an (i * j)-description of a string x of length n. Then there exists a standard description B that has parameters C(B) ≤ i − d + O(log n) and log #B ≤ j + d + O(log n) for some d ≥ 0, and is simple given A, i.e., C(B|A) = O(log n).
Proof. Recall the definition of prefix complexity: K(x) = −log m(x), and K(x|u) = −log m(x|u). So K(x) − K(x|u) ≥ d implies m(x) ≤ 2^{−d} m(x|u), and it remains to note that Σ_x m(x|u) ≤ 1 for every u.
Propositions 20 and 21 immediately imply the following improved version of Proposition 5 (page 13): Proposition 22.
Definition 4. Let D be some algorithm; its input and output are binary strings. For a string x and integer t, define C^t_D(x) = min{l(p) : D produces x on input p in at most t steps}, the time-bounded Kolmogorov complexity of x with time bound t with respect to D.
the least time required to compute it by a s-incompressible program.
Figure 7: Theorem 8 can be used (together with the argument above) to show that the border of the set P^A_x (shown in gray) consists of a vertical segment C(A) = n − log V, log #A ≤ log V, and the segment of slope −1 defined by C(A) + log #A = n, log V ≤ log #A. The set P_x contains also the hatched part.
Theorem 9 ([START_REF] Vereshchagin | Rate Distortion and Denoising of Individual Data Using Kolmogorov Complexity[END_REF]). Let A be an enumerable family of finite sets. If a string x of length n has an (i * j)-description A ∈ A such that C(A|x) ≥ k, then x has an ((i − k + O(log n)) * (j + O(log n)))-description in A. If the family A satisfies condition (3), then x has also an ((i + O(log n)) * (j − k + O(log n)))-description in A.
Proposition 30. Let A be an arbitrary family of finite sets enumerated by some program p. Then for every x of length n the statements • there exists a set A ∈ A such that d(x|A) ≤ β; • there exists a set A ∈ A such that δ(x, A) ≤ β — are equivalent up to an O(C(p) + log C(A) + log n + log log #A)-change in the parameters.
CT_D(x|y) = min{l(p) | D(p, y) = x, and D(p, y′) is defined for all y′}.
We write x →ε y if CT(y|x) ≤ ε, and we call x and y ε-equivalent and write x ↔ε y if both CT(y|x) and CT(x|y) are at most ε. Proposition 31. If x ↔ε y then the sets P_x and P_y are in the O(ε)-neighborhood of each other. Proof. Indeed, if A is an (i * j)-description of x and p is a total program witnessing x ↔ε y, then the set B = {D(p, x′) | x′ ∈ A} is an ((i + O(ε)) * j)-description of y.
Proof.
Consider the family A of all cylinders, i.e., the family of all the sets {ur | l(r) = m} for different strings u and natural numbers m. Sets from this family have the following feature: if A x then A is an O(log n)-strong model for x. Hence for all strings x we have P A x = P A x (O(log n)). By Theorem 7 and Remark 11 there is a string x of length n + O(log n) and complexity k + O( √ n log n) such that all sets P x , P A x , T are O( √ n log n)-close to each other. Hence all the three sets are close to the set P A
We assume that the reader is familiar with basic notions of algorithmic information theory (complexity, a priory probability). See[START_REF] Shen | Around Kolmogorov complexity: Basic Notions and Results, Measures of Complexity[END_REF] for a concise introduction, and[START_REF] Li | An Introduction to Kolmogorov Complexity and its Applications[END_REF][START_REF] Верещагин | Колмогоровская сложность и алгоритмическая случайность[END_REF] for more details.
We do not go into details here, but let us mention one common misunderstanding: the set of programs should be prefix-free for each c, but these sets may differ for different c and the union is not required to be prefix-free.
This notation may look strange; however, we speak so often about finite sets of complexity at most i and cardinality at most 2 j that we decided to introduce some short name and notation for them.
Technically speaking, this holds only for α K(x). For α > K(x) both sets contain all pairs
This number depends on the choice of the prefix decompressor, so it is not a specific number but a class of numbers. The elements of this class can be equivalently characterized as random lower semicomputable reals in [0, 1], see[44, section 5.7].
In general, if two sets X and Y in N 2 are close to each other (each is contained in the small neighborhood of the other one), this does not imply that their boundaries are close. It may happen that one set has a small "hole" and the other does not, so the boundary of the first set has points that are far from the boundary of the second one. However, in our case both sets are closed by construction in two different directions, and this implies that the boundaries are also close.
This observation motivates Levin's version of complexity (Kt, see [21, Section 1.3, p. 21]) where the program size and logarithm of the computation time are added: linear overhead in computation time matches the constant overhead in the program size. However, this is a different approach and we do not use the Levin's notion of time bounded complexity in this survey.
One can also consider some class of probability distributions, but we restrict our attention to sets (uniform distributions).
Note that for the values of s close to N the right-hand side can be less than 1; the inequality then claims just the existence of non-deleted elements. The induction step is still possible: non-deleted element is contained in one of the covering sets.
Now we see why N was chosen to be √(n/log n): the bigger N is, the more points on the curve
we have, but then the number of versions of the good sets and their complexity increases, so we have some trade-off. The chosen value of n balances these two sources of errors.
The same problem appears if we observe a sequence of independent coin tossings with probability of success p, select some trials (before they are actually performed, based on the information obtained so far), and ask for the probability of the event "t first selected trials were all unsuccessful". This probability does not exceed (1p) t ; it can be smaller if the total number of selected trials is less than t with positive probability. This scheme was considered by von Mises when he defined random sequences using selection rules, so it should be familiar to algorithmic randomness people.
It is worth to mention that on the other hand, for every string x there is an almost minimal program for x that can be obtained from x by a simple total algorithm[START_REF] Vereshchagin | Algorithmic Minimal Sufficient Statistics: a New Approach[END_REF] Theorem 17].
In this section we omit some proofs; see the original papers and the arxiv version of this paper.
Acknowledgments
We are grateful to several people who contributed and/or carefully read preliminary versions of this survey, in particular, to B. Bauwens, P. Gács, A. Milovanov, G. Novikov, A. Romashchenko, P. Vitányi, and to all participants of Kolmogorov seminar in Moscow State University and ESCAPE group in LIRMM. We are also grateful to an anonymous referee for correcting several mistakes.
The work was in part funded by RFBR according to the research project grant 16-01-00362-a (N.V.) and by RaCAF ANR-15-CE40-0016-01 grant (A.S.)
† N. Vereshchagin is with Moscow State University, National Research University Higher School of Economics and Yandex
Proof. If A has strings of length different from n, remove all those strings. In this way A becomes an (i * j)-description for x with slightly larger i than before the removal and the same or smaller j. Now all the elements of A have complexity at most m = i + j + O(log j) = i + j + O(log n), where the latter inequality holds, as after the removal we have j ≤ n. Consider the list of all strings of complexity at most m and the standard description B of x obtained from this list. As we know from Proposition 17, the sum of the parameters of this description is close to m (and therefore to i + j). We need to show that the size of B is large, at least 2^{j−O(log n)} (recall that d in the statement should be positive). Why is this the case? Consider the elements that appear after the last element of A in the list. There are at least 2^{j−O(log n)} of them; otherwise the total number of elements in the list could be described in much less than m bits (that number can be specified by m, A, and the number of elements after the last element of A). Therefore there are at least 2^{j−O(log n)} elements in the list that appear after x, so B cannot be small. Why is B simple given A? Denote the size of B by 2^{j′}. Given A and m, we can find the last element of A, call it x′, in the list of strings of complexity at most m. Chop the list into portions of size 2^{j′}. Then B is the last complete portion. If B contains x′, we can find B from m, j′, and x′ as the complete portion containing x′. Otherwise, x′ appears in the list after all the elements from B. In this case we can find B from m and x′ as the last complete portion before x′. Thus in any case we are able to find B from m, j′, and x′ plus one extra bit.
For the same reason every standard description B of some x is simple given x (and this is not a surprise, since we know that all optimal descriptions of x are simple given x, see Proposition 9).
Proposition 18 has the following corollary which we formulate in an informal way. Let A be some (i * j)-description with parameters on the boundary of P x . Assume that on the left of this point the boundary curve decreases fast (with slope less than -1). Then in Proposition 18 the value of d is small, otherwise the point (id, j + d) would be far from P x . So the complexities of A and the standard description B are close to each other. We know also that A is simple given B, therefore B is also simple given A, and A and B have the same information (have small conditional complexities in both directions).
If we have two different descriptions A, A with approximately the same parameters on the boundary of P x , and the curve decreases fast on the left of the corresponding boundary point, the same argument shows that A and A have the same information. Note that the condition about the slope is important: if the point is on the segment with slope -1, the situation changes. For example, consider a random n-bit string x and two its descriptions. The first one consists of all n-bit strings that In fact, the second part uses only condition (1); it says that A is enumerable. The first part uses also [START_REF] Antunes | Sophistication revisited[END_REF]. It can be combined with the second part to show that x has also
Though theorem 8 looks like a technical statement, it has important consequences; it implies that the two approaches based on randomness and optimality deficiencies remain equivalent in the case of bounded class of descriptions. The proof technique can be also used to prove Epstein-Levin theorem [START_REF] Epstein | Sets have simple members[END_REF], as explained in [START_REF] Muchnik | Game arguments in computability theory and algorithmic information theory[END_REF]; similar technique was used by A. Milovanov in [START_REF] Milovanov | Algorithmic statistic, prediction and machine learning[END_REF] where a common model for several strings is considered.
Proof. The first part is easy: having some (i * j)-description for x, we can search for a covering by the sets of right size that exists due to condition (3); since A is enumerable, we can do it algorithmically until we find this covering. Then we select the first set in the covering that contains x; the bound for the complexity of this set is guaranteed by the size of the covering.
The proof of the second statement is much more interesting. In fact, there are two different proofs: one uses a probabilistic existence argument and the second is more explicit. But both of them start in the same way.
Let us enumerate all (i * j)-descriptions from A, i.e., all finite sets that belong to A, have cardinality at most 2 j and complexity at most i. For a fixed n, we start a selection process: some of the generated descriptions are marked (=selected) immediately after their generation. This process should satisfy the following requirements:
(1) at any moment every n-bit string x that has at least 2 k descriptions (among enumerated ones) belongs to one of the marked descriptions; (2) the total number of marked sets does not exceed 2 i-k p(n) for some polynomial p. Note that for i n or j n the statement is trivial, so we may assume that i, j (and therefore k) do not exceed n; this explains why the polynomial depends only on n.
If we have such a strategy (of logarithmic complexity), then the marked set containing x will be the required description of complexity ik + O(log n) and logsize j. Indeed, this marked set can be specified by its ordinal number in the list of marked sets, and this ordinal number has ik + O(log n) bits.
So we need to construct a selection strategy of logarithmic complexity. We present two proofs: a probabilistic one and an explicit construction.
Probabilistic proof. First we consider a finite game that corresponds to our situation. Two players alternate, each makes 2 i moves. At each move the first player presents some set of n-bit strings, and the second player replies saying whether it marks this set or not. The second player loses if after some moves the number of marked sets exceeds 2 i-k+1 (n + 1) ln 2 (this specific value follows from the argument Recall that we have introduced the notion of a strong model to separate good models from bad ones. Indeed, there are some results that justify this approach. The following theorem by Milovanov (see [START_REF] Milovanov | Algorithmic Statistics: Normal Objects and Universal Models, Computer Science -Theory and Applications[END_REF] for the proof) states, roughly speaking, that there exist a string x of length n and a strong model A for x such that the parameters (complexity, log-cardinality) of every strong standard model B for x are Ω(n)-far from those of A.
Theorem 12. For all k there is a string x of length n = 4k whose profile P x is O(log n)-close to the gray set shown on Fig. 9 such that • there is an O(log n)-strong model A for x with complexity k + O(log n) and log-cardinality 2k (that model witnesses the point (k, 2k) on the border of P x ), but
• for every m C(x) and for every simple enumeration of strings of complexity at most m the standard model B for x obtained from that enumeration is either not strong for x or its parameters are far from the point (k, 2k). More specifically, if B is an ε-strong model for x obtained from an enumeration provided by some program q, then C(q
Properties of strong models
Once we have decided that non-strong descriptions are bad, it is natural to restrict ourselves to strong descriptions with negligible randomness deficiency (and hence negligible optimality deficiency). Consider some n-bit string x. Assume that A is an ε-strong description of x and the randomness deficiency of x in A is at most ε. Let u be the ordinal number of x in A with respect to some fixed order. Then CT(x|A, u) = O(1) and CT(A, u|x) ε + O(1) (the latter inequality holds since CT(A|x) ε and u can be easily found when x and A are known). As u is random and independent of A (with precision ε; Assume that A is an ε-strong model for x with negligible randomness deficiency and ε; for simplicity we ignore these negligible quantities in the sequel. Assume that A is normal in the sense described above. Then the string x is normal as well. Indeed, for every pair (i, j) ∈ P x with i C(A) the pair (i, jlog #A) is in P A (Proposition 25; note that x is equivalent to (A, u) and u is random with condition A) and hence there is a strong (i * (jlog #A))-description B for A. Consider the "lifting" of B, that is, the union of all sets from B that have approximately the same size as A. It is a strong (i * j)-description for x.
It remains to consider pairs (i, j) ∈ P x where i C(A). Then i + j C(A) + log #A = C(x). Hence the subset of A consisting of all strings x whose ordinal number in A has the same i -C(A) leading bits as the ordinal number of x, is a strong (i * j)-description for x. | 182,912 | [
"12768"
] | [
"53306",
"394760"
] |
01480776 | en | [
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480776/file/978-3-642-45065-5_10_Chapter.pdf | Zhiming Shen
email: [email protected]
Zhe Zhang
Andrzej Kochut
Alexei Karve
Han Chen
Minkyong Kim
email: [email protected]
Hui Lei
email: [email protected]
Nicholas Fuller
email: [email protected]
IBM T. J. Watson Research Center
VMAR: Optimizing I/O Performance and Resource Utilization in the Cloud
A key enabler for standardized cloud services is the encapsulation of software and data into VM images. With the rapid evolution of the cloud ecosystem, the number of VM images is growing at high speed. These images, each containing gigabytes or tens of gigabytes of data, create heavy disk and network I/O workloads in cloud data centers. Because these images contain identical or similar OS, middleware, and applications, there are plenty of data blocks with duplicate content among the VM images. However, current deduplication techniques cannot efficiently capitalize on this content similarity due to their warmup delay, resource overhead and algorithmic complexity. We propose an instant, non-intrusive, and lightweight I/O optimization layer tailored for the cloud: Virtual Machine I/O Access Redirection (VMAR). VMAR generates a block translation map at VM image creation / capture time, and uses it to redirect accesses for identical blocks to the same filesystem address before they reach the OS. This greatly enhances the cache hit ratio of VM I/O requests and leads to up to 55% performance gains in instantiating VM operating systems (48% on average), and up to 45% gain in loading application stacks (38% on average). It also reduces the I/O resource consumption by as much as 70%.
Introduction
The economies of scale of cloud computing, which differentiate it from traditional IT services, come from the capability to elastically multiplex different workloads on a shared pool of physical computing resources. This elasticity is driven by the standardization of workloads into moveable and shareable components. To date, virtual machine images are the de facto form of standard templates for cloud workloads. Typically, a cloud environment provides a set of "golden master" images containing the operating system and popular middleware and application software components. Cloud administrators and users start with these images and create their own images by installing additional components. Through this process, a hierarchy of deviations of VM images emerges. For example, in [START_REF] Peng | Virtual machine image distribution network for cloud data centers[END_REF], Peng et al. have studied a library of 355 VM images and constructed a hierarchical structure of images based on OS and applications, where the majority of images contain Linux with variation only on minor versions (i.e., v5.X).
Today's production cloud environments are facing an explosion of VM images, each containing gigabytes or tens of gigabytes of data. As of August 2011 Amazon Elastic Compute Cloud (EC2) has 6521 public VM images [START_REF] Watson | Elastic Compute Cloud (EC2)[END_REF] (data on private EC2 VM images is unavailable). Storing and transferring these images introduces heavy disk and network I/O workloads on storage and compute/hypervisor servers. On the other hand, the evolutionary nature of the VM "ecosystem" determines that different VM images are likely to contain identical chunks of data. It has been reported that a VM repository from a production cloud environment contains around 70% redundant data chunks [START_REF] Jayaram | An empirical analysis of similarity in virtual machine images[END_REF]. This has indicated rich opportunities to deduplicate the storage and I/O of VM images. To exploit this content redundancy, storage deduplication techniques have been actively studied and widely used [10-13, 21, 26, 31]. As illustrated in Figure 1, storage deduplication mostly works on the block device layer and merges data blocks with identical content. The scope of storage deduplication is mainly to save storage capacity rather than to optimize the performance and resource consumption of I/O operations. As a matter of fact, most of them cause various degrees of overhead to both write and read operations.
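To make the mechanism concrete, here is a toy sketch of block-level deduplication by content fingerprinting (our illustration only; production systems cited above add reference counting, persistence, and hash-collision handling that are omitted here):

    import hashlib

    class DedupStore:
        """Toy content-addressed block store: identical blocks are stored only once."""

        def __init__(self):
            self.blocks = {}   # fingerprint -> block payload (bytes)
            self.index = {}    # (volume, offset) -> fingerprint

        def write(self, volume, offset, data):
            fp = hashlib.sha256(data).hexdigest()
            self.blocks.setdefault(fp, data)      # store payload only if content is new
            self.index[(volume, offset)] = fp

        def read(self, volume, offset):
            return self.blocks[self.index[(volume, offset)]]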
On the other hand, memory deduplication techniques [START_REF] Arcangeli | Increasing memory density by using KSM[END_REF][START_REF] Bugnion | Disco: running commodity operating systems on scalable multiprocessors[END_REF][START_REF] Kim | XHive: Efficient Cooperative Caching for Virtual Machines[END_REF][START_REF] Koller | Deduplication: Utilizing content similarity to improve I/O performance[END_REF][START_REF] Waldspurger | Memory Resource Management in VMware ESX Server[END_REF][START_REF] Wood | Memory buddies: exploiting page sharing for smart colocation in virtualized data centers[END_REF] save memory space by scanning the memory space and compressing identical pages. They also reduce the I/O bandwidth consumption by improving cache hit ratio. However, existing memory deduplication methods suffer from 2 fundamental drawbacks when applied to VM I/O optimization. First, savings can only be achieved after a "warm-up" period where similar data chunks are brought in the memory and become eligible for merging. Second, the merging process, including content identification, page table modification, as well as the copy-on-write logic (triggered when a shared page is updated), requires complex programs and competes with primary applications for computing resources.
As an alternative, this paper proposes VMAR, an instant, non-intrusive, and lightweight I/O optimization method tailored for cloud environments. VMAR is based on the idea of Virtual Machine I/O Access Redirection. It is a lightweight extension to the virtualization layer that can be easily deployed into the cloud incrementally, and does not need any modification to the guest OS or application stack. Compared to existing deduplication and I/O optimization methods, VMAR has two key distinctions. First, it redirects VM read accesses for identical blocks to the same filesystem address above the hypervisor Virtual Filesystem (VFS) layer, which is the entry point for all file I/O requests into the OS. Since I/O operations are merged upstream instead of on the storage layer, each VM has a much higher chance to hit the filesystem page cache, which is already "warmed up" by its peers. The reduction of the warmup phase is critical to cloud user experience, especially in development and test environments where VMs are short-lived.
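The read-path idea can be pictured with a simple sketch (our illustration; the class and map layout are hypothetical stand-ins for VMAR's actual QEMU image driver and are not taken from the paper):

    class RedirectingDisk:
        """Redirects reads for content-identical blocks to one canonical location.

        translation_map is assumed to be computed offline at image capture time and
        to map a virtual block number to a (file, offset) pair shared by all blocks
        with the same content, so peer VMs hit the same page-cache entries.
        """

        def __init__(self, translation_map, block_size=4096):
            self.map = translation_map
            self.bs = block_size

        def read_block(self, block_no):
            path, offset = self.map[block_no]   # canonical address for this content
            with open(path, 'rb') as f:
                f.seek(offset)
                return f.read(self.bs)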
We have implemented VMAR as a QEMU image driver. Our evaluation shows that in I/O-intensive settings VMAR reduces VM boot time by 39 ∼ 55% (48% on average) and application loading time by 24 ∼ 45% (38% on average). It also saves up to 70% of I/O traffic and memory cache usage.
The remainder of this paper is organized as follows. Section 2 provides a background of VM image I/O. Section 3 details the design and implementation of VMAR. Section 4 presents the evaluation results. Section 5 surveys related work in storage and memory deduplication. Finally, Section 6 concludes the paper.
Background
Most virtualization technologies present to VMs a virtual disk interface that emulates real hard disks (also known as a VM image). Virtual disks typically appear as regular files on the hypervisor host (i.e., image files). I/O requests received at virtual disks are translated by the virtualization driver to regular file I/O requests to the image files.
Due to the large amount and size of VM images, it is impossible to store all image files on every hypervisor host. A typical cloud environment has a shared image storage system, which has a unified name space and is accessible by each hypervisor host. One commonly used architecture is to set up the shared storage system on a separate cluster from the hypervisor hosts, and connect the storage and hypervisor clusters via a storage area network. Another emerging scheme is to form a distributed storage system by aggregating the locally attached disks of hypervisor hosts [START_REF] Gupta | GPFS-SNC: An enterprise storage framework for virtual-machine clouds[END_REF]. In either scenario, when a VM is to be started on a hypervisor host, the majority of its image data is likely to be located remotely. Figure 2 illustrates different combinations of virtual disk configurations. First, VM images can be stored in different formats. The most straightforward option is the raw format, where I/O requests to the virtual disk are served via a simple block-to-block address mapping. In order to support multiple VMs running on the same base image, copy-on-write techniques have been widely used, where a local snapshot is created for each VM to store all modified data blocks. The underlying image files remain unchanged until new images are captured. As shown in Figure 2, there are different copy-on-write schemes, including Qcow2 [START_REF]The QCOW2 Image Format[END_REF], dmsnapshot, FVD [START_REF] Tang | FVD: a high-performance virtual machine image format for cloud[END_REF], VirtualBox VDI [START_REF]Virtualbox vdi image storage[END_REF], VMware VMDK [START_REF]Virtual machine disk format (VMDK[END_REF], and so forth.
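To make the copy-on-write idea concrete, the following minimal Python sketch (illustrative only; it is not the Qcow2, dm-snapshot, FVD, VDI, or VMDK on-disk format, and the block size and contents are invented) shows reads falling back from a per-VM overlay of modified blocks to the shared base image, while writes touch only the overlay:

class CowDisk:
    # A toy copy-on-write virtual disk: 'base' is the shared, read-only
    # image; 'overlay' holds only the blocks this VM has modified.
    def __init__(self, base_blocks):
        self.base = base_blocks          # {block_number: bytes}
        self.overlay = {}                # private/dirty blocks

    def read(self, block_number):
        if block_number in self.overlay:                      # VM-private data
            return self.overlay[block_number]
        return self.base.get(block_number, b"\x00" * 4096)    # shared base data

    def write(self, block_number, data):
        self.overlay[block_number] = data    # the base image stays unchanged

disk = CowDisk({0: b"base-data".ljust(4096, b"\x00")})
disk.write(0, b"new-data".ljust(4096, b"\x00"))
print(disk.read(0)[:8], disk.base[0][:9])   # b'new-data' b'base-data'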
The second dimension of virtual disk configuration is how VM images are accessed. One way is to pre-copy the entire image from the image storage to the local file system of the target hypervisor before starting up a VM instance. Since a typical VM image file contains multiple gigabytes, or even tens of gigabytes of data, it may take a long time to start up a VM instance under this scheme. To overcome this problem, an alternative method is to fetch parts of a VM image from the storage system on-demand. Under the on-demand configuration, image data may need to be fetched from the remote storage during runtime, causing extra delay. However, as shown in [START_REF] Chen | Empirical study of application runtime performance using on-demand streaming virtual disks in the cloud[END_REF], the runtime performance degradation is very limited. Therefore, in the rest of the paper, we have focused on applying VMAR on top of the on-demand configuration.
Design and Implementation
Hash-based Block Map Generation
The block map generator of VMAR uses 4 KB blocks as the base unit. Each data block is identified by its hash value as the fingerprint. To capture the content similarities among VM images, we leverage the concept of metadata clusters proposed in [START_REF] Kochut | Leveraging Local Image Redundancy for Efficient Virtual Machine Provisioning[END_REF]. Each cluster represents the set of blocks that are common across a subset of images. The main benefit of using clusters in VMAR is that they greatly facilitate the search for all VM images having content overlaps with a given image. Therefore, when an image is modified or deleted from the image repository, it is easy to identify entries in the block map that should be updated. For completeness we first briefly describe the concept of metadata clusters. Consider a simple example of three images: Image-0, Image-1 and Image-2 as shown in Fig. 4. In this illustration, CL-001, CL-010, CL-100 are singleton clusters, containing the blocks only from Image-0, 1 and 2, respectively. For example, the block with hash G is unique to Image-0. CL-011 is the cluster with blocks from Image-0 and 1, which have hash values E and F. We use subscripts to denote identical blocks within an image. For example, hash value C appears in Image-0 three times, as C_1, C_2 and C_3.
When a new image is added to the library, the system computes the SHA1 hash for each block and compares it against the existing clusters. Each existing cluster is then divided into two new clusters: one containing the hash values that also appear in the new image and one containing those that do not. The hash values in the new image that do not belong to any current cluster are put into a new singleton cluster. A given hash value can appear in multiple images. The block mapping protocol should be consistent and ensure that all requests for identical blocks are redirected to the same address. For this purpose we always use the image with the smallest sequence ID as the mapping destination. Alternative consistent mapping protocols can be considered as future work; for instance, the least fragmented image [START_REF] Liang | STEP: Sequentiality and Thrashing Detection Based Prefetching to Improve Performance of Networked Storage Servers[END_REF] or the most used image can be used as the target. These optimizations can potentially improve I/O sequentiality. Fig. 5 illustrates the meta-data of cluster CL-111. Cluster CL-111 contains two hash values that are shared by all three images, so accesses to any block belonging to CL-111 should be redirected to Image-0, which has the smallest ID. It is possible that there are multiple blocks having the same hash value in Image-0, such as the blocks with hash value C. In this case, we always map them to the block with the smallest block number. For example, in the illustrated case, any block with hash value C will be mapped to block 0 in Image-0. Given the hash value of a block, we can quickly identify the target image and block to map to by looking up the hash table in each cluster. Fig. 6 shows the map for Image-1 and the cluster meta-data we use to construct the map.
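To make the mapping protocol concrete, the following minimal Python sketch (an illustration only, not VMAR's actual implementation; it omits the cluster data structure and uses invented image contents) fingerprints 4 KB blocks with SHA1 and always redirects a hash to the image with the smallest sequence ID and, within that image, to the smallest block offset:

import hashlib

BLOCK_SIZE = 4096  # 4 KB blocks, as in VMAR

def block_hashes(image_bytes):
    # Split an image into fixed-size blocks and fingerprint each one.
    hashes = []
    for off in range(0, len(image_bytes), BLOCK_SIZE):
        chunk = image_bytes[off:off + BLOCK_SIZE].ljust(BLOCK_SIZE, b"\x00")
        hashes.append(hashlib.sha1(chunk).hexdigest())
    return hashes

def build_block_map(images):
    # images: {image_id: bytes}; returns, per image, a map
    # {source_block: (target_image_id, target_block)}.
    owner = {}  # hash -> (smallest image id, smallest block offset)
    for img_id in sorted(images):
        for blk, h in enumerate(block_hashes(images[img_id])):
            if h not in owner:            # first (smallest-ID) image wins
                owner[h] = (img_id, blk)
    maps = {}
    for img_id, data in images.items():
        maps[img_id] = {blk: owner[h]
                        for blk, h in enumerate(block_hashes(data))}
    return maps

# Example: block 0 of Image-1 duplicates block 0 of Image-0,
# so its read is redirected to (0, 0).
imgs = {0: b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE,
        1: b"A" * BLOCK_SIZE + b"C" * BLOCK_SIZE}
print(build_block_map(imgs)[1])   # {0: (0, 0), 1: (1, 1)}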
The method update_map in Fig. 7 is executed when a VM image is updated. It searches, using the cluster data structure, for all other images having blocks pointing to this image, and consequently updates the map entries. A hash value can also be moved to another cluster if its ownership changes due to the update. The update logic of Fig. 7 is as follows:

function update_block(s, block)
    hash_prev: previous hash value of the block;
    hash_new: new hash value of the block;
    add s to update_list;
    let c = find_cluster_from_hash(hash_prev);
    remove block from the block list of hash_prev for image s in c;
    if the updated block list becomes empty:
        move the entry of hash_prev in c to the corresponding cluster;
        if the minimal image ID containing hash_prev is changed:
            for each image t that contains hash_prev do: add t to update_list; end for;
    let c' = find_cluster_from_hash(hash_new);
    if c' = None:
        add block into singleton of s;
    else:
        add block to c';
        if s does not contain c':
            move the entry of hash_new in c' to the corresponding cluster;
            if the minimal image ID containing hash_new is changed:
                for each image t that contains hash_new do: add t to update_list; end for;
    for each image t in update_list do:
        re-construct the block map for image t;
    end for;
end function;
Finally, to illustrate the offline computational overhead of creating the clusters and the map, which is a one-time cost to prepare the image library for redirection, we ran an experiment on a VM with a 2.2 GHz CPU and 16 GB of memory. We used an image library of 84 images with a total size of 1.5 TB. The images were a mix of Windows and Linux images of varying sizes, ranging between 4 GB and 100 GB (used in a production cloud). This image library resulted in the creation of 453 clusters. The total time to create the clusters and mappings for all images was 15 minutes.
I/O Deduplication through Access Redirection
Figure 8 illustrates the overall architecture of VMAR's access redirection mechanism. The VMAR image serves as the backing file of the Qcow2 image. When a read request R is received by the QEMU virtual I/O driver, the copy-on-write logic in Qcow2 first checks whether it is for base image data or VM private/dirty data. If R is for VM private/dirty data, Qcow2 forwards the request to a local copy-on-write file. If R is for base image data, the Qcow2 driver forwards the request to the backing image. In both cases, R is translated as a regular file request which is handled by the VFS layer of the host OS. Unless the file is opened in direct I/O mode, R will be checked against the host page cache before being sent to the host hard disk drive.
The VMAR image driver implements address translation and access redirection. When a read request R is received, VMAR looks up the block map introduced in Section 3.1 to find the destination addresses of the requested blocks. If the requested blocks belong to different base images, or are noncontinuous in the same base image, then R is broken down into multiple smaller "descendant" requests. The descendant requests are sent to the corresponding base images. Upon the completion of all descendant requests, the VMAR driver returns the whole buffer back to the Qcow2 driver.
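A simplified sketch of this request-splitting logic is shown below (Python; the per-block map format follows the earlier sketch and is an assumption, not VMAR's internal representation). Consecutive source blocks whose destinations are contiguous in the same base image are grouped into one descendant request:

def descendant_requests(block_map, first_block, num_blocks):
    # block_map: {source_block: (target_image, target_block)}
    # Returns a list of (target_image, target_block, length) requests.
    requests = []
    for b in range(first_block, first_block + num_blocks):
        img, tgt = block_map[b]
        if (requests and requests[-1][0] == img
                and requests[-1][1] + requests[-1][2] == tgt):
            img_, start, length = requests[-1]
            requests[-1] = (img_, start, length + 1)   # extend the current run
        else:
            requests.append((img, tgt, 1))             # start a new run
    return requests

# A 4-block read that spans two base images becomes two descendants.
bmap = {0: (0, 10), 1: (0, 11), 2: (3, 5), 3: (3, 6)}
print(descendant_requests(bmap, 0, 4))  # [(0, 10, 2), (3, 5, 2)]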
The descendant requests are issued concurrently to maximize throughput. We leverage the asynchronous I/O threadpool in the KVM hypervisor to issue concurrent requests. To serve a request R, the application's buffer is divided into multiple regions and a set of I/O vectors are created. Each I/O vector represents a region of the buffer and fills the region with the fetched data. A counter for the application buffer keeps track of the number of issued and completed descendant requests. The last callback of the descendant request will return the buffer back to the application.
VMAR updates the inode numbers of the descendant requests of R to the destination (redirected) base image files before sending them to the host OS VFS layer, where they are checked against the page cache. If the corresponding blocks in the destination files have been read into the page cache by other VMs, the new requests will hit the cache as "free riders". As discussed in Section 3.1, if a block appears in multiple images, the block map entry always points to the image with the smallest ID. Therefore, all requests for the same content are always redirected to the same destination address, which increases the chance of "free riding".
VMAR redirects accesses to VM images, but not to private/dirty data. The reason is twofold. First, data generated during runtime has a much smaller chance of being shared than the data in the base images, which contain operating systems, libraries and application binaries. Second, deduplication of private/dirty data incurs significant overhead because the content of each newly generated block has to be hashed and compared to existing blocks during runtime.
Block Map Optimizations
Block map size reduction A straightforward method to support redirection lookup is to create a block-to-block map. Based on the offset of the requested block in the source image, we can calculate the position of its entry in the block map directly. Each map entry has two attributes: {ID_target, Block_target}. The lookup of a block-to-block map is fast. However, the map size will grow linearly with the image size. For example, Figure 9 shows that the map size for a 32 GB image can grow up to 64 MB before optimization. To reduce the map size and increase the scalability, we merge the map entries for blocks that are continuous in the source image and are also mapped continuously into the same target image. Since they are mapped continuously, we can use a single entry with four attributes to represent all of them: {offset_source, length, ID_target, offset_target}. Note that the length each entry represents may be different. Thus, looking up the map requires checking whether a given block number falls into the range represented by an entry.
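The merging of contiguous entries can be sketched as follows (Python, illustrative only; zero blocks simply have no entry, as discussed next):

def compact_map(per_block):
    # per_block: {source_block: (target_image, target_block)}, zero blocks omitted.
    # Returns sorted range entries (offset_source, length, target_image, offset_target).
    entries = []
    for src in sorted(per_block):
        img, tgt = per_block[src]
        if entries:
            s, length, i, t = entries[-1]
            if src == s + length and img == i and tgt == t + length:
                entries[-1] = (s, length + 1, i, t)   # contiguous: extend the entry
                continue
        entries.append((src, 1, img, tgt))
    return entries

print(compact_map({0: (0, 0), 1: (0, 1), 2: (0, 2), 5: (2, 9)}))
# [(0, 3, 0, 0), (5, 1, 2, 9)]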
To further reduce the map size, we also eliminate map entries for zero blocks. If a block cannot find a corresponding entry in the map, it is a zero block. In this case, the VMAR driver simply uses memset to create a zero-filled memory buffer. This saves the time and bandwidth overheads of a full memory copy.
Figure 9 shows that after optimization, the map size for VMAR is reduced significantly (mostly under 5 MB). In the VM images we have worked on, many continuous clusters have been detected. This is because the common sharing granularity between pairs of VM images is the files stored on their virtual disks, for example, the ram-disk file of the kernel, application binaries and libraries. Figure 10 presents the cumulative percentage of the number of blocks represented in a single map entry. Map entries containing more than 64 blocks cover around 75% of the blocks. Some "big" map entries cover a significant portion of the blocks; for example, map entries with a size of more than 2,045 blocks cover around 25% of the blocks.
Block map lookup optimization After the above optimization for the map size, each map entry may represent a range of a different length. Thus, we cannot compute the position of the desired map entry directly, and a linear search is inefficient. Since the block map is sorted according to the source block offset, we adopt binary search as the basic lookup strategy.
Since we still have many entries in the map, the depth of the binary search is typically high. We have therefore applied two mechanisms to further reduce the lookup time. First, we create an index that divides a large map into equal-sized sections. Each index entry has two pointers, pointing to the first and the last entry in the map that cover the corresponding section. Since the sections are equal-sized, given a block offset we can directly calculate the corresponding index entry. From the index entry, we obtain the range within which the binary search should be performed. This mechanism reduces the search depth significantly. Second, to avoid searching to the maximum depth for zero blocks, we use a bloom filter to quickly identify them. Figure 11 shows the average search depth during the VM instantiation and application loading stages. We can see that our optimization mechanisms reduce the average search depth from 18.6 to 0.2.
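A sketch of the index-assisted lookup follows (Python; the section size is arbitrary, and the bloom filter for zero blocks is omitted for brevity, its effect corresponding here to the "no entry found" path):

import bisect

def build_index(entries, section_size):
    # entries: sorted (offset_source, length, target_image, offset_target).
    starts = [e[0] for e in entries]
    max_block = entries[-1][0] + entries[-1][1]
    index = []
    for sec in range(0, max_block, section_size):
        lo = bisect.bisect_right(starts, sec) - 1            # first entry that may cover the section
        hi = bisect.bisect_left(starts, sec + section_size)  # last candidate + 1
        index.append((max(lo, 0), hi))
    return starts, index

def lookup(block, entries, starts, index, section_size):
    lo, hi = index[block // section_size]
    pos = bisect.bisect_right(starts, block, lo, hi) - 1     # bounded binary search
    if pos >= 0:
        src, length, img, tgt = entries[pos]
        if src <= block < src + length:
            return (img, tgt + (block - src))
    return None   # no entry: a zero block (VMAR would memset the buffer)

entries = [(0, 3, 0, 0), (5, 1, 2, 9)]
starts, index = build_index(entries, section_size=4)
print(lookup(2, entries, starts, index, 4))   # (0, 2)
print(lookup(4, entries, starts, index, 4))   # None -> zero block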
Evaluation
Experiment setup
We have implemented VMAR based on QEMU-KVM 0.14.0, and conducted the experiments using two physical hosts. Each host has two Intel Xeon E5649 processors (12 MB L3 Cache, 2.53 GHz) with 12 physical cores supporting hyper-threading (24 logical cores in total), 64 GB of memory, and a gigabit network connection. The hosts run Red Hat Enterprise Linux Server (RHEL) release 6.1 with kernel 2.6.32 and libvirt 0.8.7. One host serves as the image repository and the other one is the compute node on which the VMs will be created. The compute node accesses the image repository using the iSCSI protocol.
To drive the experiments, we have obtained a random subset of 40 images from a production enterprise cloud. The size of the images ranges from 4 GB to over 100 GB. The VMs are instantiated using libvirt. Each VM is configured with two CPU cores, 2 GB memory, bridged network and disk access through virtio in the Qcow2 format. 23 of the images run RHEL 5.5, and 17 of them run SUSE Linux Enterprise Server 11.
The impact of VMAR on the VM instantiation performance is assessed by starting VMs from the images and measuring the time it takes before the VMs can be accessed from the network. This emulates the service response time that a customer perceives for provisioning new VMs in an Infrastructure as a Service (IaaS) cloud. In each image, we have added a simple script to send a special network packet right after the network is initialized. Most time is spent on booting up the OS and startup services. A daemon on the compute node waits for the packet sent by our script and records the timing.
After VM instantiation, another time-consuming step in cloud workload deployment is to load the application software stack into the VM memory space. This can take even longer in complex enterprise workloads, where a software installation (e.g., a database management system) contains hundreds of megabytes or gigabytes of data. Due to the lack of semantic information on the production images, we added four additional images into the repository. On each image, we installed IBM DB2 database software version X and WebSphere Application Server (WAS) version Y, where X ∈ {9.0, 9.1} and Y ∈ {7.0.0.17, 7.0.0.19} (DB2+WAS is commonly used in online transaction processing (OLTP) workloads). These images run RHEL 6.0 and use the same VM configuration as the other images. We have measured the application software loading time in the four images, while instantiating other images as a background workload.
As discussed in Section 2, our evaluation uses the on-demand policy as the baseline configuration, where VM images stay on the storage server and the compute node obtains required blocks through the iSCSI protocol. Besides VMAR we have also included lessfs [START_REF] Koutoupis | Data deduplication with Linux[END_REF] and KSM [START_REF] Arcangeli | Increasing memory density by using KSM[END_REF] in the evaluation, which are widely used storage and memory deduplication mechanisms for Linux. Therefore, the rest of this section compares four configurations to the baseline: 1) VMAR used to start VMs on the compute node; 2) lessfs used on the storage server to store VM images; 3) KSM used on the compute node to merge memory pages (KSM is triggered only when the system is under memory pressure, and is therefore only evaluated in such settings); 4) lessfs (on the storage server) + VMAR (on the compute node). The first three configurations represent the typical usage of the individual optimization techniques. The fourth configuration explores using VMAR on top of storage deduplication to save both storage and I/O resources.
In our experiments, the arrival of VM instantiation commands follows a Poisson distribution. Different Poisson arrival rates have been used to emulate various levels of I/O workload. Each experiment is repeated three times and average values are reported with the standard deviation as error bars.
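For reference, arrival times for such a Poisson process can be generated by sampling exponentially distributed inter-arrival gaps; a small sketch follows (the rate, count, and seed are illustrative):

import random

def poisson_arrivals(rate_per_sec, count, seed=42):
    # Exponential inter-arrival times with mean 1/rate give a Poisson process.
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(count):
        t += rng.expovariate(rate_per_sec)
        times.append(round(t, 2))
    return times

# e.g., one VM instantiation every 5 seconds on average, 30 VMs
print(poisson_arrivals(rate_per_sec=0.2, count=30)[:5])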
Experiment results
This section shows the experiment results, including an analysis of content similarity in the VM image repository we use, the results for VM instantiation and application loading, and the overhead of VMAR.
Similarity in the image repository
We first analyze the content similarity among our 40 images. In this analysis, we only consider non-zero data blocks. Figure 12(a) shows the CDF of the number of duplicated blocks in the entire repository of 40 images. More than 60% of the blocks are duplicated at least twice, and 10% of the blocks are duplicated more than eight times. This verifies the intuition that duplicated blocks are common in the VM image repositories of production clouds. A block can be duplicated within the same image, or across different images. Figure 12(b) shows the CDF of the number of times that a block appears in different images. More than 50% of the blocks are shared by at least two images. Around 25% of the blocks are shared by more than three images. Therefore, opportunities are rich for VMAR to deduplicate accesses to identical blocks.
VM instantiation Figure 13 shows the performance and resource consumption of VM instantiation when different numbers of VMs are booted. In this experiment, a new VM is provisioned every five seconds on average. During the VM instantiation phase, the majority of the I/O workload is to load the OS into the VM's memory, causing few data re-accesses within a single VM. Therefore, under the baseline configuration, almost every read request goes through the network and the disk, and the data block eventually enters the memory cache of the compute node. As shown in Figures 13(b) and 13(c), the amount of I/O traffic and memory cache space usage are roughly the same, both increasing almost linearly with the number of VMs. Consequently, as shown in Figure 13(a), the average time it takes for a VM to boot up is over 100 seconds. The boot time increases as more VMs are booted and the disk and the network become more congested. With VMAR, each VM benefits from the data blocks brought into the hypervisor's memory page cache by other VMs that are booted earlier. Therefore, the average boot time is significantly reduced (by 39 ∼ 55%). Moreover, the average boot time with VMAR decreases when more VMs are booted and the cache is "warmer". VMAR also reduces I/O traffic and memory consumption by 63 ∼ 68%, by cutting unnecessary disk and network accesses off at the memory cache. More importantly, the I/O traffic grows at a much slower rate than the baseline because the amount of "unique" content in every incoming VM image drops quickly as the hypervisor hosts more images. This is a critical benefit in resource-overcommitted cloud environments.
With lessfs, the I/O traffic and memory cache usage are about the same as the baseline. This is because lessfs compresses data on the block storage layer, which is below VFS and thus doesn't change cache hit/miss events or the number of disk I/O requests. The VM boot time is worse than the baseline, mainly because lessfs runs in user space (based on FUSE) and incurs high context switch overhead. Deduplication techniques implemented in the kernel could have smaller overhead, but similar to lessfs, they will not improve filesystem cache performance and utilization. When lessfs+VMAR is used, the majority of I/O requests hit the page cache of the compute node without reaching lessfs at all. This improves the boot time results. However, the degree of performance improvement (7% on average) is much lower than the saving in I/O traffic (66.6% on average). This is because both VMAR and lessfs break sequential I/O patterns, thereby exacerbating the context switch and disk seek overhead. To mitigate this issue, replica selection optimizations similar to [START_REF] Koller | Deduplication: Utilizing content similarity to improve I/O performance[END_REF] can be investigated as interesting future work.
Figure 14 presents the performance and resource consumption of VM instantiation under different VM arrival rates, while the total number of instantiated VMs is fixed at 30. Figure 14(a) shows the average boot time when a new VM is provisioned every {10, 5, 1} seconds on average. Since higher VM arrival rates lead to more severe I/O contention, the average boot time with the baseline scheme increases quickly. In contrast, with the help of VMAR, a lot of disk accesses from the VMs hit the memory cache and return directly without triggering any real device access. Therefore, in comparison to the baseline, the average boot time with VMAR is much lower, and increases slowly with the arrival rate. Figure 14(b) shows that the VM arrival rate does not significantly affect the total amount of I/O traffic (in the rest of this section, memory usage results are omitted because they are similar to the amount of I/O traffic). This confirms that the increase in boot time under the baseline is due to the increased I/O contention, which is mitigated by VMAR. Finally, it can be observed that the overhead of lessfs grows fast with the level of I/O contention.
Figure 15 presents the performance and I/O traffic of VM instantiation with different available memory sizes on the host. In this experiment, the number of VMs is set to 30 and the arrival rate is set to 0.2. From the previous experiments, which use all 64 GB of memory, we observe that the memory usage of the host during runtime is around 11 GB, 4 GB of which is for caching. Thus, we test the scenarios where the available memory size is 9 GB and 11 GB respectively. Under all configurations, the instantiation time is insensitive to memory pressure, and the reason is twofold. First, VMAR has consolidated the I/O traffic and only requires a very small amount of memory (∼1.5 GB) to cache all I/O requests, which can be satisfied even under high memory pressure. Second, without VMAR, the data re-access rate is very low, which diminishes the benefit of abundant memory. The page sharing counter of KSM indicates that it saves ∼3.5 GB of memory by compressing similar pages. However, because the saving is achieved after the data blocks are loaded into memory, it incurs almost the same amount of I/O traffic as the baseline, and therefore does not lead to notable performance improvement.
Application loading Figures 16, 17, and 18 show the results of application loading performance and I/O traffic. Again, in Figure 16, {20, 30, 40} VMs are booted with a fixed arrival rate of 0.2; in Figure 17, the number of VMs is set to 30 and a new VM is booted every {10, 5, 1} seconds on average; in Figure 18, VMs are booted at a rate of 0.2, under different memory pressures. As discussed above, we replace four of the production images with four new images installed with different versions of IBM DB2 and WAS, and only measure the application loading time of these four images. The other VMs serve as the background workload.
Loading enterprise applications is an I/O intensive workload, where a large number of application binaries and libraries are read into the memory. The 4 images we measure contain different versions of the same application stack, and thus share a lot of data blocks. Therefore, the results demonstrate a similar trend as that of the VM instantiation experiments. With the help of VMAR, the average load time and I/O traffic are much lower, and increase at a much slower pace with resource contention than the baseline. The lessfs scheme still causes significant overhead to I/O performance. When the system is under memory pressure, KSM is not able to reduce I/O traffic or improve performance.
Compared to lessfs, the lessfs+VMAR configuration improves application loading time by 15% on average, which is more than twice the improvement in VM instantiation time (7%). This is because in the application loading workload, data sharing among images is in large sequential chunks, which enables VMAR to redirect large I/O requests without creating too many descendants.
Runtime overhead VMAR intercepts each read request to the VM image and incurs additional processing (address translation and redirection). We test this overhead with both random and sequential I/O by issuing dd commands within a VM, with 1 MB block size and direct I/O mode. For random I/O, 3,000 non-zero blocks are read at random locations of the virtual disk. For sequential I/O, a 350 MB non-zero file is read. To eliminate the impact of other factors, the benchmarks run twice when the VM is idle. After the first run, all the data has been brought into the host page cache. We measure the runtime of the second run, which only copies the data from the host memory. Figure 19 shows the runtime normalized to using a raw image. The result shows that the overhead of VMAR is within 5% for sequential I/O and negligible for random I/O.
Related Work
This section surveys existing efforts on I/O resource optimization by leveraging data content similarities in various workload scenarios.
Deduplicated Storage and File Systems
Deduplication for Backup Data Due to the explosive generation of digital data, deduplication techniques have been widely used to reduce the required storage capacity in backup and archival systems. In general, storage deduplication techniques break each dataset (file or object) into smaller chunks, compare the content of each chunk, and merge chunks with the same content. Much research effort has been made to enhance the effectiveness and efficiency of these operations [START_REF] Dong | Tradeoffs in scalable data routing for deduplication clusters[END_REF][START_REF] Dubnicki | HYDRAstor: a Scalable Secondary Storage[END_REF][START_REF] Guo | Building a high-performance deduplication system[END_REF][START_REF] Meyer | A study of practical deduplication[END_REF][START_REF] Zhu | Avoiding the disk bottleneck in the data domain deduplication file system[END_REF]. For instance, Zhu et al. [START_REF] Zhu | Avoiding the disk bottleneck in the data domain deduplication file system[END_REF] have proposed three techniques to improve the deduplication throughput, which improve the content identification performance, deduplicated storage layout, and metadata cache management respectively. Meyer et al. [START_REF] Meyer | A study of practical deduplication[END_REF] have provided the insight that deduplication on the whole-file level can achieve about 3/4 of the space savings of block-level deduplication, while significantly reducing disk fragmentation.
Deduplication for Primary Data Many recent papers have focused on the deduplication of primary data, namely datasets supporting runtime I/O requests [START_REF] El-Shimi | Primary Data Deduplication Large Scale Study and System Design[END_REF][START_REF] Koller | Deduplication: Utilizing content similarity to improve I/O performance[END_REF][START_REF] Koutoupis | Data deduplication with Linux[END_REF][START_REF] Ng | Live deduplication storage of virtual machine images in an open-source cloud[END_REF][START_REF] Srinivasan | Latency-aware, Inline Data Deduplication for Primary Storage[END_REF]. They tackle the problem of I/O latency caused by deduplication from different angles. In [START_REF] El-Shimi | Primary Data Deduplication Large Scale Study and System Design[END_REF], a study has been presented to analyze the file-level and chunk-level deduplication approaches using a dataset of primary data collected from Windows servers. Based on the findings, a deduplication system has been developed, where data scanning and compression are performed offline without interfering with file write operations. Ng et al. have proposed optimized metadata management schemes for inline deduplication of VM images [START_REF] Ng | Live deduplication storage of virtual machine images in an open-source cloud[END_REF]. iDedup [START_REF] Srinivasan | Latency-aware, Inline Data Deduplication for Primary Storage[END_REF] uses a minimum sequence threshold to determine whether to deduplicate a group of blocks, thereby preserving the spatial locality in the disk layout. DEDE [START_REF] Clements | Decentralized deduplication in SAN cluster file systems[END_REF] focuses on distributing the workload of duplicate detection to the cluster of compute nodes. It also demonstrates that the VM instantiation time can be significantly improved by improving the storage array cache hit rate.
Memory Deduplication
Many techniques have been proposed to leverage the similarities among processes or VMs running on a physical server and reduce their memory usage. Back in 1997, Disco [START_REF] Bugnion | Disco: running commodity operating systems on scalable multiprocessors[END_REF] introduced page sharing on NUMA multiprocessors. More recently, VMware ESX Server [START_REF] Waldspurger | Memory Resource Management in VMware ESX Server[END_REF] proposed content-based page sharing, in which pages with identical content can be shared by modifying the page table supporting the VMs. When a shared page is modified, the copy-on-write logic is triggered and a private copy of the page is created. Many optimizations have been proposed to reduce the memory scanning overhead and increase sharing opportunities [START_REF] Kim | XHive: Efficient Cooperative Caching for Virtual Machines[END_REF][START_REF] Koller | Deduplication: Utilizing content similarity to improve I/O performance[END_REF][START_REF] Mi Lós | enlightened page sharing[END_REF][START_REF] Sharma | Singleton: system-wide page deduplication in virtual environments[END_REF][START_REF] Wood | Memory buddies: exploiting page sharing for smart colocation in virtualized data centers[END_REF].
Among the above techniques, Satori [START_REF] Mi Lós | enlightened page sharing[END_REF] and I/O Deduplication [START_REF] Koller | Deduplication: Utilizing content similarity to improve I/O performance[END_REF] are the most relevant to VMAR. The sharing-aware block device in Satori and the content-based cache in I/O Deduplication both capture short-lived sharing opportunities by detecting similar pages at page loading time. However, Satori consolidates pages belonging to different VMs by modifying the guest OS, while VMAR works entirely on the host level and stays transparent to VM guests. I/O Deduplication introduces a secondary content-based cache under the VFS page cache, making it difficult to avoid duplicates across the two caching levels. As a matter of fact, VMAR may complement both by providing the block maps as hints for identical pages, which they need at page loading time.
Conclusion
In this paper we propose VMAR, which is a thin I/O optimization layer that improves VM instantiation and runtime performance by redirecting data accesses between pairs of VM images. By creating a content-based block map during image capture time and always directing accesses of identical blocks to the same destination address, VMAR enables VMs to give each other "free rides" when bringing their image data to the memory page cache. Compared to existing memory and I/O deduplication techniques, VMAR operates ahead of VM I/O requests and upstream in the I/O architecture. As a result, VMAR incurs small overhead and optimizes the entire I/O stack. Moreover, implemented as a new image format, VMAR is a configurable option for each VM. This enables cloud administrators to "test drive" it before complete deployment.
On top of the main access redirection mechanism, VMAR also includes two optimizations of the block map. The first one is to reduce the block map size by merging contiguous map entries. The second one is to reduce the number of block map lookup operations by using an index to quickly guide a request into the correct region of the map. Experiments have demonstrated that in I/O-intensive settings VMAR reduces VM boot time by as much as 55% and reduces application loading time by up to 45%.
VMAR is a disk image driver and does not rely on any specific CPU/memory virtualization technology. Thus, it is straightforward to make it work with other virtualization platforms such as Xen [START_REF] Barham | Xen and the art of virtualization[END_REF]. Currently VMAR works entirely on the host level. As future work, we plan to integrate VMAR with our previous work on VM exclusive caching [START_REF] Zhang | Small is big: functionally partitioned file caching in virtualized environments[END_REF] to achieve further savings on the VM level. We also plan to evaluate VMAR in an image pool with a larger scale and more types of operating systems, and explore adding a second level of redirection on the block storage layer to enhance sequential I/O patterns.
Figure captions:
Fig. 1. Comparison of storage deduplication, memory deduplication, and VMAR.
Fig. 2. Different configurations of virtual disks.
Fig. 3. How VMAR interacts with a VM during its lifetime.
Fig. 4. Illustration of clusters for three example images.
Fig. 5. Meta-data of cluster CL-111.
Fig. 6. The block map for Image-1 and the cluster meta-data used to construct it.
Fig. 7. Pseudo-code of updating an image.
Fig. 9. Block map size optimization.
Fig. 10. CDF of the volume in the clusters with different sizes.
Fig. 11. Average binary search depth for each search scheme.
Fig. 12. Image blocks similarity statistics.
Fig. 13. Comparison of performance and resource utilization in VM instantiation, with different number of VMs.
Fig. 14. Comparison of performance and I/O traffic in VM instantiation, with different VM arrival rates.
Fig. 15. Comparison of performance and I/O traffic in VM instantiation, with different available memory sizes.
Fig. 16. Comparison of performance and I/O traffic in application loading, with different number of VMs.
Fig. 17. Comparison of performance and I/O traffic in application loading, with different VM arrival rates.
Fig. 18. Comparison of performance and I/O traffic in application loading, under different memory pressures.
Fig. 19. Comparison of runtime for running the random/sequential reading benchmark.
"1003217",
"1003218",
"965444",
"965445",
"1003219",
"1003220",
"1003221",
"1003222"
] | [
"99874",
"74701",
"74701",
"74701",
"74701",
"74701",
"74701",
"74701"
] |
01480778 | en | [
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480778/file/978-3-642-45065-5_12_Chapter.pdf | Nima Kaviani
Eric Wohlstadter
email: wohlstad@cs
Rodger Lea
email: [email protected]
Cross-Tier Application & Data Partitioning of Web Applications for Hybrid Cloud Deployment
Hybrid cloud deployment offers flexibility in trade-offs between the cost-savings/scalability of the public cloud and control over data resources provided at a private premise. However, this flexibility comes at the expense of complexity in distributing a system over these two locations. For multi-tier web applications, this challenge manifests itself primarily in the partitioning of application-and database-tiers. While there is existing research that focuses on either application-tier or data-tier partitioning, we show that optimized partitioning of web applications benefits from both tiers being considered simultaneously. We present our research on a new cross-tier partitioning approach to help developers make effective trade-offs between performance and cost in a hybrid cloud deployment. In two case studies the approach results in up to 54% reduction in monetary costs compared to a premise only deployment and 56% improvement in execution time compared to a naïve partitioning where application-tier is deployed in the cloud and data-tier is on private infrastructure.
Introduction
While there are advantages to deploying Web applications on public cloud infrastructure, many companies wish to retain control over specific resources [START_REF] Armbrust | Above the Clouds: A Berkeley View of Cloud Computing[END_REF] by keeping them at a private premise. As a result, hybrid cloud computing has become a popular architecture where systems are built to take advantage of both public and private infrastructure to meet different requirements. However, architecting an efficient distributed system across these locations requires significant effort. An effective partitioning should not only guarantee that privacy constraints and performance objectives are met, but should also deliver on one of the primary reasons for using the public cloud: a cheaper deployment.
In this paper we focus on partitioning of Online Transaction Processing (OLTP) style web applications. Such applications are an important target for hybrid architecture due to their popularity. Web applications follow the well-known multi-tier architecture, generally consisting of tiers such as: client-tier, application-tier (serving dynamic web content), and back-end data-tier. Since the hybrid architecture is motivated by the management of sensitive data resources, our research focuses on combined partitioning of the data-tier (which hosts data resources) and the application-tier (which directly uses data resources). Figure 1 shows a high-level diagram of these tiers being jointly partitioned across a hybrid architecture, which we refer to as cross-tier partitioning.
Fig. 1: High-level hybrid architecture with cross-tier partitioning of code and data.
Existing research applies partitioning to only one of the application- or data-tiers and does not address cross-tier partitioning. Systems such as CloneCloud [START_REF] Chun | Clonecloud: elastic execution between mobile device and cloud[END_REF], Cloudward Bound [START_REF] Hajjat | Cloudward bound: planning for beneficial migration of enterprise applications to the cloud[END_REF], Leymann et al.'s approach [START_REF] Leymann | Moving applications to the cloud: an approach based on application model enrichment[END_REF], and our own work on Manticore [START_REF] Kaviani | Manticore: A Framework for Partitioning of Software Services for Hybrid Cloud[END_REF] partition only software but not data. Other work in the area provides for partitioning of relational databases [START_REF] Khadilkar | Risk-Aware Data Processing in Hybrid Clouds[END_REF] or Map-Reduce job/data components [START_REF] Agarwal | Volley: Automated data placement for geo-distributed cloud services[END_REF][START_REF] Ko | The HybrEx model for confidentiality and privacy in cloud computing[END_REF][START_REF] Wieder | Orchestrating the deployment of computations in the cloud conductor[END_REF]. Unfortunately, one cannot "cobble together" a cross-tier solution by using independent results from such approaches. A new approach is needed that integrates application and data partitioning natively. Thus we argue that research into cross-tier partitioning is both important and challenging.
First, cross-tier partitioning is important because the data-flow between these tiers is tightly coupled. The application-tier can make several queries during its execution, passing information to and from different queries; an example is discussed in Section 2. Even though developers follow best practices to ensure the source code for the business logic and the data access layer are loosely coupled, this loose coupling does not apply to the data-flow. The data-flow crosscuts the application- and data-tiers, requiring an optimization that considers the two simultaneously. Any optimization must avoid, whenever possible, the latency and bandwidth requirements imposed by distributing such data-flow.
Second, cross-tier partitioning is challenging because it requires an analysis that simultaneously reasons about the execution of application-tier code and data-tier queries. On the one hand, previous work on partitioning of code is not applicable to database queries because it does not account for modeling of query execution plans. On the other hand, existing work on data partitioning does not account for the data-flow or execution footprint of the application-tier [START_REF] Khadilkar | Risk-Aware Data Processing in Hybrid Clouds[END_REF]. To capture a representation for cross-tier optimization, our contribution in this paper includes a new approach for modeling dependencies across both tiers as a combined binary integer program (BIP) [START_REF] Schrijver | Theory of Linear and Integer Programming[END_REF].
We provide a tool which collects performance profiles of web application execution on a single host and converts it to the BIP format. The BIP is fed to an off-the-shelf optimizer whose output yields suggestions for placement of application-and data-tier components to either public cloud or private premise. Using proper tooling and middleware, a new system can now be distributed across the hybrid architecture using the optimized placement suggestions. To the best of our knowledge, we provide the first approach for partitioning which integrates models of both application-tier and data-tier execution.
Motivating Scenario
As a motivating example, assume a company plans to take its on-premise trading software system and deploy it to a hybrid architecture. We use Apache DayTrader [START_REF]Apache DayTrader[END_REF], a benchmark emulating the behavior of a stock trading system, to express this scenario. DayTrader implements business logic in the application-tier as different request types, for example, allowing users to login (doLogin), view/update their account information (doAccount & doAccountUpdate), etc. At the data-tier it consists of tables storing data for account, accountprofile, holding, quote, etc. Let us further assume that, as part of company regulations, user information (account & accountprofile) must remain on-premise.
Figure 2 shows the output of our cross-tier partitioning for doLogin. The figure shows the call-tree of function execution in the application-tier as well as data-tier query plans at the leaves. In the figure, we see four categories of components: (i) data on premise shown as black nodes, (ii) data in the cloud as square nodes, (iii) functions on premise as gray nodes, and (iv) functions in the cloud as white nodes. Here we use each of these four categories to motivate cross-tier partitioning.
First, some data is not suitable for deployment in the cloud due to privacy concerns or regulations [START_REF] Hajjat | Cloudward bound: planning for beneficial migration of enterprise applications to the cloud[END_REF]. Thus, many enterprises avoid committing deployment of certain data in the public cloud, instead hosting it on private infrastructure (e.g., account & accountprofile in Figure 2). Our primary use case here is to support cases with restrictions on where data is stored, not where it flows.
Second, function execution requires CPU resources which are generally cheaper and easier to scale in the public cloud (some reports claim a typical 80% savings using public cloud versus on-premise private systems [START_REF]The Economics of the Cloud[END_REF]). Thus placing function execution in the public cloud is useful to limit the amount of on-premise infrastructure. On the other hand, the sunk cost of existing hardware encourages some private deployments. So without regard to other factors, we would want to execute application-tier functions in the cloud and yet utilize existing hardware.
Fig. 2: A cross-tier partitioning suggested by our tool for the doLogin request from DayTrader.
Third, since we would like to deploy functions to the cloud, the associated data bound to those functions should be deployed to the cloud, otherwise we will incur additional latency and bandwidth usage. So there is motivation to move non-sensitive data to the cloud. However, such non-sensitive data may be bound to sensitive data through queries which operate over both. For this reason, moving non-sensitive data to the public cloud is not always a winning proposition. We will need an analysis which can reason about the benefit of moving data closer to functions executing in the public cloud versus the drawback of pulling it away from the sensitive data on premise.
Finally, executing all functions in the public cloud is also not always a winning proposition. Some functions are written as transactions over several data resources. Such functions may incur too much communication overhead if they execute in the public cloud but operate on private premise data. So the benefit of executing them in the cloud needs to be balanced with this overhead.
These four cases help to illustrate the inter-dependencies between the application-tier and data-tier. In the case of doLogin, a developer may manually arrive at a similar partitioning with only minor inconvenience. However, to cover an entire application, developers need to simultaneously reason about the effects of component placements across all request types. This motivates the need for research on automation for cross-tier partitioning.
3 Background: Application-Tier Partitioning Binary Integer Programming [START_REF] Schrijver | Theory of Linear and Integer Programming[END_REF] has been utilized previously for partitioning of applications (although not for cross-tier partitioning) [START_REF] Chong | Building secure web applications with automatic partitioning[END_REF][START_REF] Kaviani | Manticore: A Framework for Partitioning of Software Services for Hybrid Cloud[END_REF][START_REF] Newton | Wishbone: Profile-based Partitioning for Sensornet Applications[END_REF][START_REF] Yang | Hilda: A high-level language for data-driven web applications[END_REF]. A binary integer program (BIP) consists of the following:
-Binary variables: A set of binary variables x_1, x_2, ..., x_n ∈ {0, 1}.
-Constraints: A set of linear constraints between variables where each constraint has the form: c_0 x_0 + c_1 x_1 + ... + c_n x_n {≤, =, ≥} c_m and c_i is a constant.
-Objective: A linear expression to minimize or maximize: cost_1 x_1 + cost_2 x_2 + ... + cost_n x_n, with cost_i being the cost charged to the model when x_i = 1.
The job of a BIP optimizer is to choose the set of values for the binary variables which minimize/maximize this expression. Formulating a cross-tier BIP for partitioning will require combining one BIP for the application-tier and another for the data-tier. Creating each BIP consists of the same high-level steps (although the specific details vary): (i) profiling, (ii) analysis, (iii) generating the BIP constraints and (iv) generating the BIP objective function. The overall process of applying cross-tier partitioning is shown in Figure 3. In the top left we see an application before partitioning. Notice that the profiling results are split in two branches. Here we focus on the flow following from the Profiling Logs branch, discussing the Explain Plan flow in Section 4. Our approach for generating a BIP for the application-tier follows from our previous work on Manticore [START_REF] Kaviani | Manticore: A Framework for Partitioning of Software Services for Hybrid Cloud[END_REF] and is summarized as background here.
Profiling: The typical profiling process for application partitioning starts by taking existing software and applying instrumentation of its binaries. The software is then exercised on representative workloads, using the instrumentation to collect data on measured CPU usage of software functions and data exchange between them. This log of profiling information will be converted to the relevant variables and costs of a BIP.
Analysis:
The log of profile data is converted to a graph model before being converted to a BIP, as shown in the top flow of Figure 3. Let App(V, E) represent a model of the application, where ∀v ∈ V, v corresponds to a function execution in the application. Similarly, ∀u, v ∈ V, e_(u,v) ∈ E implies that there is data exchange between the functions in the application corresponding to u and v in App. For every e_(u,v) ∈ E, we define d_(u↔v) as the amount of data exchanged between u and v.
BIP Constraints:
The graph model is then used to formulate a BIP. For every node u in the model we consider a variable x_u ∈ {0, 1}. Using input from a developer some nodes can be constrained to a particular location by fixing their value, e.g., (0: private premise, 1: public cloud). Unconstrained variables are free for an optimizer to choose their values so as to minimize the objective function. These values are then translated to placement decisions for function executions.
BIP Objective: For each v ∈ V we define cost_exec_v to represent the cost of executing v on-premise and cost_exec′_v to represent the cost of executing v in the cloud. We also define latency_(u,v) to represent the latency cost on edge e_(u,v) and calculate the communication cost cost_comm_(u,v) for edge e_(u,v) as follows:
cost_comm_(u,v) = latency_(u,v) + (d_(u↔v) / D_unit) × cost_comm_unit   (1)
where D_unit is the unit of data to which cloud data charges are applied, cost_comm_unit is the cloud charge for D_unit of data transfer, and d_(u↔v) represents the data exchange between vertices u and v. As demonstrated by work such as Cloudward Bound [START_REF] Hajjat | Cloudward bound: planning for beneficial migration of enterprise applications to the cloud[END_REF], in a cloud computing setting raw performance costs such as measured CPU usage and data transfer can be converted to monetary costs using the advertised infrastructure costs of vendors such as Amazon EC2. This allows developers to optimize for trade-offs in performance cost and monetary cost objectives.
Using such costs we can define an objective expression (The non-linear expression in the objective function can be relaxed by making the expansion in [START_REF] Newton | Wishbone: Profile-based Partitioning for Sensornet Applications[END_REF]):
min Σ_(i∈V) x_i · cost_exec_i + Σ_((i,j)∈E) (x_i − x_j)² · cost_comm_(i,j)   (2)
Finally, the BIP is fed to a solver which determines an assignment of functions to locations. By choosing the location for each function execution, the optimizer chooses an efficient partitioning by placing functions in the cloud when possible if it does not introduce too much additional latency or bandwidth requirements.
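As an illustration of how such a model can be handed to an off-the-shelf solver, the sketch below encodes a toy application-tier instance with the open-source PuLP library in Python (a stand-in; the paper does not prescribe a specific solver). The call graph, costs, and the premise-pinned node are invented, the objective charges each function its premise or cloud cost depending on placement (a slight variant of (2)), and the quadratic term (x_i − x_j)² is linearized with one auxiliary binary "cut" variable per edge (for binary variables it equals |x_i − x_j|), which is one way of performing the relaxation mentioned above:

import pulp

# Toy instance: node costs are (premise_cost, cloud_cost); edge costs are comm costs.
exec_cost = {"doLogin": (4.0, 1.0), "getAccount": (3.0, 0.5), "getQuote": (2.0, 0.5)}
comm_cost = {("doLogin", "getAccount"): 2.5, ("doLogin", "getQuote"): 0.8}
pinned_private = {"getAccount"}          # e.g., touches sensitive data

prob = pulp.LpProblem("app_tier_partitioning", pulp.LpMinimize)
x = {v: pulp.LpVariable("x_%s" % v, cat="Binary") for v in exec_cost}      # 1 = cloud
cut = {e: pulp.LpVariable("cut_%s_%s" % e, cat="Binary") for e in comm_cost}

# Objective: premise cost when x=0, cloud cost when x=1, plus the cost of cut edges.
prob += pulp.lpSum(exec_cost[v][0] * (1 - x[v]) + exec_cost[v][1] * x[v]
                   for v in exec_cost) \
      + pulp.lpSum(comm_cost[e] * cut[e] for e in comm_cost)

# cut_e >= |x_u - x_v|, equivalent to (x_u - x_v)^2 for binary variables.
for (u, v) in comm_cost:
    prob += cut[(u, v)] >= x[u] - x[v]
    prob += cut[(u, v)] >= x[v] - x[u]

for v in pinned_private:                 # placement constraints from the developer
    prob += x[v] == 0

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({v: ("cloud" if x[v].value() == 1 else "premise") for v in exec_cost})

For this toy instance the solver keeps getAccount on premise and moves the other two functions to the cloud, paying the communication cost of the single cut edge.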
Different from previous work, our cross-tier partitioning incorporates a new BIP model of query plan execution into this overall process. In the next section, we describe these details which follow the bottom flow of Figure 3.
BIP for Data-Tier Partitioning
The technical details of extending application-tier partitioning to integrate the data-tier are motivated by four requirements: (i) weighing the benefits of distributing queries, (ii) comparing the trade-offs between join orders, (iii) taking into account intra-request data-dependencies and (iv) providing a query execution model comparable to application-tier function execution. In this section, we first further motivate cross-tier partitioning by describing each of these points, then we cover the technical details for the steps of partitioning as they relate to the data-tier. We focus on a data-tier implemented with a traditional SQL database. While some web application workloads can benefit from the use of alternative NoSQL techniques, we chose to focus initially on SQL due to its generality and widespread adoption.
First, as described in Section 2, placing more of the less-sensitive data in the cloud will allow for the corresponding code from the application-tier to also be placed in the cloud, thus increasing the overall efficiency of the deployment and reducing data transfer. However, this can result in splitting the set of tables used in a query across public and private locations. For our DayTrader example, each user can have many stocks in her holdings which makes the holding table quite large. As shown in Figure 2, splitting the join operation can push the holdings table to the cloud (square nodes) and eliminate the traffic of moving its data to the cloud. This splitting also maintains our constraint to have the privacy sensitive account table on the private premise. An effective modeling of the data-tier needs to help the BIP optimizer reason about the trade-offs of distributing such queries across the hybrid architecture.
Second, the order that tables are joined can have an effect not only on traditional processing time but also on round-trip latency. We use a running example throughout this section of the query shown in Figure 4, with two different join orders, left and right. If the query results are processed in the public cloud where the holding table is in the cloud and account and accountprofile are stored on the private premise, then the plan on the left will incur two-round trips from the public to private locations for distributed processing. On the other hand, the query on the right only requires one round-trip. Modeling the datatier should help the BIP optimizer reason about the cost of execution plans for different placements of tables. Third, some application requests execute more than one query. In these cases, it may be beneficial to partition functions to group execution with data at a single location. Such grouping helps to eliminate latency overhead otherwise needed to move data to the location where the application-tier code executes. An example of this is shown in Figure 2, where a sub-tree of function executions for TradeJdbc:login are labeled as "private" (gray nodes). By pushing this subtree to the private premise, the computation needed for working over account and accountprofile data in the two queries under TradeJdbc:login can be completed at the premise without multiple round-trips between locations.
Fourth, since the trade-offs on function placement depend on the placement of data and vice-versa, we need a model that can reason simultaneously about both application-tier function execution and query plan execution. Thus the model for the data-tier should be compatible for integration with an approach to application partitioning such as the one described in Section 3.
Having motivated the need for a model of query execution to incorporate the data-tier in a cross-tier partitioning, we now explore the details, following the bottom flow of Figure 3. The overall process is as follows. We first profile query execution using Explain Plan (Section 4.1). This information is used to collect statistics for query plan operators by interrogating the database for different join orders (Section 4.2). The statistics are then used to generate both BIP constraints (Section 4.3) and a BIP objective function (Section 4.4). Finally, these constraints and objective are combined with those from the application-tier to encode a cross-tier partitioning model for a BIP solver.
Database Profiling with Explain Plan
Profiling information is available for query execution through the Explain Plan SQL command. Given a particular query, this command provides a tree-structured result set detailing the execution of the query. We use a custom JDBC driver wrapper to collect information on the execution of queries. During application profiling (cf. Section 3), whenever a query is issued by the application-tier, our JDBC wrapper intercepts the query and collects the plan for its execution (a sketch of such a wrapper is shown after the list below). The plan returned by the database contains the following information:
1. type(op): Each node in the query plan is an operator such as a join, table access, selection (i.e. filter), sort, etc. To simplify presentation of the technical details, we assume that each operator is either a join or a table access. Other operators are handled by our implementation, but they don't add extra complexity compared to a join operator. For example, in Figure 4, the selection (i.e. filter) operators are elided. We leverage the database's own cost model directly by recording from the provided plan how much each operator costs. Hence, we don't need to evaluate different operator implementations to evaluate their costs. On the other hand, we do need to handle joins specially because table placement is greatly affected by their ordering.
2. cpu(op): This statistic gives the expected time of execution for a specific operator. In general, we assume that the execution of a request in a hybrid web application will be dominated by the CPU processing of the application-tier and the network latency, so in many cases this statistic is negligible. However, we include it to detect the odd case of expensive query operations which can benefit from executing on the public cloud.
3. size(op): This statistic captures the expected number of bytes output by an operator, which is equal to the expected number of rows times the size of each retrieved row. From the perspective of the plan tree-structure, this is the data which flows from a child operator to its parent.
4. predicates(joinOp): Each join operator combines two inputs based on a set of predicates which relate those inputs. We use these predicates to determine if alternative join orders are possible for a query.
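As a concrete illustration of how such plan statistics might be captured at the JDBC layer, the following Java sketch shows one possible shape of the driver wrapper. It is only a sketch: the names ProfilingStatement, PlanCollector, and OperatorStats are hypothetical and not taken from the actual implementation.

import java.sql.*;
import java.util.*;

// Hypothetical record of the per-operator statistics listed above.
class OperatorStats {
    String type;              // join, table access, ...
    double cpu;               // expected execution time of the operator
    long size;                // expected output bytes (rows x row width)
    List<String> predicates;  // join/access predicates (join operators only)
}

// Assumed component that runs Explain Plan for a query and stores the
// resulting operator statistics, keyed by operator id.
interface PlanCollector {
    void collect(String sql) throws SQLException;
}

// Sketch of the driver wrapper: every query issued by the application during
// profiling is first handed to the plan collector, then executed for real.
class ProfilingStatement {
    private final Statement delegate;
    private final PlanCollector collector;

    ProfilingStatement(Statement delegate, PlanCollector collector) {
        this.delegate = delegate;
        this.collector = collector;
    }

    ResultSet executeQuery(String sql) throws SQLException {
        collector.collect(sql);            // record the plan for this query
        return delegate.executeQuery(sql); // unchanged application behavior
    }
}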
When profiling the application, the profiler observes and collects execution statistics only for plans that get executed but not for alternative join orders. However, the optimal plan executed by the database engine in a distributed hybrid deployment can be different from the one observed during profiling. In order to make the BIP partitioner aware of alternative orders, we have extended our JDBC wrapper to consult the database engine and examine the alternatives by utilizing a combination of Explain Plan and join order hints. Our motivation is to leverage the already existing cost model from a production database for cost estimation of local operator processing, while still covering the space of all query plans. The profiler also captures which sets of tables are accessed together as part of an atomic transaction. This information is used to model additional costs of applying a two-phase commit protocol, should the tables get partitioned.
Join Order Enumeration
We need to encode enough information in the BIP so it can reason over all possible plans. Otherwise, the BIP optimizer would mistakenly assume that the plan executed during our initial profiling is the only one possible. For example, during initial profiling on a single host, we may only observe the left plan from Figure 4. However, in the example scenario, we saw that the right plan introduces fewer round-trips across a hybrid architecture. We need to make sure the right plan is accounted for when deciding about table placement. Our strategy to collect the necessary information for all plans consists of two steps: (i) gather statistics for all operators in all plans irrespective of how they are joined, and (ii) encode BIP constraints about how the operators from step (i) can be joined.
Here we describe step 1 and then describe step 2 in the next subsection. The novelty of our approach is that instead of optimizing to a specific join order in isolation of the structure of application-tier execution, we encode the possible orders together with the BIP of the application-tier as a combined BIP.
As is commonly the case in production databases, we assume a query plan to be left-deep. In a left-deep query plan, a join takes two inputs: one from a single base relation (i.e. table) providing immediate input (referred to as the "inner relation"); and another one potentially derived as an intermediate result from a different set of relations (the "outer relation"). The identity of the inner relation and the set of tables comprising the outer relation uniquely determine the estimated best cost for an individual join operator. This is true regardless of the join order in which the outer relation was derived [START_REF] Selinger | Access path selection in a relational database management system[END_REF]. For convenience in our presentation, we call this information the operator's id, because we use it to represent an operator in the BIP. For example, the root operator in Figure 4a takes accountProfile as an inner input and {holding, account} as an outer input. The operator's id is then {(holding, account), accountProfile}. We will refer to the union of these two inputs as a join set (the set of tables joined by that operator). For example, the join set of the aforementioned operator is {holding, account, accountProfile}. Notably, while the join sets for the roots of Figures 4a &4b are the same, Figures 4b's root node has the operator id {(accountProfile, account), holding} allowing us to differentiate the operators in our BIP formulation. Our task in this section is to collect statistics for the possible join operators with unique ids.
Most databases provide the capability for developers to provide hints to the query optimizer in order to force certain joins. For example in Oracle, a developer can use the hint LEADING(X, Y, Z, ...). This tells the optimizer to create a plan where X and Y are joined first, then their intermediate result is joined with Z, etc. We use this capability to extract statistics for all join orders.
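For illustration, a helper along the following lines could implement this step. It is a sketch under the assumption of an Oracle database (the LEADING hint, EXPLAIN PLAN, and DBMS_XPLAN are Oracle features), and the helper name mirrors the explainPlanWithLeadingTables call used in Algorithm 1 rather than quoting actual implementation code.

import java.sql.*;

// Hypothetical profiling helper: force a specific join order with the LEADING
// hint, then ask the database to cost the plan without executing the query.
class JoinOrderExplorer {
    static String explainPlanWithLeadingTables(Statement stmt, String query,
                                               String leadingAliases)
            throws SQLException {
        // e.g. leadingAliases = "a h" forces {account, holding} to be joined
        // first; JDBC '?' bind markers should be replaced with representative
        // literals before explaining, since no PreparedStatement is involved.
        String hinted = query.replaceFirst("(?i)^\\s*SELECT",
                "SELECT /*+ LEADING(" + leadingAliases + ") */");
        stmt.execute("EXPLAIN PLAN FOR " + hinted);

        StringBuilder plan = new StringBuilder();
        try (ResultSet rs = stmt.executeQuery(
                "SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY())")) {
            while (rs.next()) {
                plan.append(rs.getString(1)).append('\n');
            }
        }
        return plan.toString();
    }
}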
Algorithm 1 takes as input a query observed during profiling. In line 2, we extract the set of all tables referenced in the query. Next, we start collecting operator statistics for joins over two tables and progressively expand the size through each iteration of the loop on line 3. The table t, selected in each iteration of line 4, can be considered as the inner input of a join. Then, on line 5 we loop through all sets of tables of size i which don't contain t. On line 6, we verify if t is joinable with the set S by making sure that at least one table in the set S shares a join (access) predicate with t. This set forms the outer input to a join. Finally, on line 7, we collect statistics for this join operator by forcing the database to explain a plan in which the join order is prefixed by the outer input set, followed by the inner input relation. We record the information for each operator by associating it with its id. For example, consider Figure 4 as the input Q to Algorithm 1. In a particular iteration, i might be chosen as 2 and t as accountProfile. Since accountProfile has a predicate shared with account, S could be chosen as the set of size 2: {account, holdings}. Now on line 7, explainPlanWithLeadingTables({account, holdings}, accountProfile) will get called and the statistics for the join operator with the corresponding id will get recorded.
Algorithm 1: Function to collect statistics for alternative query plan operators for the input query Q. P_i is the powerset operator over sets of size i.

1  Function collectOperatorStats(Q)
2      tables ← getTables(Q);
3      for i ← 1 to |tables| do
4          foreach t ∈ tables do
5              foreach S ∈ P_i(tables - {t}) do
6                  if isJoinable(S, t) then
7                      explainPlanWithLeadingTables(S, t);
The bottom-up structure of the algorithm follows similarly to the classic dynamic programming algorithm for query optimization [START_REF] Selinger | Access path selection in a relational database management system[END_REF]. However, in our case we make calls into the database to extract costs by leveraging Explain Plan and the LEADING hint. The complexity of Algorithm 1 is O(2 n ) (where n is the number of tables in each single query); i.e., same as the classic algorithm for query optimization [START_REF] Selinger | Access path selection in a relational database management system[END_REF], so our approach scales in a similar fashion. Even though Algorithm 1's complexity is exponential, queries typically operate on an order of tens of tables.
BIP Constraints
Now that we know the statistics for all operators with a unique id, we need to instruct the BIP how they can be composed. Our general strategy is to model each query plan operator, op, as a binary variable in a BIP. The variable will take on the value 1 if the operator is part of the query plan which minimizes the objective of the BIP and 0 otherwise. Each possible join set is also modeled as a variable. Constraints are used to create a connection between operators that create a join set and operators that consume a join set (cf. Table 1). The optimizer will choose a plan having the least cost given both the optimizer's choice of table placement and function execution placement (for the application-tier). Each operator also has associated variables op_cloud and op_premise which indicate the placement of the operator. Table placement is controlled by each table's associated table access operators. The values of these variables for operators in the same query plan will allow us to model the communication costs associated with distributed queries.

Our algorithm to formulate these composition constraints makes use of two helper functions, genChoice and genInputConstraint, as shown in Table 1. When these functions are called by our algorithms, they append the generated constraint to the BIP that was already built for the application-tier. The first function, genChoice, encodes that a particular join set may be derived by multiple possible join operators (e.g., {holding, account, accountprofile} could be derived by either of the root nodes in Figure 4). The second function, genInputConstraint, encodes that a particular join operator takes as inputs the join sets of its two children. It ensures that if op is selected, both its children's join sets (in_left and in_right) are selected as well, constraining which subtrees of the execution plan can appear under this operator. The "≥" inequality in Table 1 helps to encode the boolean logic op → in_left ∧ in_right.
Starting with the final output join set of a query, Algorithm 2 recursively generates these constraints encoding choices between join operators and how parent operators are connected to their children. It starts on line 2 by calling a function to retrieve all operator ids which could produce that join set (these operators were all collected during the execution of Algorithm 1). It passes this information to genChoice on line 3. On line 4, we loop over all these operator ids, decomposing each into its two inputs on line 5. This information is then passed to genInputConstraint. Finally on line 7, we test for the base case of a table access operator. If we have not hit the base case, then the left input becomes the join set for recursion on line 8.
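The sketch below shows one possible way to emit these two constraint forms as lp_solve LP-format text in Java. It is illustrative only: the emitter class, the variable-naming scheme, and the decision to build plain text rather than use lp_solve's API are assumptions, not the paper's actual code.

import java.util.List;

// Sketch: append the two constraint forms of Table 1 to an lp_solve LP-format
// model. One 0/1 variable per operator id and per join set is assumed to be
// generated elsewhere.
class ConstraintEmitter {
    final StringBuilder lp = new StringBuilder();

    // joinSet is produced by exactly one of the candidate operators:
    //   op1 + ... + opn - joinSet = 0
    void genChoice(String joinSetVar, List<String> opVars) {
        lp.append(String.join(" + ", opVars))
          .append(" - ").append(joinSetVar).append(" = 0;\n");
    }

    // If op is chosen, both of its input join sets must be chosen:
    //   -2 op + in_left + in_right >= 0
    void genInputConstraint(String opVar, String leftVar, String rightVar) {
        lp.append("-2 ").append(opVar)
          .append(" + ").append(leftVar)
          .append(" + ").append(rightVar)
          .append(" >= 0;\n");
    }
}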
BIP Objective
Creating the optimization objective function consists of two parts: (i) determining the costs associated with the execution of individual operators, and (ii) creating a mathematical formulation of those costs. The magnitude of the execution cost for each operator and the communication cost between operators that are split across the network are computed using a similar cost model to previous work [START_REF] Yu | Distributed Query Processing[END_REF]. This accounts for the variation between local execution and distributed execution in that the latter will make use of a semi-join optimization to reduce costs (i.e. input data to a distributed join operator will transmit only the columns needed to collect matching rows). We extend the previous cost model to account for possible transaction delays. We assume that if the tables involved in an atomic transaction are split across the cloud and the private premise, by default the transaction will be resolved using the two-phase commit protocol.
Performance overhead from atomic two-phase distributed transactions comes primarily from two sources: protocol overhead and lock contention. Protocol overhead is caused by the latency of prepare and commit messages in a database's two-phase commit protocol. Lock contention is caused by queuing delay, which increases as transactions over common table rows become blocked. We provide two alternatives to account for such overhead:

- For some transactions, lock contention is negligible. This is because the application semantics don't induce sharing of table rows between multiple user sessions. For example, in DayTrader, although the Account and Holdings tables are involved in an atomic transaction, specific rows of these tables are only ever accessed by a single user concurrently. In such cases we charge the cost of two extra round-trips between the cloud and the private premise to the objective function: one to prepare the remote site for the transaction and another to commit it.
- For cases where lock contention is expected to be considerable, developers can request that certain tables be co-located in any partitioning suggested by our tool. This prevents locking for transactions over those tables from being delayed by network latency. Since such decisions require knowledge of application semantics that is difficult to infer automatically, our tool provides an interactive visualization of partitioning results, as shown in Figure 2. This allows developers to work through different "what-if" scenarios of table co-location constraints and the resulting suggested partitioning. We plan to further assist developers in making their decisions by profiling the frequency with which concurrent transactions update rows.

Next, we need to encode information on CPU and data transmission costs into the objective function. In addition to generating a BIP objective, we will need some additional constraints that ensure the calculated objective is actually feasible. Table 2 shows functions to generate these constraints. The first constraint specifies that if an operator is included as part of a chosen query plan (its associated id variable is set to 1), then either the auxiliary variable op_cloud or op_premise will have to be 1, but not both. This enforces a single placement location for op. The second builds on the first and toggles the auxiliary variable cut_{op1,op2} when op1_cloud and op2_premise are 1, or when op1_premise and op2_cloud are 1.
The objective function itself is generated using two functions in Table 3. The first possibly charges to the objective function either the execution cost of the operator on the cloud infrastructure or on the premise infrastructure. Note that it will never charge both due to the constraints of Table 2. The second function charges the communication cost between two operators if the associated cut variable was set to 1. In the case that there is no communication between two operators this cost is simply 0.
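Continuing the sketch from above (again with assumed names and plain-text LP output rather than the real implementation), the helper constraints of Table 2 and the cost terms of Table 3 might be emitted as follows; declaring the operator, location, and cut variables as binary in the final model is omitted here.

import java.util.ArrayList;
import java.util.List;

// Sketch: placement constraints (Table 2) and cost terms (Table 3) for one
// operator; cost values would come from the collected statistics.
class ObjectiveEmitter {
    final StringBuilder constraints = new StringBuilder();
    final List<String> objectiveTerms = new ArrayList<>();

    void genAtMostOneLocation(String op) {
        // op_cloud + op_premise = op : a chosen operator gets exactly one location
        constraints.append(op).append("_cloud + ").append(op)
                   .append("_premise - ").append(op).append(" = 0;\n");
    }

    void genSeparated(String op1, String op2) {
        // cut_{op1,op2} is forced to 1 when the two operators are split
        String cut = "cut_" + op1 + "_" + op2;
        constraints.append(op1).append("_cloud + ").append(op2)
                   .append("_premise - ").append(cut).append(" <= 1;\n");
        constraints.append(op1).append("_premise + ").append(op2)
                   .append("_cloud - ").append(cut).append(" <= 1;\n");
    }

    void genExecutionCost(String op, double cloudCost, double premiseCost) {
        objectiveTerms.add(cloudCost + " " + op + "_cloud");
        objectiveTerms.add(premiseCost + " " + op + "_premise");
    }

    void genCommCost(String op1, String op2, double commCost) {
        objectiveTerms.add(commCost + " cut_" + op1 + "_" + op2);
    }

    String objective() {
        // lp_solve objective line, e.g. "min: 0.2 op1_cloud + ...;"
        return "min: " + String.join(" + ", objectiveTerms) + ";\n";
    }
}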
Algorithm 3 takes a join set as input and follows a similar structure to Algorithm 2. The outer loop on line 3, iterates over each operator that could produce the particular join set. It generates the location constraints on line 4 and the execution cost component to the objective function on line 5. Next, on line 7, it iterates over the two inputs to the operator. For each, it extracts the operators that could produce that input (line 8) and generates the communication constraint and objective function component. Finally, if the left input is not a base relation (line 11), it recurses using the left input now as the next join set.
Having appended the constraints and objective components associated with query execution to the application-tier BIP, we make a connection between the two by encoding the dependency between each function that executes a query and the possible root operators for the associated query plan.
Implementation
We have implemented our cross-tier partitioning as a framework. It conducts profiling, partitioning, and distribution of web applications which have their business logic implemented in Java. Besides the profiling data, the analyzer also accepts a declarative XML policy and cost parameters. The cost parameters encode the monetary costs charged by a chosen cloud infrastructure provider and expected environmental parameters such as available bandwidth and network latency. The declarative policy allows for specification of database table placement and co-location constraints. In general we consider the placement of privacy-sensitive data to be the primary consideration for partitioning decisions. However, developers may wish to monitor and constrain the placement of function executions that operate over this sensitive data. For this purpose we rely on existing work using taint tracking [START_REF] Chin | Efficient character-level taint tracking for Java[END_REF], which we have integrated into our profiler.
For partitioning, we use the off-the-shelf integer programming solver lp solve [START_REF]lp solve Linear Programming solver[END_REF] to solve the discussed BIP optimization problem. The results lead to generating a distribution plan describing which entities need to be separated from one another (cut-points). A cut-point may separate functions from one another, functions from data, and data from one another. Separation of code and data is achievable by accessing the database engine through the database driver. Separating inter-code or inter-data dependencies requires extra middleware.
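For illustration, the hand-off to lp_solve could look roughly like the following; invoking the command-line solver on an LP-format model read from standard input and parsing its "variable value" output lines are assumptions about one convenient way to drive the tool, not a description of our exact integration.

import java.io.*;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.*;

// Sketch: feed the combined LP-format model to the lp_solve command-line tool
// and read the reported binary assignments back into a placement map.
class LpSolveRunner {
    static Map<String, Integer> solve(String lpModel) throws Exception {
        Path model = Files.createTempFile("crosstier", ".lp");
        Files.write(model, lpModel.getBytes(StandardCharsets.UTF_8));

        Process p = new ProcessBuilder("lp_solve")
                .redirectInput(model.toFile())   // LP format on stdin
                .redirectErrorStream(true)
                .start();

        Map<String, Integer> assignment = new HashMap<>();
        try (BufferedReader out = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = out.readLine()) != null) {
                // lp_solve lists "name value" pairs after the objective value.
                String[] parts = line.trim().split("\\s+");
                if (parts.length == 2 && parts[1].matches("-?\\d+(\\.\\d+)?")) {
                    assignment.put(parts[0], (int) Double.parseDouble(parts[1]));
                }
            }
        }
        p.waitFor();
        return assignment;
    }
}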
For functions, we have developed a bytecode rewriting engine as well as an HTTP remoting library that takes the partitioning plan generated by the analyzer, injects remoting code at each cut-point, and serializes data between the two locations. This remoting instrumentation is essentially a simplified version of J-Orchestra [START_REF] Tilevich | J-Orchestra: Automatic Java Application Partitioning[END_REF] implemented over HTTP (but is not yet as complete as the original J-Orchestra work). In order to allow for distribution of data entities, we have taken advantage of Oracle's distributed database management system (DDBMS). This allows for tables remote to a local Oracle DBMS, to be identified and queried for data through the local Oracle DBMS. This is possible by providing a database link (@dblink) between the local and the remote DBMS systems. Once a bidirectional dblink is established, the two databases can execute SQL statements targeting tables from one another. This allows us to use the distribution plan from our analyzer system to perform vertical sharding at the level of database tables. Note that the distributed query engine acts on the deployment of a system after a decision about the placement of tables has been made by our partitioning algorithm. We have provided an Eclipse plugin implementation of the analyzer framework available online [START_REF]Manticore[END_REF].
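The data-distribution step can be illustrated with the following sketch; the link name, schema credentials, TNS entry, and the choice to hide the remote table behind a synonym are placeholders rather than our actual configuration.

import java.sql.*;

// Sketch: expose a cloud-resident table to the on-premise Oracle instance via
// a database link, so existing queries can keep using their table names.
class DataDistributionSetup {
    static void linkCloudTable(Connection premiseDb) throws SQLException {
        try (Statement s = premiseDb.createStatement()) {
            s.execute("CREATE DATABASE LINK cloud_link "
                    + "CONNECT TO daytrader IDENTIFIED BY secret "
                    + "USING 'CLOUD_DB'");
            // Existing SQL can still say "holding"; Oracle resolves the name
            // through the synonym to the remote table over the link.
            s.execute("CREATE SYNONYM holding FOR holding@cloud_link");
        }
    }
}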
Evaluation
We evaluate our work using two different applications: DayTrader [START_REF]Apache DayTrader[END_REF] and RUBiS [START_REF]Rice University Bidding System[END_REF]. DayTrader (cf. Section 2) is a Java benchmark of a stock trading system. RUBiS implements the functionality of an auctioning Web site. Both applications have already been used in evaluating previous cloud computing research [START_REF] Iqbal | SLA-driven dynamic resource management for multi-tier web applications in a cloud[END_REF][START_REF] Stewart | Empirical examination of a collaborative web application[END_REF].
We can have 9 possible deployment variations with each of the data-tier and the application tier being (i) on the private premise, (ii) on the public cloud, or (iii) partitioned for hybrid deployment. Out of all the placements we eliminate the 3 that place all data in the cloud as it contradicts the constraints to have privacy sensitive information on-premise. Also, we consider deployments with only data partitioned as a subset of deployments with both code and data partitioned, and thus do not provide separate deployments for them. The remaining four models deployed for evaluations were as follows: (i) both code and data are deployed to the premise (Private-Premise); (ii) data is on-premise and code is in the cloud (Naïve-Hybrid); (iii) data is on-premise and code is partitioned (Split-Code); and (iv) both data and code are partitioned (Cross-Tier).
For both DayTrader and RUBiS, we consider privacy incentives to be the reason behind constraining placement for some database tables. As such, when partitioning data, we constrain tables storing user information (account and accountprofile for DayTrader and users for RUBiS) to be placed on-premise. The remaining tables are allowed to be flexibly placed on-premise or in the cloud.
We used the following setup for the evaluation: for the premise machines, we used two 3.5 GHz dual core machines with 8.0 GB of memory, one as the application server and another as our database server. Both machines were located at our lab in Vancouver, and were connected through a 100 Mb/sec data link. For the cloud machines, we used an extra large EC2 instance with 8 EC2 Compute Units and 7.0 GB of memory as our application server and another extra large instance as our database server. Both machines were leased from Amazon's US West region (Oregon) and were connected by a 1 Gb/sec data link. We use Jetty as the Web server and Oracle 11g Express Edition as the database servers. We measured the round-trip latency between the cloud and our lab to be 15 milliseconds. Our intention in choosing this setup was to create an environment where the cloud offers the faster and more scalable infrastructure. To generate load for the deployments, we launched simulated clients from a 3.0 GHz quad core machine with 8 GB of memory located in our lab in Vancouver. DayTrader comes with a random client workload generator with uniform distribution on all requests. For RUBiS, we used its embedded client simulator in its buy mode, with an 80-20 ratio of browse-to-buy request distribution. In the rest of this section we provide the following evaluation results for the four deployments described above: execution times (Section 6.1), expected monetary deployment costs (Section 6.2), and scalability under varying load (Section 6.3).
Evaluation of Performance
We measured the execution time across all business logic functionality in DayTrader and RUBiS under a load of 100 requests per second, for ten minutes. By execution time we mean the elapsed wall clock time from the beginning to the end of each servlet execution. Figure 5 shows those with the largest average execution times. We model a situation where CPU resources are not under significant load. As shown in Figure 5, execution time in cross-tier partitioning is significantly better than any other model of hybrid deployment and is closely comparable to a non-distributed private premise deployment. As an example, response time for DayTrader's doLogin under Cross-Tier is 50% faster than Naïve-Hybrid while doLogin's response time for Cross-Tier is only 5% slower compared to Private-Premise (i.e., the lowest bar in the graph). It can also be seen that, for doLogin, Cross-Tier has 25% better response time compared to Split-Code, showing its effectiveness compared to partitioning only at the application-tier.
Similarly for other business logic functionality, we note that cross-tier partitioning achieves considerable performance improvements when compared to other distributed deployment models. It results in performance measures broadly similar to a full premise deployment. For the case of DayTrader -across all business logic functionality of Figure 5a -Cross-Tier results in an overall performance improvement of 56% compared to Naïve-Hybrid and a performance improvement of around 45% compared to Split-Code.
We observed similar performance improvements for RUBiS. Cross-Tier RUBiS performs 28.3% better, across all business logic functionality of Figure 5b, compared to Naïve-Hybrid, and 15.2% better compared to Split-Code. Based on the results, cross-tier partitioning provides more flexibility for moving function execution to the cloud and can significantly increase performance for a hybrid deployment of an application.
Evaluation of Deployment Costs
For computing monetary costs of deployments, we use parameters taken from the advertised Amazon EC2 service where the cost of an extra large EC2 instance is $0.48/hour and the cost of data transfer is $0.12/GB. To evaluate deployment costs, we apply these machine and data transfer costs to the performance results from Section 6.1, scale the ten minute deployment times to one month, and gradually change the ratio of premise-to-cloud deployment costs to assess the effects of varying cost of private premise on the overall deployment costs.
As shown in both graphs, a Private-Premise deployment of web applications results in rapid cost increases, rendering such deployments inefficient. In contrast, all partitioned deployments of the applications result in more optimal deployments with Cross-Tier being the most efficient. For a cloud cost 80% cheaper than the private-premise cost (5 times ratio), DayTrader's Cross-Tier is 20.4% cheaper than Private-Premise and 11.8% cheaper than Naïve-Hybrid and
Split-Code deployments. RUBiS achieves even better cost savings with Cross-Tier being 54% cheaper than Private-Premise, 29% cheaper than Naïve-Hybrid, and 12% cheaper than Split-Code. As shown in Figure 6a, in cases where only code is partitioned, a gradual increase in costs for machines on-premise eventually results in the algorithm pushing more code to the cloud to the point where all code is in the cloud and all data is on-premise. In such a situation Split-Code eventually converges to Naïve-Hybrid; i.e., pushing all the code to the cloud. Similarly, Cross-Tier will finally stabilize. However since in Cross-Tier part of the data is also moved to the cloud, the overall cost is lower than Naïve-Hybrid and Split-Code.
Evaluation of Scalability
We also performed scalability analyses for both DayTrader and RUBiS to see how different placement choices affect application throughput. For both DayTrader and RUBiS we used a range of 10 to 1000 client threads to send requests to the applications in 5 minute intervals with 1 minute ramp-up. Results are shown in Figure 7. As the figure shows, for both applications, after the number of requests reaches a certain threshold, Private-Premise becomes overloaded. For Naïve-Hybrid and Split-Code, the applications progressively provide better throughput. However, due to the significant bottleneck when accessing the data, both deployments maintain a consistent but rather low throughput during their executions. Finally, Cross-Tier achieved the best scalability. With a big portion of the data in the cloud, the underlying resources for both code and data can scale to reach a much better overall throughput for the applications. Despite having part of the data on the private premise, due to its small size the database machine on premise gets congested at a slower rate and the deployment can keep a high throughput.

Related Work

Our research bridges the two areas of application and database partitioning but differs from previous work in that it uses a new BIP formulation that considers both areas. Our focus is not on providing all of the many features provided by every previous project either on application partitioning or database partitioning. Instead, we have focused on providing a new interface between the two using our combined BIP. We describe the differences in more detail by first describing some related work in application partitioning and then database partitioning.
Application Partitioning: Coign [START_REF] Hunt | The Coign automatic distributed partitioning system[END_REF] is an example of classic application partitioning research which provides partitioning of Microsoft COM components. Other work focuses specifically on partitioning of web/mobile applications such as Swift [START_REF] Chong | Building secure web applications with automatic partitioning[END_REF], Hilda [START_REF] Yang | Hilda: A high-level language for data-driven web applications[END_REF], and AlfredO [START_REF] Rellermeyer | AlfredO: an architecture for flexible interaction with electronic devices[END_REF]. However that work is focused on partitioning the application-tier in order to off-load computation from the server-side to a client. That work does not handle partitioning of the data-tier. Minimizing cost and improving performance for deployment of software services has also been the focus of cloud computing research [START_REF] Ko | The HybrEx model for confidentiality and privacy in cloud computing[END_REF]. While approaches like Volley [START_REF] Agarwal | Volley: Automated data placement for geo-distributed cloud services[END_REF] reduce network traffic by relocating data, others like CloneCloud [START_REF] Chun | Clonecloud: elastic execution between mobile device and cloud[END_REF], Cloudward Bound [START_REF] Hajjat | Cloudward bound: planning for beneficial migration of enterprise applications to the cloud[END_REF], and our own Manticore [START_REF] Kaviani | Manticore: A Framework for Partitioning of Software Services for Hybrid Cloud[END_REF] improve performance through relocation of server components. Even though Volley examines data dependencies and CloneCloud, Cloudward Bound, and Manticore examine component or code dependencies, none of these approaches combine code and data dependencies to drive their partitioning and distribution decisions. In this paper, we demonstrated how combining code and data dependencies can provide a richer model that better supports cross-tier partitioning for web application in a hybrid architecture.
Fig. 3: The overall process of applying cross-tier partitioning to a monolithic web application (process flows from left to right).
Fig. 4: Two possible query plans from one of the queries in DayTrader: SELECT p.*, h.* FROM holding h, accountprofile p, account a WHERE h.accountid = a.accountid AND a.userid = p.userid AND h.quote_symbol = ? AND a.accountid = ?
Table 3: Functions for generating the objective function.

Function genExecutionCost(op)
  Generated objective component: op_cloud × execCost_cloud(op) + op_premise × execCost_premise(op)
  Description: If the variable representing op deployed in the cloud/premise is 1, then charge the associated cost of executing it in the cloud/premise, respectively.

Function genCommCost(op1, op2)
  Generated objective component: cut_{op1,op2} × commCost(op1, op2)
  Description: If cut_{op1,op2} for two operators op1 and op2 was set to 1, then charge their cost of communication.
Fig. 5: Measured execution times for selected request types in the four deployments of DayTrader and RUBiS.

Fig. 6: Monthly cost comparison for different deployments of DayTrader and RUBiS.

Fig. 7: Scalability tests for full premise, full cloud, and hybrid deployments.
Table 1: Constraint generation functions.

Function genChoice(joinSet, {op1 ... opn})
  Generated constraint: op1 + ... + opn = joinSet
  Description: A joinSet is produced by one and only one of the operators op1 ... opn.

Function genInputConstraint(op, {in_left, in_right})
  Generated constraint: -2 × op + in_left + in_right ≥ 0
  Description: If op is 1, then the variables representing its left and right inputs (in_left and in_right) must both be 1.

Algorithm 2: Constraint generation, using functions from Table 1. The details of the functions getOperatorsForJoinSet, getInputs, sizeof, and left are not shown, but their uses are described in the text.

1  Function createConstraints(joinSet)
2      ops ← getOperatorsForJoinSet(joinSet);
3      genChoice(joinSet, ops);
4      foreach op ∈ ops do
5          inputs ← getInputs(op);
6          genInputConstraint(op, inputs);
7          if sizeof(left(inputs)) > 0 then
8              createConstraints(left(inputs));
Table 2: Functions for generating objective helper constraints.

Function genAtMostOneLocation(op)
  Generated constraint: op_cloud + op_premise = op
  Description: If the variable representing op is 1, then either the variable representing it being placed in the cloud is 1 or the variable representing it being placed in the premise is 1.

Function genSeparated(op1, op2)
  Generated constraints: op1_cloud + op2_premise - cut_{op1,op2} ≤ 1 and op1_premise + op2_cloud - cut_{op1,op2} ≤ 1
  Description: If the variables representing the locations of the two operators are different, then the variable cut_{op1,op2} is 1.
Algorithm 3: Objective generation.

1   Function createObjFunction(joinSet)
2       ops ← getOperatorsForJoinSet(joinSet);
3       foreach op ∈ ops do
4           genAtMostOneLocation(op);
5           genExecutionCost(op);
6           inputs ← getInputs(op);
7           foreach input ∈ inputs do
8               foreach childOp ∈ getOperatorsForJoinSet(input) do
9                   genSeparated(op, childOp);
10                  genCommCost(op, childOp);
11          if sizeof(left(inputs)) > 0 then
12              createObjFunction(left(inputs));
In the rest of the paper we use the terms code and application-tier interchangeably.
Fig. 2: DayTrader showing a partitioned application- and data-tier: data on premise (black nodes), data in the cloud (square nodes), functions on premise (gray nodes), and functions in the cloud (white nodes).
Database Partitioning: Database partitioning is generally divided into horizontal partitioning and vertical partitioning [START_REF] Agrawal | Integrating vertical and horizontal partitioning into automated physical database design[END_REF]. In horizontal partitioning, the rows of some tables are split across multiple hosts. A common motivation is for loadbalancing the database workload across multiple database manager instances [START_REF] Curino | Schism: a workload-driven approach to database replication and partitioning[END_REF][START_REF] Pavlo | Skew-aware automatic database partitioning in shared-nothing, parallel oltp systems[END_REF]. In vertical partitioning, some columns of the database are split into groups which are commonly accessed together, improving access locality [START_REF] Abadi | Sw-store: a vertically partitioned dbms for semantic web data management[END_REF]. Unlike traditional horizontal or vertical partitioning, our partitioning of data works at the granularity of entire tables. This is because our motivation is not only performance based but is motivated by policies on the management of data resources in the hybrid architecture. The granularity of logical tables aligns more naturally than columns with common business policies and access controls. That being said, we believe if motivated by the right use-case, our technical approach could likely be extended for column-level partitioning as well.
Limitations, Future Work, and Conclusion
While our approach simplifies manual reasoning for hybrid cloud partitioning, it requires some input from a developer. First, we require a representative workload for profiling. Second, a developer may need to provide input about the impact that atomic transactions have on partitioning. After partitioning, a developer may also want to consider changes to the implementation to handle some transactions in an alternative fashion, e.g. providing forward compensation [START_REF] Garcia-Molina | Sagas[END_REF]. Also as noted, our current implementation and experience is limited to Java-based web applications and SQL-based databases.
In future work we plan to support a more loosely coupled service-oriented architecture for partitioning applications. Our current implementation of data-tier partitioning relies on leveraging the distributed query engine from a production database. In some environments, relying on a homogeneous integration of data by the underlying platform may not be realistic. We are currently working to automatically generate REST interfaces to integrate data between the public cloud and private premise rather than relying on a SQL layer.
In this paper we have demonstrated that combining code and data dependency models can lead to cheaper and better performing hybrid deployment of Web applications. In particular, we showed that for our evaluated applications, combined code and data partitioning can achieve up to 56% performance improvement compared to a naïve partitioning of code and data between the cloud and the premise and a more than 40% performance improvement compared to when only code is partitioned (see Section 6.1). Similarly, for deployment costs, we showed that combining code and data can provide up to 54% expected cost savings compared to a fully premise deployment and almost 30% expected savings compared to a naïvely partitioned deployment of code and data or a deployment where only code is partitioned (cf. Section 6.2). | 57,666 | [
"1003226",
"1003227",
"1003228"
] | [
"366034",
"366034",
"366034"
] |
01480782 | en | [
"info"
] | 2024/03/04 23:41:46 | 2013 | https://inria.hal.science/hal-01480782/file/978-3-642-45065-5_16_Chapter.pdf | Zhenhua Li
email: [email protected]
Christo Wilson
Zhefu Jiang
Yao Liu
email: [email protected]
Ben Y Zhao
Jin Cheng
email: [email protected]
Zhi-Li Zhang
email: [email protected]
Yafei Dai
Efficient Batched Synchronization in Dropbox-like Cloud Storage Services
Keywords: Cloud storage service, Dropbox, Data synchronization, Traffic overuse
As tools for personal storage, file synchronization and data sharing, cloud storage services such as Dropbox have quickly gained popularity. These services provide users with ubiquitous, reliable data storage that can be automatically synced across multiple devices, and also shared among a group of users. To minimize the network overhead, cloud storage services employ binary diff, data compression, and other mechanisms when transferring updates among users. However, despite these optimizations, we observe that in the presence of frequent, short updates to user data, the network traffic generated by cloud storage services often exhibits pathological inefficiencies. Through comprehensive measurements and detailed analysis, we demonstrate that many cloud storage applications generate session maintenance traffic that far exceeds the useful update traffic. We refer to this behavior as the traffic overuse problem. To address this problem, we propose the update-batched delayed synchronization (UDS) mechanism. Acting as a middleware between the user's file storage system and a cloud storage application, UDS batches updates from clients to significantly reduce the overhead caused by session maintenance traffic, while preserving the rapid file synchronization that users expect from cloud storage services. Furthermore, we extend UDS with a backwards compatible Linux kernel modification that further improves the performance of cloud storage applications by reducing the CPU usage.
Introduction
As tools for personal storage, file synchronization and data sharing, cloud storage services such as Dropbox, Google Drive, and SkyDrive have become extremely popular. These services provide users with ubiquitous, reliable data storage that can be synchronized ("sync'ed") across multiple devices, and also shared among a group of users. Dropbox is arguably the most popular cloud storage service, reportedly hitting more than 100 million users who store or update one billion files per day [START_REF]tying together devices for 100M registered users who save 1B files a day[END_REF].
Cloud storage services are characterized by two key components: a (front-end) client application that runs on user devices, and a (back-end) storage service that resides within the "cloud," hosting users' files in huge data centers. A user can "drop" files into or directly modify files in a special "sync folder" that is then automatically synchronized with cloud storage by the client application.
Cloud storage applications typically use two algorithms to minimize the amount of network traffic that they generate. First, the client application computes the binary diff of modified files and only sends the altered bits to the cloud. Second, all updates are compressed before they are sent to the cloud. As a simple example, if we append 100 MB of identical characters (e.g. "a") to an existing file in the Dropbox sync folder (thus the binary diff size is 100 MB), the resulting network traffic is merely 40 KB. This amount of traffic is just slightly more than the traffic incurred by appending a single byte "a" (i.e. around 38 KB, including meta-data overhead).
The Traffic Overuse Problem. However, despite these performance optimizations, we observe that the network traffic generated by cloud storage applications exhibits pathological inefficiencies in the presence of frequent, short updates to user data. Each time a synced file is modified, the cloud storage application's update-triggered realtime synchronization (URS) mechanism is activated. URS computes and compresses the binary diff of the new data, and sends the update to the cloud along with some session maintenance data. Unfortunately, when there are frequent, short updates to synced files, the amount of session maintenance traffic far exceeds the amount of useful update traffic sent by the client over time. We call this behavior the traffic overuse problem. In essence, the traffic overuse problem originates from the update sensitivity of URS.
Our investigation into the traffic overuse problem reveals that this issue is pervasive among users. By analyzing data released from a large-scale measurement of Dropbox [START_REF] Drago | Inside Dropbox: Understanding Personal Cloud Storage Services[END_REF], we discover that for around 8.5% of users, ≥10% of their traffic is generated in response to frequent, short updates (refer to § 4.1). In addition to Dropbox, we examine seven other popular cloud storage applications across three different operating systems, and discover that their software also exhibits the traffic overuse problem.
As we show in § 4, the traffic overuse problem is exacerbated by "power users" who leverage cloud storage in situations it was not designed for. Specifically, cloud storage applications were originally designed for simple use cases like storing music and sharing photos. However, cloud storage applications are now used in place of traditional source control systems (Dropbox markets their Teams service specifically for this purpose [START_REF]DropboxTeams[END_REF]). The problem is especially acute in situations where files are shared between multiple users, since frequent, short updates by one user force all users to download updates. Similarly, users now employ cloud storage for even more advanced use cases like setting up databases [START_REF]Dropbox-as-a-Database, the tutorial[END_REF].
Deep Understanding of the Problem. To better understand the traffic overuse problem, we conduct extensive, carefully controlled experiments with the Dropbox application ( § 3). In our tests, we artificially generate streams of updates to synced files, while varying the size and frequency of updates. Although Dropbox is a closed-source application and its data packets are SSL encrypted, we are able to conduct black-box measurements of its network traffic by capturing packets with Wireshark [START_REF]Wireshark web site[END_REF].
By examining the time series of Dropbox's packets, coupled with some analysis of the Dropbox binary, we quantitatively explore the reasons why the ratio of session maintenance traffic to update traffic is poor during frequent, short file updates. In particular, we identify the operating system features that trigger Dropbox's URS mechanism, and isolate the series of steps that the application goes through before it uploads data to the cloud. This knowledge enables us to identify the precise update-frequency intervals and update sizes that lead to the generation of pathological session maintenance traffic. We reinforce these findings by examining traces from real Dropbox users in § 4.
UDS: Addressing the Traffic Overuse Problem. Guided by our measurement findings, we develop a solution to the traffic overuse problem called update-batched delayed synchronization (UDS) ( § 5). As depicted in Fig. 1, UDS acts as a middleware between the user's file storage system and a cloud storage client application (e.g. Dropbox). UDS is independent of any specific cloud storage service and requires no modifications to proprietary software, which makes UDS simple to deploy. Specifically, UDS instantiates a "SavingBox" folder that replaces the sync folder used by the cloud storage application. UDS detects and batches frequent, short data updates to files in the SavingBox and delays the release of updated files to the cloud storage application. In effect, UDS forces the cloud storage application to batch file updates that would otherwise trigger pathological behavior. In practice, the additional delay caused by batching file updates is very small (around several seconds), meaning that users are unlikely to notice, and the integrity of cloud-replicated files will not be adversely affected.
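As a minimal illustration of this delayed-release idea (not the UDS implementation described below), the following Java sketch watches a SavingBox directory with java.nio's WatchService, which on Linux is built on top of inotify, and only hands batched changes to the real sync folder after updates have been quiet for a few seconds. The quiet-period length, the non-recursive watch, and the releaseTo stub are simplifications.

import java.nio.file.*;
import java.util.concurrent.TimeUnit;
import static java.nio.file.StandardWatchEventKinds.*;

// Illustrative batching loop: release files to the cloud storage sync folder
// only after no further modification has been seen for QUIET_MS milliseconds.
class SavingBoxWatcher {
    static final long QUIET_MS = 5000;

    public static void main(String[] args) throws Exception {
        Path savingBox = Paths.get(args[0]);   // watched "SavingBox" folder
        Path syncFolder = Paths.get(args[1]);  // e.g. the Dropbox sync folder
        WatchService ws = FileSystems.getDefault().newWatchService();
        savingBox.register(ws, ENTRY_CREATE, ENTRY_MODIFY);

        long lastChange = 0;
        while (true) {
            WatchKey key = ws.poll(1, TimeUnit.SECONDS);
            if (key != null) {
                key.pollEvents();                   // drain pending events
                lastChange = System.currentTimeMillis();
                key.reset();
            } else if (lastChange != 0
                    && System.currentTimeMillis() - lastChange > QUIET_MS) {
                releaseTo(savingBox, syncFolder);   // push batched updates
                lastChange = 0;
            }
        }
    }

    static void releaseTo(Path savingBox, Path syncFolder) {
        // In the real system this step computes compressed diffs (rsync);
        // it is left abstract in this sketch.
    }
}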
To evaluate the performance of UDS, we implement a version for Linux. Our prototype uses the inotify kernel API [START_REF]inotify man[END_REF] to track changes to files in the SavingBox folder, while using rsync [9] to generate compressed diffs of modified files. Results from our prototype demonstrate that it reduces the overhead of session maintenance traffic to less than 30%, compared to 620% overhead in the worst case for Dropbox.

UDS+: Reducing CPU Overhead. Both URS and UDS have a drawback: in the case of frequent data updates, they generate considerable CPU overhead from constantly reindexing the updated file (i.e. splitting the file into chunks, checksumming each chunk, and calculating diffs from previous versions of each chunk). This re-indexing occurs because the inotify kernel API reports what file/directory has been modified on disk, but not how it has been modified. Thus, rsync (or an equivalent algorithm) must be run over the entire modified file to determine how it has changed.
To address this problem, we modify the Linux inotify API to return the size and location of file updates. This information is readily available inside the kernel; our modified API simply exposes this information to applications in a backwards compatible manner. We implement an improved version of our system, called UDS+, that leverages the new API ( § 6). Microbenchmark results demonstrate that UDS+ incurs significantly less CPU overhead than URS and UDS. Our kernel patch is available at https://www.dropbox.com/s/oor7vo9z49urgrp/inotify-patch.html.
Although convincing the Linux kernel community to adopt new APIs is a difficult task, we believe that our extension to inotify is a worthwhile addition to the operating system. Using the strace command, we tracked the system calls made by many commercial cloud storage applications (e.g. Dropbox, UbuntuOne, TeamDrive, SpiderOak, etc.) and confirmed that they all use the inotify API. Thus, there is a large class of applications that would benefit from merging our modified API into the Linux kernel.
Related Work
As the popularity of cloud storage services has quickly grown, so too have the number of research papers related to these services. Hu et al. performed the first measurement study on cloud storage services, focusing on Dropbox, Mozy, CrashPlan, and Carbonite [START_REF] Hu | The Good, the Bad and the Ugly of Consumer Cloud Storage[END_REF]. Their aim was to gauge the relative upload/download performance of different services, and they find that Dropbox performs best while Mozy performs worst.
Several studies have focused specifically on Dropbox. Drago et al. study the detailed architecture of the Dropbox service and conduct measurements based on ISPlevel traces of Dropbox network traffic [START_REF] Drago | Inside Dropbox: Understanding Personal Cloud Storage Services[END_REF]. The data from this paper is open-source, and we leverage it in § 4 to conduct trace-driven simulations of Dropbox behavior. Drago et al. further compare the system capabilities of Dropbox, Google Drive, SkyDrive, Wuala, and Amazon Cloud Drive, and find that each service has its limitations and advantages [START_REF] Drago | Benchmarking Personal Cloud Storage[END_REF]. A study by Wang et al. reveals that the scalability of Dropbox is limited by their use of Amazon's EC2 hosting service, and they propose novel mechanisms for overcoming these bottlenecks [START_REF] Wang | On the Impact of Virtualization on Dropboxlike Cloud File Storage/Synchronization Services[END_REF]. Dropbox cloud storage deduplication is studied in [START_REF] Harnik | Side Channels in Cloud Services: Deduplication in Cloud Storage[END_REF] [START_REF] Halevi | Proofs of Pwnership in Remote Storage Systems[END_REF], and some security/privacy issues of Dropbox are discussed in [START_REF] Mulazzani | Dark Clouds on the Horizon: Using Cloud Storage as Attack Vector and Online Slack Space[END_REF] [START_REF] Hu | The Good, the Bad and the Ugly of Consumer Cloud Storage[END_REF].
Amazon's cloud storage infrastructure has also been quantitatively analyzed. Burgen et al. measure the performance of Amazon S3 from a client's perspective [START_REF] Bergen | Client Bandwidth: The Forgotten Metric of Online Storage Providers[END_REF]. They point out that the perceived performance at the client is primarily dependent on the transfer bandwidth between the client and Amazon S3, rather than the upload bandwidth of the cloud. Consequently, the designers of cloud storage services must pay special attention to the client-side, perceived quality of service.
Li et al. develop a tool called "CloudCmp" [START_REF] Li | CloudCmp: Comparing Public Cloud Providers[END_REF] to comprehensively compare the performances of four major cloud providers: Amazon AWS [START_REF] Jackson | Performance Analysis of High Performance Computing Applications on the Amazon Web Services Cloud[END_REF], Microsoft Azure [START_REF] Calder | Windows Azure Storage: A Highly Available Cloud Storage Service with Strong Consistency[END_REF], Google AppEngine and Rackspace CloudServers. They find that the performance of cloud storage can vary significantly across providers. Specifically, Amazon S3 is observed to be more suitable for handling large data objects rather than small data objects, which is consistent with our observation in this paper.
Based on two large-scale network-attached storage file system traces from a realworld enterprise datacenter, Chen et al. conduct a multi-dimensional analysis of data access patterns at the user, application, file, and directory levels [START_REF] Chen | Design Implications for Enterprise Storage Systems via Multi-dimensional Trace Analysis[END_REF]. Based on this analysis, they derive 12 design implications for how storage systems can be specialized for specific data access patterns. Wallace et al. also present a comprehensive characterization of backup workloads in a large production backup system [START_REF] Wallace | Characteristics of Backup Workloads in Production Systems[END_REF]. Our work follows a similar methodology: study the data access patterns of cloud storage users and then leverage the knowledge to optimize these systems for improved performance.
Finally, there are more works related to Dropbox-like cloud storage services, such as the cloud-backed file systems [START_REF] Vrable | Cumulus: Filesystem Backup to the Cloud[END_REF] [29], delta compression [START_REF] Shilane | WAN Optimized Replication of Backup Datasets Using Stream-informed Delta Compression[END_REF], real-time compression [START_REF] Harnik | To Zip or Not to Zip: Effective Resource Usage for Real-Time Compression[END_REF], dependable cloud storage design [START_REF] Mahajan | Depot: Cloud Storage with Minimal Trust[END_REF] [START_REF] Bessani | Dependable and Secure Storage in a Cloud-of-clouds[END_REF], and economic issues like the market-oriented paradigm [START_REF] Buyya | Market-oriented Cloud Computing: Vision, Hype, and Reality for Delivering IT Services as Computing Utilities[END_REF] and the Storage Exchange model [START_REF] Placek | Storage Exchange: A Global Trading Platform for Storage Services[END_REF].
Understanding Cloud Storage Services
In this section, we present a brief overview of the data synchronization mechanism of cloud storage services, and perform fine-grained measurements of network usage by cloud storage applications. Although we focus on Dropbox as the most popular service, we demonstrate that our findings generalize to other services as well.
Data Synchronization Mechanism of Cloud Storage Services

Dropbox relies on three types of back-end servers. First, the index server authenticates each user, and stores meta-data about the user's files, including: the list of the user's files, their sizes and attributes, and pointers to where the files can be found on Amazon's S3 storage service. Second, file data is stored on Amazon's S3 storage service. The Dropbox client compresses files before storing them in S3, and modifications to synced files are uploaded to S3 as compressed, binary diffs. Third, each client maintains a connection to a beacon server. Periodically, the Dropbox client sends a message to the user's beacon server to report its online status, as well as receives notifications from the cloud (e.g. a shared file has been modified by another user and should be re-synced).
Relationship between the Disk and the Network. In addition to understanding the network connections made by Dropbox, we also seek to understand what activity on the local file system triggers updates to the Dropbox cloud. To measure the fine-grained behavior of the Dropbox application, we leverage the Dropbox command-line interface (CLI) [START_REF]Dropbox CLI (Command Line Interface[END_REF], which is a Python script that enables low-level monitoring of the Dropbox application. Using Dropbox CLI, we can programmatically query the status of the Dropbox application after adding files to or modifying files in the Dropbox Sync folder. By repeatedly observing the behavior of the Dropbox application in response to file system changes, we are able to discern the inner workings of Dropbox's updatetriggered real-time synchronization (URS) system. Fig. 3(a) depicts the basic operation of URS. First, a change is made on disk within the Dropbox Sync folder, e.g. a new file is created or an existing file is modified. The Dropbox application uses OS-specific APIs to monitor for changes to files and directories of interest. After receiving a change notification, the Dropbox application indexes or re-indexes the affected file(s). Next, the compressed file or binary diff is sent to Amazon S3, and the file meta-data is sent to the Dropbox cloud. This process is labeled as "Sync to the Cloud" in Fig. 3(a). After these changes have been committed in the cloud, the Dropbox cloud responds to the client with an acknowledgment message. In § 3.2, we investigate the actual length of time it takes to commit changes to the Dropbox cloud.
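The kind of low-level monitoring described here can be approximated with a few lines of code. The sketch below (not the scripts used in our study) simply polls the CLI's status subcommand from Java and logs state transitions; it assumes the CLI script is installed on the PATH as "dropbox".

import java.io.*;

// Poll `dropbox status` once per second and log whenever the reported state
// changes (e.g. from "Syncing..." to "Up to date").
class DropboxStatusPoller {
    public static void main(String[] args) throws Exception {
        String last = "";
        while (true) {
            Process p = new ProcessBuilder("dropbox", "status").start();
            StringBuilder status = new StringBuilder();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    status.append(line).append(' ');
                }
            }
            p.waitFor();
            String now = status.toString().trim();
            if (!now.equals(last)) {
                System.out.println(System.currentTimeMillis() + " " + now);
                last = now;
            }
            Thread.sleep(1000);
        }
    }
}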
Although the process illustrated in Fig. 3(a) appears to be straightforward, there are some hidden conditions that complicate the process. Specifically, not every file update triggers a cloud synchronization: there are two situations where file updates are batched by the Dropbox application before they are sent to the cloud.
The first scenario is depicted in Fig. 3(b). In this situation, a file is modified numerous times after a cloud sync has begun, but before the acknowledgment is received. URS only initiates one cloud sync at a time, thus file modifications made during the network wait interval get batched until the current sync is complete. After the acknowledgment is received, the batched file changes are immediately synced to the cloud.
The second scenario is shown in Fig. 3(c). In this situation, a file is modified several times in such rapid succession that URS does not have time to finish indexing the file. Dropbox cannot begin syncing changes to the cloud until after the file is completely indexed, thus these rapid edits prevent the client from sending any network traffic.
The two cases in Fig. 3(b) and 3(c) reveal that there are complicated interactions between on-disk activity and the network traffic sent by Dropbox. On one hand, a carefully timed series of file edits can generate only a single network transfer if they occur fast enough to repeatedly interrupt file indexing. On the other hand, a poorly timed series of edits can initiate an enormous number of network transfers if the Dropbox software is not able to batch them. Fig. 3(d) depicts this worst-case situation: each file edit (regardless of how trivially small) results in a cloud synchronization. In § 4, we demonstrate that this worst-case scenario actually occurs under real-world usage conditions.
Controlled Measurements
Our investigation of the low-level behavior of the Dropbox application reveals complex interactions between file writes on disk and Dropbox's network traffic to the cloud. In this section, we delve deeper into this relationship by performing carefully controlled microbenchmarks of cloud storage applications. In particular, our goal is to quantify the relationship between the frequency and size of file updates and the amount of traffic generated by cloud storage applications. As before, we focus on Dropbox; however, we also demonstrate that our results generalize to other cloud storage systems as well.
All of our benchmarks are conducted on two test systems located in the United States in 2012. The first is a laptop with a dual-core Intel processor @2.26 GHz, 2 GB of RAM, and a 5400 RPM, 250 GB hard disk drive (HDD). The second is a desktop with a dual-core Intel processor @3.0 GHz, 4 GB of RAM, and a 7200 RPM, 1 TB HDD. We conduct tests on machines with different hard drive rotational speeds because this impacts the time it takes for cloud storage software to index files. Both machines run Ubuntu Linux 12.04, the Linux Dropbox application version 0.7.1 [3], and the Dropbox CLI extension [START_REF]Dropbox CLI (Command Line Interface[END_REF]. Both machines are connected to a 4 Mbps Internet connection, which gives Dropbox ample resources for syncing files to the cloud.
File Creation. First, we examine the amount of network traffic generated by Dropbox when new files are created in the Sync folder. Table 1 shows the amount of traffic sent to the index server and to Amazon S3 when files of different sizes are placed in the Sync folder on the 5400 RPM machine. We use JPEG files for our tests (except the 1 byte test) because JPEGs are a compressed file format. This prevents the Dropbox application from being able to further compress data updates to the cloud. Table 1 reveals several interesting facets about Dropbox traffic. First, regardless of the size of the created file, the size of the meta-data sent to the index server remains almost constant. Conversely, the amount of data sent to Amazon S3 closely tracks the size of the created file. This result makes sense, since the actual file data (plus some checksumming and HTTP overhead) are stored on S3.
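The file-creation benchmark can be scripted roughly as sketched below. This is a simplified illustration, not our actual harness: the Sync folder is assumed to live at ~/Dropbox, os.urandom stands in for the JPEG files (both yield data that Dropbox cannot compress further), wait_until_synced() is the CLI helper sketched earlier, and the traffic volumes in Table 1 were measured externally with a packet capture, which the sketch does not reproduce.

import os
import time

DROPBOX_SYNC = os.path.expanduser("~/Dropbox")  # assumed Sync folder location

def create_test_file(size_bytes, name="test.bin"):
    # Write `size_bytes` of incompressible data into the Sync folder and
    # return the observed sync delay in seconds.
    path = os.path.join(DROPBOX_SYNC, name)
    with open(path, "wb") as f:
        f.write(os.urandom(size_bytes))
    start = time.time()
    # Note: a robust harness would first wait for the client to leave the
    # idle state before waiting for it to become idle again.
    wait_until_synced()
    return time.time() - start

# Example usage: delay = create_test_file(1 * 1024 * 1024)  # 1 MB test file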
The α column in Table 1 reports the ratio of total Dropbox traffic to the size of the new file. An α close to 1 is ideal, since it indicates that Dropbox has very little overhead beyond the size of the user's file. For small files, α is large because the fixed size of the index server meta-data dwarfs the actual size of the file. For larger files, α is more reasonable, since Dropbox's overhead is amortized over the file size.
The last column of Table 1 reports the average time taken to complete the cloud synchronization. These tests reveal that, regardless of file size, all cloud synchronizations take at least 4 seconds on average. This minimum time interval is dictated by Dropbox's cloud infrastructure, and is not a function of hard drive speed, Internet connection speed or RTT. For larger files, the sync delay grows commensurately larger. In these cases, the delay is dominated by the time it takes to upload the file to Amazon S3.
Short File Updates. The next set of experiments examine the behavior of Dropbox in the presence of short updates to an existing file. Each test starts with an empty file in the Dropbox Sync folder, and then periodically we append one random byte to the file until its size reaches 1 KB. Appending random bytes ensures that it is difficult for Dropbox to compress the binary diff of the file. Fig. 4 and 5 show the network traffic generated by Dropbox when 1 byte per second is appended on the 5400 RPM and 7200 RPM machines. Although each append is only 1 byte long, and the total file size never exceeds 1 KB, the total traffic sent by Dropbox reaches 1.2 MB on the 5400 RPM machine, and 2 MB on the 7200 RPM machine. The majority of Dropbox's traffic is due to meta-data updates to the index server. As shown in Table 1, each index server update is roughly 30 KB in size, which dwarfs the size of our file and each individual update. The traffic sent to Amazon S3 is also significant, despite the small size of our file, while Beacon traffic is negligible. Overall, Fig. 4 and 5 clearly demonstrate that under certain conditions, the amount of traffic generated by Dropbox can be several orders of magnitude larger than the amount of underlying user data. The faster, 7200 RPM hard drive actually makes the situation worse.
Timing of File Updates. As depicted in Fig. 3(b) and 3(c), the timing of file updates can impact Dropbox's network utilization. To examine the relationship between update timing and network traffic, we now conduct experiments where the time interval between 1 byte file appends is varied from 100 ms to 10 seconds.
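The short-update, update-timing, and later long-update experiments all use the same kind of append workload, so a single parameterized generator suffices. The sketch below is one way to produce it; the function name and defaults are ours and not part of any Dropbox tooling.

import os
import time

def append_workload(path, block_size=1, interval=1.0, total_size=1024):
    # Append `block_size` random (hence incompressible) bytes to `path`
    # every `interval` seconds until the file reaches `total_size` bytes.
    written = 0
    while written < total_size:
        with open(path, "ab") as f:   # each append is a separate file update
            f.write(os.urandom(block_size))
        written += block_size
        time.sleep(interval)

# 1 B/s appends up to 1 KB (Fig. 4 and 5): append_workload(path)
# Varying the inter-append interval (Fig. 6 and 7): interval=0.1 ... 10
# 50-100 KB appends per second up to 5 MB (Fig. 10):
#     append_workload(path, block_size=50 * 1024, total_size=5 * 2**20)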
Fig. 6 and 7 display the amount of network traffic generated by Dropbox during each experiment on the 5400 and 7200 RPM machines. The results show a clear trend: faster file updates result in less network traffic. This is due to the mechanisms highlighted in Fig. 3(b) and 3(c), i.e. Dropbox is able to batch updates that occur very quickly. This batching reduces the total number of meta-data updates that are sent to the index server, and allows multiple appended bytes in the file to be aggregated into a single binary diff for Amazon S3. Unfortunately, Dropbox is able to perform less batching as the time interval between appends grows. This is particularly evident for the 5 and 10 second tests in Fig. 6 and 7. This case represents the extreme scenario shown in Fig. 3(d), where almost every 1 byte update triggers a full synchronization with the cloud.
Indexing Time of Files. The results in Fig. 6 and 7 reveal that the timing of file updates impacts Dropbox's network traffic. However, at this point we do not know which factor is responsible for lowering network usage: is it the network waiting interval as in Fig. 3(b), the interrupted file indexing as in Fig. 3(c), or some combination of the two?
To answer this question, we perform microbenchmarks to examine how long it takes Dropbox to index files. As before, we begin with an empty file and periodically append one random byte until the file size reaches 1 KB. In these tests, we wait 5 seconds between appends, since this time is long enough that the indexing operation is never interrupted. We measure the time Dropbox spends indexing the modified file by monitoring the Dropbox process using Dropbox CLI. Fig. 8 shows the indexing time distribution for Dropbox. The median indexing times for the 5400 and 7200 RPM drives are ≈400 ms and ≈200 ms, respectively. The longest indexing time we observed was 960 ms. These results indicate that file updates that occur within ≈200-400 ms of each other (depending on hard drive speed) should interrupt Dropbox's indexing process, causing it to restart and batch the updates together.
Comparing the results from Fig. 6 and 7 to Fig. 8 reveals that indexing interrupts play a role in reducing Dropbox's network traffic. The amount of traffic generated by Dropbox steadily rises as the time between file appends increases from 200 to 500 ms. This corresponds to the likelihood of file appends interrupting the indexing process shown in Fig. 8. When the time between appends is 1 second, it is highly unlikely that sequential appends will interrupt the indexing process (the longest index we observed took 960 ms). Consequently, the amount of network traffic generated during the 1 second interval test is more than double the amount generated during the 500 ms test.
Although indexing interrupts are responsible for Dropbox's network traffic patterns at short time scales, they cannot explain the sharp increase in network traffic that occurs when the time between appends rises from 1 to 5 seconds. Instead, in these situations the limiting factor is the network synchronization delay depicted in Fig. 3(b). As shown in Fig. 9, one third of Dropbox synchronizations complete in 1-4 seconds, while another third complete in 4-7 seconds. Thus, increasing the time between file appends from 1 to 10 seconds causes the number of file updates that trigger network synchronization to rise (i.e. there is little batching of updates).
Long File Updates. So far, all of our results have focused on very short, 1 byte updates to files. We now seek to measure the behavior of Dropbox when updates are longer. As before, we begin by looking at the amount of traffic generated by Dropbox when a file in the Sync folder is modified. In these tests, we append blocks of randomized data to an initially empty file every second until the total file size reaches 5 MB. We vary the size of the data blocks between 50 KB and 100 KB, in increments of 10KB. Fig. 10 shows the results of the experiment for the 5400 RPM test machine. Unlike the results for the 1 byte append tests, the amount of network traffic generated by Dropbox in these experiments is comparable to the total file size (5 MB). As the number of kilobytes per second appended to the file increases, the ratio of network traffic to total file size falls. These results reiterate the point that the Dropbox application uses network resources more effectively when dealing with larger files.
Fig. 11 explores the relationship between the size of appended data and the file indexing time for Dropbox. There is a clear linear relationship between these two variables: as the size of the appended data increases, so does the indexing time of the file. This makes intuitive sense, since it takes more time to load larger files from disk. Fig. 11 indicates that interrupted indexing will be a more common occurrence with larger files, since they take longer to index, especially on devices with slower hard drives. Therefore, Dropbox will use network resources more efficiently when dealing with files on the order of megabytes in size. Similarly, the fixed overhead of updating the index server is easier to amortize over large files.
Other Cloud Storage Services and Operating Systems
We now survey seven additional cloud storage services to see if they also exhibit the traffic overuse problem. For this experiment, we re-run our 1 byte per second append test on each cloud storage application. As before, the maximum size of the file is 1 KB. All of our measurements are conducted on the following two test machines: a desktop with a dual-core Intel processor @3.0 GHz, 4 GB of RAM, and a 7200 RPM, 1 TB hard drive, and a MacBook Pro laptop with a dual-core Intel processor @2.5 GHz, 4 GB of RAM, and a 7200 RPM, 512 GB hard drive. The desktop dual boots Ubuntu 12.04 and Windows 7 SP1, while the laptop runs OS X Lion 10.7. We test each cloud storage application on all OSes it supports. Because 360 CloudDisk, Everbox, Kanbox, Kuaipan, and VDisk are Chinese services, we executed these tests in China. Dropbox, UbuntuOne, and IDriveSync were tested in the US.
Fig. 12 displays the results of our experiments, from which there are two important takeaways. First, we observe that the traffic overuse problem is pervasive across different cloud storage applications. All of the tested applications generate megabytes of traffic when faced with frequent, short file updates, even though the actual size of the file is only 1 KB. All of the applications perform as poorly as, or worse than, Dropbox. Second, we see that the traffic overuse problem exists whether the client is run on Windows, Linux, or OS X.
Summary
Below, we briefly summarize the observations and insights gained from the experimental results in this section.
-The Dropbox client only synchronizes data to the cloud after the local data has been indexed, and any prior synchronizations have been resolved. File updates that occur within 200-400 ms intervals are likely to be batched due to file indexing. Similarly, file updates that occur within a 4 second interval may be batched due to waiting for a previous cloud synchronization to finish.
-The traffic overuse problem occurs when there are numerous, small updates to files that occur at intervals on the order of several seconds. Under these conditions, cloud storage applications are unable to batch updates together, causing the amount of sync traffic to be several orders of magnitude larger than the actual size of the file.
-Our tests reveal that the traffic overuse problem is pervasive across cloud storage applications. The traffic overuse problem occurs on different OSes, and is actually made worse by faster hard drive speeds.
The Traffic Overuse Problem in Practice
The results in the previous section demonstrate that under controlled conditions, cloud storage applications generate large amounts of network traffic that far exceed the size of users' actual data. In this section, we address a new question: are users actually affected by the traffic overuse problem? To answer this question, we measure the characteristics of Dropbox network traffic in real-world scenarios. First, we analyze data from a large-scale trace of Dropbox traffic to illustrate the pervasiveness of the traffic overuse problem in the real world. To confirm these findings, we use data from the trace to drive a simulation on our test machines. Second, we experiment with two practical Dropbox usage scenarios that may trigger the traffic overuse problem. The results of these tests reveal that the amount of network traffic generated by Dropbox is anywhere from 11 to 130 times the size of data on disk. This confirms that the traffic overuse problem can arise under real-world use cases.
Analysis of Real-World Dropbox Network Traces
To understand the pervasiveness of the traffic overuse problem, we analyze network-level traces from a recent, large-scale measurement study of Dropbox [START_REF]Dropbox traces[END_REF]. This trace is collected at the ISP level, and involves over 10,000 unique IP addresses and millions of data updates to/from Dropbox. To analyze the behavior of each Dropbox user, we assume all traffic generated from a given IP address corresponds to a single Dropbox user (unfortunately, we are unable to disambiguate multiple users behind a NAT). For each user, we calculate the percentage of Dropbox requests and traffic that can be attributed to frequent, short file updates in a coarse-grained and conservative manner.
As mentioned in § 3.4, the exact parameters for frequent, short updates that trigger the traffic overuse problem vary from system to system. Thus, we adopt the following conservative metrics to locate a frequent, short update (U_i): 1) the inter-update time between updates U_i and U_{i-1} is <1 second, and 2) the size of (compressed) data associated with U_i is <1 KB.
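This classification can be computed per user from a parsed trace roughly as sketched below. The (timestamp, compressed size) tuple representation is an assumption about how the trace is organized after parsing, not the raw format of the data set.

def frequent_short_stats(updates, max_gap=1.0, max_size=1024):
    # `updates` is a time-sorted list of (timestamp_seconds, compressed_bytes)
    # for one user. Returns (fraction of requests, fraction of traffic) that
    # qualify as frequent, short updates under the conservative definition.
    short_reqs = short_bytes = total_bytes = 0
    for i, (ts, size) in enumerate(updates):
        total_bytes += size
        if i > 0 and ts - updates[i - 1][0] < max_gap and size < max_size:
            short_reqs += 1
            short_bytes += size
    n = len(updates)
    return (short_reqs / n if n else 0.0,
            short_bytes / total_bytes if total_bytes else 0.0)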
Figures 13 and 14 plot the percentage of requests and network traffic caused by frequent, short updates, respectively. In both figures, users are sorted in descending order by percentage of short, frequent requests/traffic. Fig. 13 reveals that for 11% of users, ≥10% of their Dropbox requests are caused by frequent, short updates. Fig. 14 shows that for 8.5% of users, ≥10% of their traffic is due to frequent, short updates. These results demonstrate that a significant portion of the network traffic from a particular population of Dropbox users is due to the traffic overuse problem.
Log Appending Experiment. To confirm that frequent, short updates are the cause of the traffic patterns observed in Figures 13 and 14, we chose one trace from an active user and recreated her/his traffic on our test machine (i.e. the same Ubuntu laptop used in § 3). Specifically, we play back the user's trace by writing the events to an empty log in the Dropbox Sync folder. We use the event timestamps from the trace to ensure that updates are written to the log at precisely the same rate that they actually occurred.
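A minimal playback loop is sketched below. The (timestamp, payload) event representation is again an assumed post-processing of the trace; the essential point is that each append is issued at the original inter-update spacing.

import time

def replay_trace(events, log_path):
    # `events` is a list of (timestamp_seconds, payload_bytes) from the trace.
    # Append each payload to the log at the pace recorded in the trace.
    wall_start = time.time()
    trace_start = events[0][0]
    with open(log_path, "ab") as log:
        for ts, payload in events:
            delay = (ts - trace_start) - (time.time() - wall_start)
            if delay > 0:
                time.sleep(delay)
            log.write(payload)
            log.flush()   # make the write visible to the Dropbox client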
The user chosen for this experiment uses Dropbox for four hours, with an average inter-update time of 2.6 seconds. Fig. 15 shows the amount of network traffic generated by Dropbox as well as the true size of the log file over time. By the end of the test, Dropbox generates 21 times as much traffic as the size of data on disk. This result confirms that an active real-world Dropbox user can trigger the traffic overuse problem.
Examining Practical Dropbox Usage Scenarios
In the previous section, we showed that real-world users are impacted by the traffic overuse problem. However, the traces do not tell us what high-level user behavior generates the observed frequent, short updates. In this section, we analyze two practical use cases for Dropbox that involve frequent, short updates.
HTTP File Download.
One of the primary use cases for Dropbox is sharing files with friends and colleagues. In some cases, it may be expedient for users to download files from the Web directly into the Dropbox Sync folder to share them with others. In this case, the browser writes chunks of the file to disk as pieces arrive via HTTP from the web. This manifests as repeated appends to the file at the disk-level. How does the Dropbox application react to this file writing pattern?
To answer this question, we used wget to download a compressed, 5 MB file into the Dropbox Sync folder. All network traffic was captured using Wireshark. As before, we use a compressed file for the test because this prevents Dropbox from being able to perform any additional compression while uploading data to the cloud. Fig. 16 plots the amount of traffic from the incoming HTTP download and the outgoing Dropbox upload. For this test, we fixed the download rate of wget at 80 Kbps. The 75 MB of traffic generated by Dropbox is far greater than the 5.5 MB of traffic generated by the HTTP download (5 MB file plus HTTP header overhead). Fig. 16 and Fig. 4 demonstrate very similar results: in both cases, Dropbox transmits at least one order of magnitude more data to the cloud than the data in the actual file.
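The experiment can be reproduced with standard tools, roughly as sketched below. This is an illustrative sketch rather than our exact setup: we captured traffic with Wireshark, whereas the sketch uses tcpdump, and the interface name, file names, and the DROPBOX_SYNC and wait_until_synced() helpers from the earlier sketches are assumptions.

import os
import subprocess

def http_download_test(url, rate="80k", out_name="download.bin",
                       iface="eth0", pcap="dropbox_upload.pcap"):
    # Download `url` into the Sync folder at a fixed rate while capturing
    # traffic for later analysis (requires capture privileges).
    capture = subprocess.Popen(["tcpdump", "-i", iface, "-w", pcap])
    try:
        subprocess.run(["wget", "--limit-rate=" + rate,
                        "-O", os.path.join(DROPBOX_SYNC, out_name), url],
                       check=True)
        wait_until_synced()   # let Dropbox finish uploading before stopping
    finally:
        capture.terminate()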
We now examine the behavior of the Dropbox software as the HTTP download rate is varied. Fig. 17 examines the ratio of network traffic to actual file size for Dropbox and HTTP as the HTTP download rate is varied. For the HTTP download, the ratio between the amount of incoming network traffic and the actual file size (5 MB) is constantly 1.1. The slight amount of overhead comes from the HTTP headers. For Dropbox, the ratio between outgoing traffic and file size varies between 30 and 1.1. The best case occurs when the HTTP download rate is high.
To explain why the network overhead for Dropbox is lowest when the HTTP download rate is high, we examine the interactions between wget and the hard drive. Fig. 18 shows the time between hard drive writes by wget, as well as the size of writes, as the HTTP download rate is varied. The left hand axis and solid line correspond to the inter-update time, while the right hand axis and dashed line depict the size of writes. The network overhead for Dropbox is lowest when the HTTP download rate is ≥200 Kbps. This corresponds to the scenario where file updates are written to disk every 300 ms, and the sizes of the updates are maximal (≈ 9 KB per update). Under these conditions, the Dropbox software is able to batch many updates together. Conversely, when the HTTP download rate is low, the inter-update time between hard disk writes is longer, and the size per write is smaller. Thus, Dropbox has fewer opportunities to batch updates, which triggers the traffic overuse problem.
In addition to our tests with wget, we have run identical experiments using Chrome and Firefox. The results for these browsers are similar to our results for wget: Dropbox generates large amounts of network traffic when HTTP download rates are low.
Collaborative Document Editing.
In this experiment, we simulate the situation where multiple users are collaboratively editing a document stored in the Dropbox Sync folder. Specifically, we place a 1 MB file full of random ASCII characters in the Dropbox Sync folder and share the file with a second Dropbox user. Each user edits the document by modifying or appending l random bytes at location x every t seconds, where l is a random integer between 1 and 10, and t is a random float between 0 and 10. Each user performs modifying and appending operations with the same probability (=0.5). If a user appends to the file, x is set to the end of the file.
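One collaborator's behavior can be simulated roughly as sketched below. The function name and the use of os.urandom are our own choices, and the sketch assumes the shared 1 MB file already exists in the Sync folder.

import os
import random
import time

def edit_document(path, duration_s=3600):
    # Every t in (0, 10) seconds, either modify l in [1, 10] random bytes at a
    # random offset x, or append them at the end, with equal probability.
    end = time.time() + duration_s
    while time.time() < end:
        time.sleep(random.uniform(0, 10))
        l = random.randint(1, 10)
        with open(path, "r+b") as f:
            size = f.seek(0, os.SEEK_END)
            if random.random() < 0.5 and size > 0:
                f.seek(random.randrange(size))   # modify at offset x
            # otherwise the position stays at the end: append
            f.write(os.urandom(l))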
We ran the collaborative document editing experiment for a single hour. During this period of time, we measured the amount of network traffic generated by Dropbox. By the end of the experiment, Dropbox had generated close to 130 MB of network traffic: two orders of magnitude more data than the size of the file (1 MB).
The UDS Middleware
In § 3, we demonstrate that the design of cloud storage applications gives rise to situations where they can send orders-of-magnitude more traffic than would be reasonably expected. We follow this up in § 4 by showing that this pathological application behavior can actually be triggered in real-world situations.
To overcome the traffic overuse problem, we implement an application-level mechanism that dramatically reduces the network utilization of cloud storage applications. We call this mechanism update-batched delayed synchronization (UDS). The high-level operation of UDS is shown in Fig. 1. Intuitively, UDS is implemented as a replacement for the normal cloud sync folder (e.g. the Dropbox Sync folder). UDS proactively detects and batches frequent, short updates to files in its "SavingBox" folder. These batched updates are then merged into the true cloud-sync folder, so they can be transferred to the cloud. Thus, UDS acts as a middleware that protects the cloud storage application from file update patterns that would otherwise trigger the traffic overuse problem.
In this section, we discuss the implementation details of UDS, and present benchmarks of the system. In keeping with the methodology in previous sections, we pair UDS with Dropbox when conducting experiments. Our benchmarks reveal that UDS effectively eliminates the traffic overuse problem, while only adding a few seconds of additional delay to Dropbox's cloud synchronization.
UDS Implementation
At a high level the design of UDS is driven by two goals. First, the mechanism should fix the traffic overuse problem by forcing the cloud storage application to batch file updates. Second, the mechanism should be compatible with multiple cloud storage services. This second goal rules out directly modifying an existing application (e.g. the Dropbox application) or writing a custom client for a specific cloud storage service.
To satisfy these goals, we implement UDS as a middleware layer that sits between the user and an existing cloud storage application. From the user's perspective, UDS acts just like any existing cloud storage service. UDS creates a "SavingBox" folder on the user's hard drive, and monitors the files and folders placed in the SavingBox. When the user adds new files to the SavingBox, UDS automatically computes a compressed version of the data. Similarly, when a file in the SavingBox folder is modified, UDS calculates a compressed, binary diff of the file versus the original. If a time period t elapses after the last file update, or the total size of file updates surpasses a threshold c, then UDS pushes the updates over to the true cloud sync folder (e.g. the Dropbox Sync folder). At this point, the user's cloud storage application (e.g. Dropbox) syncs the new/modified files to the cloud normally. In the event that files in the true cloud sync folder are modified (e.g. by a remote user acting on a shared file), UDS will copy the updated files to the SavingBox. Thus, the contents of the SavingBox are always consistent with content in the true cloud-synchronization folder.
As a proof of concept, we implement a version of UDS for Linux. We tested our implementation by pairing it with the Linux Dropbox client. However, we stress that it would be trivial to reconfigure UDS to work with other cloud storage software as well (e.g. Google Drive, SkyDrive, and UbuntuOne). Similarly, there is nothing fundamental about our implementation that prevents it from being ported to Windows, OS X, or Linux derivatives such as Android.
Implementation Details. Our UDS implementation uses the Linux inotify APIs to monitor changes to the SavingBox folder. Specifically, UDS calls inotify_add_watch() to set up a callback that is invoked by the kernel whenever files or folders of interest are modified by the user. Once the callback is invoked, UDS writes information such as the type of event (e.g. file created, file modified, etc.) and the file path to an event log. If the target file is new, UDS computes the compressed size of the file using gzip. However, if the target file has been modified, then UDS uses the standard rsync tool to compute a binary diff between the updated file and the original version in the cloud-synchronization folder. UDS then computes the compressed size of the binary diff.
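For illustration, the sketch below estimates the compressed size of an update in the common append-only case by compressing only the bytes beyond the previously observed file length. This is a deliberate simplification of UDS's rsync-based binary diff, not the UDS implementation itself.

import gzip
import os

_last_size = {}   # previously observed length of each watched file

def estimate_update_size(path):
    # Return an estimate of the compressed bytes contributed by this update.
    size = os.path.getsize(path)
    offset = _last_size.get(path, 0)
    if size < offset:
        offset = 0          # file truncated or rewritten: treat it as new
    with open(path, "rb") as f:
        f.seek(offset)
        fresh = f.read(size - offset)
    _last_size[path] = size
    return len(gzip.compress(fresh))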
Periodically, UDS pushes new/modified files from the SavingBox to the true cloud sync folder. In the case of new files, UDS copies them entirely to the cloud sync folder.
Alternatively, in the case of modified files, the binary diff previously computed by UDS is applied to the copy of the file in the cloud sync folder.
Internally, UDS maintains two variables that determine how often new/modified files are pushed to the true cloud sync folder. Intuitively, these two variables control the frequency of batched updates to the cloud. The first variable is a timer: whenever a file is created/modified, the timer gets reset to zero. If the timer reaches a threshold value t, then all new/modified files in the SavingBox are pushed to the true cloud sync folder.
The second variable is a byte counter that ensures frequent, small updates to files are batched together into chunks of at least some minimum size before they get pushed to the cloud. Specifically, UDS records the total size of all compressed data that has not been pushed to cloud storage. If this counter exceeds a threshold c, then all new/modified files in the SavingBox are pushed to the true cloud-synchronization folder. Note that not all cloud storage software uses gzip for file compression; thus, UDS's byte counter is an estimate of the amount of data the cloud storage software will send on the network. Although UDS's estimate may not perfectly reflect the behavior of the cloud storage application, we show in the next section that this does not impact UDS's performance.
As a fail-safe mechanism, UDS includes a second timer that pushes updates to the cloud on a coarse timeframe. This fail-safe is necessary because pathological file update patterns could otherwise block UDS's synchronization mechanisms. For example, consider the case where bytes are appended to a file. If c is large, then it may take some time before the threshold is breached. Similarly, if the appends occur at intervals < t, the first timer will always be reset before the threshold is reached. In this practically unlikely but possible scenario, the fail-safe timer ensures that the append operations cannot perpetually block cloud synchronization. In our UDS implementation, the fail-safe timer automatically causes UDS to push updates to the cloud every 30 seconds.
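Putting the idle timer, the byte counter, and the fail-safe together, the batching policy can be sketched as follows. The callables next_event and push_pending are placeholders standing in for UDS's inotify-based monitoring and its merge into the real cloud sync folder; they are assumptions of this sketch, not real UDS or inotify APIs.

import time

C_BYTES = 250 * 1024    # byte-counter threshold c (see the next section)
T_IDLE = 5.0            # idle-timer threshold t, in seconds
T_FAILSAFE = 30.0       # fail-safe period, in seconds

def batching_loop(next_event, push_pending):
    # next_event(timeout) returns the estimated compressed size of one file
    # update, or None if no update arrived within `timeout` seconds.
    # push_pending() merges the SavingBox into the true cloud sync folder.
    pending = 0
    last_push = time.time()
    while True:
        update = next_event(timeout=T_IDLE)
        if update is not None:
            pending += update
        if pending and (update is None                      # idle for t
                        or pending >= C_BYTES               # counter full
                        or time.time() - last_push >= T_FAILSAFE):
            push_pending()
            pending = 0
            last_push = time.time()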
Configuring and Benchmarking UDS
In this section, we investigate two aspects of UDS. First, we establish values for the UDS variables c and t that offer a good tradeoff between reduced network traffic and low synchronization delay. Second, we compare the performance of UDS to the stock Dropbox application by re-running our earlier benchmarks. All experiments in this section are conducted on a laptop with a dual-core Intel processor @2.26 GHz, 2 GB of RAM, and a 5400 RPM, 250 GB hard drive. Our results show that when properly configured, UDS eliminates the traffic overuse problem.
Choosing Threshold Values. Before we can benchmark the performance of UDS, the values of the time threshold t and byte counter threshold c must be established. Intuitively, these variables represent a tradeoff between network traffic and timeliness of updates to the cloud. On one hand, a short time interval and a small byte counter would cause UDS to push updates to the cloud very quickly. This reduces the delay between file modifications on disk and syncing those updates to the cloud, at the expense of increased traffic. Conversely, a long timer and large byte counter causes many file updates to be batched together, reducing traffic at the expense of increased sync delay.
What we want is to locate a good tradeoff between network traffic and delay. To locate this point, we conduct an experiment: we append random bytes to an empty file in the SavingBox folder until its size reaches 5 MB while recording how much network traffic is generated by UDS (by forwarding updates to Dropbox) and the resulting sync delay. We run this experiment several times, varying the size of the byte counter threshold c to observe its impact on network traffic and sync delay.
Fig. 19 and 20 show the results of this experiment. As expected, UDS generates a greater amount of network traffic but incurs shorter sync delay when c is small because there is less batching of file updates. The interesting feature of Fig. 19 is that the amount of network traffic quickly declines and then levels off. The ideal tradeoff between network traffic and delay occurs when c = 250 KB; any smaller and network traffic quickly rises, any larger and there are diminishing returns in terms of enhanced network performance. On the other hand, Fig. 20 illustrates an approximately linear relationship between UDS's batching threshold and the resulting sync delay, so there is no especially "good" threshold c in terms of the sync delay. Therefore, we use c = 250 KB for the remainder of our experiments.
We configure the timer threshold t to be 5 seconds. This value is chosen as a qualitative tradeoff between network performance and user perception. Longer times allow for more batching of updates, however long delays also negatively impact the perceived performance of cloud storage systems (i.e. the time between file updates and availability of that data in the cloud). We manually evaluated our UDS prototype, and determined that a 5 second delay does not negatively impact the end-user experience of cloud storage systems, but is long enough to mitigate the traffic overuse problem.
Although the values for c and t presented here were calculated on a specific machine configuration, we have conducted the same battery of tests on other, faster machines as well. Even when the speed of the hard drive is increased, c = 250 KB and t = 5 seconds are adequate to prevent the traffic overuse problem.
UDS's Performance vs. Dropbox. Having configured UDS's threshold values, we can now compare its performance to a stock instance of Dropbox. To this end, we re-run 1) the wget experiment and 2) the active user's log file experiment from § 4. Fig. 21 plots the total traffic generated by a stock instance of Dropbox, UDS (which batches updates before pushing them to Dropbox), and the amount of real data downloaded over time by wget. The results for Dropbox are identical to those presented in Fig. 16, and the traffic overuse problem is clearly visible. In contrast, the amount of traffic generated by UDS is only slightly more than the real data traffic. By the end of the HTTP download, UDS has generated 6.2 MB of traffic, compared to the true file size of 5 MB.
Fig. 22 plots the results of the log file append test. As in the previous experiment, the network traffic of UDS is only slightly more than the true size of the log file, and much less than that of Dropbox. These results clearly demonstrate that UDS's batching mechanism is able to eliminate the traffic overuse problem.
UDS+: Reducing CPU Utilization
In the previous section, we demonstrated how our UDS middleware successfully reduces the network usage of cloud storage applications. In this section, we take our evaluation and our system design a step further by analyzing CPU usage. First, we analyze the CPU usage of Dropbox and find that it uses significant resources to index files (up to one full CPU core for megabyte-sized files). In contrast, our UDS software significantly reduces the CPU overhead of cloud storage. Next, we extend the kernel-level APIs of Linux in order to further improve the CPU performance of UDS. We call this modified system UDS+. We show that by extending Linux's existing APIs, the CPU overhead of UDS (and by extension, all cloud storage software) can be further reduced.
CPU Usage of Dropbox and UDS
We begin by evaluating the CPU usage characteristics of the Dropbox cloud storage application by itself (i.e. without the use of UDS). As in § 3, our test setup is a generic laptop with a dual-core Intel processor @2.26 GHz, 2 GB of RAM, and a 5400 RPM, 250 GB hard drive. On this platform, we conduct a benchmark where 2 KB of random bytes are appended to an initially empty file in the Dropbox Sync folder every 200 ms for 1000 seconds. Thus, the final size of the file is 10 MB. During this process, we record the CPU utilization of the Dropbox process.
Fig. 23 shows the percentage of CPU resources being used by the Dropbox application over the course of the benchmark. The Dropbox application is single-threaded, thus it only uses resources on one of the laptop's two CPUs. There are two main findings visible in Fig. 23. First, the Dropbox application exhibits two large jumps in CPU utilization that occur around 400 seconds (4 MB file size) and 800 seconds (8 MB). These jumps occur because the Dropbox application segments files into 4 MB chunks [START_REF] Mulazzani | Dark Clouds on the Horizon: Using Cloud Storage as Attack Vector and Online Slack Space[END_REF]. Second, the average CPU utilization of Dropbox is 54% during the benchmark, which is quite high. There are even periods when Dropbox uses 100% of the CPU.
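The CPU utilization of the Dropbox process can be sampled directly from /proc, roughly as sketched below. This is an illustrative sketch rather than our measurement tool; it assumes a Linux /proc filesystem and reports utilization as a fraction of one core per sampling interval.

import os
import time

CLK_TCK = os.sysconf("SC_CLK_TCK")   # clock ticks per second

def cpu_ticks(pid):
    # Sum of user and system time (in clock ticks) consumed by `pid`.
    with open("/proc/%d/stat" % pid) as f:
        stat = f.read()
    fields = stat[stat.rindex(")") + 2:].split()   # skip "pid (comm) "
    return int(fields[11]) + int(fields[12])       # utime, stime

def sample_cpu(pid, duration_s=1000, interval_s=1.0):
    # Yield one utilization sample (fraction of a single core) per interval.
    prev = cpu_ticks(pid)
    for _ in range(int(duration_s / interval_s)):
        time.sleep(interval_s)
        cur = cpu_ticks(pid)
        yield (cur - prev) / CLK_TCK / interval_s
        prev = cur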
CPU usage of UDS. Next, we evaluate the CPU usage of our UDS middleware when paired with Dropbox. We conduct the same benchmark as before, except in this case the target file is placed in UDS's SavingBox folder. Fig. 24 shows the results of the benchmark (note that the scale of the y-axis has changed from Fig. 23). Immediately, it is clear that the combination of UDS and Dropbox uses much less CPU than Dropbox alone: on average, CPU utilization is just 12% during the UDS/Dropbox benchmark. Between 6% and 20% of CPU resources are used by UDS (specifically, by rsync), while the Dropbox application averages 2% CPU utilization. The large reduction in overall CPU utilization is due to UDS's batching of file updates, which reduces the frequency and amount of work done by the Dropbox application. The CPU usage of UDS does increase over time as the size of the target file grows.
Reducing the CPU Utilization of UDS
Although UDS significantly reduces the CPU overhead of using cloud storage software, we pose the question: can the system still be further improved? In particular, while developing UDS, we noticed a shortcoming in the Linux inotify API: the callback that reports file modification events includes parameters stating which file was changed, but not where the modification occurred within the file or how much data was written. These two pieces of information are very important to all cloud storage applications, since they capture the byte range of the diff from the previous version of the file. Currently, cloud storage applications must calculate this information independently, e.g. using rsync.
Our key insight is that these two pieces of meta-information are available inside the kernel; they just are not exposed by the existing Linux inotify API. Thus, having the kernel report where and how much a file is modified imposes no additional overhead on the kernel, but it would save cloud storage applications the trouble of calculating this information independently. To implement this idea, we changed the inotify API of the Linux kernel to report: 1) the byte offset of file modifications, and 2) the number of bytes that were modified. Making these changes requires altering the inotify and fsnotify [7] functions listed in Table 2 (fsnotify is the subsystem that inotify is built on). Two integer variables are added to the fsnotify_event and inotify_event structures to store the additional file meta-data. We also updated kernel functions that rely directly on the inotify and fsnotify APIs. In total, we changed around 160 lines of code in the kernel, spread over eight functions.
UDS+.
Having updated the kernel inotify API, we created an updated version of UDS, called UDS+, that leverages the new API. The implementation of UDS+ is significantly simpler than that of UDS, since it no longer needs to use rsync to compute binary diffs. Instead, UDS+ simply leverages the "where" and "how much" information provided by the new inotify APIs. Based on this information, UDS+ can read the fresh data from the disk, compress it using gzip, and update the byte counter.
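Assuming the extended notification carries the path, byte offset, and length of each write, the UDS+ handler reduces to reading and compressing exactly those bytes, as sketched below. The delivery of such events requires the modified kernel described above and is not available on a stock kernel; the function signature here is our own illustration.

import gzip

def handle_extended_event(path, offset, length, counter):
    # `offset` and `length` come from the extended inotify event (a kernel
    # modification assumed by UDS+); read only the fresh bytes and add their
    # compressed size to the running byte counter.
    with open(path, "rb") as f:
        f.seek(offset)
        fresh = f.read(length)
    return counter + len(gzip.compress(fresh))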
To evaluate the performance improvement of UDS+, we re-run the earlier benchmark scenario using UDS+ paired with Dropbox, and present the results in Fig. 25. UDS+ performs even better than UDS: the average CPU utilization during the UDS+ test is only 7%, compared to 12% for UDS. UDS+ exhibits more even and predictable CPU utilization than UDS. Furthermore, the CPU usage of UDS+ increases much more slowly over time, since it no longer relies on rsync.
Conclusion
In this paper, we identify a pathological issue that causes cloud storage applications to upload large amounts of traffic to the cloud: many times more data than the actual content of the user's files. We call this issue the traffic overuse problem.
We measure the traffic overuse problem under synthetic and real-world conditions to understand the underlying causes that trigger this problem. Guided by this knowledge, we develop UDS: a middleware layer that sits between the user and the cloud storage application, to batch file updates in the background before handing them off to the true cloud storage software. UDS significantly reduces the traffic overhead of cloud storage applications, while only adding several seconds of delay to file transfers to the cloud. Importantly, UDS is compatible with any cloud storage application, and can easily be ported to different OSes.
Finally, by making proof-of-concept modifications to the Linux kernel that can be leveraged by cloud storage services to increase their performance, we implement an enhanced version of our middleware, called UDS+. UDS+ leverages these kernel enhancements to further reduce the CPU usage of cloud storage applications.
Fig. 1. High-level design of the UDS middleware.
Fig. 2. Dropbox data sync mechanism.
Fig. 3. Diagrams showing the low-level behavior of the Dropbox application following a file update. (a) shows the fundamental operations, while (b) and (c) show situations where file updates are batched together. (d) shows the worst-case scenario where no file updates are batched together.
Fig. 4. Dropbox traffic corresponding to rapid, 1 byte appends to a file (5400 RPM HDD).
Fig. 5. Dropbox traffic corresponding to rapid, 1 byte appends to a file (7200 RPM HDD).
Fig. 6. Dropbox traffic as the time between 1 byte appends is varied (5400 RPM HDD).
Fig. 7. Dropbox traffic as the time between 1 byte appends is varied (7200 RPM HDD).
Fig. 8. Distribution of Dropbox file indexing time. Total file size is 1 KB.
Fig. 9. Distribution of sync delays. Total file size is 1 KB.
Fig. 10. Network traffic as the speed of file appends is varied.
Fig. 11. File indexing time as the total file size is varied.
Fig. 12. Total network traffic for various cloud storage applications running on three OSes after appending 1 byte to a file 1024 times.
Fig. 13. Each user's percentage of frequent, short network requests, in descending order.
Fig. 14. Each user's percentage of frequent, short network traffic, in descending order.
Fig. 15. Dropbox network traffic and the actual size of the log file over time during the trace playback experiment.
Fig. 16. Dropbox upload traffic as a 5 MB file is downloaded into the Sync folder via HTTP.
Fig. 17. Ratio of network traffic to real file size for the Dropbox upload and HTTP download.
Fig. 18. Average inter-update time and data update length as HTTP download rate varies.
Fig. 19. Network traffic corresponding to various thresholds of the UDS byte counter c.
Fig. 20. Sync delay corresponding to various thresholds of the UDS byte counter c.
Fig. 21. Dropbox and UDS traffic as a 5 MB file is downloaded into the Sync folder.
Fig. 22. Dropbox and UDS traffic corresponding to an active user's log file backup process.
Fig. 23. Original CPU utilization of Dropbox.
Fig. 24. CPU utilization of UDS and Dropbox.
Fig. 25. CPU utilization of UDS+ and Dropbox.
Table 1. Network traffic generated by adding new files to the Dropbox Sync folder.

New File Size | Index Server Traffic | Amazon S3 Traffic | α | Sync Delay (s)
1 B | 29.8 KB | 6.5 KB | 38200 | 4.0
1 KB | 31.3 KB | 6.8 KB | 40.1 | 4.0
10 KB | 31.8 KB | 13.9 KB | 4.63 | 4.1
100 KB | 32.3 KB | 118.7 KB | 1.528 | 4.8
1 MB | 35.3 KB | 1.2 MB | 1.22 | 9.2
10 MB | 35.1 KB | 11.5 MB | 1.149 | 54.7
100 MB | 38.5 KB | 112.6 MB | 1.1266 | 496.3
Table 2. Modified kernel functions.

fsnotify_create_event()
fsnotify_modify()
fsnotify_access()
inotify_add_watch()
copy_event_to_user()
vfs_write()
nfsd_vfs_write()
compat_do_readv_writev()
Acknowledgements
This work is supported in part by the National Basic Research Program of China (973) Grant. 2011CB302305, the NSFC Grant. 61073015, 61190110 (China Major Program), and 61232004. Prof. Ben Y. Zhao is supported in part by the US NSF Grant. IIS-1321083 and CNS-1224100. Prof. Zhi-Li Zhang is supported in part by the US NSF Grant. CNS-1017647 and CNS-1117536, the DTRA Grant. HDTRA1-09-1-0050, and the DoD ARO MURI Award W911NF-12-1-0385.
We appreciate the instructive comments made by the reviewers, and the helpful advice offered by Prof. Baochun Li (University of Toronto), Prof. Yunhao Liu (Tsinghua University), Dr. Tianyin Xu (UCSD), and the 360 CloudDisk development team.
https://inria.hal.science/hal-01480786/file/978-3-642-45065-5_21_Chapter.pdf | 2013
João M Silva
email: [email protected]
José Simão
email: [email protected]
Luís Veiga
email: [email protected]
Ditto -Deterministic Execution Replayability-as-a-Service for Java VM on Multiprocessors
Keywords: Deterministic Replay, Concurrency, Debugging, JVM
Alongside the rise of multi-processor machines, concurrent programming models have grown to near ubiquity. Programs built on these models are prone to bugs with rare pre-conditions, arising from unanticipated interactions between parallel tasks. Replayers can be efficient on uni-processor machines, but struggle with unreasonable overhead on multi-processors, both concerning slowdown of the execution time and size of the replay log. We present Ditto, a deterministic replayer for concurrent JVM applications executed on multi-processor machines, using both state-of-the-art and novel techniques. The main contribution of Ditto is a novel pair of recording and replaying algorithms that: (a) serialize memory accesses at the instance field level, (b) employ partial transitive reduction and program-order pruning on-the-fly, (c) take advantage of TLO static analysis, escape analysis and JVM compiler optimizations to identify thread-local accesses, and (d) take advantage of a lightweight checkpoint mechanism to avoid large logs in long running applications with fine granularity interactions, and for faster replay to any point in execution. The results show that Ditto out-performs previous deterministic replayers targeted at Java programs.
Introduction
The transition to the new concurrent paradigm of programming has not been the easiest, as developers struggle to visualize all possible interleavings of parallel tasks that interact through shared memory. Concurrent programs are harder to build than their sequential counterparts, but they are arguably even more challenging to debug. The difficulty in anticipating all possible interactions between parallel threads makes these programs especially prone to the appearance of bugs triggered by rare pre-conditions, capable of evading detection for long periods. Moreover, the debugging methodologies developed over the years for sequential programs fall short when applied to concurrent ones. Cyclic debugging, arguably the most common methodology, depends on repeated bug reproduction to find its cause, requiring the fault to be deterministic given the same input. The inherent memory non-determinism of concurrent programs breaks this assumption of fault-determinism, rendering cyclic debugging inefficient, as most time and resources are taken up by bug reproduction attempts [START_REF] Lu | Learning from mistakes: a comprehensive study on real world concurrency bug characteristics[END_REF]. Furthermore, any trace statements, added to the program in an effort to learn more about the problem, can actually contribute further to the fault's evasiveness. Hence, cyclic debugging becomes even less efficient in the best case, and ineffective in the worst.
Memory non-determinism, inherent to concurrent programs, results from the occurrence of data races, i.e., unsynchronized accesses to the same shared memory location in which at least one is a write operation. The outcomes of these races must be reproduced in order to perform a correct execution replay. In uniprocessors, these outcomes can be derived from the outcomes of a much smaller subset of races, the synchronization races, used in synchronization primitives to allow threads to compete for access to shared resources. Efficient deterministic replayers have been developed taking advantage of this observation [START_REF] Choi | Deterministic replay of java multithreaded applications[END_REF][START_REF] Ronsse | Recplay: a fully integrated practical record/replay system[END_REF][START_REF] Georges | Jarec: a portable record/replay environment for multi-threaded java applications[END_REF][START_REF] Dunlap | Execution replay of multiprocessor virtual machines[END_REF].
Replaying executions on multi-processors is much more challenging, because the outcomes to synchronization races are no longer enough to derive the outcomes to all data races. The reason is that while parallelism in uniprocessors is an abstraction provided by the task scheduler, in multi-processor machines it has a physical significance. In fact, knowing the task scheduling decisions [START_REF] Russinovich | Replay for concurrent non-deterministic sharedmemory applications[END_REF][START_REF] Geels | Replay debugging for distributed applications[END_REF] does not allow us to resolve races between threads concurrently executing in different processors. Deterministic replayers have difficulties with unreasonable overhead when applied in this context, as the instructions that can lead to data races make up a significant fraction of the instructions executed by a typical application. Currently there are four distinct approaches to deal with this open research problem, discussed in Section 2. Even using techniques to prune the events of interest, long-running applications can make the log of events grow to an unmanageable size. To avoid this, a checkpointing mechanism can also be used to transparently save the state of the program, with the events before the checkpoint truncated from the log. The last saved state is potentially smaller than the original untruncated log, and can also be used as a starting point for a future replay, allowing for faster replay.
In this paper, we present Ditto, our deterministic replayer for unmodified user-level applications executed by the JVM on multi-processor machines. It integrates state-of-the-art and novel techniques to improve upon previous work. The main contributions that make Ditto unique are: (a) A novel pair of logical clock-based [START_REF] Lamport | Time, clocks, and the ordering of events in a distributed system[END_REF] recording and replaying algorithms. This allows us to leverage the semantic differences between load and store memory accesses to reduce trace data and maximize replay-time concurrency. Furthermore, we serialize memory accesses at the finest possible granularity, distinguishing between instance fields and array indexes; (b) Reduced trace and log space. We use a constraint pruning algorithm based on program order and partial transitive reduction to reduce the amount of trace data on-the-fly, and a checkpointing mechanism to employ in long-running applications; (c) A trace file optimization that greatly reduces the size of logical clock-based traces. Though we discuss and implement Ditto in the context of a JVM runtime, its underlying techniques may be directly applied to other high-level, object-oriented runtime platforms, such as the Common Language Runtime (CLR).
We implemented Ditto on top of the open-source JVM implementation Jikes RVM (Research Virtual Machine). Ditto is evaluated to assess its replay correctness, bug reproduction capabilities and performance. Experimental results show that Ditto consistently out-performs previous state-of-the-art deterministic replayers targeted at Java programs in terms of record-time overhead, trace file size and replay-time overhead. It does so across multiple axes of application properties, namely number of threads, number of processors, load to store ratio, number of memory accesses, number of fields per shared object, and number of shared objects.
The rest of the paper is organized as follows: Section 2 describes some instances of related work; Section 3 explains the base design and algorithms of Ditto; Section 4 presents fundamental optimizations; Section 5 discusses some implementation related details; Section 6 presents and analyzes evaluation results; and Section 7 concludes the paper and offers our thoughts on the directions of future work.
Related Work
Deterministic replayers for multi-processor executions can be divided into four categories in terms of the approach taken to tackle the problem of excessive overhead. Some systems replay solely synchronization races, thus guaranteeing a correct replay up until the occurrence of a data race. RecPlay [START_REF] Ronsse | Recplay: a fully integrated practical record/replay system[END_REF] and JaRec [START_REF] Georges | Jarec: a portable record/replay environment for multi-threaded java applications[END_REF] are two similar systems that use logical clock-based recording algorithms to trace a partial ordering over all synchronization operations. RecPlay is capable of detecting data races during replay. Nonetheless, we believe the assumption that programs are perfectly synchronized severely limits the effectiveness of these solutions as debugging tools in multi-processor environments.
Researchers have developed specialized hardware-based solutions. FDR [START_REF] Xu | A "flight data recorder" for enabling full-system multiprocessor deterministic replay[END_REF] extends the cache coherence protocol to propagate causality information and generate an ordering over memory accesses. DeLorean [START_REF] Montesinos | Delorean: Recording and deterministically replaying shared-memory multiprocessor execution efficiently[END_REF] forces processors to execute instructions in chunks that are only committed if they do not conflict with other chunks in terms of memory accesses. Hence, the order of memory accesses can be derived from the order of chunk commits. Though efficient, these techniques have the drawback of requiring special hardware.
A more recent proposal is to use probabilistic replay techniques that explore the trade-off between recording overhead reduction through partial execution tracing and relaxation of replay guarantees. PRES partially traces executions and performs an offline exploration phase to find an execution that conforms with the partial trace and with user-defined conditions [START_REF] Park | Pres: probabilistic replay with execution sketching on multiprocessors[END_REF]. ODR uses a formulasolver and a partial execution trace to find executions that generate the same output as the original [START_REF] Altekar | Odr: output-deterministic replay for multicore debugging[END_REF]. These techniques show a lot of potential as debugging tools, but are unable to put an upper limit on how long it takes for a successful replay to be performed, though the problem is minimized by fully recording replay attempts.
LEAP is a relevant Java deterministic replayer that employs static analysis to identify memory accesses performed on actual thread-shared variables, hence reducing the amount of monitored accesses [START_REF] Huang | Leap: lightweight deterministic multi-processor replay of concurrent java programs[END_REF]. Because LEAP's recording algorithm associates access vectors with fields, it cannot distinguish accesses to the same field of different objects. In workloads where there are many objects of a single type but they are not shared among threads, this diminishes the concurrency of the recording and replaying mechanisms. ORDER [START_REF] Yang | Order: object centric deterministic replay for java[END_REF] is, like Ditto, an object-centric recorder. From a design point of view, ORDER lacks support for online pruning of events and a checkpoint mechanism for faster replay. Regarding the current implementation, its baseline code base (Apache Harmony) is now deprecated, while Ditto was developed in a research-oriented, yet production-quality JVM that is widely supported by the research community.
Deterministic replay can also be used as an efficient means for a fault-tolerant system to maintain replicas and recover after experiencing a fault [START_REF] Bressoud | Hypervisor-based fault tolerance[END_REF][START_REF] Napper | A fault-tolerant java virtual machine[END_REF].
Ditto -System Overview
Ditto must record the outcomes of all data races in order to support reproduction of any execution on multi-processor machines. Data races arise from non-synchronized shared memory accesses in which at least one is a write operation. Thus, to trace outcomes to data races, one must monitor shared memory accesses. The JVM's memory model limits the set of instructions that can manipulate shared memory to three groups: (i) accesses to static fields, (ii) accesses to object fields, and (iii) accesses to array fields of any type.
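To make this concrete, the following small Java class (purely illustrative; the class and field names are ours) exercises all three kinds of accesses that must be intercepted.

```java
// Illustrative only: a class whose methods trigger the three kinds of
// shared-memory accesses a recorder must intercept.
public class SharedAccessKinds {
    static int staticHits;                  // (i) static field (getstatic/putstatic)
    int instanceHits;                       // (ii) object field (getfield/putfield)
    final int[] slotHits = new int[16];     // (iii) array fields (iaload/iastore)

    void touch(int slot) {
        staticHits++;                       // static field load + store
        this.instanceHits++;                // object field load + store
        slotHits[slot]++;                   // array element load + store
    }
}
```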
In addition to shared memory accesses, it is mandatory that we trace the order in which synchronization operations are performed. Though these events have no effect on shared memory, an incorrect ordering can cause the replayer to deadlock when shared memory accesses are performed inside critical sections. They need not, however, be ordered with shared memory accesses. In the JVM, synchronization is supported by synchronized methods, synchronized blocks and synchronization methods, such as wait and notify. Since all these mechanisms use monitors as their underlying synchronization primitive, their acquisitions are the events that Ditto intercepts. For completeness, we also record the values and ordering of external input to threads, such as random numbers and the results of other library calls, while assuming that the content of input read from files is available at replay time.
Base Record and Replay Algorithms
The recording and replaying algorithms of Ditto rely on logical clocks (or Lamport clocks) [START_REF] Lamport | Time, clocks, and the ordering of events in a distributed system[END_REF], a mechanism designed to capture chronological and causal relationships, consisting of a monotonically increasing software counter. Logical clocks are associated with threads, objects and object fields to identify the order between events of interest. For each such event, the recorder generates an order constraint that is later used by the replayer to order the event after past events on which its outcome depends.

Algorithm 1 Load wrapper. Parameters: f is the field, v is the value loaded
1: method wrapLoad(f, v)
2: monitorEnter(f)
3: t ← getCurrentThread()
4: trace(f.storeClock)
5: f.loadCount ← f.loadCount + 1
6: v ← load(f)
7: monitorExit(f)
8: end method

Algorithm 2 Store wrapper. Parameters: f is the field, v is the value to store
1: method wrapStore(f, v)
2: monitorEnter(f)
3: t ← getCurrentThread()
4: trace(f.storeClock, f.loadCount)
5: clock ← max(t.clock, f.storeClock) + 1
6: f.storeClock ← clock
7: f.loadCount ← 0
8: t.clock ← clock
9: store(f, v)
10: monitorExit(f)
11: end method
Recording:
The recorder creates two streams of order constraints per thread: one orders shared memory accesses, while the other orders monitor acquisitions. The recording algorithm for shared memory accesses was designed to take advantage of the semantic differences between load and store memory accesses. To do so, Ditto requires state to be associated with threads and fields. Threads are augmented with one logical clock, the thread's clock, incremented whenever it performs a store operation. Fields are extended with (a) one logical clock, the field's store clock, incremented whenever its value is modified; and (b) a load counter, incremented when the field's value is loaded and reset when it is modified. The manipulation of this state and the load/store operation itself must be performed atomically. Ditto acquires a monitor associated with the field to create a critical section and achieve atomicity. It is important that the monitor is not part of the application's scope, as its usage would interfere with the application and potentially lead to deadlocks.
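A minimal sketch of this state, with names of our choosing rather than Ditto's actual classes, could look as follows.

```java
// Illustrative record-time state; not Ditto's actual implementation.
final class ThreadState {
    long clock;                    // incremented whenever the thread performs a store
}

final class FieldState {
    long storeClock;               // incremented whenever the field's value is modified
    long loadCount;                // incremented on loads, reset on stores
    final Object monitor = new Object();  // VM-internal lock, outside the application's scope
}
```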
When a thread T i performs a load operation on a field f , it starts by acquiring f 's associated monitor. Then, it adds an order constraint to the trace consisting of f 's store clock, implying that the current operation is to be ordered after the store that wrote f 's current value, but specifying no order in relation to other loads. Thread and field state are then updated by incrementing f 's load count, and the load operation itself performed. Finally, the monitor of f is released. If T i instead performs a store operation on f , it still starts by acquiring f 's monitor, but follows by tracing an order constraint composed of the field's store clock and load count, implying that this store is to be performed after the store that wrote f 's current value and all loads that read said value. Thread and field states are then updated by increasing clocks and resetting f 's load count. Finally, the store is performed and the monitor released. Algorithms 1 and 2 list pseudo-code for these recording processes.
Algorithm 4 Replay-time load wrapper. Parameter: f is the field
1: method wrapLoad(f)
2: t ← getCurrentThread()
3: clock ← nextLoadConstraint(t)
4: while f.storeClock < clock do
5: wait(f)
6: end while
7: v ← load(f)
8: t ← getCurrentThread()
9: if f.storeClock > t.clock then
10: t.clock ← f.storeClock
11: end if
12: f.loadCount ← f.loadCount + 1
13: notifyAll(f)
14: end method
Unlike memory accesses, performed on fields, monitor acquisitions are performed on objects. As such, we associate with each object a logical clock. Moreover, given that synchronization is not serialized with memory accesses, we add a second clock to threads. When a thread T i acquires the monitor of an object o, it performs Algorithm 3. Note that we do not require a monitor this time, as the critical section of o's monitor already protects the update of thread and object state.
Consistent Thread Identification: Ditto's traces are composed of individual streams for each thread. Thus, it is mandatory that we map record-time threads to their replay-time counterparts. Threads can race to create child threads, making typical Java thread identifiers, attributed in a sequential manner, unfit for our purposes. To achieve the desired effect, Ditto wraps thread creation in a critical section and attributes a replay identifier to the child thread. The monitor acquisitions involved are replayed using the same algorithms that handle applicationlevel synchronization, ensuring that replay identifiers remain consistent across executions.
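A minimal sketch of this replay-identifier assignment is shown below; the helper class is hypothetical, and the synchronized block stands for the monitor acquisition that Ditto itself records and replays.

```java
// Illustrative assignment of replay identifiers at thread creation time.
final class ReplayIds {
    private static final Object LOCK = new Object();
    private static int next = 0;

    // Called by the parent thread while spawning a child. Serializing the
    // assignment makes the child's identifier reproducible across executions.
    static int assign() {
        synchronized (LOCK) {
            return next++;
        }
    }
}
```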
Replaying: As each thread is created, the replayer uses its assigned replay identifier to pull the corresponding stream of order constraints from the trace file. Before a thread executes each event of interest, the replayer is responsible for using the order constraints to guarantee that all events on which its outcome depends have already been performed. The trace does not contain metadata about the events from which it was generated, leaving the user with the responsibility of providing a program that generates the same stream of events of interest as it did at record-time. Ditto nonetheless allows the original program to be modified while maintaining a constant event stream through the use of Java annotations or command-line arguments, an important feature for its usage as a debugging tool.
Replaying Shared Memory Accesses: Using the order constraints in a trace file, the replayer delays load operations until the value read at record-time is available, while store operations are additionally delayed until that value has been read as many times as it was during recording, using the field's load count. This approach allows for maximum replay concurrency, as each memory access waits solely for those events that it affects and is affected by.
When a thread T i performs a load operation on a field f , it starts by reading a load order constraint from its trace, extracting a target store clock from it. Until f 's store clock equals this target, the thread waits. Upon being notified and positively re-evaluating the conditions for advancement, it is free to perform the actual load operation. After doing so, thread and field states are updated and waiting threads are notified of the changes. Algorithm 4 lists pseudo-code for this process. If T i was performing a store operation, the process would be the same, but a store order constraint would be loaded instead, from which a target store clock and a target load count would be extracted. The thread would proceed with the store once f 's store clock and load count both equaled the respective targets. State would be updated according to the rules used in Algorithm 1. Replaying monitor acquisitions is very similar to replaying load operations, with two differences: (i) a sync order constraint is read from the trace, from which a target sync clock is extracted and used as a condition for advancement; and (ii) thread and object state are updated according to the rules in Algorithm 3.
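As an illustration of the store path just described, the sketch below (reusing the illustrative FieldState from earlier, and using Java's built-in monitor of the state object only to implement the waiting, whereas Ditto's actual mechanism is the refined one described in the next section) delays the store until both targets are met.

```java
// Illustrative replay of a store; targetStoreClock and targetLoadCount come
// from the store order constraint read from the trace.
void replayStore(FieldState f, long targetStoreClock, long targetLoadCount,
                 Runnable doStore) throws InterruptedException {
    synchronized (f) {
        while (f.storeClock != targetStoreClock || f.loadCount != targetLoadCount) {
            f.wait();                      // woken by threads that update f's state
        }
        doStore.run();                     // perform the actual store
        f.storeClock++;                    // simplified state update; the real rules
        f.loadCount = 0;                   // mirror the recording-time store wrapper
        f.notifyAll();                     // let waiting loads/stores re-check
    }
}
```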
Notice that during replay there is no longer a need for protecting shared memory accesses with a monitor, as synchronization between threads is now performed by Ditto's wait/notify mechanism. Furthermore, notice that the load counter enables concurrent loads to be replayed in an arbitrary order, hence in parallel and faster, rather than being serialized unnecessarily.
Wait and Notify Mechanism
During execution replay, threads are often forced to wait until the conditions for advancement related to field or object state hold true. As such, threads that modify the states are given the responsibility of notifying those waiting for changes. Having threads wait and notify on the monitor associated with the field or object they intend to, or have manipulated, as suggested in Algorithms 1-2 and 3, is a simple but sub-optimal approach which notifies threads too often and causes bottlenecks when they attempt to reacquire the monitor. Ditto uses a much more refined approach, in which threads are only notified if the state has reached the conditions for their advancement.
Replay-time states of fields and objects are augmented with a table indexed by three types of keys: (i) load keys, used by load operations to wait for a specific store clock; (ii) store keys, used by store operations to wait for a specific combination of store clock and load count; and (iii) synchronization keys, used by monitor acquisitions to wait for a specific synchronization clock. Let us consider an example to illustrate how these keys are used. When a thread T i attempts to load the value of a field f but finds f 's store clock lower than its target store clock (tc), it creates a load key using the latter. T i then adds a new entry to f 's table using the key as both index and value, and waits on the key. When another thread T j modifies f 's store clock to contain the value tc, it uses a load key (tc) and a store key (tc, 0) to index the table. As a result of using the load key, it will retrieve the object on which T i is waiting and invokes notifyAll on it. Thus, T i is notified only once its conditions for proceeding are met.
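The sketch below illustrates the idea of keying waiters by the exact state they need; it is a simplification (callers must still re-check their condition before waiting, to avoid missing a notification that raced ahead), and the class and method names are ours.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative keyed notification table for one field: loads park on the store
// clock they need, and a store wakes only the loads waiting for that clock.
final class NotifyTable {
    private final Map<Long, Object> loadWaiters = new HashMap<>();

    // Called by a load that needs the field's store clock to reach targetClock.
    synchronized Object keyForLoad(long targetClock) {
        return loadWaiters.computeIfAbsent(targetClock, k -> new Object());
    }

    // Called by a store after advancing the store clock to newClock.
    void wakeLoads(long newClock) {
        Object key;
        synchronized (this) {
            key = loadWaiters.remove(newClock);
        }
        if (key != null) {
            synchronized (key) { key.notifyAll(); }
        }
    }
}
```

Store and synchronization keys work the same way, indexed by (store clock, load count) pairs and by synchronization clocks, respectively.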
Lightweight Checkpointing
For long-running applications, and especially those with fine-grained thread interactions, the log can grow to a large size. Furthermore, replay may only be needed from a certain point in time onward, for instance because the fault is known to occur only at the end of the execution. Ditto uses a lightweight checkpointing mechanism [START_REF] Simão | A checkpointing-enabled and resourceaware java virtual machine for efficient and robust e-science applications in grid environments[END_REF] to offer two new replay services: (i) replay starting from the most recent checkpoint before the fault; and (ii) replay starting from any chosen instant M in the execution. A checkpoint is taken by recording each thread's stack and the reachable objects. In general, the checkpoint size is closely related to the size of the live objects, plus the overhead of the bookkeeping metadata necessary for recovery. While the size of the live objects can remain stable over time, the log size only grows. Regarding scenario (i), replay starts by recovering from the last checkpoint and continues with the partial, truncated log. The total recording space is thus sizeof(lastCheckpoint) + sizeof(truncatedLog), which is bounded to be smaller than 2 * sizeof(checkpoint), since we trigger checkpointing when the log reaches a size close to the total memory used by objects (90%). In scenario (ii), replay starts from the most recent checkpoint before instant M (chosen by the user), and the partial log collected after that instant. In this case, the total recording space is N * sizeof(checkpoint) + N * sizeof(truncatedLog), where N is the number of times a checkpoint is taken, and there is a trade-off between execution time overhead and the granularity of available replay start times [START_REF] Simão | A checkpointing-enabled and resourceaware java virtual machine for efficient and robust e-science applications in grid environments[END_REF]. Even so, the total recording space is bounded to be smaller than 2 * N * sizeof(checkpoint).
Input-Related Non-Deterministic Events
Besides accesses to shared variables, another source of non-determinism is the input some programs use to progress their computation. This input can come from information requested from the program's user or from calls to non-deterministic services, such as the current time or the random number generator. All such services are available either through the base class library or through calls using the Java Native Interface. Each time a call is made to a method that is a source of input non-determinism (e.g., Random.nextInt, System.nanoTime), the result is saved in association with the current thread. If the result is then stored in a shared field, the replay mechanism already ensures the same thread interleaving as in the recording phase. For non-shared fields, the recorded values may be replayed in a different global order than in the original execution. This is not a problem, since the values are associated with a thread and are delivered in FIFO order during that thread's execution.
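A per-thread FIFO log for such values could be sketched as follows (wrapper names are ours; Ditto's actual interception happens inside the VM rather than in application code).

```java
import java.util.ArrayDeque;

// Illustrative per-thread FIFO log for non-deterministic input values.
final class InputLog {
    private static final ThreadLocal<ArrayDeque<Object>> LOG =
            ThreadLocal.withInitial(ArrayDeque::new);

    // Record mode: remember the value the library call just produced.
    static <T> T record(T value) {
        LOG.get().addLast(value);
        return value;
    }

    // Replay mode: hand back the values in the order this thread consumed them.
    @SuppressWarnings("unchecked")
    static <T> T replay() {
        return (T) LOG.get().removeFirst();
    }
}
```

For example, at record time a call site would effectively execute long now = InputLog.record(System.nanoTime()), while at replay time it would execute long now = InputLog.replay().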
Additional Optimizations
Recording Granularity
Ditto records at the finest possible granularity, distinguishing between different fields of individual instances when serializing memory accesses. Previous deterministic replayers for Java programs had taken sub-optimal approaches: (i) DejaVu creates a global order [START_REF] Choi | Deterministic replay of java multithreaded applications[END_REF]; (ii) LEAP generates a partial order that distinguishes between different fields, but not distinct instances [START_REF] Huang | Leap: lightweight deterministic multi-processor replay of concurrent java programs[END_REF]; and (iii) JaRec does the exact opposite of LEAP [START_REF] Georges | Jarec: a portable record/replay environment for multi-threaded java applications[END_REF]. The finer recording granularity maximizes replay-time concurrency and reduces recording overhead due to lower contention when modifying recorder state. The downside is higher memory consumption associated with field states. If this becomes a problem, Ditto is capable of operating with an object-level granularity.
Array indexes are treated like object fields, but with a slight twist. To keep index state under control for large arrays, a user-defined cap is placed on how many index states Ditto can keep for each array. Hence, multiple array indexes may map to a single index state and be treated as one program entity in the eyes of the recorder and replayer. This is not an optimal solution, but it goes towards a compromise with the memory requirements of Ditto.
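One way to picture this cap is the following sketch (the modulo mapping is our assumption of one plausible scheme, reusing the illustrative FieldState from earlier).

```java
// Illustrative array state with a user-defined cap on the number of index states.
final class ArrayState {
    private final FieldState[] indexStates;

    ArrayState(int arrayLength, int cap) {
        int n = Math.max(1, Math.min(arrayLength, cap));
        indexStates = new FieldState[n];
        for (int i = 0; i < n; i++) indexStates[i] = new FieldState();
    }

    // Several indexes may map to the same state and are then serialized
    // as if they were a single program entity.
    FieldState stateFor(int index) {
        return indexStates[index % indexStates.length];
    }
}
```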
Pruning Redundant Order Constraints
The base recording algorithm traces an order constraint per event of interest. Though correct, it can generate unreasonably high amounts of trace data, mostly due to the fact that shared memory accesses can comprise a very significant fraction of the instructions executed by a typical application. Fortunately, many order constraints are redundant, i.e., the order they enforce is already indirectly enforced by other constraints or program order. Such constraints can be safely pruned from the trace without compromising correctness. Ditto uses a pruning algorithm that does so on-the-fly.
Pruning order constraints leaves gaps in the trace which our base replay algorithm is not equipped to deal with. To handle these gaps, we introduce the concept of free runs, which represent a sequence of one or more events of interest that can be performed freely. When performing a free run of size n, the replayer essentially allows n events to occur without concerning itself with the progress of other threads. Free runs are placed in the trace where the events they replace would have been.
Program Order Pruning: Consider the recorded execution in Figure 1(a), in which arrows represent order constraints traced by the base recording algorithm. Notice how all dashed constraints enforce orderings between events which are implied by program order. To prune them, Ditto needs additional state to be associated with fields: the identifier of the last thread to store a value in the field, and a flag signaling whether that value has been loaded by other threads. Potential load order constraints are not traced if the thread loading the value is the one that wrote it. Thus, constraints 1, 2, 4, 10 and 11 in Figure 1(a) are pruned, but not constraint 6. Similarly, a potential store order constraint is not traced if it is performed by the thread that wrote the current value and if that value has not been loaded by other threads. Hence, constraints 3 and 5 are pruned, while 9 is not, as presented in Figure 1(b). Synchronization order constraints are handled in the same way as load operations, but state is associated with an object instead of a field.
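The two pruning rules can be condensed into a small predicate over the extra per-field state; this is a simplified sketch rather than Ditto's code.

```java
// Extra record-time state per field used for program order pruning.
final class PruneState {
    long lastWriterThread;       // thread that wrote the field's current value
    boolean loadedByOthers;      // has any other thread loaded that value?
}

final class ProgramOrderPruning {
    // A load order constraint is redundant if the loading thread wrote the value itself.
    static boolean pruneLoad(PruneState s, long threadId) {
        return s.lastWriterThread == threadId;
    }

    // A store order constraint is redundant if the storing thread wrote the current
    // value and no other thread has loaded it since.
    static boolean pruneStore(PruneState s, long threadId) {
        return s.lastWriterThread == threadId && !s.loadedByOthers;
    }
}
```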
Partial Transitive Reduction: Netzer introduced an algorithm to find the optimal set of constraints to reproduce an execution [START_REF] Netzer | Optimal tracing and replay for debugging shared-memory parallel programs[END_REF], which was later improved upon in RTR [START_REF] Xu | A regulated transitive reduction (rtr) for longer memory race recording[END_REF] by introducing artificial constraints that enabled the removal of multiple real ones. Ditto does not directly employ any of these algorithms for reasons related to performance degradation and the need for keeping flexibility-limiting state, such as Netzer's usage of vector clocks, which requires the number of threads to be known a priori. Instead, Ditto uses a novel partial transitive reduction algorithm designed to find a balance between trace file size reduction and additional overhead.
Transitive reduction prunes order constraints that enforce orderings implicitly enforced by other constraints. In Figure 1, for example, T B performs three consecutive load operations which read the same value of x, written by T A . Given that the loads are ordered by program order, enforcing the order S 2 (x) → L 3 (x) is enough to guarantee that the following two loads are also subsequent to S 2 (x). As such, constraints 7 and 8 are redundant and can be removed, resulting in the final trace file of Figure 2 with only 2 constraints.
To perform transitive reduction, we add a table to the state of threads that tracks the most recent inter-thread interaction with each other thread. Whenever a thread T i accesses a field f last written to by thread T j (with T i = T j ), f 's store clock is inserted in the interaction table of T i at index T j . This allows Ditto to declare that order constraints whose source is T j with a clock lower than the one in the interaction table are redundant, implied by a previous constraint. Figure 3 shows a sample recording that stresses the partial nature of Ditto's transitive reduction, since the set of traced constraints is sub-optimal. Constraint 4 is redundant, as the combination of constraints 1 and 2 would indirectly enforce the order S 0 (x) → L 0 (x). For Ditto to achieve this conclusion, however, the interaction tables of T B and T C would have to be merged when tracing constraint 2. The merge operation proved to be too detrimental to efficiency, especially given that the benefit is limited to one order constraint, as the subsequent constraint 5, similar to 4, is pruned. In summary, Ditto is aware of thread interactions that span a maximum of one traced order constraint.
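The interaction table can be pictured as follows (a simplified sketch; the map-based representation and method names are ours).

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative per-thread interaction table for partial transitive reduction.
final class InteractionTable {
    // For each other thread, the store clock of the most recent traced interaction.
    private final Map<Long, Long> lastClock = new HashMap<>();

    // Called when this thread accesses a field last written by writerThread with
    // the given store clock. Returns true if the order constraint is redundant.
    boolean redundant(long writerThread, long storeClock) {
        Long seen = lastClock.get(writerThread);
        if (seen != null && storeClock <= seen) {
            return true;                 // already implied by a previous constraint
        }
        lastClock.put(writerThread, storeClock);
        return false;                    // must be traced
    }
}
```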
Thread Local Objects and Array Escape Analysis
Thread Local Objects (TLO) static analysis provides locality information on class fields, that is, it determines fields which are not involved in inter-thread interactions, aiming to save execution time and log space. The output of this kind of analysis is a classification of either thread-local or thread-shared for each class field. We developed a stand-alone application that uses the TLO implementation in the Soot bytecode optimization framework (http://www.sable.mcgill.ca/soot/) to generate a report file that lists all thread-shared fields of the analyzed application. This file can be fed as optional input to Ditto, which uses the information to avoid intercepting accesses to thread-local fields.
TLO analysis provides very useful information about the locality of class fields, but no information is offered on array fields. Without further measures, we would be required to conservatively monitor all array fields accesses. Ditto uses, at runtime, information collected from the just-in-time compiler to do escape analysis on array references and avoid monitoring accesses to elements of arrays declared in a method whose reference never escapes that same method. This analysis, although simple, can still avoid some useless overhead at little cost. Nonetheless, there is a lot of unexplored potential for this kind of analysis on array references to reduce recording overhead which we see as future work.
Trace File
Ditto's traces are composed of one order constraint stream per record-time thread. Organizing the trace by thread is advantageous for various reasons. The first is that it is easy to intercept the creation and termination of threads. Intercepting these events is crucial for the management of trace memory buffers, as they must be created when a thread starts and dumped to disk once it terminates. Moreover, it allows us to place an upper limit on how much memory can be spent on memory buffers, as the number of simultaneously running threads is limited and usually low. Other trace organizations, such as the field-oriented one of LEAP [START_REF] Huang | Leap: lightweight deterministic multi-processor replay of concurrent java programs[END_REF], do not benefit from this -the lifetime of a field is the lifetime of the application itself. A stream organized by instance would be even more problematic, as intercepting object creation and collection is not an easy task.
The trace file is organized as a table that maps thread replay identifiers to the corresponding order constraint streams. The table and the streams themselves are organized in a linked list of chunks, as a direct consequence of the need to dump memory buffers to disk as they become full. Though sequential I/O is generally more efficient than random I/O, using multiple sequential files (one per thread) turned out to be less efficient than updating pointers in random file locations as new chunks were added to it. Hence, Ditto creates a single-file trace.
Given that logical clocks are monotonically increasing counters, they are expected to grow to very large values during long running executions. For the trace file, this would mean reserving upwards of 8 bytes to store each clock value. Ditto uses a simple but effective optimization that stores clock values as increments in relation to the last one in the stream, instead of as absolute values. Considering that clocks always move forward and mostly in small increments, the great majority of clock values can be stored in 1 or 2 bytes.
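A variable-length delta encoding of this kind can be sketched as follows (the type and size metadata that the actual trace format also carries are omitted here).

```java
import java.io.ByteArrayOutputStream;

// Illustrative variable-length delta encoding for monotonically increasing clocks:
// small increments fit in one or two bytes instead of a full 8-byte value.
final class ClockDeltaEncoder {
    private long lastClock = 0;
    private final ByteArrayOutputStream out = new ByteArrayOutputStream();

    void write(long clock) {
        long delta = clock - lastClock;   // clocks only move forward, so delta >= 0
        lastClock = clock;
        do {                              // 7 payload bits per byte, high bit = "more"
            int b = (int) (delta & 0x7F);
            delta >>>= 7;
            out.write(delta != 0 ? (b | 0x80) : b);
        } while (delta != 0);
    }

    byte[] bytes() { return out.toByteArray(); }
}
```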
Implementation Details
Ditto is implemented in Jikes RVM, a high performance implementation of the JVM written almost entirely in a slightly enhanced Java that provides "magic" methods for low-level operations, such as pointer arithmetic [START_REF] Alpern | The jalapeño virtual machine[END_REF]. The RVM is very modular, as it was designed to be a research platform where novel VM ideas could be easily implemented and evaluated. This was the main reason we developed Ditto on it.
The implementation efforts were done in two main sub-systems: threading and compiler. Regarding the threading system, each Java thread is mapped to a single native thread. This is relevant to Ditto, as it means scheduling decisions are offloaded to the OS and cannot be traced or controlled from inside the RVM. As a consequence, Java monitors are also implemented with resort to OS locking primitives. Regarding the compiler, Jikes RVM does not interpret bytecode; all methods are compiled to machine code on-demand. The VM uses an adaptive compilation system in which methods are first compiled by a fast baseline compiler which generates inefficient code. A profiling mechanism detects hot methods at runtime, which are then recompiled by a slower but much better optimizing compiler. This compiler manipulates three intermediate representations (IR) on which different optimizations are performed. The high-level IR (HIR) is very similar to the bytecode instruction set, but subsequent IRs are closer to actual processor ISAs.
Intercepting Events of Interest: Implementing Ditto in Jikes RVM required intercepting the events of interest through hooks in the thread management subsystem and the addition of instrumentation phases to the compilers. Moreover, mechanisms were added to manage thread, object and field states. A drawback of Jikes being written in Java is that it uses the same mechanisms for executing as the application. As such, when intercepting events, we must ignore those triggered by the VM. Depending on the event, the VM/application distinction is done using either static tests that rely on package names, or runtime tests that inspect the Java stack.
Ditto intercepts thread creation, both before and after the launch of the native thread, and thread termination, mainly for the purpose of initializing and dumping trace memory buffers. The thread creation hooks are also used to enter and exit the critical section protecting replay identifier assignment. If the event occurs in the context of a synchronized method or block, Ditto simply replaces the usual method used to implement the monitor enter operation with a wrapper method during compilation. Monitor acquisitions performed in the context of synchronization methods like wait or notify are intercepted by a hook in the VM's internal implementation of said methods. To avoid costly runtime tests, call sites are instrumented to activate a thread-local flag which lets Ditto know that the next executed synchronization method was invoked by the application. Events triggered through the JNI interface are also intercepted by a hook inside the VM, but they require a runtime test to ascertain the source, as we do not compile native code.
During method compilation, accesses to shared memory are wrapped in two calls to methods that trace the operation. Instrumentation is performed after HIR optimizations have been executed on the method, allowing Ditto to take advantage of those that remove object, array or static field accesses. Such optimizations include common sub-expression elimination and object/array replacement with scalar variables using escape analysis, among others.
Threading and State: Thread state is easily kept in the VM's own thread objects. Object and field states are kept in a state instance whose reference is stored in the object's header. After modifying the GC to scan these references, this approach allows us to create states for objects on-demand and keep them only while the corresponding object stays alive. Ditto requires the trace file to be finalized in order to replay the corresponding execution. When a deadlock occurs, the JVM does not shutdown and the trace memory buffers are never dumped, leaving the trace in an unfinished state. The problem is solved by adding a signal handler to Jikes which intercepts SIGUSR1 signals and instructs the replay system to finish the trace. The user is responsible for delivering the signal to Jikes before killing its process if a deadlock is thought to have been reached.
Trace File: In Section 4 we described the way thread order constraint streams are located in the trace file using a combination of table and linked list structures. Structuring the streams themselves is another issue, as Ditto's recording algorithm generates three types of values that must be somehow encoded in the stream: (i) clock increment values; (ii) free run values; and (iii) load count values. Furthermore, the clock value optimization, also presented in Section 4, makes value sizes flexible, requiring the introduction of a way to encode this information as well.
The three kinds of values are encoded using the two most significant bits of each value as identification metadata. However, adding two more bits for size metadata would severely limit the range of values that each entry could represent. Moreover, it is usual for consecutive values to have equal size, leading to a lot of redundant information if the size is declared for each individual entry. Taking these observations in mind, we introduce meta-values to the stream which encode the size and number of the values that follow them in the stream. The meta-values take up two bytes, but their number is insignificant in comparison to the total amount of values stored in the trace. Ditto uses a VM's internal thread whose only purpose is to write trace buffers to disk. By giving each thread two buffers, we allow one buffer to be dumped to disk by the writer thread while the other is concurrently filled. In most cases, writing to disk is faster than filling a second buffer, allowing threads to waste no time waiting for I/O operations.
Evaluation
We evaluate Ditto by assessing its ability to correctly replay recorded executions and by measuring its performance in terms of recording overhead, replaying overhead and trace file size. Performance measurements are compared with those of previous approaches, which we implemented in Jikes RVM using the same facilities that support Ditto itself. The implemented replayers are: (a) DejaVu [START_REF] Choi | Deterministic replay of java multithreaded applications[END_REF], a global-order replayer; (b) JaRec [START_REF] Georges | Jarec: a portable record/replay environment for multi-threaded java applications[END_REF], a partial-order, logical clock-based replayer; and (c) LEAP [START_REF] Huang | Leap: lightweight deterministic multi-processor replay of concurrent java programs[END_REF], a recent partial-order, access vector-based replayer. We followed their respective publications as closely as possible, introducing modifications when necessary. For instance, DejaVu and JaRec, originally designed to record synchronization races, were extended to deal with all data races, while LEAP's algorithm was extended to compress consecutive accesses to a field by the same thread, a feature absent from the available codebase. Moreover, Ditto's checkpointing mechanism for instant replay is deactivated for fairness.
We start by using a highly non-deterministic microbenchmark and a number of applications from the IBM Concurrency Testing Repository (https://qp.research.ibm.com/concurrency testing) to assess replay correctness. This is followed by a thorough comparison between Ditto's runtime performance characteristics and those of the other implemented replayers. The results are gathered by performing a microbenchmark and recording executions of selected applications (because of space constraints) from the Java Grande and DaCapo benchmark suites. All experiments were conducted on an 8-core 3.40GHz Intel i7 machine with 12GB of primary memory, running 64-bit Linux 3.2.0. The baseline version of Jikes RVM is 3.1.2. Ditto's source will be made available in the Jikes RVM research archive.
Replay Correctness: In the context of Ditto, an execution replay is said to be correct if the shared program state goes through the same transitions as it did during recording, even if thread local state diverges. Other types of deterministic replayers may offer more relaxed fidelity guarantees, as is the case of the probabilistic replayers PRES [START_REF] Park | Pres: probabilistic replay with execution sketching on multiprocessors[END_REF] and ODR [START_REF] Altekar | Odr: output-deterministic replay for multicore debugging[END_REF].
We design a microbenchmark to produce a highly erratic and non-deterministic output, so that we can confirm the correctness of replay with a high degree of assurance. This is accomplished by having threads randomly increment multiple shared counters without any kind of synchronization, and using the final counter values as the output. After a few iterations, the final counter values are completely unpredictable due to the non-atomic nature of the increments. Naively re-executing the benchmark in hopes of getting the same output will prove unsuccessful virtually every time. On the contrary, Ditto is able to reproduce the final counter values every single time, even when stressing the system by using a high number of threads and iterations. The microbenchmark will also be available in the Jikes RVM research archive. Regarding the IBM concurrency testing repository, it contains a number of small applications that exhibit various concurrent bug patterns while performing some practical task. Ditto is capable of correctly reproducing each and every one of these bugs.
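The core of the microbenchmark described above can be pictured as follows (a simplified sketch with parameters of our choosing, not the exact benchmark code).

```java
import java.util.Random;

// Threads race to increment shared counters without synchronization, so the
// final values are highly non-deterministic unless the execution is replayed.
public class RacyCounters {
    static final int THREADS = 8, COUNTERS = 16, ITERATIONS = 1_000_000;
    static final long[] counters = new long[COUNTERS];

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[THREADS];
        for (int t = 0; t < THREADS; t++) {
            workers[t] = new Thread(() -> {
                Random rnd = new Random();
                for (int i = 0; i < ITERATIONS; i++) {
                    counters[rnd.nextInt(COUNTERS)]++;   // non-atomic read-modify-write
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        for (long c : counters) System.out.print(c + " ");  // the "output" to reproduce
        System.out.println();
    }
}
```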
Performance Results
After confirming Ditto's capability to correctly replay many kinds of concurrent bug patterns, we set off to evaluate its performance by measuring recording overhead, trace file size and replaying overhead. To put experimental results in perspective, we use the same performance indicators to evaluate the three implemented state-of-the-art deterministic replay techniques for Java programs: DejaVu (Global), JaRec, LEAP.
Microbenchmarking:
The same microbenchmark used to assess replay correctness is now used to compare Ditto's performance characteristics with those of the other replayers regarding recording time, trace size and replaying time, across multiple target application properties: (i) number of threads, (ii) number of shared memory accesses per thread, (iii) load to store ratio, (iv) number of fields per shared object, (v) number of shared objects, and (vi) number of processors.
The results are presented in Figures 4 and 5. Note that graphs related to execution times use a logarithmic scale due to the order-of-magnitude differences between the replayers' performance, and that in all graphs lower is better.
Figure 4 shows the performance results of application properties (i) to (iii). Record and replay execution times grow linearly with the number of threads, with Ditto taking the lead in absolute values by one and two orders of magnitude, respectively. As for trace file sizes, Ditto stays below 200Mb, while no other replayer comes under 500Mb. The maximum is achieved by LEAP at around 1.5Gb. Concerning the number of memory access operations, the three indicators increase linearly with the number of memory accesses for all algorithms. We attribute this result to two factors: (i) none of them keeps state whose complexity increases over time, and (ii) our conscious effort during implementation to keep memory usage constant. Ditto is nonetheless superior in terms of absolute values. Finally, regarding the load and store ratio, Ditto is the only evaluated replayer that takes advantage of the semantic differences between load and store memory accesses. As such, we expect it to be the only system to positively react in the presence of a higher load:store ratio. The experimental results are consistent with this, as we can observe reductions in both overheads and a very significant reduction of the trace file size.

Figure 5 shows the performance results of application properties (iv) to (vi). Stressing the system with an increasing number of fields per object, property (iv), and number of shared objects, property (v), is crucial to measure the impact of Ditto's recording granularity. Ditto and LEAP are the only replayers that improve performance (smaller recording and replaying times) as more shared fields are present, though Ditto has the lowest absolute values. This result is due to both replayers distinguishing between different fields when serializing events. However, LEAP actually increases its trace file size as the number of fields increases, a result we believe to be caused by its access vector-based approach to recording.

Regarding the number of shared objects, JaRec is the main competitor of Ditto, as they are the only ones that can distinguish between distinct objects. LEAP's offline transformation approach does not allow it to take advantage of this runtime information. Although JaRec is marginally better than Ditto past the 64-object mark, it fails to take advantage of the number of shared objects during the replay phase.

Concerning the number of processors, the experimental results were obtained by limiting the Jikes RVM process to a subset of processors in our 8-core test machine. Ditto is the only algorithm that lowers its record execution time as the number of processors increases, promising increased scalability to future deployments and applications in production environments. Additionally, its trace file size increases much slower than that of the other replayers, and its replay execution time is three orders of magnitude lower than that of the second best replayer at the 8-processor mark.

Effects of the Pruning Algorithm:

To assess the effects of Ditto's pruning algorithm we modified the microbenchmark to use a more sequential memory access pattern in which each thread accesses a subset of shared objects that overlaps with that of two other threads. Figure 6 shows the trace file size reduction percentage, the recording speedup and the replaying speedup over the base recording algorithm from applying program order pruning only, and program order pruning plus partial transitive reduction. The results clearly demonstrate the potential of the algorithm, reducing the trace by 81.6 to 99.8%. With reductions of this magnitude, instead of seeing increased execution times, we actually observe significant drops in overhead due to the avoided tracing efforts.

Looking at the results of all microbenchmark experiments, it is clear that Ditto is the most well-rounded deterministic replayer. It consistently performs better than its competitors in all three indicators, while other replayers tend to overly sacrifice trace file size or the replay execution time in favor of recording efficiency.
Complete Applications
In this section we use complete applications to compare the execution time overhead and the log size of Ditto against other state-of-the-art replayers. Furthermore, the impact of the TLO analysis is also evaluated. All applications were parametrized to use 8 threads (i.e., the number of cores of the available hardware). From the Java Grande benchmark (http://www.epcc.ed.ac.uk/research/java-grande) we selected the multi-threaded applications, namely: (a) MolDyn, a molecular dynamics simulation; (b) MonteCarlo, a Monte Carlo simulation; and (c) RayTracer, a 3D ray tracer. Table 1 reports on the results in terms of recording overhead and trace file size. Considering them, two main remarks can be made: Ditto's record-time performance is superior to that of competing replayers, and the trace files generated by Ditto are insignificantly small. The results also suggest that the static analysis can be further improved to better identify thread-local memory accesses, which represents a relevant future research topic.
From the DaCapo benchmark (http://dacapobench.org), we evaluate the record-time performance of Ditto and the other replayers using the lusearch, xalan and avrora applications with the large input set. The results are shown in Table 1 and highlight an interesting observation: for applications with very coarse-grained sharing, as is the case of lusearch and xalan, Ditto's higher complexity is actually detrimental. The lack of stress allows the other algorithms to perform better in terms of recording overhead, albeit generating larger trace files (with the exception of JaRec). Nonetheless, Ditto's recording overhead is still quite low.
Conclusions and Future Work
We presented Ditto, a deterministic replay system for the JVM, capable of correctly replaying executions of imperfectly synchronized applications on multi-processors. It uses a novel pair of recording and replaying algorithms that combine state-of-the-art and original techniques, including (a) managing differences between load and store memory accesses, (b) serializing events at instance field granularity, (c) pruning redundant constraints using program order and partial transitive reduction, (d) taking advantage of TLO static analysis, escape analysis and compiler optimizations, and (e) applying a simple but effective trace file optimization. Ditto was successfully evaluated to ascertain its capability to reproduce different concurrent bug patterns and highly non-deterministic executions. Performance results show Ditto consistently outperforming previous Java replayers in terms of overhead and trace size across multiple application properties; it is the most well-rounded of the evaluated systems, scales with the number of cores, and leverages checkpointing and restore capabilities. The evaluation results suggest that future efforts to improve deterministic replay should focus on improving static analysis to identify thread-local events.
Fig. 1: Example of Ditto's constraint pruning algorithm (pruning constraints implied by program order).
Fig. 3: Example of partial transitive reduction.
Fig. 6: Effects of Ditto's pruning algorithm.
Table 1: Record-time performance results for representative Java workloads. Entries marked with * indicate cases where the current implementation cannot deal with trace files over 2 GB.

              Ditto               Global                 JaRec                LEAP
              Overhead  Trace     Overhead     Trace     Overhead  Trace      Overhead   Trace
MolDyn        2831%     239Kb     >181596%*    >2Gb*     3887%     188Mb      >13956%*   >2Gb*
MonteCarlo    390%      248Kb     79575%       1273Mb    410%      0.39Kb     10188%     336Mb
RayTracer     4729%     4.72Kb    >164877%*    >2Gb*     5197%     21Mb       >9697%*    >2Gb*
lusearch      4.56%     3Kb       1.89%        288Kb     2.26%     3Kb        0.69%      564Kb
xalan         5.23%     6Kb       4.52%        475Kb     2.71%     0.2Kb      2.73%      485Kb
avrora        378%      22Mb      2771%        565Mb     372%      23Mb       -*         >2Gb*
Acknowledgments: This work was partially supported by national funds through FCT - Fundação para a Ciência e a Tecnologia, projects PTDC/EIA-EIA/113613/2009, PTDC/EIA-EIA/102250/2008, PEst-OE/EEI/LA0021/2013, and the PROTEC program of the Polytechnic Institute of Lisbon (IPL).
"1003249",
"1003250",
"978509"
] | [
"1114557",
"46348",
"450862",
"1114557",
"46348"
] |
Spyros Voulgaris
email: [email protected]
Maarten Van Steen
VICINITY: A Pinch of Randomness Brings out the Structure
Overlay networks are central to the operation of large-scale decentralized applications, be they Internet-scale P2P systems deployed in the wild or cloud applications running in a controlled (albeit large-scale) environment. A number of custom solutions exist for individual applications, each employing a tailor-made mechanism to build and maintain its specific structure. This paper addresses the role of randomness in developing and maintaining such structures. Taking VICINITY, a generic overlay management framework based on self-organization, we explore tradeoffs between deterministic and probabilistic decision-making for structuring overlays. We come to the conclusion that a pinch of randomness may even be needed in overlay construction, but also that too much randomness, or randomness alone, is not good either.
Introduction
Does randomness matter? In this paper we claim it does, and, in fact, that incorporating randomness into distributed algorithms may even be necessary. We do not claim that randomness is necessary for all algorithms (which would clearly be wrong), but that for many large-scale distributed algorithms it is important to strive for simplicity through loose control. What is lost is determinism and the potential to formally prove correctness. Instead, at best only statistical properties can be shown to hold, but what can be achieved is that those properties emerge from very simple principles. A fundamental principle is that decisions concerning selection, of whatever kind, are sometimes made at random.
To substantiate our claim, we consider the influence of randomness in distributed gossiping algorithms. Gossiping is a well-known, and simple technique, widely deployed for a range of applications, including data replication, information dissemination, and system management. Gossiping is often deterministic: the rules for selecting whom to gossip with and what to gossip are strict, with no probabilistic element. On the other hand, there are also many gossiping algorithms that incorporate probabilistic decision-making, yet lack an examination of why such decision-making is so effective.
We have no general answer as to where the effectiveness of randomness comes from, yet we believe such understanding is crucial for designing large-scale distributed systems. As a step toward such understanding, we concentrate in this paper on deploying a gossiping algorithm, called VICINITY, for constructing overlay networks. It is not our purpose to advocate our solution to overlay construction. Instead, we use VICINITY as a framework to demonstrate how crucial incorporating randomness is. More specifically, we show that there is a subtle balance to be sought between deterministic and probabilistic decision-making. A pinch of randomness is enough; too much randomness will spoil matters.
Our main contribution is systematically exploring the effect of randomness in gossip-based overlay construction. This brings us to the conclusion that such exploration can be crucial and that deciding in advance on the amount of randomness is difficult, if not impossible. As a side-effect of this exploration, we present VICINITY, a novel gossiping algorithm that can be deployed for a wide range of applications.
The rest of the paper is organized as follows. Section 2 defines our system model. Section 3 presents the VICINITY protocol, starting from its intuition, a baseline model, and the detailed design decisions that lead to the complete version of the protocol. Section 4 sheds some light on the individual roles of determinism and randomness. Section 5 offers an evaluation of VICINITY through two scenarios that portray the interplay between determinism and randomness and highlight their individual strengths and weaknesses. Section 6 discusses related work, and Section 7 communicates our overall conclusions from this work.
System Model
The Network We consider a set of N nodes connected over a routed network infrastructure. Each node has a profile, containing some application-specific data of the node, determining the node's neighbors in the target structure. Such a profile could contain a node's geographic coordinates, a vector of preferences, social network information, or in general any other metric that the application uses for defining the target structure.
Knowledge regarding neighbors is stored and exchanged by means of node descriptors. The descriptor of a given node can be generated exclusively by that very node, but it can be freely handed by any third node to any other. The descriptor of a node is a tuple containing the following three fields:
1. the node's address (i.e., IP address and port)
2. the descriptor's age (a numeric field)
3. the node's application-specific profile

We consider that nodes are connected over a network that supports routing. That is, any node can send a message to any other, provided that the sender knows the receiver's address (i.e., IP address and port, on the Internet).
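For concreteness, a descriptor could be represented as follows (an illustrative Java sketch; VICINITY does not prescribe a particular representation of the profile).

```java
import java.net.InetSocketAddress;

// Illustrative node descriptor: address, age, and an application-specific profile.
final class NodeDescriptor<P> {
    final InetSocketAddress address;   // IP address and port of the node
    int age;                           // grows as the descriptor becomes older
    final P profile;                   // e.g., coordinates, a preference vector, ...

    NodeDescriptor(InetSocketAddress address, int age, P profile) {
        this.address = address;
        this.age = age;
        this.profile = profile;
    }
}
```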
To enable communication with other nodes, each node maintains a small dynamic list of neighbors, called its view, V. A node view is essentially a list of descriptors of the node's neighbors. Node views have a small fixed length, ℓ. Their contents are dynamic, and are updated in an epidemic fashion through pairwise node communication. Although this is not binding, for simplicity we will consider that all nodes have the same view length.
The network is inherently dynamic and unreliable. Nodes can join, leave, or crash at any time and without prior notice. In particular, we make no distinction between node crashes and node leaves. Additionally, nodes are also free to dynamically update their profiles. Messages may be lost, or delayed. Byzantine behavior is beyond the scope of this work.
Finally, we consider that nodes participate in a peer sampling service [START_REF] Jelasity | Gossip-based aggregation in large dynamic networks[END_REF], which provides them with a continuous stream of links to nodes picked uniformly at random among all alive nodes. Peer sampling protocols form a fundamental ingredient of many peer-to-peer applications nowadays, they are completely decentralized, and they have shown to be remarkably inexpensive.
As VICINITY strives for creating structure, we will be referring to its view as V str, to its view's length as ℓ str, and to its gossip length (i.e., the number of descriptors exchanged in each direction in a gossip interaction) as g str. Likewise, as the peer sampling service is responsible for randomness, its view, view length, and gossip length will be referred to as V rnd, ℓ rnd, and g rnd, respectively.
The Target Overlay We also consider a selection function SELECT(p, D,k), that, given the descriptor of node p and a set D of node descriptors, returns the set of k descriptors (or all of them, if |D| < k) that best approximate p's outgoing links in the target structure. The selection is based on node profiles. We assume function SELECT to be globally known by all nodes in the system.
The selection function essentially defines the target structure. Each node p aims at eventually establishing links to the "best" ℓ str nodes, as defined by the outcome of SELECT(p, D*_p, ℓ str), where D*_p is the set of descriptors of all nodes in the network excluding p.
Often, the selection function SELECT is based on a globally defined node proximity metric. That is, SELECT(p, D, k) sorts all descriptors in D with respect to their proximity to node p, and selects the k closest ones. Typical proximity metrics include semantic similarity, ID-based sorting, domain name proximity, geographic- or latency-based proximity, etc. Some applications may apply composite proximity metrics, combining two or more of the above. In certain cases, though, selecting appropriate neighbors involves more than a mere sorting based on some metric, typically when a node's significance as a neighbor depends not only on its proximity to a given node, but also on which other nodes are being selected. We assume that the selection function exhibits some sort of transitivity, in the sense that if node b is a "good" selection for node a (a SELECT b), and c is a "good" selection for b (b SELECT c), then c tends to be a "good" selection for a too (a SELECT c). Generally, the "better" a selection node q is for node p, the more likely it is that q's "good" selections are also "good" for p. This transitivity is essentially a correlation property between nodes sharing common neighbors, embodying the principle "my friend's friend is also my friend". Surely, this correlation is fuzzy and generally hard to quantify. It is more of a desired property rather than a hard requirement for our topology construction framework. The framework excels for networks exhibiting strong transitivity. However, its efficiency degrades as the transitivity becomes weaker. In the extreme case that no correlation holds between nodes with common neighbors, related nodes eventually discover each other through random encounters, although this may take a long time.
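For the common case of a proximity metric, SELECT can be sketched as a sort-and-take operation (illustrative code reusing the NodeDescriptor sketch above; the distance function is application supplied and its interface is our assumption).

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

final class ProximitySelect {
    interface Distance<P> {
        double between(P a, P b);
    }

    // Return the k descriptors in D whose profiles are closest to p's profile.
    static <P> List<NodeDescriptor<P>> select(NodeDescriptor<P> p,
                                              List<NodeDescriptor<P>> D,
                                              int k,
                                              Distance<P> dist) {
        List<NodeDescriptor<P>> sorted = new ArrayList<>(D);
        sorted.sort(Comparator.<NodeDescriptor<P>>comparingDouble(
                d -> dist.between(p.profile, d.profile)));
        return sorted.subList(0, Math.min(k, sorted.size()));
    }
}
```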
3 The VICINITY Protocol
VICINITY: The Intuition
The goal is to organize all VICINITY views so as to approximate the target structure as closely as possible. To this end, nodes regularly exchange node descriptors to gradually evolve their views towards the target. When gossiping, nodes send each other a subset of their views, of fixed small length g str , known as the gossip length. The gossip length is the same for all nodes.
From our previous discussion, we are seeking a means to construct, for each node and with respect to the given selection function, the optimal view from all nodes currently in the system. There are two sides to this construction.
First, based on the assumption of transitivity in the selection function, SELECT, a node should explore the nearby nodes that its neighbors have found. In other words, if b is in a's VICINITY view, and c is in b's view, it makes sense to check whether c would also be suitable as a neighbor of a. Exploiting the transitivity in SELECT should then quickly lead to high-quality views. The way a node tries to improve its VICINITY view resembles hill-climbing algorithms [START_REF] Russell | Artificial Intelligence: A Modern Approach[END_REF]. However, instead of trying to locate a single optimal node, here the objective is to optimize the selection of a whole set of nodes, namely the view. In that respect, VICINITY can be thought of as a distributed, collaborative hill-climbing algorithm.
Second, it is important that all nodes be examined. The problem with following transitivity alone is that a node will eventually be searching only in a single cluster of related nodes, possibly missing out on other clusters of also related, but still unknown, peers, in a way similar to getting locked in a local maximum in hill-climbing algorithms. Analogously to the special "long" links in small-world networks [START_REF] Watts | Small Worlds, The Dynamics of Networks between Order and Randomness[END_REF], a node needs to establish links outside its neighborhood's cluster. Likewise, when new nodes join the network, they should easily find an appropriate cluster to join. These issues call for a randomization of the candidates for inclusion in a view.

In our design we decouple these two aspects by adopting a two-layered gossiping framework, as can be seen in Figure 1. The lower layer is the peer sampling service, responsible for maintaining a connected overlay and for periodically feeding the top-layer protocol with nodes selected uniformly at random from the whole network. In its turn, the top-layer protocol, called VICINITY, is in charge of discovering nodes that are favored by the selection function. Each layer maintains its own, separate view, and communicates with the respective layer of other nodes.
VICINITY: Baseline Version
To better grasp the principal operation of the protocol, we first present a baseline version of VICINITY, shown in Figure 2. In this baseline version, each node periodically contacts a random node from its view, and the two nodes send each other the g str neighbors from their views that are best with respect to the receiver's profile. Note that this baseline version of VICINITY is completely equivalent to the related T-MAN protocol [START_REF] Jelasity | T-man: Gossip-based fast overlay topology construction[END_REF].
As can be seen in the pseudocode of Figure 2, each node, p, periodically picks from its view a random node, q, to gossip with (line 3). It then applies the SELECT function to select the g str nodes that are best for q, from the union of its own view and p itself (lines 4-5), and sends them to q (line 6). Upon reception of p's message, q selects the g str best nodes for p among all nodes in its view and q itself (lines 7-8), and sends them back to p (line 9). Finally, each node updates its own view, by selecting the str best neighbors out of its previous view and all received descriptors (line 11).
Note that the code for selecting and sending descriptors to the other side is symmetric for the two nodes (lines 4-6 vs. lines 7-9), as well as the code for merging the received descriptors to the current view (lines 11).
Each node essentially runs two threads. An active one, which periodically wakes up and initiates communication to another node, and a passive thread, which responds to the communication initiated by another node.
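To make the merge step concrete, the view update of line 11 can be sketched as follows (reusing the NodeDescriptor and ProximitySelect sketches above; messaging, duplicate elimination and the fixed view length are handled only minimally here).

```java
import java.util.ArrayList;
import java.util.List;

final class ViewUpdate {
    // Line 11 of the baseline protocol: merge received descriptors into the
    // current view and keep the best l_str entries for node p.
    static <P> List<NodeDescriptor<P>> merge(NodeDescriptor<P> p,
                                             List<NodeDescriptor<P>> view,
                                             List<NodeDescriptor<P>> received,
                                             int lStr,
                                             ProximitySelect.Distance<P> dist) {
        List<NodeDescriptor<P>> candidates = new ArrayList<>(view);
        candidates.addAll(received);     // a real implementation would also drop
                                         // duplicate descriptors of the same node
        return ProximitySelect.select(p, candidates, lStr, dist);
    }
}
```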
VICINITY: Fine-tuning the nuts and bolts
A number of interesting design choices can substantially boost the performance of the baseline VICINITY protocol. In this section, we will motivate them and demonstrate them in parallel. For our demonstration we will consider a sample testbed, simulated on PeerNet [START_REF]PeerNet[END_REF], an open-source simulation and emulation framework for peer-to-peer networks written in Java that branches off the popular PeerSim simulator [START_REF] Peersim | [END_REF].
Our testbed consists of a network of 10,000 nodes, assigned distinct 2D coordinates from a 100 × 100 grid, and whose aim is to self-organize into the respective torus overlay, starting from an arbitrary random topology. Nodes maintain a short view of ℓ str =12 descriptors each, which is initially filled with 12 neighbors picked uniformly at random from the whole network. When gossiping, nodes send g str =12 descriptors to each other. The selection function selects, out of a given set, the k neighbors that are the closest to the reference node in Euclidean space. The goal of a node is to discover its four closest nodes out of the whole network, that is, to get their descriptors in its view. For example, the node with coordinates (20, 40) should get nodes (19, 40), (21, 40), (20, 39), and (20, 41) among its neighbors. We consider space to wrap around the edges of the grid, resulting in a torus topology. For example, the four closest nodes for node (0, 0) are (99, 0), (1, 0), (0, 99), and (0, 1).
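As a quick illustration of the wrap-around, a node's four target neighbors on such a grid can be computed as follows (dimensions as in this testbed; the helper name is ours):

WIDTH = HEIGHT = 100   # grid dimensions of the testbed

def target_neighbors(x, y):
    """The four closest nodes of (x, y) on a grid whose edges wrap around (torus)."""
    return [((x - 1) % WIDTH, y), ((x + 1) % WIDTH, y),
            (x, (y - 1) % HEIGHT), (x, (y + 1) % HEIGHT)]

print(target_neighbors(20, 40))  # [(19, 40), (21, 40), (20, 39), (20, 41)]
print(target_neighbors(0, 0))    # [(99, 0), (1, 0), (0, 99), (0, 1)]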
Figure 3 plots the number of target links that are missing from all nodes' views, collectively. Initially, this amounts to 40,000 links, i.e., four for each of the 10,000 nodes. The red plot corresponds to the baseline version of VICINITY, detailed in the previous section. Clearly, target links are being discovered at exponential speed, and within 61 rounds nodes have self-organized into a complete torus structure. Nevertheless, as we see, the baseline is the slowest of all five versions shown.
Round-robin neighbor selection
The first improvement concerns the policy for selecting which neighbor to gossip with. Rather than picking from one's view at random, we impose a round robin selection of gossip partners. The motivation behind this policy is twofold.
First, contacting one's neighbors in a round-robin order improves the node's chances to optimize its view faster, by increasing the number of different potentially good neighbors the node encounters. It is not hard to envisage that probing a single neighbor multiple times in a short time frame has little value, as the neighbor is unlikely to have new useful information to trade every time. In contrast, maximizing the intervals at which a given neighbor is probed, maximizes the potential utility of each gossip exchange. Given the rather static nature of a node's VICINITY view when converged, this is achieved by visiting neighbors in a round-robin fashion.
The second motivation for the round-robin policy is that, in the case of a dynamic network, it serves garbage collection of obsolete node descriptors. A descriptor may become obsolete as a result of network dynamics, if the node it points at is no longer alive. By picking neighbors in round-robin order, neighbors are being contacted in roughly uniform time periods, preventing any single-and possibly obsolete-descriptor from lingering indefinitely in a node's view.
The green plot of Figure 3 shows the evolution of the same experiment, with round-robin neighbor selection enabled. The improved performance over the baseline version is evident already from the early rounds of the experiment.
Maximize descriptor diversity Another way to squeeze more benefit out of a single gossip exchange is to increase the diversity of descriptors exchanged between the nodes. When responding to a node's gossip request, there is no value in sending back descriptors that were also included in that node's message. That node has these descriptors already. This can be very common especially when the network is in a converged or nearly converged state, in which case nodes are highly clustered. In that respect, a node's passive thread should exclude all received descriptors from the set of potential descriptors to send back.
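A sketch of this exclusion, with descriptors represented simply as dictionaries carrying an "id" field, purely for illustration:

def reply_candidates(own_view, received):
    """Diversity maximization: exclude from the reply every descriptor that the
    requesting peer itself just sent, since it evidently has those already."""
    received_ids = {d["id"] for d in received}
    return [d for d in own_view if d["id"] not in received_ids]

own = [{"id": n} for n in ("a", "b", "c", "d")]
print(reply_candidates(own, received=[{"id": "b"}, {"id": "x"}]))  # keeps a, c, d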
The dark blue plot of Figure 3 presents the evolution of the experiment, this time applying both the round-robin and the diversity maximization policies. The plot confirms our reasoning, and shows that the discovery of target links is indeed accelerated, particularly at the stages closer to convergence, as anticipated.
Randomness for me
Let us now take a ground-breaking twist in our design. All configurations considered so far have been too narrowly structure-oriented. They all exploit a single input channel of information for improving structure, and that channel is nothing more than other nodes' structure information. We have created a feedback loop on structure for structure! Or rather, a vicious cycle around structure.
Depending on the scenario, this can be a strength or a weakness. Once connected to some "good" neighbors, the chances of being introduced to additional "good" nodes increase. Once, however, connected exclusively to largely irrelevant nodes, navigating towards one's "neighborhood" can be slow, or, in certain cases, impossible. We defer this discussion to Section 4.
With the intent of breaking the closed loop on structural information, we introduce randomness as a second input channel. Rather than having nodes discover new neighbors exclusively through their current neighbors' structural links, we also offer them the chance to sample nodes from the whole network at random.
To this end, we employ CYCLON [START_REF] Voulgaris | Cyclon: Inexpensive membership management for unstructured p2p overlays[END_REF] as a peer sampling protocol, to provide nodes with a stream of random neighbors. In each round, each node's active thread pulls the random descriptors provided by its CYCLON instance, merges them with its normal VICINITY view, and filters the union through the SELECT function to keep the best ℓ str neighbors. This way, if CYCLON encounters a good neighbor by chance, that neighbor is picked up by VICINITY to improve its view.
For the sake of a fair comparison, we keep the number of descriptors exchanged per round the same as in the baseline configuration, that is, 12 descriptors per round. However, now we exchange g str =6 descriptors on behalf of VICINITY, and another six descriptors on behalf of CYCLON. This creates precisely the same bandwidth requirements as in the previous configurations, although distributed over twice as many half-sized packets.
The magenta plot of Figure 3 confirms that the configuration combining structure with randomness significantly outperforms all previous versions. It is worth emphasizing that the rate of discovering target links is significantly faster for the whole extent of the experiment, from its early stages until full convergence, despite the fact that only six links are exchanged per round by VICINITY as opposed to 12 in previous configurations.
Randomness for all A final optimization is to use the random links obtained through CYCLON not only to improve a node's own structure links, but also to improve the quality of the links it sends to other nodes.
The dark blue plot of Figure 3 clearly shows that this optimization further improves performance. This last configuration constitutes the complete VICINITY protocol, and will be the one used by default for the rest of the paper, unless otherwise mentioned.
VICINITY: The Complete Protocol
The complete VICINITY protocol is presented-in pseudocode-in Figure 4. The rest of this section discusses the differences to the baseline protocol.
The round-robin neighbor selection policy is implemented by means of the age field in descriptors. The age of a descriptor gives an approximate estimation of how many rounds ago that descriptor either (i) was introduced in that node's VICINITY view, or (ii) was last used by the node for gossiping with the respective neighbor. Neighbors of higher age are given priority when choosing a neighbor for gossiping (line 3), and subsequently their age is zeroed, which results in a round-robin selection policy.
To approximate the number of rounds some descriptor has been in a node's view, any new descriptor entering a view is initialized with zero age (lines 9-active, 13-both), and the ages of all descriptors in the view are incremented by one once per round (line 5). Also, when a node is selected for gossiping (line 3), it is also removed from the view (line 4), as a means for garbage collection of descriptors. If that neighbor is still alive and responds, its fresh descriptor (with age zero) will be inserted anew into the view.
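The age bookkeeping can be sketched as follows (illustrative names; the pseudocode of Figure 4 remains the authoritative description):

class Descriptor:
    """A node descriptor carrying an address, a profile, and an age counter."""
    def __init__(self, address, profile, age=0):
        self.address, self.profile, self.age = address, profile, age

def start_of_round(view):
    # Ages grow by one per round (line 5); fresh descriptors (re-)enter with age zero,
    # so responsive neighbors keep cycling through the view.
    for d in view:
        d.age += 1

def pick_gossip_partner(view):
    """Round-robin partner choice: take the oldest descriptor (line 3) and remove it
    from the view (line 4), as garbage collection of possibly obsolete entries."""
    oldest = max(view, key=lambda d: d.age)
    view.remove(oldest)
    return oldest

view = [Descriptor("n1", (0, 0), age=3), Descriptor("n2", (5, 5), age=7)]
start_of_round(view)
print(pick_gossip_partner(view).address)  # "n2", the oldest neighbor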
The role of randomness can be seen in lines 6-active and 9-passive, where random neighbors are also considered in the message to send to the other peer, as well as in line 10-active, where a node pulls "good" neighbors from its randomized view into its structured view.
From this point on, by VICINITY we will be referring to the complete version of the protocol, including all the design optimizations presented so far.
How much Randomness is Enough?
Randomness is good. At least for the specific scenario of the previous section. But how general can this claim be? How good is randomness in other scenarios? Just good, or rather necessary? How much randomness is "enough", and how much can it assist in structuring? Although it is infeasible to give a universal rule to quantitatively assess the value of randomness, in this section we aim to shed some light on these questions.
To answer these questions, we delve into the principles governing self-organization, and we distinguish the specific roles of determinism and randomness in it.
The Role of Determinism
To explore the role of determinism alone, isolated from the effects of randomness, let us consider self-organization without randomness, relying exclusively on structure. To further isolate our reasoning from the effects of randomness, including pseudo-randomness due to nodes continuously replacing their links during the process of convergence, it may help to think of fresh nodes joining an already converged network.
The whole operation of self-organization relies on the ability to periodically compare potential neighbors, and on being able to determine which ones are a step closer to your targets. We are looking, therefore, at some form of routing or orientation property in the target overlay.
For simplicity, let us consider a very trivial case. The whole network has converged, except for a single node, x. Node x has one target, z, and currently has exactly one neighbor, y. Imagine, for instance, a fresh node x joining an already converged network using an arbitrary node y as its bootstrap node. For self-organization to be successful, x should be able to reach z through y, y's neighbors, y's neighbors' neighbors, and so on. And this should be the case for any y and any z. This dictates the first required property for self-organization based exclusively on structure to be correct: the target topology should form a strongly-connected graph.
This, however, is not sufficient. Even if a directed path from y to z exists, say consisting of nodes y 1 , y 2 , . . . , y k , the selection function should be such that a call to SELECT(x, Neighbors(y), g str ) returns a subset of y's neighbors that contains y 1 , then a call to SELECT(x, Neighbors(y 1 ), g str ) returns a set of nodes that contains y 2 , and so on. We will refer to this property as navigability, and we state the second required property for correctness: the given selection function should render the given target overlay navigable.
Note that navigability is a property of the combination of (i) the target overlay and (ii) the selection function. Clearly, a strongly connected target overlay with a selection function that returns "bad" selections, will not let a network self-organize. The other way around, a selection function that works for some particular overlay will not necessarily be sufficient for any overlay. For instance the proximity-based selection function used in Section 3 is excellent for uniformly populated topologies, but it can get some nodes trapped in "local optima" in the presence of a U-shaped gap, a well known problem of greedy geographic routing protocols [START_REF] Cadger | A survey of geographical routing in wireless ad-hoc networks[END_REF].
The Triple Role of Randomness
Having discussed the weaknesses of determinism in self-organization, it is not hard to imagine the benefits offered by randomization.
First, maintaining the whole overlay in a single connected partition is the cornerstone of any large-scale decentralized application. This need is even more pressing in the case of a custom overlay management protocol, as the target overlay may per se consist of multiple distinct components. Keeping the whole overlay connected in a single component allows the joining of new nodes at arbitrary bootstrap points, and generally allows the reconfiguration of nodes in case of updates to their profiles.
Second, feeding nodes with neighbors picked uniformly at random from the whole overlay, prevents them from getting indefinitely stuck in local optima. Similarly to hill climbing algorithms, random sampling is crucial at helping nodes reach their global optimum.
Finally, even with target overlays and selection functions that guarantee a strongly connected, navigable target overlay, the diameter of that overlay is often large. When new nodes join a converged overlay at an arbitrary bootstrap node, it may take them long to gradually navigate to their optimal neighbors. Having a continuous stream of random samples from the whole network, however, gives them the opportunity to take a shortcut link close to their target, a well-known property of random, complex networks.

Given VICINITY's generic applicability, it is practically infeasible to provide an exhaustive evaluation of the framework. Instead, we will focus on the following two test cases that underline its two key components: its reliance on structure and its benefit from randomness:

Two-dimensional Torus This is the same overlay structure we used in Section 3. Nodes are assigned two-dimensional coordinates, and their goal is to establish links to their closest neighbors. Building this target topology is primarily based on the Euclidean proximity heuristic. Informally speaking, the general idea is that nodes gradually improve their views with closer neighbors, which they then probe to find new, even closer ones, eventually reaching their closest neighbors. This emphasizes the utility of the deterministic component of VICINITY.

Clustering Nodes in Groups In this test case, nodes are split up in uncorrelated groups. Each node's goal is to cluster with other nodes of the same group. The key difference with the previous test case is that nodes cannot gradually connect to groups "closer" to their own, as there is no notion of proximity between groups. The target overlay is explicitly clustered in non-connected components, therefore it is neither (strongly) connected nor navigable. Finding a node of the same group can be accomplished only by means of random encounters, which highlights the role of randomness. Once a node of the same group is found, though, the two nodes can greatly assist each other by sharing their further knowledge of same-group neighbors.
Two-dimensional Torus
Overview We consider a two-dimensional space. We assign each node (x, y) coordinates, such that they are (virtually) aligned in a regular square grid organization. A node's coordinates constitute its profile. Each node's goal is to establish links to its four closest neighbors, to the north, south, east, and west (wrapping around the grid's edges).
The natural choice of a selection function for such a target topology is one that gives preference to neighbors spatially closer to the reference node. More formally, we define the distance between two nodes a and b, with coordinates (x a , y a ) and (x b , y b ), respectively, to be their two-dimensional Euclidean distance, assuming that space wraps around the edges to form a torus:
$dx = \min\{|x_a - x_b|,\ \mathit{width} - |x_a - x_b|\}$
$dy = \min\{|y_a - y_b|,\ \mathit{height} - |y_a - y_b|\}$
$\mathit{dist}(a, b) = \sqrt{dx^2 + dy^2}$
The selection function SELECT(p, D,k) sorts all node descriptors in D by their distance to the reference node p, and returns the k closest ones.
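Under the simplifying assumption that a descriptor is represented just by its coordinates, this distance and selection function could look as follows:

import math

WIDTH = HEIGHT = 100  # torus dimensions, as in the testbed of Section 3

def dist(a, b):
    """Euclidean distance between coordinates a and b, wrapping around the edges."""
    dx = min(abs(a[0] - b[0]), WIDTH - abs(a[0] - b[0]))
    dy = min(abs(a[1] - b[1]), HEIGHT - abs(a[1] - b[1]))
    return math.hypot(dx, dy)

def select(p, descriptors, k):
    """SELECT(p, D, k): return the k descriptors in D closest to the reference node p."""
    return sorted(descriptors, key=lambda d: dist(p, d))[:k]

candidates = [(99, 0), (1, 0), (0, 99), (0, 1), (5, 5), (50, 50)]
print(select((0, 0), candidates, 4))  # the four torus neighbors of (0, 0)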
Figure 5 graphically illustrates the self-organization of a "toy-size" network of 1024 nodes into a torus overlay, depicting snapshots of the overlay at different stages (after 1, 3, 6, and 12 rounds).

Fig. 5. Self-organization in a 32 × 32 torus topology.

Nodes' deterministic and randomized views have been set to a size of six, each. For clarity of the snapshots, only the best four outgoing links of each node are shown in the figure. Note the existence of either one or two lines between two connected nodes. This is because links are directed. A single line denotes a single link only from one node to the other (directionality not shown). A double line means that both nodes have established a link to each other. In the completed target topology (last snapshot) all links are double.
Experimental Analysis Let us now observe the progress of self-organization for different network sizes and protocol configurations. Figure 6 plots the fraction of missing target links as a function of the experiment round, for networks of size 2 12 , 2 14 , and 2 16 nodes, respectively. For each network size we present the progress of five different configurations. For a fair comparison, we have fixed the total number of descriptors exchanged in a single round by a node's active thread to 12.
The thick solid blue and green lines correspond to trading exclusively deterministic or randomized links, respectively. That is, all 12 links exchanged come either from the deterministic view or from the randomized view, respectively. The fine line of a given color corresponds to a very close configuration to its solid line counterpart, where just one link has been reserved for trading neighbors of the other view type. E.g., a fine blue line corresponds to the settings g str =11 and g rnd =1. Finally, the red line corresponds to an equally balanced use of determinism and randomness: six links are being traded per round for each view type.
A number of observations can be made from these graphs. Most importantly, we easily identify determinism as the primary component responsible for efficient self-organization. On the contrary, when randomness is used alone (solid green line), it performs several orders of magnitude worse than the other protocol configurations, whose performances are comparable to each other. This indicates that, for the given target topology, the crucial element accelerating self-organization is determinism.
It is not hard to see why using randomness alone is so inefficient. A node's only chance to find a target neighbor is if that neighbor shows up in its peer sampling service view, which is periodically refreshed with random nodes. In other words, a node is fishing for target neighbors blindly. As expected, its time to converge increases significantly as the size of the network grows, since the probability of spotting a target link at random diminishes. Note that just a "pinch" of structure in a nearly random-only configuration (fine green line) brings a dramatic improvement to the outcome. This emphasizes the importance of structure, particularly in an overlay as navigable as a torus topology. In this scenario, a node has plenty of random input, while that single structured link deterministically brings it closer to its target neighborhood in each round.
When determinism is in exclusive control (blue line), convergence comes fast as node views deterministically improve in each round. An important observation, though, is that in all network sizes, the determinism-only experiment slows down when approaching complete convergence. This can be explained as follows. In these experiments nodes are initialized with a few random links all over the network, which are generally long-range links. Nodes that are privileged to be initialized with links close to their target neighborhood take a shortcut and converge very fast, replacing all their initial random links with very specific, short ones. Soon enough, the network becomes nearly converged, and nearly all long-range links have been replaced by local ones. This, however, creates an obstacle to nodes that have not managed to converge yet, as they can only navigate slowly, in small local steps, towards their target neighborhoods, "crawling" in an almost converged overlay. The aforementioned issue is circumvented by adding a "pinch" of randomness in an otherwise fully-deterministic configuration (fine blue line). This provides nodes with an extra source of random, potentially long-range, links. In accordance with our explanation in the previous paragraph, this visibly accelerates the last few stages of convergence.
Quite clearly, the balanced use of determinism and randomness (red line) outperforms all other configurations. This is a firm validation of our claim that both policies have distinct advantages to offer, which are best utilized in combination.
Determinism vs. Randomness Space Exploration
Having developed an understanding on the specific roles of determinism and randomness in self-organization in a torus topology, we now run an extensive set of experiments to create a complete picture of their interaction.
We considered eight different network sizes, namely 2 10 (1024), 2 11 , 2 12 , 2 13 , 2 14 , 2 15 , 2 16 , and 2 17 (131072), and for each network size we considered all possible combinations of deterministic and randomized gossip lengths, so that the total gossip length stays fixed and equal to 12. This accounts for 13 experiments per network size, that is, all combinations such that g str ∈ [0, 12] and g rnd = 12 -g str . For each experiment we recorded the number of rounds it took to establish 99% of the target links.
Figure 7(a) presents the results of these experiments. Each experiment is represented by a single dot, while dots corresponding to experiments on the same network size have been connected by lines. The lowest line corresponds to networks of 2 10 nodes and the highest one to networks of 2 17 nodes. The horizontal axis shows the specific combination of determinism and randomness used in each experiment. More specifically, the value on the horizontal axis corresponds to the g str value of each configuration.
The first observation is that the dynamics of convergence follow the same patterns in all network sizes. It should be particularly noted that these results correspond to a single run per configuration, which prevents loss of information due to averaging.
The most distinguishing message from this graph is that the use of randomness alone (rightmost column) performs orders of magnitude worse than any other configuration. It can also be observed that the other extreme, that is, complete determinism (leftmost column), performs a bit worse than most other configurations that combine the two.
Node Joins In addition to the experiments carried out so far, where all nodes start the VICINITY protocol at the same time, we also want to explore the behavior of VICINITY when nodes join an already converged overlay.
We considered the same combinations of network sizes and protocol configurations as the ones of Figure 7(a). In each experiment, we first let the network converge to the target topology, and then we inserted a new node initialized with exactly one neighbor picked at random, and we recorded how many rounds it took for that node to find its target neighbors.
Figure 7(b) shows the number of rounds it took a node to reach its target vicinity in networks of the aforementioned sizes and configurations.
As expected, purely randomized views result in slower convergence. However, we observe remarkably bad behavior also for determinism-only configurations. The explanation is that, as has also been discussed earlier, navigating in an already converged overlay in the absence of random long-range shortcuts is a slow process.
These graphs emphasize our claim, that neither of the two policies is sufficiently good on its own. Determinism and randomness appear to be complementary in creating structure.
Clustering Nodes in Groups
Overview In this scenario, we assign each node a group ID, which constitutes its profile. The goal is to form clusters of nodes that share the same group IDs. From a node's perspective, the goal is to establish links to other nodes with the same group ID.
The only comparison operator defined on node profiles is equality of group IDs. By comparing their profiles, nodes can tell whether they belong to the same group or not. However, no other type of comparison or proximity metrics apply: any foreign group is "equally foreign", there is no notion of ranking or proximity. The target topology has been explicitly selected to form a non-connected, non-navigable graph, to shed light on the operation of VICINITY in such overlays.
The selection function SELECT(p, D, k) first picks, out of D, the descriptors of nodes whose group ID matches p's. If these are fewer than k, it continues by selecting randomly from the rest of the descriptors.
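A sketch of this group-based selection, with descriptors modeled as dictionaries and illustrative names:

import random

def select_group(p_group, descriptors, k):
    """SELECT for the grouping scenario: prefer descriptors of p's own group and,
    if fewer than k are available, fill up with randomly chosen foreign ones."""
    same = [d for d in descriptors if d["group"] == p_group]
    if len(same) >= k:
        return same[:k]
    others = [d for d in descriptors if d["group"] != p_group]
    return same + random.sample(others, min(k - len(same), len(others)))

candidates = [{"id": i, "group": g} for i, g in enumerate([7, 3, 7, 5, 1])]
print(select_group(7, candidates, 3))  # both group-7 descriptors plus one random other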
Similarly to the torus scenario, Figure 8 provides a graphical illustration of a 1024-node network self-organizing into the target overlay. Again, nodes' deterministic and randomized views have been set to a size of six, each. Nodes are assigned group IDs such that a total of 16 groups exist, each having 64 nodes. Nodes are plotted in a layout that places members of the same group together, purely to make visualization intuitive. As far as the protocol operation is concerned, nodes do not have coordinates, but only their group ID. To avoid cluttering the graph, only two random outgoing links of each node's V str view are shown, with links to foreign groups given higher priority. This way, when a node in Figure 8 appears to have no links to groups other than its own, it is guaranteed that all its V str links point at nodes within its group.
Experimental Analysis Figure 9 presents the progress of self-organization of the grouping scenario, for the same network sizes and protocol settings used in the torus overlay. That is, network sizes of 2 12 , 2 14 , and 2 16 have been considered, and the sum of the structured (g str ) and randomized (g rnd ) gossip length has been fixed to 12. Note that in this scenario, the group size is fixed to 64 nodes, therefore the networks of 4096, 16384, and 65536 nodes consist of 64, 256, and 1024 groups, respectively.
To better interpret the experimental results, we should build a good understanding of nodes' goals in this scenario. A node's task is divided into two steps: first, discover the right cluster; second, get well connected in it. The deterministic component of VICINITY excels in the second. Through a single link to the target cluster, a node rapidly learns and becomes known to additional nodes in that cluster. It turns out that the crucial step in this test case is the first one: discovering the target cluster.
Returning now to the results of Figure 9, the most important observation is that, contrary to the torus scenario, randomness is clearly the key component for self-organization. Determinism alone (solid blue line) is consistently unable to let nodes find their group partners, indefinitely failing to build the target topology. It is not hard to see why pure determinism fails. As nodes start clustering with other nodes of the same group, the pool of intergroup links in the network shrinks significantly. As explained above, once a node forms a link and gossips to one other node of its group, chances are it will acquire plenty of links to more nodes of the same group, rapidly trading its intergroup for intragroup links. In not so many rounds, most nodes end up having neighbors from their own groups exclusively. The problem comes with nodes that have not encountered nodes of their group early enough. If a node's neighbors are all from other groups, and these groups have already clustered into closed, self-contained clusters, the node has no chances whatsoever to be handed a link to a node of its own group, ever. A neighbor from such a self-contained foreign group can only provide alternative neighbors of that same, foreign group. The node, thus, finds itself in a dead end.
This demonstrates the need of a source of random, long-range links, to prevent such dead end scenarios. Indeed, just a "pinch" of randomness (fine blue line) is enough to save the day. It may not account for the most efficient configuration, but it clearly bridges the huge gap between dead end and convergence. This is a particularly significant observation, as it clearly demonstrates that involving randomness, even just a "pinch" of it, is not just a matter of performance, but a matter of correctness.
When randomness acts on its own (solid green line), exposing each node to 12 random links in each round, convergence is certainly faster. However, in the absence of the deterministic component of VICINITY, a node has to rely on randomness to discover independently each of the 12 target nodes of the same group.
Augmenting an almost complete randomness-based configuration with just a "pinch" of determinism (fine green line), gives the best achievable results. This was expected. Nodes, in this configuration, put nearly all of their communication quota on the hunt for same group nodes, through randomization. At the same time, this single link they reserve for targeted, deterministic communication, is sufficient to let them discover very fast all nodes of their group once they have discovered at least one of them.
Finally, the middleground configuration (red line), combining the deterministic and randomized components each with a gossip length of six descriptors, performs reasonably well in all cases, even if giving higher priority on randomness seems to improve things further for large networks.
Determinism vs. Randomness Space Exploration Similarly to the torus scenario, we perform a number of experiments to assess the performance of all combinations of determinism and randomness for a number of different network sizes.
Figure 10(a) plots the number of rounds needed for each experiment to build the target overlay. Recall that in the node grouping scenario, we identified randomness as being the key component for self-organization. This is clearly depicted in this graph, as the more randomness we use the faster we converge. However, when randomness is used exclusively, without any assistance from determinism (rightmost column), convergence is slower.
Note that experiments corresponding to a determinism-only configuration (leftmost column) did not converge, hence they were omitted from the plots.
Node Joins Finally, we want to assess the number of rounds it takes new nodes to find their location in the target overlay, when joining an already converged network.
Figure 10(b) presents the results of these experiments. In accordance with the number of rounds it takes a whole network to converge from scratch, the number of rounds it takes nodes to join already converged overlays is very comparable.
The clear message from this graph is that, as we have consistently experienced also in our previous experiments, the two extremes should be avoided. Pure determinism in the case of node grouping, with a non-connected target structure, must be avoided at all costs, as it will fail to build the target overlay. Pure randomness should also be avoided, as it will provide poor performance.
Concluding our entire evaluation of self-organization, we can state that picking a configuration that balances determinism with randomness is a safe option for a system that self-organizes the network efficiently and works for diverse topologies.
Related Work
The work most closely related to VICINITY is the T-MAN protocol, by Jelasity et al. [START_REF] Jelasity | T-Man: Gossip-based Overlay Topology Management[END_REF][START_REF] Jelasity | T-Man: Gossip-based overlay topology management[END_REF][START_REF] Jelasity | T-man: Gossip-based fast overlay topology construction[END_REF]. T-MAN is focused exclusively on the deterministic structuring aspect in self-organization of overlays. Although its design does employ a peer sampling service, this is used exclusively for providing nodes with random views once, during initialization, as well as for synchronizing nodes to start the topology building process together. As such, it is targeted at bootstrapping overlays, rather than maintaining them under dynamic network conditions. For example, garbage collection for stale descriptors and support for nodes joining an already converged overlay have not been considered in the design. The baseline version of VICINITY, shown in Figure 2, is nearly equivalent to the T-MAN protocol.
Earlier efforts for self-organization of overlays have led to solutions that are tailormade for specific applications, such as [START_REF] Voulgaris | Epidemic-style Management of Semantic Overlays for Content-Based Searching[END_REF], which clusters users of a file-sharing application based on the content they share.
BuddyCast is a file recommendation mechanism embedded in the Tribler [START_REF] Pouwelse | Tribler: a social-based peer-to-peer system: Research articles[END_REF] Bit-Torrent client. Inspired by [START_REF] Voulgaris | Epidemic-style Management of Semantic Overlays for Content-Based Searching[END_REF], it essentially constitutes a deployment of VICINITY in the real world, clustering users by their file content preferences, to provide them with relevant recommendations.
Conclusions
Does randomness matter? The main conclusion from our research is a clear affirmative answer. In some cases, having probabilistic decision-making is even necessary.
In our study, we have concentrated exclusively on overlay construction and maintenance. For this domain it is also clear that structure matters as well. Having only randomness may severely affect the behavior of our overlay-maintenance algorithm. What is striking, however, is that adding either a pinch of randomness accompanying an otherwise deterministic technique, or a pinch of structure to an otherwise fully random process, can have dramatic effects. In our examples we have been able to trace with reasonable confidence why such pinches of randomness or structure helped, but there is still much research to be done when it comes to developing more general insights and to identifying which classes of algorithms and data structures benefit from randomness and which do not.
The foundational question is why a specific mix of randomness and structure works so well, and how much of a pinch will indeed do the job. Our study sheds some light on this question, but also makes clear that much more work, extending to other subfields, is necessary to come to a principled approach when designing large-scale distributed systems.
Fig. 1. The VICINITY framework.
Fig. 3. Self-organization in a 100 × 100 torus, demonstrating the performance for different versions of VICINITY, ranging from the baseline to the complete one.
Fig. 4. The complete VICINITY protocol.
Fig. 6. Progress of self-organization in a torus overlay, for different configurations of VICINITY and a total gossip length (g str +g rnd ) fixed to 12.
Fig. 7. Structure vs. Randomness in a torus topology. These graphs show the number of rounds it takes to reach the 99th percentile of convergence when bootstrapping an entire network (left), and the number of rounds for new nodes to join an already converged overlay (right). In all experiments, exactly 12 links are being exchanged by nodes when gossiping. Each line corresponds to a different network size (from 2 10 at the bottom to 2 17 at the top), and each dot corresponds to a different allocation of the 12 gossip slots to structured and randomized links.
Fig. 8. Self-organization into 16 groups of 64 nodes each, in a 1024-node network.
Fig. 9. Progress of self-organization in disjoint groups, for different configurations of VICINITY and a total gossip length (g str +g rnd ) fixed to 12.
Fig. 10. Structure vs. Randomness in group clustering. The number of rounds it takes to reach the 99th percentile of convergence when bootstrapping an entire network (left), and the number of rounds for new nodes to join an already converged overlay (right). Experiments corresponding to value 0 of the horizontal axis do not converge, as they rely exclusively on determinism without any pinch of randomness.
| 50,686 | [ "831879", "831880" ] | [ "62433", "62433" ] |
01480796 | en | [ "info" ] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01480796/file/978-3-642-45065-5_9_Chapter.pdf | Chamikara Jayalath
email: [email protected]
Julian Stephen
email: [email protected]
Patrick Eugster
email: [email protected]
Atmosphere: A Universal Cross-Cloud Communication Infrastructure
Keywords: cloud, publish/subscribe, unicast, multicast, multi-send
or cloud-of-clouds [2], the landscape of third-party computation is moving beyond straightforward single datacenter-based cloud computing. However, building applications that execute efficiently across data-centers and clouds is tedious due to the variety of communication abstractions provided, and variations in latencies within and between datacenters. The publish/subscribe paradigm seems like an adequate abstraction for supporting "cross-cloud" communication as it abstracts low-level communication and addressing and supports many-to-many communication between publishers and subscribers, of which one-to-one or one-to-many addressing can be viewed as special cases. In particular, content-based publish/subscribe (CPS) provides an expressive abstraction that matches well with the key-value pair model of many established cloud storage and computing systems, and decentralized overlaybased CPS implementations scale up well. On the flip side, such CPS systems perform poorly at small scale. This holds especially for multi-send scenarios which we refer to as entourages that range from a channel between a publisher and a single subscriber to a broadcast between a publisher and a handful of subscribers. These scenarios are common in datacenter computing, where cheap hardware is exploited for parallelism (efficiency) and redundancy (fault-tolerance). In this paper, we present Atmosphere, a CPS system for cross-cloud communication that can dynamically identify entourages of publishers and corresponding subscribers, taking geographical constraints into account. Atmosphere connects publishers with their entourages through überlays, enabling low latency communication. We describe three case studies of systems that employ Atmosphere as communication framework, illustrating that Atmosphere can be utilized to considerably improve cross-cloud communication efficiency.
Introduction
Consider recent paradigm shifts such as the advent of cloud brokers [START_REF] Grivas | Cloud Broker: Bringing Intelligence into the Cloud[END_REF] for mediating between different cloud providers, the cloud-of-clouds [START_REF] Bessani | DepSky: Dependable and Secure Storage in a Cloud-of-Clouds[END_REF] paradigm denoting the integration of different clouds, or fog computing [START_REF] Bonomi | Fog Computing and its Role in the Internet of Things[END_REF] which similarly signals a departure from straightforward third-party computing in a single datacenter. However, building cross-cloud applications -applications that execute across datacenters and clouds -is tedious due to the variety of abstractions provided (e.g., Infrastructure as a Service vs. Platform as a Service). (This work was supported by DARPA grant # N11AP20014, PRF grant # 204533, Google Research Award "Geo-Distributed Big Data Processing", and Cisco Research Award "A Fog Architecture".)
Cross-cloud communication. One particularly tedious aspect of cross-cloud integration, addressed herein, is communication. Providing a communication middleware solution which supports efficient cross-cloud deployment requires addressing a number of challenges. A candidate solution should namely
R1. support a variety of communication patterns (e.g., communication rate, number of interacting entities) effectively. Given the variety of target applications (e.g., social networking, web servers), the system must be able to cope with one-to-one communication as well as different forms of multicast (one-to-many, many-to-many). In particular, the system must be able to scale up as well as down ("elasticity") based on current needs [START_REF] Li | A Scalable and Elastic Publish/Subscribe Service[END_REF] such as the number of communicating endpoints.
R2. run on standard "low-level" network layers and abstractions without relying on any specific protocols such as IP Multicast [START_REF] Deering | Multicast Routing in Datagram Internetworks and Extended LANs[END_REF] that may be deployed in certain clouds but not supported in others or across them [START_REF] Vigfusson | Multicast: Rx for Data Center Communication Scalability[END_REF].
R3. provide an interface which hides cloud-specific hardware addresses and integrates well with the abstractions of widespread cloud storage and computing systems in order to support a wide variety of applications.
R4. operate efficiently despite varying network latencies within/across datacenters.
Publish/subscribe for the cloud. One candidate abstraction is publish/subscribe. Components act as publishers of messages, and dually as subscribers by delineating messages of interest. Examples of publish/subscribe services designed for and/or deployed in the cloud include Amazon's Simple Notification Service (SNS) [START_REF]Amazon SNS[END_REF], Apache Hedwig [START_REF]Apache BookKeeper: Hedwig[END_REF], LinkedIn's Kafka [START_REF] Kreps | Kafka: a Distributed Messaging System for Log Processing[END_REF], or Blue Dove [START_REF] Li | A Scalable and Elastic Publish/Subscribe Service[END_REF]. Intuitively, publish/subscribe is an adequate abstraction because it supports generic many-to-many interaction, shields applications from specific lower-level communication -in particular hardware addresses -thus supporting application interoperability and portability. In particular, contentbased publish/subscribe (CPS) [START_REF] Carzaniga | Achieving Scalability and Expressiveness in an Internet-scale Event Notification Service[END_REF][START_REF] Pietzuch | Hermes: A Distributed Event-Based Middleware Architecture[END_REF][START_REF] Fiege | Supporting Mobility in Content-Based Publish/Subscribe Middleware[END_REF][START_REF] Aguilera | Matching Events in a Content-based Subscription System[END_REF][START_REF] Li | A Unified Approach to Routing, Covering and Merging in Publish/Subscribe Systems Based on Modified Binary Decision Diagrams[END_REF] promotes an addressing model based on message properties and corresponding values (subscribers delineate values of interest for relevant properties) which matches well the key-value pair abstractions used by many cloud storage (e.g., [START_REF] Decandia | Dynamo: Amazon's Highly Available Key-Value Store[END_REF][START_REF] Das | G-Store: A Scalable Data Store for Transactional Multi Key Access in the Cloud[END_REF]) and computing (e.g., [START_REF] Dean | MapReduce: Simplified Data Processing on Large Clusters[END_REF]) systems.
Limitations. However, existing publish/subscribe systems for the cloud are not designed to operate beyond single datacenters, and CPS systems focus on scaling up to large numbers of subscribers: to "mediate" between published messages and subscriptions, CPS systems typically employ an overlay network of brokers, with filtering happening downstream from publishers to subscribers based on upstream aggregation of subscriptions. When messages from a publisher are only of interest to one or few subscribers, such overlay-based multi-hop routing (and filtering) will impose increased latency compared to a direct multi-send via UDP or TCP from the publisher to its subscribers. Yet such scenarios are particularly widespread in third-party computing models, where many cheap resources are exploited for parallelism (efficiency) or redundancy (fault-tolerance). A particular example is distributed file systems, which store data in a redundant manner to deal with crash failures [START_REF]Apache HDFS[END_REF], thus leading to frequent communication between an updating component and (typically 3) replicas. Another example of multi-sends is (group) chat sessions in social networks.
Existing approaches to adapting interaction and communication between participants based on actual communication patterns (e.g., [START_REF] Voulgaris | Sub-2-Sub: Self-Organizing Content-Based Publish Subscribe for Dynamic Large Scale Collaborative Networks[END_REF][START_REF] Li | A Scalable and Elastic Publish/Subscribe Service[END_REF][START_REF] Tariq | Distributed Spectral Cluster Management: A Method for Building Dynamic Publish/Subscribe Systems[END_REF]) are agnostic to deployment constraints such as network topology. Topic-based publish/subscribe (TPS) [START_REF]Active MQ[END_REF][START_REF]Websphere MQ[END_REF] -where messages are published to topics and delivered to consumers based on topics they subscribed to -is typically implemented by assigning topics to nodes. This limits the communication hops in multi-send scenarios, but also the number of subscribers.
In short, CPS is an appealing, generic, communication abstraction (R2, R3), but existing implementations are not efficient at small scale (R1), or, when adapting to application characteristics, do not consider deployment constraints in the network (R4); inversely, TPS is less expressive than CPS, and existing systems do not scale up as well.
Atmosphere. This paper describes Atmosphere, a middleware solution that aims at supporting the expressive CPS abstraction across datacenters and clouds in a way which is effective for a wide range of communication patterns. Specifically, our goal is to support the extreme cases of communication between individual pairs of publishers and subscribers (unicast) and large scale CPS, and to elastically scale both up and down between these cases, whilst providing performance which is comparable to more specialized solutions for individual communication patterns. This allows applications to focus on the logical content of communication rather than on peer addresses even in the unicast case: application components need not contain hardcoded addresses or use corresponding deployment parameters as the middleware automatically determines associations between publishers and subscribers based on advertisements and subscriptions.
Our approach relies on a CPS-like peer-based overlay network which is used primarily for "membership" purposes, i.e., to keep participants in an application connected, and as a fallback for content-based message routing. The system dynamically identifies clusters of publishers and their corresponding subscribers, termed entourages while taking network topology into account. Members of such entourages are connected directly via individual "over-overlays" termed überlays, so that they can communicate with low latency. The überlay may only involve publishers and subscribers or may involve one or many brokers depending on entourage characteristics and resource availabilities of involved publishers, subscribers, brokers, and network links. In any case, these direct connections which are gradually established based on resource availabilities, will effectively reduce the latency of message transfers from publishers to subscribers.
Contributions. Atmosphere adopts several concepts proposed in earlier CPS systems. In the present paper, we focus on the following novel contributions of Atmosphere:
1. a technique to dynamically identify entourages of publishers and their corresponding subscribers in a CPS system. Our technique hinges on precise advertisements. To not compromise on flexibility, advertisements can be updated at runtime;
2. a technique to efficiently and dynamically construct überlays interconnecting members of entourages with low latency based on resource availabilities;
3. the implementation of a scalable fault-tolerant CPS system for geo-distributed deployments named Atmosphere that utilizes our entourage identification and überlay construction techniques;
4. an evaluation of Atmosphere using real-life applications, including social networking, news feeds, and the ZooKeeper [START_REF]Apache ZooKeeper[END_REF] distributed lock service, demonstrating the efficiency and versatility of Atmosphere through performance improvements over more straightforward approaches.
Roadmap. Section 2 provides background information and related work. Section 3 presents our protocols. Section 4 introduces Atmosphere. Section 5 evaluates our solution. Section 6 draws conclusions.
Background and Related Work
This section presents background information and work related to our research.
System Model
We assume a system of processes communicating via unicast channels spanning g cloud datacenters or more generally regions. Regions may be operated by different cloud providers. Each region contains a number of components that produce messages and/or that are interested in consuming messages produced. Figure 1(a) shows an example system with three regions from two different providers where each region hosts a single producing and multiple consuming components.
CPS Communication
With content-based publish/subscribe (CPS), a message produced by a publisher contains a set of property-value pairs; inversely, components engage in the consumption of messages by issuing subscriptions which consist of ranges of values -typically defined indirectly through operators such as ≤ or ≥ and corresponding threshold values.
A broker overlay network typically mediates the message distribution between publishers and subscribers. A broker, when receiving a message, analyzes the set of property-value pairs, and forwards the message to its neighbors accordingly. (For alignment with the terminology used in clouds we may refer to properties henceforth as keys.) Siena [START_REF] Carzaniga | Design and Evaluation of a Wide-Area Event Notification Service[END_REF] is a seminal CPS framework for distributed wide-area networks that spearheaded the above-mentioned CPS overlay model. Siena's routing layer consists of broker nodes that maintain the interests of sub-brokers and end hosts connected to them in a partially ordered set (poset) structure. The root of the poset is sent to the parent broker to which the given broker is subscribed. CPS systems like Siena employ subscription summarization [START_REF] Carzaniga | Achieving Scalability and Expressiveness in an Internet-scale Event Notification Service[END_REF][START_REF] Triantafillou | Subscription Summarization: A New Paradigm for Efficient Publish/Subscribe Systems[END_REF], allowing each broker to construct a summary of the interests of the subscribers and brokers connected to it. This summary is sent to neighboring brokers. A broker that receives a published message determines the set of neighbors to which the message has to be forwarded by analyzing the corresponding subscription summaries. Summaries are continuously updated to reflect the changes to the routing network, occurring for instance through joins, leaves, and failures of subscribers or brokers.
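As a toy illustration of the idea (not Siena's actual poset-based mechanism), a broker could summarize the interests of its children by taking, per key, an interval that covers all of their subscriptions, and forward only that cover upstream:

def summarize(subscriptions):
    """Per key, the smallest interval covering all child subscriptions (a coarse
    summary a broker could forward to its parent)."""
    summary = {}
    for sub in subscriptions:
        for key, (lo, hi) in sub.items():
            cur_lo, cur_hi = summary.get(key, (lo, hi))
            summary[key] = (min(cur_lo, lo), max(cur_hi, hi))
    return summary

print(summarize([{"temp": (0, 50)}, {"temp": (40, 90), "hum": (10, 20)}]))
# {'temp': (0, 90), 'hum': (10, 20)}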
Existing CPS System Limitations
When deployed naïvely, i.e., without considering topology, in the considered multiregion model (see Figure 1(a)) CPS overlays will perform poorly especially if following a DAG as is commonly the case, due to the differences in latencies between intra-and inter-region links. To cater for such differences, a broker network deployed across regions could be set up such that (1) brokers in individual regions are hierarchically arranged and each subscriber/publisher is connected to exactly one broker (see Figure 1(b)), and (2) root brokers of individual regions are connected (no DAG). The techniques that we propose shortly are tailored to this setup.
However, the problem with such a deployment is still that -no matter how well the broker graph matches the network topology -routing will happen in most cases over multiple hops which is ineffective for multi-send scenarios where few subscribers only are interested in messages of a given publisher. In the extreme case where messages produced by a publisher are consumed by a single subscriber there will be a huge overhead from application-level routing and filtering over multiple hops compared to a direct use of UDP or TCP. The same holds with multiple subscribers as long as the publisher has ample local resources to serve all subscribers over respective direct channels.
While several authors have proposed ways to identify and more effectively interconnect matching subscribers and publishers, these approaches are deployment-agnostic in that they do not consider network topology (or resource availabilities). Thus they trade logical proximity (in the message space) for topological proximity.
Majumder et al. [START_REF] Majumder | Scalable Content-Based Routing in Pub/Sub Systems[END_REF] for instance show that using a single minimum spanning or a Steiner tree will not be optimal for subscriptions with differing interests. They propose a multiple tree-based approach and introduce an approximation algorithm for finding the optimum tree for a given type of publications. But unlike in our approach, these trees are location-agnostic; hence, when applied to our model, a given tree may contain brokers/subscribers from multiple regions, and a given message may get transmitted across region boundaries multiple times, unnecessarily increasing the transmission latency. Sub-2-Sub [START_REF] Voulgaris | Sub-2-Sub: Self-Organizing Content-Based Publish Subscribe for Dynamic Large Scale Collaborative Networks[END_REF] uses gossip-based protocols to identify subscribers with similar subscriptions and interconnect them in an effective manner along with their publishers. In this process, network topology is not taken into account, which is paramount in a multi-region setup with varying latencies. Similarly, Tariq et al. [START_REF] Tariq | Distributed Spectral Cluster Management: A Method for Building Dynamic Publish/Subscribe Systems[END_REF] employ spectral graph theory to efficiently regroup and connect components with matching interests, but do not take network topology or latencies into account. Thus these systems cannot be readily deployed across regions. Publiy+ [START_REF] Kazemzadeh | Publiy+: A Peer-Assisted Publish/Subscribe Service for Timely Dissemination of Bulk Content[END_REF] introduces a publish/subscribe framework optimized for bulk data dissemination. Similar to our approach, brokers of Publiy+ identify publishers and their interested subscribers and instruct them to directly communicate for disseminating large bulk data. Publiy+ uses a secondary content-based publish/subscribe network only to connect publishers and interested subscribers in different regions. Publiy+ is not designed for dissemination of large amounts of small messages since the data dissemination between publishers and subscribers is always direct and the publish/subscribe network is only used to form these direct connections.
Other Solutions for Cloud Communication
Cloud service providers such as Microsoft and Amazon have introduced content delivery networks (CDNs) for communication between their datacenters. Microsoft Azure CDN caches Azure blob content at strategic locations to make them available around the globe. Amazon's CloudFront is a CDN service that can be used to transfer data across Amazon's datacenters. CloudFront can be used to transfer both static and streamed content using a global network of edge locations. CDNs focus on stored large multimedia data rather than on live communication. Also, both above-mentioned CDN networks can be used only within their respective service provider boundaries and regions.
Volley [START_REF] Agarwal | Volley: Automated Data Placement for Geo-Distributed Cloud Services[END_REF] strategically partitions geo-distributed stored data so that the individual data items are placed close to the global "centroid" of the past accesses.
Use of IP Multicast has been restricted in some regions and across the Internet due to difficulties arising with multicast storms or multicast DOS attacks. Dr. Multicast [START_REF] Vigfusson | Multicast: Rx for Data Center Communication Scalability[END_REF] is a protocol that can be implemented to mitigate these issues. The idea is to introduce a new logical group addressing layer on top of IP Multicast so that access to physical multicast groups and data rates can be controlled with an acceptable use policy. This way system administrators can place caps on the amount of data exchanged in groups and the members that can participate in a group. Dr. Multicast specializes in intra-datacenter communication and does not consider inter-datacenter communication.
Entourage Communication
In this section, we introduce our solution for efficient communication between publishers and "small" sets of subscribers on a two-level geo-distributed CPS network of brokers with hierarchical deployments within individual regions as outlined in Figure 2 for two regions. This solution can be adapted to existing overlay-based CPS systems characterized in Section 2.2.
Fig. 2. Broker Hierarchies
Definition of Entourages
The range of messages published by a publisher p is identified by its advertisement τ_p, which, as is customary in CPS, includes the keys and the value ranges for each key. Analogously, the interest range of each subscriber or broker n is denoted by τ_n. τ_p ∩ τ_n denotes the common interest between a publisher p and a subscriber or broker n.
We define the interest match between a publisher p and a subscriber/broker n as a numerical value that represents the fraction of the publisher's messages that the subscriber/broker is interested in, assuming the publisher has an equal probability of publishing a message with any given value within its range. If the range τ_p of p is denoted by key_1, range_1, key_2, range_2, ..., key_x, range_x and key_1, range'_1, key_2, range'_2, ..., key_x, range'_x denotes the range τ_n of n, then the interest match is given by:

\prod_{i=1}^{x} \frac{|range_i \cap range'_i|}{|range_i|}

So, the interest match is defined as the product, over the keys, of the fraction of the publisher's value range for each key that intersects with the corresponding subscriber range. If the ranges that correspond to a given key have an empty intersection, then n is not interested in messages within the publisher's value range for that key, and hence the interest match is zero.
A publisher p and a set Φ p of subscribers/brokers form a ψ-close entourage if each member of Φ p has at least a ψ interest match with p where 0 ≤ ψ ≤ 1. ψ is a parameter that defines how close the cluster is to a topic. If ψ = 1, each member of the cluster is interested in every message published by p, hence the cluster can be viewed as a topic.
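To make the definition concrete, the following Java sketch computes the interest match as the product of per-key overlap fractions and tests ψ-closeness. The class and method names, and the representation of ranges as closed numeric intervals, are our own illustrative assumptions rather than Atmosphere's actual API.

```java
import java.util.Map;

// Closed numeric interval used to represent a value range for one key (illustrative only).
final class Range {
    final double lo, hi;                        // inclusive bounds, lo <= hi
    Range(double lo, double hi) { this.lo = lo; this.hi = hi; }
    double length() { return hi - lo; }
    double overlap(Range other) {               // length of the intersection, 0 if disjoint
        return Math.max(0.0, Math.min(hi, other.hi) - Math.max(lo, other.lo));
    }
}

final class InterestMatch {
    // Product over the publisher's keys of |range_i ∩ range'_i| / |range_i|.
    static double match(Map<String, Range> publisherAd, Map<String, Range> subscriberInterest) {
        double m = 1.0;
        for (Map.Entry<String, Range> e : publisherAd.entrySet()) {
            Range pub = e.getValue();
            Range sub = subscriberInterest.get(e.getKey());
            if (sub == null) return 0.0;        // no interest registered for this key
            if (pub.length() == 0) continue;    // degenerate publisher range: ignored in this sketch
            m *= pub.overlap(sub) / pub.length();
            if (m == 0.0) return 0.0;           // empty intersection on some key => zero match
        }
        return m;
    }

    // A node n belongs to p's ψ-close entourage if its interest match with p is at least ψ.
    static boolean inEntourage(Map<String, Range> publisherAd,
                               Map<String, Range> subscriberInterest, double psi) {
        return match(publisherAd, subscriberInterest) >= psi;
    }
}
```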
Solution Overview
Next we describe our solution to efficient cross-cloud communication in entourages. The solution consists of three main parts which we describe in turn.
1. A decentralized protocol that can be used to identify entourages in a CPS system.
2. A mechanism to determine the maximum number K_p of direct connections a given publisher p can maintain without adversely affecting message transmission.
3. A mechanism to efficiently establish auxiliary networks, termed überlays, between publishers and their respective subscribers using the information from the above two.
Entourage Identification
We describe the DCI (dynamic entourage identification) protocol that can be used to identify entourages in a CPS-based application. The protocol assumes the brokers in region i to form a hierarchy, starting from one or more root brokers. An abstract version of the protocol is given in Figure 3. The protocol works by disseminating a message named COUNT along the message dissemination path of publishers. A message initiated by a publisher p contains the τ_p and ψ values. Once the message reaches a root node of the publisher's region, it is forwarded to each of the remote regions.
The brokers implement two main event handlers: (1) to handle COUNT messages (line 7) and (2) to handle replies to COUNT messages -COUNTREPLY messages (line 29).
COUNT messages are embedded into advertisements and carry the keys and value ranges of the publisher. When a broker receives a COUNT message via event handler (1), it first determines the subscribers/brokers directly attached to it that have an interest match of at least ψ with the publisher p. If there is at least one subscriber/broker with a non-zero interest match that is smaller than ψ, then the COUNT message is not forwarded to any child. Otherwise the COUNT message is forwarded to all interested children. This is because children with less than ψ interest match are not considered to be direct members of p's entourage, and yet messages published by p have to be transmitted to all interested subscribers, including those with less than ψ interest match. In such a situation, instead of creating direct connections with an ancestor node and some of its descendants, we choose to establish direct connections only with the ancestor node, since establishing direct connections with both an ancestor and a descendant would result in duplicate message delivery and unfair latency advantages for a portion of the subscribers. A subscriber or a broker that does not forward a COUNT message immediately creates a COUNTREPLY message with its own information and sends it back to its parent.
A broker does not add its own information to the reply sent to the parent broker if the broker forwarded the COUNT message to exactly one child. This is because a broker that is only used to transfer traffic between two other brokers or a broker and a subscriber has a child that has the same interest match with p but is hop-wise closer to the subscribers. This child is a better match when establishing an entourage.
In the latter event handler (2), a broker aggregates COUNTREPLY messages from its children that have at least a ψ interest match with p, and sends this information to its respective parent broker through a new COUNTREPLY message. Aggregated COUNTREPLY messages are ultimately sent to p. To stop the COUNTREPLY messages from growing indefinitely, a broker may truncate COUNTREPLY messages that are larger than a predefined size M. When truncating, entries from the lowest levels of the hierarchy are removed first. When removing an entry, the entries of all its siblings (i.e., entries that have the same parent) are also removed. This is because, as mentioned before, our entourage establishment protocol does not create direct connections with both an ancestor node and one of its descendants.
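The core forwarding rule of handler (1) and the self-inclusion rule for replies can be condensed into the hedged Java sketch below; it captures only the decision logic described above, and the class names, the representation of matches as a map, and the omission of transport, aggregation and truncation are our own simplifications, not Atmosphere's implementation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/** Illustrative sketch of the forwarding decision a broker makes when it receives a COUNT message. */
final class CountDecision {
    final boolean forward;          // should the COUNT message be forwarded at all?
    final List<String> children;    // children with an interest match of at least psi

    private CountDecision(boolean forward, List<String> children) {
        this.forward = forward;
        this.children = children;
    }

    static CountDecision decide(Map<String, Double> matchByChild, double psi) {
        List<String> interested = new ArrayList<>();
        for (Map.Entry<String, Double> e : matchByChild.entrySet()) {
            double m = e.getValue();
            if (m > 0 && m < psi) {
                // A partially interested child exists: do not forward. This broker answers for
                // its whole subtree so the publisher connects to the ancestor only.
                return new CountDecision(false, List.of());
            }
            if (m >= psi) interested.add(e.getKey());
        }
        // Forward to all fully interested children; with none, reply immediately with own info.
        return new CountDecision(!interested.isEmpty(), interested);
    }

    /** A broker adds its own entry to the aggregated COUNTREPLY only if it did not forward the
     *  COUNT message to exactly one child (a pure relay is never the best entourage member). */
    static boolean includeSelfInReply(int forwardedChildren) {
        return forwardedChildren != 1;
    }
}
```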
A subscriber or a broker may decide to respond to its parent with a COUNTREJECT message instead of a COUNTREPLY, either due to policy decisions or local resource limitations. A broker that receives a COUNTREJECT from at least one of its children will discard COUNTREPLY messages for the same publisher from the rest of its children.
As a publisher's range of values in published messages evolves, it will have to send new advertisements with COUNT messages to keep its entourage up to date. This is supported in our system Atmosphere presented in the next section by exposing an advertisement update feature in the client API.
Entourage Size
We devise a heuristic to determine the maximum number of direct connections a given publisher can maintain to its entourage without adversely affecting the performance of transmission of messages.
Factors and challenges. Capabilities of any node connected to a broker network are limited by a number of factors. A node obviously has to spend processor and memory resources to process and publish a stream of messages. The bandwidth between the node and the rest of the network could also become a bottleneck if messages are significantly large, or transmitted at a significantly high rate. This is particularly valid in a multitenant cloud environment. The transport protocols used by the publisher and latencies between it and the receivers could limit the rate at which the messages are transmitted.
If the implementation is done in a sufficiently smart way, the increase in memory footprint and the increase in latency due to transport deficiencies can be minimized. The additional memory required for creating data structures for new connections is much smaller than the RAM available in today's computers (note that we do not consider embedded agents with significantly low memory footprints). The latencies could become a significant factor if the transport protocol is implemented in a naïve manner, e.g., with a single thread that sends messages via TCP directly to many nodes, one by one. The effect can be minimized by using smarter implementation techniques, e.g., multi-threaded transport layers, custom-built asynchronous transport protocols, and message aggregation.
Conversely, the processor and bandwidth consumption could significantly increase with the number of unicast channels maintained by a publisher as every message has to be repeatedly transmitted over each connection and every transmission requires CPU cycles and network bandwidth.
Number of connections.
First we determine the increase in processor usage of a given publisher due to establishing direct connections with subscribers or brokers. With each new direct connection, a publisher has to repeatedly send its messages along a new transport channel. A safe worst-case assumption is therefore that the required processing power is proportional to the number of connections over which messages are transmitted.
Additionally, as mentioned previously, a given publisher p will have a bandwidth quota of W_p when communicating with remote regions. Considering both these factors, the number of direct connections K_p which publisher p can establish can be approximated by the expression

\min\left(\frac{1}{U_p}, \frac{W_p}{r_p \times s_p}\right)

This requires publishers to keep track of their processor utilization; in most operating systems, processor utilization can be determined by using system services (e.g., the top command in Unix). The above bound on the number of directly connected nodes is not an absolute bound, but rather an initial measure used by any publisher to prevent itself from creating an unbounded number of connections. A publisher that establishes K_p connections and needs more connections will reevaluate its processor and bandwidth usage and will create further direct connections using the same heuristic, i.e., assuming the required processor and bandwidth usage to be proportional to the number of connections established.
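A minimal sketch of this heuristic is given below. Since the symbols are not defined in the excerpt above, the sketch assumes that U_p is the publisher's measured processor utilization fraction, W_p its inter-region bandwidth quota, r_p its publication rate and s_p its average message size; the class and parameter names are illustrative.

```java
/** Hedged sketch of the K_p heuristic; symbol meanings are assumptions, not taken verbatim from the text. */
final class ConnectionBudget {
    static int maxDirectConnections(double cpuUtilization,      // U_p in (0, 1]
                                    double bandwidthQuota,      // W_p in bytes per second
                                    double messageRate,         // r_p in messages per second
                                    double messageSize) {       // s_p in bytes
        double cpuBound = 1.0 / Math.max(cpuUtilization, 1e-6);          // 1 / U_p
        double bwBound  = bandwidthQuota / (messageRate * messageSize);  // W_p / (r_p × s_p)
        return (int) Math.max(1, Math.floor(Math.min(cpuBound, bwBound)));
    }
}
```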
Überlay Establishment
We use information obtained through the techniques described above to dynamically form "over-overlays" termed überlays between members of identified entourages so that they can communicate efficiently and with low latency.
Graph construction.
A publisher first constructs a graph data structure with the information received from the DCI protocol. This graph gives the publisher an abstract view of the brokers and subscribers interested in its messages. The constructed graph (G1) differs from the actual broker/subscriber network (G2) in the following ways:
a. G1 only shows brokers that distribute the publisher's traffic to two or more sub-brokers, while G2 also shows any broker that simply forwards traffic between two other brokers or between a broker and a subscriber.
b. G1 may have been truncated to show only a number of levels starting from the first broker that distributes the publisher's traffic into two children, while G2 shows all the brokers and subscribers that receive the publisher's traffic.
c. G1 only shows brokers/subscribers that have at least a ψ interest match with the publisher, while G2 shows all brokers/subscribers that show interest in some of the publisher's messages.
Figures 4(a) and 4(b) show an example graph constructed by a publisher and the actual network of brokers and subscribers that results in this graph, respectively. The broker B5 was not included in the former due to a. above, and subscribers S4 and S5 may not have been included due to b. or c. (i.e., either because the graph was truncated after three levels, or because subscriber S4 or S5 did not have at least a ψ interest match with the publisher p), or simply because S4 or S5 decided to reject the COUNT message from its parent for one of the reasons given previously.
Connection establishment. Once the graphs are established for each remote region, the publisher can go ahead and establish überlays. The publisher determines the number of direct connections it can establish with each remote region r (K_p^r) by dividing K_p among the regions proportionally to the sizes (number of nodes) of the respective G1 graphs.
For each region r the publisher tries to decide whether it should create direct connections with brokers/subscribers in one of the levels of the graph, and if so, with which level. The former question is answered based on the existence of a non-empty graph. If the graph is empty, this means that none of the brokers/subscribers had at least a ψ interest match with the publisher, and hence forming an entourage for distributing messages of p is not viable. To answer the latter question, i.e., the level with which direct connections should be created, we compare two properties of the graph: ad, the average distance to the subscribers, and cv, the portion of the total overlay of the region that will be covered by the selection.
By creating direct connections closer to the subscribers, the entourage will be able to deliver messages with low latency. By creating direct connections at higher levels, the direct connections will cover a larger portion of the region's broker network, hence reducing the likelihood of having to recreate the direct connections due to new subscriber joins. This is especially important in the presence of high levels of churn (ch_r for region r). Additionally, the number of direct connections the publisher can create is bounded by the value of K_p^r for the considered region. The publisher proceeds by selecting the level with which it will establish direct connections (L_p) based on the following heuristic.
\frac{cv_{L_p} \times ch_r + 1}{ad_{L_p} + 1} \;\ge\; \frac{cv_l \times ch_r + 1}{ad_l + 1} \quad \forall\, l \in \{1 \ldots \log K_p^r\}
Basically, the heuristic determines the level which gives the best balance between coverage and average distance to subscribers. The importance of coverage depends on the churn of the system. Each factor of the heuristic is incremented by one so that the heuristic gives a non-zero and deterministic value when either churn or distance is zero. To measure the churn, each broker keeps track of the rate at which subscribers join/leave it. This information is aggregated and sent upwards towards the roots, where the total churn of the region is determined.
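As a rough illustration, the following Java sketch evaluates this heuristic over the candidate levels of a constructed graph and returns the level with the highest score; the representation of the per-level cv and ad values as lists is an assumption made for the example.

```java
import java.util.List;

/** Sketch of the level-selection heuristic: pick the level L_p maximizing (cv·ch_r + 1) / (ad + 1).
 *  The per-level cv and ad values are assumed to have been computed from the constructed graph. */
final class LevelSelection {
    static int selectLevel(List<Double> coverage,   // cv_l for each candidate level l
                           List<Double> avgDist,    // ad_l for each candidate level l
                           double churn) {          // ch_r, the measured churn of region r
        int best = -1;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int l = 0; l < coverage.size(); l++) {
            double score = (coverage.get(l) * churn + 1.0) / (avgDist.get(l) + 1.0);
            if (score > bestScore) {
                bestScore = score;
                best = l;
            }
        }
        return best;    // -1 when there are no candidate levels (empty graph)
    }
}
```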
If there are more than K_p^r nodes at the selected level, then the publisher will first establish connections with K_p^r randomly selected nodes there. The publisher will keep sending messages through its parent so that the rest of the nodes receive the published messages. Any node that has already established a direct connection with the publisher will discard any message from the publisher received through the node's parent. Once these connections are established, the publisher, as mentioned previously, reevaluates its resource usage and creates further direct connections as necessary.
If a new subscriber that is interested in messages from the publisher joins the system, initially it will get messages routed via the CPS overlay. The new subscriber will be identified, and a direct connection may be established in the next execution of the DCI protocol. If a node that is directly connected to the publisher needs to discard the connection, it can do so by sending a COUNTREJECT message directly to the publisher. A publisher upon seeing such a message will discard the direct connection established with the corresponding node.
Atmosphere
In this section, we describe Atmosphere, our CPS framework for multi-region deployments which employs the DCI protocol introduced previously. The core implementation of Atmosphere in Java has approximately 3200 lines of code.
Overlay Structure
Atmosphere uses a two-level overlay structure based on broker nodes. Every application node that wishes to communicate with other nodes has to initially connect to one of the brokers, which will be identified as the node's parent. A set of peer brokers forms a broker group. Each broker in a group is aware of the other brokers in that group. Broker-groups are arranged to form broker-hierarchies, which are illustrated in Figure 2. As the figure depicts, a broker-hierarchy is established in each considered region. A region can typically represent a LAN, a datacenter, or a zone within a datacenter. At the top (root) level, the broker-group hierarchies are connected to each other. The administrator has to decide on the number of broker-groups to be formed in each region and on the placement of broker-groups.
Atmosphere employs subscription summarization to route messages. Each broker summarizes the interests of its subordinates and sends the summaries to its parent broker. Root-level brokers of a broker-hierarchy share their subscription summaries with each other. At initiation, the administrator has to provide each root-level group the identifier of at least one root-level broker from each of the remote regions.
Fault Tolerance and Scalability
Atmosphere employs common mechanisms for fault tolerance and scalability. Each broker group maintains a strongly consistent membership, so that each broker is aware of the live brokers within its group. A node that needs to connect to a broker-group has to be initially aware of at least one live broker (which will become the node's parent). Once connected, the parent broker provides the node with a list of live brokers within its broker-group and keeps the node updated about membership changes. Each broker, from time to time, sends heartbeat messages to its children.
If a node does not receive a heartbeat from its parent for a predefined amount of time, the parent is presumed to have failed, and the node connects to a different broker of the same group according to the last membership update from the failed parent. A node that wishes to leave, sends an unsubscription message to its parent broker. The parent removes the node from its records and updates the peer brokers as necessary.
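The failure-detection and failover rule described above can be sketched as follows in Java; the timeout value, the random choice among the remaining group members, and the class names are illustrative assumptions rather than Atmosphere's actual code.

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

/** Illustrative failover rule: if no heartbeat arrives within the timeout, the parent is presumed
 *  failed and another broker from the last known group membership is chosen. */
final class ParentMonitor {
    private final long timeoutMillis;
    private volatile long lastHeartbeat = System.currentTimeMillis();

    ParentMonitor(long timeoutMillis) { this.timeoutMillis = timeoutMillis; }

    void onHeartbeat() { lastHeartbeat = System.currentTimeMillis(); }

    /** Returns the current parent while it is alive, otherwise a random peer taken from the
     *  last membership update received before the parent failed. */
    String chooseParent(String currentParent, List<String> lastKnownGroupMembers) {
        if (System.currentTimeMillis() - lastHeartbeat <= timeoutMillis) return currentParent;
        List<String> candidates = lastKnownGroupMembers.stream()
                .filter(b -> !b.equals(currentParent))
                .toList();
        return candidates.isEmpty() ? currentParent
                : candidates.get(ThreadLocalRandom.current().nextInt(candidates.size()));
    }
}
```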
Atmosphere can be scaled both horizontally and vertically. Horizontal scaling can be performed by adding more brokers to groups. Additionally, Atmosphere can be vertically scaled by increasing the number of levels of the broker-hierarchy. Nodes may subscribe to a broker at any level.
Flexible Communication
Atmosphere implements the DCI protocol of Section 3. To this end, each publisher sends COUNT messages to its broker. These messages are propagated up the hierarchy and once the root brokers are reached, distributed to the remote regions to identify entourages. Once suitable entourages are identified, überlays are established which are used to disseminate messages to interested subscribers with low latency.
When changes in subscriptions (e.g., joining/leaving of subscribers) arrive at a broker, the broker may propagate corresponding notifications upstream even if the subscriptions are covered by existing summaries; when such changes arrive at brokers involved in direct connections, these brokers can notify the publishers directly, prompting the publishers to re-trigger counts.
Advertisements
Application components are wrapped by the Atmosphere client library, which makes the DCI protocol transparent to publishers/subscribers, with the exception of advertisements, which publishers can optionally issue to make effective use of direct connections.
Advertisements are supported in many overlay-based CPS systems, albeit not strictly required. Similarly, publishers in Atmosphere are not forced to issue such advertisements, although effective direct connection establishment hinges on accurate knowledge of publication spaces. Atmosphere can employ runtime monitoring of published messages if necessary. For such inference, the client library of Atmosphere compares messages published by a given publisher against the currently stored advertisement and adapts the advertisement if required. When witnessing significant changes, the new advertisement is stored and the DCI protocol is re-triggered.
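A hedged sketch of such inference is given below: it widens per-key bounds as messages are observed and reports when the change looks large enough to update the stored advertisement and re-trigger DCI. The threshold-based notion of "significant", and all class and method names, are assumptions made for illustration rather than Atmosphere's client-library API.

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative advertisement inference by monitoring published values (not Atmosphere's API). */
final class AdvertisementMonitor {
    private record Bounds(double lo, double hi) { double length() { return hi - lo; } }

    private final Map<String, Bounds> inferred = new HashMap<>();
    private final double growthThreshold;   // assumed: relative range growth considered "significant"

    AdvertisementMonitor(double growthThreshold) { this.growthThreshold = growthThreshold; }

    /** Widens the inferred per-key bounds with a new message; returns true if the stored
     *  advertisement should be updated and the DCI protocol re-triggered. */
    boolean observe(Map<String, Double> message) {
        boolean significant = false;
        for (Map.Entry<String, Double> e : message.entrySet()) {
            double v = e.getValue();
            Bounds b = inferred.get(e.getKey());
            if (b == null) {                                    // first value seen for this key
                inferred.put(e.getKey(), new Bounds(v, v));
                significant = true;
                continue;
            }
            if (v < b.lo() || v > b.hi()) {                     // value outside the known range
                Bounds widened = new Bounds(Math.min(b.lo(), v), Math.max(b.hi(), v));
                double oldLen = Math.max(b.length(), 1e-9);
                if ((widened.length() - b.length()) / oldLen > growthThreshold) significant = true;
                inferred.put(e.getKey(), widened);
            }
        }
        return significant;
    }
}
```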
Note that messages beyond the scope of a current advertisement are nonetheless propagated over the direct connections in addition to the overlay. The latter is necessary to deal with joining subscribers in general as mentioned, while the former is done for performance reasons -the directly connected nodes might be interested in the message since the publisher's range of publications announced earlier can be a subset of the ranges covered by any subscriptions.
The obvious downside of obtaining advertisements only by inference is that überlay creation is delayed, and thus latency is increased until the ideal connections are established. To avoid constraining publishers indefinitely to previously issued advertisements, the Atmosphere client library offers API calls to issue explicit advertisement updates. Such updates can be viewed as the publisher-side counterpart of parametric subscriptions [START_REF] Jayaram | Parametric Content-Based Publish/Subscribe[END_REF], whose native support in a CPS overlay network has been shown not only to have benefits in the presence of changing subscriptions, but also to improve upstream propagation of changes in subscription summaries engendered by unsubscriptions and new subscriptions.
Evaluation
We demonstrate the efficiency and versatility of Atmosphere via several microbenchmarks and real-life applications.
Setup
We use two datacenters for our experiments, both from Amazon EC2. The datacenters are located on the US east coast and the US west coast, respectively. From each of these datacenters we lease 10 small EC2 instances with 1.7GB of memory and 1 virtual core each, and 10 medium EC2 instances with 3.7GB of memory and 2 virtual cores each.
Our experiments are conducted using three publish/subscribe systems: (1) Atmosphere with DCI protocol disabled, representing a pure CPS system (referred to as CPS in the following); (2) Atmosphere with DCI protocol enabled (Atmosphere); (3) Apache ActiveMQ topic-based messaging system [START_REF]Active MQ[END_REF] (TPS). ActiveMQ is configured for fair comparison to use TCP just like Atmosphere and to not persist messages. All code is implemented in Java.
Microbenchmarks
We first assess the performance benefits of Atmosphere via micro-benchmarks.
Latency. We conduct experiments to observe the message transmission latency of Atmosphere with and without the DCI protocol enabled. The experiment is conducted across two datacenters and uses small EC2 instances. A single publisher is deployed in the first datacenter, while between 10 and 35 subscribers are deployed in the second datacenter. Each datacenter maintains three root brokers. As Figures 5 and 6 clearly show, when the number of interested subscribers is small, maintaining unicast channels between the publisher and the subscribers pays off, even considering that the relatively slow connection to the remote datacenter is always involved and only local hops are avoided. This helps to dramatically reduce both the average message transmission latency and the variance of latency across subscribers. For message rates of 50 and 200 msgs/s, when the number of subscribers is 10, maintaining direct connections reduces the latency by 11% and 31%, respectively.
For message rates of 50 and 200 msgs/s, the value of K_p is determined to be 50 and 26, respectively. Figures 5(b) and 6(b) show that both the message transmission latency and its variation increase considerably when the publisher reaches this limit. The figures also show the benefit of not using the überlay after the number of subscribers exceeds K_p. For example, as shown in Figure 5(b), when the publisher moves from maintaining an überlay with its entourage to communicating using CPS (25 to 30 subscribers), the average message transmission latency is reduced by 24%. The increase in latency at 35 subscribers is due to brokers being overloaded, which can be avoided in practical systems by adding more brokers to the overlay and distributing the subscribers among them. Also note that the broker overlay used for this experiment consists of only two levels, which is the case where entourage überlays exhibit the least benefit. Figures 7(a), 7(b), and 7(c) show how message latency, throughput, and standard latency deviation, respectively, vary for these setups as the number of subscribers changes. The throughput of p_2 is significantly higher than that of p_1. This is expected since the rate at which messages can be transmitted increases with the processing power within the relevant confines. Interestingly though, the average message transmission latency for p_2 is higher than the average transmission latency of messages published by p_1. This suggests that the latency depends on the throughput and not directly on the processing power; the throughput itself of course depends on processing power.
The size of the transmitted messages has a substantial effect on both throughput and latency. The latter effect becomes significant as the number of subscribers increases. Additionally, Figure 7(c) shows that the variation in transmission latency can be significantly reduced by increasing the processing power of the publisher or by decreasing the size of the transmitted messages (e.g., by using techniques such as compression). Effect of ψ. To study the effects of the clustering factor (ψ) on latency, we deploy a system of one publisher and multiple subscribers. We generate subscribers with interest ranges (of size 20) starting randomly from a fixed set of 200 interests. The publisher publishes a message to one random interest at specific intervals. Brokers are organized into a complete binary tree with 3 levels, and 40 subscribers are connected to the leaf-level brokers. On this setup, latency measurements are taken with different ψ values. Figure 8(a) shows the results. When ψ is high, entourages are not created because no broker has an interest match as high as ψ. This means messages get delivered to root-level brokers, which causes higher delays as the messages need to travel through all the levels in the broker network. For a lower value of ψ, an entourage is established, reducing latency.
Case Studies
We developed three test applications to show how Atmosphere can be used to make real world applications operate efficiently.
Social network. Typical IM clients attached to social networking sites support the following two operations: (1) status updates, in which the current status (Busy/Active/Idle or a custom message) of a user is propagated to all users in his/her friend list; (2) the ability to start a conversation with another user in the friend list. Even when explicit status updates are infrequent, IM clients automatically update user status to Idle/Active, generating a high number of status updates. We developed an instant messaging service that implements this functionality either on top of Atmosphere or ActiveMQ. Figure 8(b) shows latency measurements for status updates. Figure 8(c) shows latency measurements for messages sent to a randomly selected friend. For conversations, we use actual conversation logs posted by users of Cleverbot [START_REF] Carpenter | Cleverbot[END_REF]. We evaluate this type of communication on Atmosphere, pure CPS with Atmosphere, and ActiveMQ. The results show that our system is 40% faster than pure CPS and 39% faster than ActiveMQ in delivering instant messages. For delivering status messages, in the worst case, Atmosphere is on par with both systems because our system distinguishes between the communication types required for status updates and instant message exchange and dynamically forms entourage überlays for delivering instant messages only. News service. We developed an Atmosphere-based news feed application that delivers news to subscribed clients. Our news service generates two types of messages: (1) messages containing news headlines categorized according to the type of news (e.g., sports, politics, weather); (2) messages containing detailed news items of a given category. This service can also operate on top of either Atmosphere or ActiveMQ.
In Figures 9(a), 9(b), and 9(c) we explore latency of the news application for three different communication patterns. The total number of subscribers varies from 200 to 1000 with a subset of 30 subscribers interested in sports-based news and a subset of 20 subscribers interested in weather reports. We measure the average latency for delivering sports news and weather reports to these 30 and 20 subscribers. Other subscribers receive all news. Here again our system delivers sports and weather reports 35% faster than a pure CPS system and around 25% faster than ActiveMQ. This is because Atmosphere automatically creates entourages for delivering these posts.
Geo-distributed lock service. We implemented a geo-distributed lock service that can be used to store system configuration information in a consistently replicated manner for fault tolerance. The service is based on Apache ZooKeeper [START_REF]Apache ZooKeeper[END_REF], a system for maintaining distributed configuration and lock services. ZooKeeper guarantees scalability and strong consistency by replicating data across a set of nodes called an ensemble and by executing consensus protocols among these nodes. We maintain a ZooKeeper ensemble per datacenter and interconnect the ensembles (i.e., handle the application requests over the ensembles) using Atmosphere. We compare the Atmosphere-based lock service with a naïve distributed deployment of ZooKeeper (Distributed), where all ZooKeeper nodes participate in a single geo-distributed ensemble. This experiment uses three datacenters. For each run, a constant number of ZooKeeper servers is started at each datacenter.
Our system provides the same guarantees as a naïve ZooKeeper deployment except in the rare scenario of a datacenter failure (in this case the Atmosphere deployment may lose part of the stored data).
We vary the percentage of read requests and observe the maximum load the systems can handle with 3, 6, and 9 total nodes forming ensembles. Figure 10 shows the results of the experiment for the Atmosphere-based deployment (Atmosphere) and a distributed deployment of ZooKeeper where all ZooKeeper nodes participate in a single geo-distributed ensemble (Distributed). The figure shows that by establishing überlays, the Atmosphere deployment can handle a larger load.
These case studies illustrate the general applicability of Atmosphere.
Conclusions
Developing and composing applications executing in the cloud-of-clouds requires generic communication mechanisms. Existing CPS frameworks -though providing generic communication abstractions -do not operate efficiently across communication patterns and regions, exhibiting large performance gaps to more specific solutions. In contrast, existing simpler TPS solutions cover fewer communication patterns but more effectively -in particular scenarios with few publishers and many subscribers which are wide-spread in cloud-based computing. We introduced the DCI protocol, a mechanism that can be used to adapt existing solutions to efficiently support different patterns, and presented its implementation in Atmosphere, a scalable and fault-tolerant CPS framework suitable for multi-region-based deployments such as cross-cloud scenarios. We illustrated the benefits of our approach through different experiments evaluating multi-region deployments of Atmosphere.
We are currently working on complementary techniques that will further broaden the range of efficiently supported communication patterns, for example the migration of subscribers between brokers guided by resource usage on these brokers. Additionally we are exploring the use of Atmosphere as the communication backbone for other systems including our Rout [START_REF] Jayalath | Efficient Geo-Distributed Data Processing with Rout[END_REF] framework for efficiently executing Pig/PigLatin workflows in geo-distributed cloud setups and our G-MR [START_REF] Jayalath | From the Cloud to the Atmosphere: Running MapReduce across Datacenters[END_REF] system for efficiently executing sequences of MapReduce jobs on geo-distributed datasets. More information about Atmosphere can be found at http://atmosphere.cs.purdue.edu.
Fig. 1. Bird's-eye View
Fig. 3. DCI Protocol
Fig. 4. Graph vs Overlay
Fig. 5. Latency and all Subscribers for 50 msgs/s
Figures 5(a) and 5(b) show the latency for message rates of 50 msgs/s and 200 msgs/s, while Figures 6(a) and 6(b) show the standard latency deviations for the same rates. We separate latency from its standard deviation for clarity. Figures 5(c) and 6(c) show the average message transmission latency to individual subscribers for message rates of 50 msgs/s and 200 msgs/s, respectively.
Fig. 6. Standard Deviation of Latency and all Subscribers for 200 msgs/s
Fig. 8. Effect of ψ and Evaluation of our Social Network App
Fig. 9. News App Evaluation
Fig. 10. Lock Service | 53,104 | [
"1003263",
"1003264",
"854680"
] | [
"147250",
"147250",
"147250"
] |
01480801 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01480801/file/978-3-642-45065-5_17_Chapter.pdf | Varun S Prakash
email: [email protected]
Xi Zhao
Yuanfeng Wen
Weidong Shi
Back to the Future: Using Magnetic Tapes in Cloud Based Storage Infrastructures
Keywords: [Data Storage, Backup, Archiving, Cloud, Data Centers, Cost Efficiency, Magnetic Tapes, Middleware, Read Probability Weight, Priority Queue]
Data backup and archiving is an important aspect of business processes to avoid loss due to system failures and natural calamities. As the amount of data and applications grow in number, concerns regarding cost efficient data preservation force organizations to scout for inexpensive storage options. Addressing these concerns, we present Tape Cloud, a novel, highly cost effective, unified storage solution. We leverage the notably economic nature of Magnetic Tapes and design a cloud storage infrastructure-as-a-service that provides a centralized storage platform for unstructured data generated by many diverse applications. We propose and evaluate a proficient middleware that manages data and IO requests, overcomes latencies and improves the overall response time of the storage system. We analyze traces obtained by live archiving applications to obtain workload characteristics. Based on this analysis, we synthesize archiving workloads and design suitable algorithms to evaluate the performance of the middleware and storage tiers. From the results, we see that the use of the middleware provides close to 100% improvement in task distribution efficiency within the system leading to a 70% reduction in overall response time of data retrieval from storage. Due to its easy adaptability with the state of the art storage practices, the middleware contributes in providing the much needed boost in reducing storage costs for data archiving in cloud and colocated infrastructures.
Introduction
The last decade has witnessed an explosion of data generated by individuals and organizations. For instance, the amount of video data captured by a single HD surveillance camera at 30fps over 14 days requires 1TB of storage space [START_REF] Seagate | Video surveillance storage: How much is enough?" 2[END_REF]. The number of CCTV cameras in the UK alone is estimated to be 1.85 million [2]. One of the major concerns correlated with managing such data is its storage and backup [START_REF] Chamness | Capacity forecasting in a backup storage environment[END_REF]. In cloud based storage services, there is usually more than one player involved, such as service providers and users.
Fig. 1. Tape Cloud is a cloud storage service that uses magnetic tapes as the main storage media to store unstructured and big data, unlike most of the commercial cloud storage solutions available today.
From the service user's
perspective, the motives for the choice of storage would be reduced cost per unit of data stored, efficient retrieval, data-criticality-dependent support benefits and secure, long-term data storage. A service provider's considerations, however, span operating cost efficiency, labor, scalability, support for different types of data, varied policies from multiple clients and managing workload uncertainty, among others. A closer observation shows that the cost factor favors one of the players but rarely both. The likelihood of recovery of data after backup also firmly influences both players. Varying archiving rates and backup needs of multiple clients are an eminently common feature, leading to the need for multiple storage configurations. Thus, a sensible inclusion in the storage tiers to archive low-read/write-only data would be a low-cost, low-maintenance yet durable medium [START_REF] Jackson | Most network data sits untouched[END_REF].
Magnetic tape, which started off as a primary storage medium decades ago, has been preferred for archiving data generated by organizations for a long time now. Despite the advantages of tapes, there has not been a steady increase in their usage due to the high initial investment needed for the operating hardware and their inability to promise high data rate transactions [START_REF] Sandst | Improving the access time performance of serpentine tape drives[END_REF]. By addressing these key issues, it is possible to tap into the economic advantages that the tape medium provides.
Tape Cloud (figure 1) is a venture that seeks to find suitable solutions to these issues. Tape Cloud is a cloud based, nearline storage Infrastructure-as-a-Service which makes use of magnetic tapes as the main backend storage media. The cloud model exempts users from the large initial investments needed for in-house backup infrastructure, external tiers for archiving legacy data and their maintenance. From the service provider's perspective, using tapes allows hassle-free scaling of systems and reduces the total cost of ownership due to the tape's characteristic low power usage, durability and form factor per unit of data.
Our principal intentions are to 1. reduce the average response times for read requests issued by applications; 2. conjointly, ensure efficient data writes to the tape tier of storage; and 3. strengthen the infrastructure's support for a large and diverse client base [START_REF] Giurgiu | Enabling efficient placement of virtual infrastructures in the cloud[END_REF]. However, overcoming the latency introduced by tapes is a complex problem. Even with the latest in tape technology, high performance in terms of fast data reads and efficient data writes cannot be achieved, as delays caused by seeking and winding of tapes are still present. There is also a delay induced by the stock robots and other ambulatory mechanics within the tape library which physically handle and move the tape cartridges.
The main contributions of our work are as follows:
- We propose and evaluate a middleware that is designed to work with Tape Cloud. The functions of the middleware include the aggregation and batch processing of data, IO request management and the efficient distribution of data over the available resources.
- The middleware, which is constituted by a FUSE based filesystem, an implementation of priority based queuing of IO tasks and a latency-preemptive, probabilistic data distribution scheme, acts between the backup application tier and the proprietary filesystems that are commonly used with tapes.
- We observe and record the common delays incurred in the operation of commercially available tape libraries. Some of the latencies of tape drives and the tape filesystem are analysed using typical benchmarking tools. This data, along with the delays, is used to model the performance characteristics of unit hardware, which is later used to simulate large scale data centers.
- Backup and archiving application traces are analysed to obtain typical workload characteristics. We employ methods to trace the operations at different stages in the infrastructure and aggregate them into meaningful statistics. This not only provides information about backend storage media activity, but also provides data at the application server and filesystem levels.
- We use synthetic workloads which emphasise prominent features of backup applications to evaluate the impact of using the proposed middleware in a simulation of a large scale deployment of Tape Cloud. In keeping with our goals, we demonstrate the improved data distribution ability, the improved response time for read requests originating from each of the applications and the regulation of write requests that the middleware provides.
- The proposed Tape Cloud framework points to a new direction for creating a service oriented, cost effective, massive scale infrastructure to meet the growing storage challenge in the coming era of big data enabled industries and research.
Analyzing and Modeling Tape Associated Latencies
The Tape, Library and the Drive
In order to design an infrastructure around a particular storage medium, it is important to understand the characteristics, related costs, advantages and weaknesses that are associated with it. A clear understanding of the medium and devices can lead to its large scale deployment in data centers. We evaluate the state of the art in tape technology with the use of a commercially available Tandberg T24 LTO5 tape library. The tape library has an HP tape drive and slots that can hold 12 LTO5 Ultrium tapes, each of 2.5TB capacity, and can be extended to 24 tapes. At full capacity, the library can hold 60TB of uncompressed data. The tape library depends on robotic carriers that grab tapes from the slots, carry them to the tape drive at the end of the library and load the tapes for IO operations. The robots introduce an additional delay into the overall operation of the library.
Generic Models for Tape Based Latency
Based on the facts obtained about the hardware and delays, we try to model the latency for generic cases [START_REF] Gulati | Modeling workloads and devices for io load balancing in virtualized environments[END_REF]. For the models, the following are some of the constants that need to be considered.
- T_search(i) is the time taken by the robot to locate and move to the tape needed to execute the i-th request in the task queue.
- T_load is the time taken to load the tape into the drive.
- T_unload is the time taken to unload the tape from the drive.
- T_seek(average) is the time to wind the tape to the position of the first byte needed to execute the new task. We consider the average time for LTO5 tapes in this case.
- γ_read is the data transfer rate for read operations of the tape. Similarly, γ_write is the data transfer rate for write operations of the tape.
- The smallest unit of data that is considered in this case is a block. A single read or write might involve the transaction of a varying number of blocks. We represent a unit block as BLK.
We aim to employ suitable techniques to reduce the average response time T_read for read tasks and, furthermore, to ensure that these read-friendly techniques cause minimal distortion to the throughput and to the total time T_write required to collect data and write it onto tapes. Thus, for a workload Θ,
T_{opt}(\Theta) = \min(T_{read}(\Theta) + \Delta T_{write}(\Theta)) \quad (1)

where T_opt(Θ) is the minimal optimal time required to complete the execution of workload Θ. We analyse some of the latencies and overheads incurred in achieving this goal in different scenarios. These scenarios are commonly occurring cases in storage systems.
Scenario 1: Single Read/Write Task in Queue: When there is a single read task in the task queue, the total amount of time required to complete the task and obtain the data is given as the sum of times taken for a series of events.
Thus
T_{singleRead} = T_{search} + T_{load} + T_{seek} + n\left(\frac{BLK}{\gamma_{read}}\right)

where n is the total number of unit blocks that need to be read. BLK/γ_read is a constant, the total time required to read a single block, and can be substituted by Γ_read to get

T_{singleRead} = T_{search} + T_{load} + T_{seek} + n\,\Gamma_{read} \quad (2)
Similarly, a single write operation in a queue undergoes similar delays as read operations, the only difference being the rate at which data is written to tapes. The delay for a single write operation is given by
T_{singleWrite} = T_{search} + T_{load} + T_{seek} + n\,\Gamma_{write} \quad (3)
Scenario 2: Write task(s) before Read task in Queue: In scenarios where there are one or more write tasks in the queue before a read task, the total time required to obtain the data also includes the time required to complete the write task(s). For a single write task before the read task, the total time required to complete the task is T_total = T_search + T_load + T_seek + nΓ_write + T_unload + T_search + T_load + T_seek + nΓ_read. This can simply be written as T_total = T_singleWrite + T_unload + T_singleRead. Generalizing this, when we have N write tasks before a read task, we have
T_{total} = N(T_{singleWrite}) + \xi(T_{unload}) + T_{singleRead} \quad (4)
where 0 ≤ ξ ≤ (N -1). ξ is called the tape switch rational which determines the probable number of tape changes that need to be made and is based on BLK and n. Thus BLK is an important value that influences the efficiency of write operations and helps in deciding the maximum size of data that can be written as a continuous process on to a single tape.
Scenario 3: Other Read Task(s) before Read Task in Queue: The total time required for a particular read task to complete when there are one or more read tasks ahead of it differs from the previous scenarios in that read requests are usually not localized to a single tape, mostly due to replication and data striping. Continuous read requests mean a greater number of search, load and seek operations, thus increasing the overall time taken. In the worst case, the total time taken can be given by
T_{total} = (N + 1)(T_{singleRead}) + N(T_{unload}) \quad (5)
where there are N read requests ahead of the read task in question. This not only causes excessive delays in retrieving data but also leads to the pile up of write tasks at the queue in scenarios where there is an equal ratio of read to write requests.
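The scenario formulas above can be turned into a small calculator; the following Java sketch encodes equations (2) through (5), with parameter values left to be filled in from the measured drive and library characteristics (the class and field names are ours).

```java
/** Sketch of the latency model from equations (2)-(5); all times are in seconds and the
 *  per-block transfer times Γ_read and Γ_write are assumed to be measured beforehand. */
final class TapeLatencyModel {
    final double tSearch, tLoad, tUnload, tSeek;
    final double readTimePerBlock, writeTimePerBlock;   // Γ_read and Γ_write

    TapeLatencyModel(double tSearch, double tLoad, double tUnload, double tSeek,
                     double readTimePerBlock, double writeTimePerBlock) {
        this.tSearch = tSearch; this.tLoad = tLoad; this.tUnload = tUnload; this.tSeek = tSeek;
        this.readTimePerBlock = readTimePerBlock; this.writeTimePerBlock = writeTimePerBlock;
    }

    double singleRead(int blocks)  {                       // equation (2)
        return tSearch + tLoad + tSeek + blocks * readTimePerBlock;
    }

    double singleWrite(int blocks) {                       // equation (3)
        return tSearch + tLoad + tSeek + blocks * writeTimePerBlock;
    }

    /** Equation (4): a read task queued behind N write tasks, with xi expected tape switches. */
    double readBehindWrites(int nWrites, int writeBlocks, int readBlocks, int xi) {
        return nWrites * singleWrite(writeBlocks) + xi * tUnload + singleRead(readBlocks);
    }

    /** Equation (5), worst case: a read task queued behind N reads that hit different tapes. */
    double readBehindReads(int nReads, int readBlocks) {
        return (nReads + 1) * singleRead(readBlocks) + nReads * tUnload;
    }
}
```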
3 Proposed System's Approach to Overcome Latency
Prioritizing Read Tasks over Write Tasks
From equation 4, we can see that a major share of the delay occurs due to the tasks ahead of the read task in the queue. In order to reduce the overall time taken for retrieving data, one approach is to introduce a bias between read and write tasks: read tasks can be given higher preference over write tasks.
For this, we create a Priority Queue for read tasks for each tape drive. When a read task arrives at a tape drive, the subsequent write task is blocked and the tape drive immediately caters to the read task after finishing the current execution. Thus we have
T(Pri)_{total} = T_{total} - (N(T_{singleWrite}) + \xi(T_{unload})) + (T_{unload} + T_{singleRead}) \quad (6)

T(Pri)_{total} = \rho + T_{unload} + T_{singleRead} \quad (7)

where T(Pri)_total is the total time taken when priority queueing is applied, ρ is the time spent completing the current task with ⌈ρ⌉ = T_singleWrite, and 0 ≤ ξ ≤ (N - 1). By implementing priority queuing, read tasks can be completed much faster.
Read Probability Weight (RPW) Based Data Distribution
Under the circumstances of scenario 3, applying priority queueing would not significantly reduce the total response time as subsequent read operations that need to be performed on different tapes still induce delay associated with the search and seeking processes. We propose a method to overcome this by considering the Balls into Bins problem [START_REF] Raab | balls into bins" -a simple and tight analysis[END_REF][9] [START_REF] Berenbrink | On weighted balls-into-bins games[END_REF].
Every block of data that needs to be written to tapes has a certain probability of being read again. This probability or "weight" is based on the type of application and its historic transactions with the storage system. Intuitively, we can see that blocks of data with a higher weight cause higher delays when they are all written to tapes by the same tape drive, compared to data with a lower weight (because the read requests that eventually come in still have to be queued at that same tape drive). So the way to reduce this delay is to distribute the data blocks of higher weight equally among the available tape drives such that a single tape drive does not take the entire burden of heavily weighted objects. This is similar to a balls-into-bins problem, except that in our case the balls have different weights. Assume that there are n types of data blocks, where W_n = {P_r^1, P_r^2, ..., P_r^n} are their respective weights. Given m tape drives
T_drive = {t_1, t_2, ..., t_m}, the RPW data distribution makes sure that, for all t ∈ T_drive,

\left(\sum_{i=0}^{k} P_r^i\right)/k \;\approx\; \varphi(W_n) \quad (8)
where φ(W n ) is the arithmetic mean of all the elements in the set (W n ) and
\sum_{j=0}^{p/2} \left(\left(\sum_{i=0}^{k} P_r^i\right)/k\right) \;-\; \sum_{j=p/2}^{m} \left(\left(\sum_{i=0}^{k} P_r^i\right)/k\right) \;\approx\; 0 \quad (9)
where k is the total number of write tasks in a particular queue t. If data originating from an application q is assigned a weight P_q at any point of time, then each queue will hold a share S_q, equivalent to P_q / \sum_{n=1}^{N} P_n, of the data pertaining to application q, where N is the total number of weighted tasks in the queue. No single application can have all its data written to a single location. RPW based data distribution coupled with priority queueing not only improves average response time efficiency, but also contributes towards maintaining write throughput, as it reduces the overall delay caused by the continuous blocking of write tasks by a series of read tasks. An evaluation of RPW usage is shown in figure 12.
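The section does not prescribe a concrete assignment algorithm; as one possible realization of the balance expressed by equations (8) and (9), the following Java sketch greedily assigns each block to the drive queue whose accumulated weight is currently smallest, a standard strategy for weighted balls-into-bins. All names are illustrative.

```java
import java.util.List;

/** One way to approximate the RPW balance of eq. (8): greedily assign each block to the
 *  drive queue with the smallest accumulated weight (weighted balls-into-bins). */
final class RpwDistributor {
    private final double[] queueWeight;   // accumulated RPW per tape-drive queue

    RpwDistributor(int drives) { this.queueWeight = new double[drives]; }

    /** Returns the index of the drive queue chosen for a block with the given RPW. */
    int assign(double rpw) {
        int best = 0;
        for (int t = 1; t < queueWeight.length; t++)
            if (queueWeight[t] < queueWeight[best]) best = t;
        queueWeight[best] += rpw;
        return best;
    }

    /** Distributes a batch of block weights and returns the chosen queue index per block. */
    int[] distributeBatch(List<Double> blockWeights) {
        int[] placement = new int[blockWeights.size()];
        for (int i = 0; i < placement.length; i++) placement[i] = assign(blockWeights.get(i));
        return placement;
    }
}
```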
System and Middleware Design
Figure 3 shows a bird's-eye view of the Tape Cloud architecture. We propose a hybrid middleware that performs efficient hard disk caching, data block management, data distribution and IO task scheduling. This middleware functions as an agent arbitrating between various components in order to reduce the overhead caused by using the slower backend media. Figure 4 provides a logical representation of the middleware and some of its functionalities. The data that needs to be written to tapes is collected and channelled suitably before it reaches its destination. Data is processed in batches. This helps in the easy retrieval of data from the collection servers and provides a fixed set of parameters for efficient distribution.
Data Source or Clients
The focus of Tape Cloud is consistent with most cloud based services, and it provides an efficient storage service for a variety of data. Clients who wish to archive data on Tape Cloud run a service to deliver data to the storage collection servers (see figure 3). One of the features of Tape Cloud is that it allows clients to deliver data in more than one way. Large data sets (which are an unavoidable attribute of archive data) can also be delivered by mailing the media itself. From the storage system's perspective, each client is tagged and labelled based on the physical attributes of the data, relative storage activity over time, space requirements and the frequency of requests for data IO derived from the client. This information serves as policies which are used by the middleware to make decisions on the location of data, the level of security and the distribution of data blocks, and it also provides the recipe to derive the read probability weight (RPW) information of data pertaining to particular clients. The data manager, with access to the central block database, updates and maintains the mapping of blocks of data to their physical locations on tapes, in libraries and in sections of the data center.
Data and Resource Manager
The Data Manager is the point of interaction between the clients and the storage infrastructure. More importantly, it is the interaction point between the client application and the middleware, as no data is written to tapes without the data manager's consent. The data manager module runs on the load balancing server and manages the other parts of the middleware such as the filesystem, task queues and data distribution modules. To perform efficient management, the data manager relies on informative references to the actual client data. These references, or metadata, contain details about the blocks of data such as their location in the filesystem, size, type and RPW, along with other client specific information. The metadata is used to represent data blocks in the queuing and distribution modules of the middleware. This prevents the overhead of moving large amounts of data around within the system.
An important task the data manager undertakes is the grouping of data stored in the middleware's filesystem to be processed in batches. The data manager employs a specific technique to pick metadata pertaining to blocks of data which are most likely to be retrieved as a single unit from the filesystem, packages them and passes them to the data distribution module. Other responsibilities include the attestation of data deposition requests from clients and allocating suitable resources.
Multi Tier File System
FUSE [START_REF]Fuse filesystem project[END_REF] is a framework that helps develop customized file systems. The FUSE module has been officially merged into the Linux kernel tree since kernel version 2.6.14. FUSE provides 35 interfaces to fully comply with POSIX file operations. We design a file system using FUSE to operate in the middleware of the architecture. The implementation presents a monolithic image of the filesystem, but internal divisions exist based on functionality. Figure 5 shows the pathway taken by data to be written to tapes and the various operations that act upon it. The filesystem depends on external databases to maintain records of the locations of blocks of data. In order to prevent loss of data due to server failure, the filesystem replicates similar data in multiple locations, similar to HDFS.
The filesystem manages data and chunks based on a hierarchical partitioning technique applied to the data set. Tape Cloud follows an application centric approach to group data chunks to be written to tapes, and the hierarchical partitioning method contributes to this cause. Every file that needs to be written to or read from tapes is encrypted, optionally segmented (to avoid singular large files) and replicated, resulting in a unit entity or chunk. The chunks of data are grouped and bagged in structures called containers. Based on the load, these containers are then distributed to the tape interface machines to be written to tapes.
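A rough sketch of this pipeline is given below in Java, where encryption and replication are left as placeholders and the chunk and container sizes are arbitrary assumptions; it only illustrates the segment-then-containerize flow described above, not the actual Tape Cloud filesystem code.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch of hierarchical partitioning: file -> chunks -> containers. */
final class HierarchicalPartitioner {
    /** Splits a file into fixed-size chunks; in the real pipeline each chunk would also be
     *  encrypted and replicated before being handed on. */
    static List<byte[]> segment(byte[] file, int maxChunkBytes) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < file.length; off += maxChunkBytes) {
            int len = Math.min(maxChunkBytes, file.length - off);
            byte[] chunk = new byte[len];
            System.arraycopy(file, off, chunk, 0, len);
            chunks.add(chunk);
        }
        return chunks;
    }

    /** Groups chunks into capacity-bounded containers that are later distributed to
     *  the tape interface machines. */
    static List<List<byte[]>> containerize(List<byte[]> chunks, long containerCapacityBytes) {
        List<List<byte[]>> containers = new ArrayList<>();
        List<byte[]> current = new ArrayList<>();
        long used = 0;
        for (byte[] c : chunks) {
            if (used + c.length > containerCapacityBytes && !current.isEmpty()) {
                containers.add(current);
                current = new ArrayList<>();
                used = 0;
            }
            current.add(c);
            used += c.length;
        }
        if (!current.isEmpty()) containers.add(current);
        return containers;
    }
}
```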
Probabilistic Data Distribution
The analysis of latencies that is performed leads to a technique in which some of the delays are preempted before data is written to tapes. As discussed in section 3, this is to ensure that a small group of task queues does not take on the burden of a large number of discontinuous read tasks. The probabilistic data distribution module is an important part of the middleware that distributes blocks of data to the tape interface machines based on a particular weight associated with the data. The weight, or read probability weight (RPW), is the probability of the block of data being read once it is written onto tape. The probabilistic data distribution module is designed to obtain the RPW in two ways. It can be enclosed in the metadata that is handed down by the data manager. The other avenue to deduce the RPW is over time, when the middleware notices that some blocks of data have undergone access in a manner inconsistent with its knowledge about the RPW. In this scenario, the middleware updates the RPW of data incoming from the client and adapts to the workloads of different clients over time. After the references have been assigned specific tapes or drives, the references of the data blocks are handed over to the task queuing module of the middleware.
Task Queueing
The large scale operation of the storage system involves the use of multiple tape drives. The entire tape storage facility is divided into sections, each of which can be serviced by a tape drive. Each of these tape drives has an exclusive queue assigned to it which holds the IO tasks to be performed on tapes in its logical vicinity. These task queues are maintained and used by the middleware and should not be confused with the ones used by the storage media or drivers. One approach to decreasing the delay in retrieving data is to prioritize between the read and write requests, as discussed in section 3. The task queueing module caters to this need by assigning each tape drive two virtual queues, one each for write and read requests. Read requests, having higher priority than write requests, are granted resources immediately after the completion of the current task, regardless of the depth of the write queue. After completion of the read task, the system continues with the execution of other tasks in the write queues. Assuming an efficient distribution of data, the task queueing module ensures that read tasks are performed under strict time constraints while maintaining acceptable standards of throughput for write tasks. The task queues provide periodic feedback to the data manager about the overall time taken in performing tasks associated with a specific batch. This feedback is used by the data manager to assess the overall performance of the data distribution module and the distribution parameters in the system.
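The dual-queue arrangement can be sketched as follows in Java; the class is a simplified illustration of the read-over-write priority rule rather than the actual Tape Cloud implementation.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Sketch of the per-tape-drive task queueing described above: each drive owns a read queue
 *  and a write queue, and queued reads are always served before queued writes. */
final class DriveScheduler<T> {
    private final Deque<T> readQueue = new ArrayDeque<>();
    private final Deque<T> writeQueue = new ArrayDeque<>();

    synchronized void submitRead(T task)  { readQueue.addLast(task); }
    synchronized void submitWrite(T task) { writeQueue.addLast(task); }

    /** Called by the drive worker after it completes its current task:
     *  reads preempt any pending writes, regardless of the write-queue depth. */
    synchronized T nextTask() {
        if (!readQueue.isEmpty()) return readQueue.pollFirst();
        return writeQueue.pollFirst();   // may be null if both queues are empty
    }
}
```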
Synthesis of Workload for Middleware Evaluation
Characterizing Archive Workload from Traces
The accepted method of evaluating a storage infrastructure is to test its performance with benchmark workloads. While a number of articles provide benchmarks and suggest methods to evaluate various aspects of storage such as the media, queues, IO characterization [START_REF] Ahmad | Easy and efficient disk i/o workload characterization in vmware esx server[END_REF] and filesystem [START_REF] Agrawal | A five-year study of file-system metadata[END_REF][14], the literature on the performance of archival storage systems is comparatively limited. Kavalanekar et al. [START_REF] Kavalanekar | Characterization of storage workload traces from production windows servers[END_REF] provide elaborate results on storage workloads from production Windows servers. However, workload characteristics differ between non-archival and archival storage, as suggested by Lee et al. in [START_REF] Lee | Benchmarking and modeling diskbased storage tiers for practical storage design[END_REF], who make an attempt to create benchmarks. Their work, however, is limited to providing a better understanding of file types and sizes rather than a complete set of results. Another important contribution is provided by Wallace et al. [START_REF] Wallace | Decoupling datacenter studies from access to large-scale applications: A modeling approach for storage workloads[END_REF] for EMC production servers. Although a large number of aspects are covered, the impact of different types of archiving and of application level transactions with the storage has not been projected.
In order to perform a bias-free evaluation of the middleware, we subject it to a workload that has been characterized by traces obtained from live archiving applications. The traces are collected from the archiving infrastructure of IVigil, a company that provides video surveillance services to a local client base. The backed up data usually includes surveillance videos, security related data, virtual disks and documents that are handled by the company on a daily basis. Aspects that are important to the working of the middleware, such as the rate of requests with respect to time, the interarrival time of requests and the ratio of read requests to write requests, are recorded and analysed. Table 2 provides some information about the characteristics of the infrastructure. The applications show characteristics that prove common beliefs about archival data wrong [START_REF] Lee | Benchmarking and modeling diskbased storage tiers for practical storage design[END_REF]. The application level traces help in understanding the frequency with which IO requests are generated. This serves as a clear indicator of how backup types differ from each other. The filesystem level traces provide a defined understanding of what each IO request generated by the application demands. Each of the applications varies in infrastructure, so it is important to correlate the traces obtained so that they reflect a common operation at each stage. The following are the results of the characteristic extraction from the traces.
Figure 6 shows the total number of IO requests generated by the archiving applications and figure 7 the interarrival time of these requests. These have been recorded at the application level, i.e., at the first level of the storage infrastructure. The number of storage requests generated is an important feature to consider as it provides valuable insight into the nature of the application and guidance on the capacity with which the middleware needs to cope. Interarrival time helps in setting parameters such as queue lengths, batch processing rate etc.
As discussed earlier, random IO is responsible for the major share of the delay in a tape infrastructure. Figure 8 and figure 9 provide a better understanding of the number of read requests obtained as a ratio of write requests and how frequently 200 individual "hot" files are accessed within the storage system.
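The features discussed above (request counts per interval, interarrival times and the read share) can be extracted from raw traces along the following lines; the (timestamp, operation) record format used here is an assumption made for the sake of the example.

```python
from statistics import mean

def characterize(trace, bucket_seconds=12 * 3600):
    """trace: list of (timestamp_in_seconds, op) tuples with op in {'R', 'W'}, sorted by time."""
    timestamps = [t for t, _ in trace]
    interarrival = [b - a for a, b in zip(timestamps, timestamps[1:])]

    buckets = {}
    for t, op in trace:
        bucket = int(t // bucket_seconds)
        counts = buckets.setdefault(bucket, {"R": 0, "W": 0})
        counts[op] += 1

    read_share = [c["R"] / (c["R"] + c["W"]) for c in buckets.values() if c["R"] + c["W"]]
    return {
        "total_requests": len(trace),
        "mean_interarrival_s": mean(interarrival) if interarrival else None,
        "requests_per_bucket": {b: c["R"] + c["W"] for b, c in buckets.items()},
        "mean_read_share": mean(read_share) if read_share else None,
    }
```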
Workload Modeling and Generation
There have been many projects developing synthetic workloads to test storage systems, such as [18] [START_REF] Gulati | Modeling workloads and devices for io load balancing in virtualized environments[END_REF], which depend on models created from Markov chains of states and virtualized environments. These commendable results focus on workloads that differ from archiving workloads. We synthesize a workload using Vdbench [START_REF]Vdbench[END_REF] in order to test the middleware's performance. The workload generator is carefully designed by performing a sectional analysis of the results obtained from the real archive workload traces. The real workloads are spliced on the basis of a user defined time interval and the features of each division, such as number of requests, types of requests, file sizes and interarrival times, are extracted. The newly created workload is essentially a time based, weighted aggregation hybrid of the workloads. The weighted aggregation provides the flexibility to produce workloads in any combination of amounts of the given traces. It depends on a workload aggregation scheme provided by the user, from which a Vdbench script is generated. For example, an aggregation scheme (W1,W2,W3,W4) would produce a workload from the 4 participating workloads in equal proportion, while ((2)W1,(0.5)W2,W3) would produce twice the amount of workload 1, half the amount of workload 2, workload 3 unchanged and no trace of workload 4. This type of modelling has proven to provide a wide range of options for generating workloads. As the focus of this paper is the evaluation of the middleware, we use an equal proportion workload to record the difference in performance. We perform our evaluation experiments using the models and synthetic workload created on the basis of the actual archiving workloads. The performance of the middleware, and its contribution to achieving the goals of minimizing average response time and distributing data efficiently, is assessed by subjecting the backend storage system to the synthetic workload in the absence and presence of the middleware on a simulated, resource configurable data center test bed. In the former case, we make use of commonly preferred ways of task and data distribution at the application and middleware levels, such as First Come First Serve (FCFS) + Round Robin and application specific task queueing techniques.
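The splicing and weighted-aggregation step can be sketched as follows. Traces are cut into user-defined time slices, per-slice features are extracted, and a user-supplied scheme such as {'W1': 2, 'W2': 0.5, 'W3': 1} scales each workload's contribution; generating the actual Vdbench script from the aggregated budgets is not shown. All names are illustrative, not the paper's implementation.

```python
def slice_trace(trace, slice_seconds):
    """Split a sorted list of (timestamp, op, size) records into fixed-length time slices."""
    slices = {}
    for t, op, size in trace:
        slices.setdefault(int(t // slice_seconds), []).append((op, size))
    return slices

def aggregate(workloads, scheme, slice_seconds=3600):
    """workloads: {name: trace}; scheme: {name: weight}; returns per-slice request budgets."""
    combined = {}
    for name, weight in scheme.items():
        for slice_id, records in slice_trace(workloads[name], slice_seconds).items():
            slot = combined.setdefault(slice_id, {"reads": 0.0, "writes": 0.0, "bytes": 0.0})
            slot["reads"] += weight * sum(1 for op, _ in records if op == "R")
            slot["writes"] += weight * sum(1 for op, _ in records if op == "W")
            slot["bytes"] += weight * sum(size for _, size in records)
    return combined  # this dictionary would feed a (not shown) Vdbench script generator
```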
To evaluate the middleware, we first consider priority queueing and evaluate its performance. As mentioned in section 2, the priority queueing technique has a few drawbacks, which are then overcome with the RPW data distribution method. All tests are conducted along with the middleware filesystem. First of all, it is important to check for inconsistencies in the synthetic workloads as compared to the real workloads obtained from traces. Figure 11 gives the error percentage of the synthetic workloads.
Read Probability Weight based Data Distribution
The novel idea of preempting delay caused by a large number of read requests, especially in a system like Tape Cloud, calls for a preliminary evaluation of the technique. RPW considers the probability of a block of data being read once written to tapes and distributes blocks based on this probability. To verify the correctness of our assumption, we consider 10000 randomly weighted objects and distribute them into bins. Two tests are performed, with 500 and 1000 bins respectively. This emulates blocks with different probabilities that need to be assigned to different tape drives. Figure 12 shows that in both cases RPW offers a distribution that is closer to the ideal case than other approaches like FCFS.
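The verification experiment can be reproduced along these lines: 10000 randomly weighted objects are placed into 500 or 1000 bins either in arrival order (an FCFS-style baseline) or by always choosing the bin with the lowest accumulated weight, which is one plausible reading of the RPW policy. The spread between the heaviest and lightest average bin weights is used as the balance measure; the exact placement rule used in the paper may differ.

```python
import random

def fcfs_assign(weights, n_bins):
    bins = [[] for _ in range(n_bins)]
    for i, w in enumerate(weights):
        bins[i % n_bins].append(w)  # arrival order, no knowledge of weights
    return bins

def rpw_assign(weights, n_bins):
    bins = [[] for _ in range(n_bins)]
    totals = [0.0] * n_bins
    for w in weights:
        target = min(range(n_bins), key=totals.__getitem__)  # least-loaded by weight
        bins[target].append(w)
        totals[target] += w
    return bins

def spread(bins):
    averages = [sum(b) / len(b) for b in bins if b]
    return max(averages) - min(averages)

if __name__ == "__main__":
    random.seed(0)
    weights = [random.random() for _ in range(10000)]
    for n_bins in (500, 1000):
        print(n_bins, "bins:",
              "FCFS spread =", round(spread(fcfs_assign(weights, n_bins)), 3),
              "RPW spread =", round(spread(rpw_assign(weights, n_bins)), 3))
```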
In evaluating RPW using the synthetic workload, we consider two cases, with 500 tape drives (figure 13) and 1000 tape drives (figure 14). We compare RPW with FCFS and Application Specific Queueing, which distributes data blocks generated by specific applications to specific queues. The application specific approach has clear boundaries between the queues for each application in the system. When we vary the total number of requests generated by the synthetic workload, we see that RPW provides a more efficient distribution, in which the gap between the queue with the largest average weight and the queue with the smallest average weight is much smaller than for the other approaches. The whiskers show the largest and smallest average weights of the queues.
Average Response Time for Read Requests
The use of RPW based data distribution helps avoid long stretches of read operations that are localized to a small set of task queues. This in turn reduces the average delay caused at each of the queues. When we test Tape Cloud with the synthetic workload, the absence of the middleware leads us to use conventional data distribution and queueing techniques such as FCFS, Round Robin and application specific queueing of tasks. With the middleware and its enhanced task management, there is an overall reduction in the response time for read tasks generated by every application, as shown in figure 15. The graphs use a logarithmic X axis, which shows the rate of change of average response time as the number of requests is varied; RPW shows a negligible rate of change in response time even for a large number of requests. One of the notable differences that can be seen in the traces of the four applications is the variation in the number of requests over time. Theoretically, the introduction of RPW based data distribution along with priority queueing should make the average response time immune to the total number of requests. We perform an hourly analysis of the average response time for read requests from application 1 and application 2, because application 1 has the highest number of write requests and application 2 has the highest number of read requests. We see from figure 16 that, along with having the smallest response time, the combination of priority queueing and RPW distribution provides a nearly constant response time over the entire period of the test, making it independent of other requests.
Preserving Rate of Write Task Execution
In keeping with our goals, we test whether the middleware has a negative impact on the write task completion rate of the workload. Figure 17 provides a comparison of the write performance before and after the deployment of the middleware. We test extreme scenarios, namely application 1, which has the highest number of write requests, and application 2, which has the highest number of read requests for the aggregation scheme in use. It is clear that, along with improving data retrieval efficiency, the middleware also preserves write performance: there is only a negligible reduction in the number of write tasks performed per minute in both cases.
Conclusion and Future Work
We present and evaluate the design of a cost efficient, hybrid, cloud based storage system which mainly makes use of magnetic tapes as the backend storage media. Although tape has been widely categorised as a slow and unpopular storage medium, it outperforms magnetic disks in total cost of ownership and energy consumption (tapes do not consume power when stored in a tape library), which makes tape technology an ideal choice for cloud based archiving services. We explore the benefits of the state of the art in tape storage technology. We propose and evaluate a managerial middleware, a combination of algorithms and data distribution policies, that helps overcome the latency introduced by tapes in order to improve the performance of IO processes. The middleware serves its purpose by improving data distribution efficiency and decreasing the overall response time for read requests. The test cases have been generated using extensive analysis of live archiving workloads and modelling techniques.
One of the most exciting aspects of our work is the doors of opportunity it opens for new research. Understanding the economics of revisiting a legacy system to solve the data explosion problems of today requires an overhaul of nearly every piece of technology associated with the storage system. Future plans for the project include improving the middleware and the filesystem to support message passing enabled, adaptive data weight management and IO parallelization. Another area of focus is elaborating the operation of Tape Cloud for a variety of data types, applications and magnitudes of serviceability.
Fig. 2. Sequential read and write performance of an LTO5 tape drive in comparison with commercial hard disks.
Fig. 3. Implementation Architecture of Tape Cloud. The arrows represent the direction of flow of data. The infrastructure is a hybrid structure which makes use of hard disk caches and databases.
Fig. 4. The placement and interfacing of the functional blocks of the Middleware. The solid lines show the path taken by control statements and meta data while dotted lines show the path of the actual data blocks to be stored on tapes.
Fig. 5. Stages and functions of each stage of the filesystem for Tape Cloud. Although distributed by functionality, the filesystem is monolithic across the storage system.
Fig. 6. The total number of requests generated by archiving applications 1(a), 2(b), 3(c) and 4(d). The number of requests are collected at the application level for discrete read or write requests to the underlying filesystems.
Fig. 7. Interarrival (IA) time of requests generated by archiving applications 1(a), 2(b), 3(c) and 4(d). Application, type of data, file sizes and temporal locality are some of the factors influencing interarrival time. The asynchronous nature of some applications and storage system software also affects IA time.
Fig. 8. Average number of read requests as a percentage of the total IO requests in 12 hour buckets by archiving applications 1(a), 2(b), 3(c) and 4(d). The whiskers show the maximum percent of read requests received during the particular 12 hour interval.
Fig. 9.
Fig. 10. The process of synthesizing a workload based on previously analysed application traces. The traces are divided based on a user defined time interval, features extracted and an aggregation performed to create a block of the new artificial workload.
Fig. 11.
Fig. 12. Verifying the correctness of the RPW approach. Compared to FCFS, RPW offers a higher convergence to the ideal case. Here Fig. (a) is with 500 bins and Fig. (b) is with 1000 bins. The arrow points to the queue ID which serves as the point of distribution balance.
Fig. 13. The gap between the average weights of the heaviest and lightest queues for different number of requests for 500 queues. FCFS (a) and Application Specific Queueing (b) show inefficient weight distribution as compared to RPW (c).
Fig. 14.
Fig. 15. The average response time of read requests under the synthetic workload for application 1 (a), application 2 (b), application 3 (c) and application 4 (d). Note the clear difference and reduction of the average response time for each of the applications. Also, RPW based data distribution offers a very small rate of increase of response time even over larger variations of the number of requests.
Fig. 16. Time based average response time for application 1 (a) and application 2 (b). Applications 1 and 2 are considered because application 1 has the highest write requests and application 2 has the highest read requests. Compared to other methods such as FCFS and Application Specific Queueing, RPW based data distribution maintains a stable average response time regardless of the density of the workload.
Fig. 17.
Table 1. Tandberg T24 Robot, Load and Unload Delays
Type    From (slot)  To     Motion (sec)  Load (sec)     Type      From    To (slot)  Motion (sec)  Load (sec)
LOAD 1 Drive 52.4 23.3 UNLOAD Drive 1 51.6 20.1
LOAD 2 Drive 52.9 21.9 UNLOAD Drive 2 52.3 20.6
LOAD 3 Drive 54.06 22.6 UNLOAD Drive 3 52.26 20.3
LOAD 4 Drive 55.2 24.6 UNLOAD Drive 4 54.0 20.3
LOAD 5 Drive 52.42 24.0 UNLOAD Drive 5 51.3 20.9
LOAD 6 Drive 53.3 23.6 UNLOAD Drive 6 51.76 21.01
LOAD 7 Drive 54.2 21.3 UNLOAD Drive 7 52.22 20.1
LOAD 8 Drive 55.45 23.9 UNLOAD Drive 8 53.8 19.62
LOAD 9 Drive 51.8 24.0 UNLOAD Drive 9 50.7 20.3
LOAD 10 Drive 52.3 21.6 UNLOAD Drive 10 51.4 20.34
LOAD 11 Drive 53.7 22.23 UNLOAD Drive 11 51.97 23.9
LOAD 12 Drive 54.02 23.8 UNLOAD Drive 12 53.6 22.59
Average - - 53.52 23.1 Average - - 52.24 21.21
system in addition to the one caused by tape drives. The averages from multi-trial recordings of the traverse time of the robots and the loading time are provided in table 1.
[Plot data for Fig. 2: y-axis "Mb per minute" (0-8000), x-axis "Transfer Size (Kb)" (8-8192); series: Hitachi HDS725050KLA360 Read/Write, Fujitsu MPE3084AE Read/Write, Seagate ST380215A Read/Write, Tandberg Tape Read/Write.]
Table 2. Applications Contributing Workload Traces for Evaluation of Middleware
Sl. No.  Archiving Type  Description
1  Periodic Full Backup  10 disk array on 3 networked attached storage (NAS) servers archiving surveillance video and security data. Videos and related information is collected from local systems once every 24 hours through a customized asynchronous pull server based system. High churn rate.
2  Periodic Full Backup + LRU Archiving  Application archived least recently used support files on larger disk based backend storage with smaller churn rate. Deployment details and infrastructure unknown.
3  Incremental + Full Backup  Incremental backup of hard disks and virtual disks at the end of every login session and periodic full backup of 22 computers on hard disk based NAS storage running CryptoNAS software.
4  Non Periodic Mirroring Backup  Document archiving of unknown number of computers. Simple FreeNAS storage with a duplication based archiving client running on individual computers. | 43,753 | [
"1003265",
"1003266",
"1003267",
"1003268"
] | [
"365087",
"365087",
"365087",
"365087"
] |
01481058 | en | [
"shs"
] | 2024/03/04 23:41:48 | 2016 | https://minesparis-psl.hal.science/hal-01481058/file/Le%20Masson%20Hatchuel%20Weil%202015%20RED%20v4%20accepted.pdf | Pascal Le Masson
email: [email protected]
Armand Hatchuel
Benoit Weil
Design Theory at Bauhaus: teaching "splitting" knowledge
Keywords: Generativity, design theory, splitting condition, Bauhaus, industrial design
Recent advances in design theory help clarify the logic, forms and conditions of generativity. In particular, the formal model of forcing predicts that high-level generativity (so-called generic generativity) can only be reached if the knowledge structure meets the 'splitting condition'. We test this hypothesis for the case of Bauhaus (1919-1933), where we can expect strong generativity and where we have access to the structures of knowledge provided by teaching. We analyse teaching at Bauhaus by focusing on the courses of Itten and Klee. We show that these courses aimed to increase students' creative design capabilities by providing the students with methods of building a knowledge base with two critical features: 1) a knowledge structure that is characterized by non-determinism and non-modularity and 2) a design process that helps students progressively 'superimpose' languages on the object. From the results of the study, we confirm the hypothesis deduced from design theory; we reveal unexpected conditions on the knowledge structure required for generativity and show that the structure is different from the knowledge structure and design process of engineering systematic design; and show that the conditions required for generativity, which can appear as a limit on generativity, can also be positively interpreted. The example of Bauhaus shows that enabling a splitting condition is a powerful way to increase designers' generativity.
Introduction
What is the logic of creative reasoning? Recent advances in design theory have provided answers to debates on the possibility of any logic of creation and have allowed the analysis, modelling, and even improvement of the generativity capacities of creative people. There are models of generativity [START_REF] Hatchuel | A systematic approach of design theories using generativeness and robustness[END_REF]. They describe, for instance, generativity that involves mixing 'non-alignment'-based concepts [START_REF] Taura | A Systematized Theory of Creative Concept Generation in Design: First-order and high-order concept generation[END_REF], generativity that relies on duality inside the knowledge space (Shai et Reich 2004a;[START_REF] Shai | Creativity and scientific discovery with infused design and its analysis with C-K theory[END_REF], generativity that relies on closure spaces [START_REF] Braha | Topologial structures for modelling engineering design processes[END_REF], or generativity that involves adding to a concept attributes that break design rules (i.e., C-K expansion [START_REF] Hatchuel | C-K design theory: an advanced formulation[END_REF]).
Based on these models, design theories provide an enriched vocabulary for the creative 'outcome'; e.g., there are designed entities at the borders of different semantic fields (i.e., general design theory [START_REF] Taura | A Systematized Theory of Creative Concept Generation in Design: First-order and high-order concept generation[END_REF]), designed entities that fill in 'holes' (i.e., infused design (Shai et Reich 2004a;[START_REF] Shai | Creativity and scientific discovery with infused design and its analysis with C-K theory[END_REF])), and designed entities that create new identities and new definitions of things (i.e., C-K theory [START_REF] Hatchuel | C-K design theory: an advanced formulation[END_REF]). The models also provide enriched descriptions of how design unfolded to get these entities; e.g., knowledge provoking 'blending' (i.e., general design theory), the uncovering of 'holes' via duality (i.e., infused design), and the use expansive partitions (i.e., C-K theory).
The above works provide us with new approaches of creation and creative reasoning. In particular, the models predict that strong generativity (which we later call 'generic generativity') is associated to (and, more precisely, conditioned by) specific knowledge structures; i.e., the knowledge base has to follow a splitting condition. This proposition is counter-intuitive as we tend to rather consider that the only limits to generativity are cognitive fixations. Hence, the present paper addresses the issue of whether we can verify the splitting condition in design situations that are particularly generative. If the splitting condition is true, it should be, for instance, particularly visible in the case of so-called 'creative professions' like art and industrial design. We therefore ask: Relying on design theories, can we characterize a type of generativity of industrial designers-specific 'effects'-and specific conditions acting on the knowledge structure that help achieve these effects? We do not study all industrial designers and rather focus on industrial design schools because they are the places where industrial designers are educated (and thus provide favourable access to knowledge bases) and where a doctrine of what is industrial design, and particularly its logic of generativity, is discussed, practiced and diffused. We focus on one of the most famous industrial design schools, Bauhaus, for many the matrix of several industrial design schools of today.
How does Bauhaus relate to generativity? Indeed, teaching industrial design does not necessarily consist of increasing creative design capability as it can also involve teaching existing styles and processes (e.g., drawing and moulding). Bauhaus itself was from time to time assimilated in a new style (e.g., the functionalist style); one can be tempted to think that the school actually taught this functionalist style. We therefore first clarify whether Bauhaus teaching really consists of teaching creative design methods (and theories) or only involves teaching a new 'style'. More generally, we will characterize the kind of creative expansion that Bauhaus teaching is expected to generate. We will show that Bauhaus actually aimed at a form of style creation, and we will show that this style creation can be characterized as a form of 'generic generativity'. We will then uncover critical facets of the reasoning that leads to this 'generic generativity'. On the one hand, the creative craft of the industrial designer is often viewed as a mysterious talent, reserved to those that are naturally born 'creative' [START_REF] Weisberg | General Design Theory and a CAD System[END_REF], and we will try to shed some light on this 'magical' talent. On the other hand, one might claim that the specificity of industrial designers is only a result of the type of knowledge industrial designers use (e.g., knowledge about users, ergonomics, symbolic meaning, sociology, culture, and form), and we will challenge the idea that industrial design is limited to certain areas of expertise. We will show that there is something more specific and more universal in Bauhaus teaching. Specifically, at Bauhaus, the capacity for design generativity is based on the acquisition of one very specific knowledge structure, characterized by two properties: non-determinism and non-modularity. We show that this knowledge structure corresponds surprisingly well to the so-called splitting condition in formal design models of mathematics.
Hence, we will characterize Bauhaus teaching as a way of helping students to be 'generically creative' by building a knowledge structure that meets the splitting condition.
Finally, we show that this study of teaching in industrial design is also relevant to engineeringdesign. How can this be? Industrial design and engineering design are two clearly distinct traditions (see histories on engineering design [START_REF] Heymann | Kunst" und Wissenchsaft in der Technik des 20[END_REF]König 1999) and industrial design (Forty 1986) and the relationship between engineers and so-called 'artists' [START_REF] Rice | An engineer imagines[END_REF])), two different professions, not taught in the same schools and embodying two different social roles. The contrasting figures of industrial design and engineering design use different journals, rely on different epistemologies, and connect to different disciplines. Still, engineering design and industrial design today share common interests. Design research societies try to bring them together through joint conferences. Both communities share today a concern about creative design and innovative design capabilities. Furthermore, recent progress in design theory has helped uncover the universality of design beyond professional traditions (Le [START_REF] Laudien K ; ) Maschinenelemente | Design Theory: history, state of the arts and advancements[END_REF] (see also recent keynotes on design theory at the International Conference on Engineering Design 2015, Milan, and at the European Academy of Design, Paris 2015), thus supporting scientific exchanges between communities. The present paper aims at contributing to this trend. Specifically, by relying on Bauhaus teaching and design theory, we expect to learn about not only industrial design but also the relationship between industrial design and engineering design and, more generally, we expect to enhance our understanding of innovative design capabilities and critical aspects of design theory.
We briefly review the literature on generative processes to formulate our research hypotheses (part 1), before presenting our method (part 2), our analysis of Bauhaus teaching, compared with engineering design (part 3), and our research results (part 4).
Part 1: The logic of generativity and its formal conditions
Generativity as a unique feature of an ontology of design
Works on design theory in recent decades have revealed that generativity is a critical, even unique, feature of design theory; see, in particular, the 2013 special issue on design theory published under Research in Engineering Design(Le [START_REF] Laudien K ; ) Maschinenelemente | Design Theory: history, state of the arts and advancements[END_REF]. This logic of generativity was analysed both from an historical perspective (Le Masson et Weil 2013;Le Masson, Hatchuel et Weil 2011) and from a formal perspective [START_REF] Hatchuel | Towards an ontology of design: lessons from C-K Design theory and Forcing[END_REF]. It was shown that design theory is dealing with the emergence of new entities, previously unknown but designed by relying on known attributes; i.e., it addresses how to model the emergence of the new, the unknown, from the known. Different design theories proposed more or less generative models, relying on the specific language of the theory. As an historical example, one of the first design theories developed for machine design was the theory of ratios, developed by Ferdinand Redtenbacher [START_REF] Redtenbacher | Prinzipien der Mechanik und des Maschinenbaus. Bassermann[END_REF]König 1999). This theory is based on the language of each machine type (e.g., hydraulic wheels or a steam locomotive) and the generativity is thus limited to the machines described by the kind of language (e.g., the theory helps to generate previously unknown hydraulic wheels but cannot generate a turbine). Design theories have progressively increased their generative capacities by relying on abstract languages (or more precisely: on the abstract languages provided by the scientific advances of their time); e.g., general design theory relies on functions and attributes [START_REF] Tomiyama | Extended general design theory[END_REF]Yoshikawa 1981;Reich 1995), the coupled design process overcomes the limits of functions by enabling the emergence of new functions [START_REF] Braha | Topologial structures for modelling engineering design processes[END_REF], infused design relies on duality in knowledge structures (Shai et Reich 2004a, b), and C-K theory relies on the logical status of propositions [START_REF] Hatchuel | C-K design theory: an advanced formulation[END_REF].
Generativity and creativity-towards a variety of forms of generativity
The different models highlight an overlooked area of research on creation and creativity: creative reasoning logic. Since the 1950s, psychologists have proposed measures of the effect of creative capacities (see Guilford criteria used to characterize a distribution of ideas-the fluency, diversity, originality of a set of ideas) [START_REF] Guilford | Creativity[END_REF]). In the following years, many factors of creativity were identified (see Rhodes' 4Ps (person,process,press,products)) (Rhodes 1961). Still the reasoning logic of the creative mind has long remained out of scope. Several processes of creative reasoning have been proposed, all based on Wallas's model (information, incubation, illumination, verification) [START_REF] Wallas | The Art of Thought[END_REF], itself already described by Poincaré (Poincaré 1908) (see also [START_REF] Hadamard | The psychology of invention in the mathematical field[END_REF]). In the 1990s, works on computer models of creativity were proposed. As underlined by (Boden 1999), they tended to distinguish between non-radical ideas, based on already known generative rules, and radically original ideas, which cannot 'be described and/or produced by the same set of generative rules as are other, familiar ideas ' (p.40). Meanwhile, research in the field of psychology has underlined forms of 'bias' in creative design reasoning, leading to 'fixation effects' [START_REF] Jansson | Design Fixation[END_REF]; i.e., distributions that are too narrow.
The above works focus on ideation and the psychology of ideation. Ideation is a part of design and often a phase in the design process. However, ideation does not account for all aspects of the generative process. In particular, ideation tends to rely on a 'closed-world assumption'; i.e., knowledge is given at the beginning of the ideation process. Hence, ideation cannot account for the generation of knowledge in design. Another limit is linked to the notion of an idea. Ideation focuses on the originality of one idea compared with other ideas, while generativity also accounts for the transformation induced by a designed entity; e.g., a newly designed entity might require/allow the re-ordering of the whole set of existing entities (i.e., new combinations between the new and the old are made possible and are accounted for by generativity). For instance, when Watt and Boulton designed a way to transform the parallel motion of the steam engine into a rotary motion, their design paved the way to new machines having several applications. This discussion underlines that there are several forms and facets of generativitybeyond the quantity and originality of ideas. Generativity can also be characterized by knowledge creation and knowledge reordering induced by design.
Forms of generativity: 'generic' vs 'frequency' generativity
Research that uses formal models helps uncover the variety of forms of generativity. The presentation of all these forms is beyond the scope of this paper. We discuss one of the most generative forms: generativity formalized by forcing.
Forcing is a method invented by Paul Cohen to create new models of sets [START_REF] Cohen | Set Theory and the Continuum Hypothesis. Addison-Wesley, Cohen P[END_REF][START_REF] Cohen | Set Theory and the Continuum Hypothesis. Addison-Wesley, Cohen P[END_REF] 1 . Cohen presented forcing as a generalization of extension techniques (e.g., the creation of a field of complex numbers from fields of real numbers) or a generalization of the Cantor diagonal method (e.g., the creation of new reals). This generalization is powerful because sets are basic mathematical structures on which it is possible to reconstruct all mathematical objects (e.g., numbers, functions, geometry, algebra, and topological structures) [START_REF] Dehornoy | Théorie axiomatique des ensembles[END_REF]) -hence the genericity of forcing. As shown by [START_REF] Hatchuel | Towards an ontology of design: lessons from C-K Design theory and Forcing[END_REF], forcing can be interpreted as a generic design method. Of course, its validity is limited to the design of new models of sets (while preserving some basis rules of sets (basically Zermello Fraenkel axioms)), but set theory is so general that it is possible to establish correspondences between the design of models of sets and the design of other entities, as shown by the correspondence between forcing and C-K theory [START_REF] Hatchuel | Towards an ontology of design: lessons from C-K Design theory and Forcing[END_REF].
Without going into every mathematical detail, let's underline a first main lesson from forcing: its generativity.
The logic of forcing is as follows (see [START_REF] Cohen | Set Theory and the Continuum Hypothesis. Addison-Wesley, Cohen P[END_REF][START_REF] Jech | Set Theory[END_REF][START_REF] Hatchuel | Towards an ontology of design: lessons from C-K Design theory and Forcing[END_REF]).
1) The first element of forcing is a so-called ground model M: a well formed collection of sets that is a model of the axiomatic of set theory, ie it follows Zermelo-Fraenkel axioms. Illustration: this corresponds to the 'knowledge base' of the designer (e.g., knowledge of 'furniture'). As explained by [START_REF] Dehornoy | Théorie axiomatique des ensembles[END_REF], the logic of set theory roughly correspond to the intuition we can have on objects and sets of objects.
2) The second element is the set of so-called forcing 'constraints' 2 built on M. To build new sets from M, we have to extract elements according to constraints that can be defined in M. Let us denote by (Q, <) a set of constraints Q and a partial order relation < on Q. This partially ordered set (Q, <) is completely defined in M. Illustration: a piece of furniture has a shape, can meet functional requirements, and is made of materials. These are the 'constraints'. From Q, we can extract constraints that can form series of compatible and increasingly refined constraints (q 0 , q 1 , q 2 ... q i ), where for any i, q i < q i-1 ; this means that each constraint q i refines the preceding constraint q i-1 . The result of each constraint is a subset of M. Hence, the series (q i ) builds series of nested sets, each one being included in its preceding set of the series. Such a series of constraints generates a filter F acting on Q. A filter can be interpreted as a step-by-step definition of some object of M. Q is the knowledgestructure used by the designer. Illustration: to define a certain piece of furniture, the designer can, for instance, describe the function, then the shape, then the materials (and hence there is a series of constraints that refine each other). Illustration: in the world of industrial design, Q can have colour, texture, and be made of certain matter. In the world of engineering design, one would speak of functions, technologies, and organs.
3) The third element of forcing is the dense subsets of (Q, <). A dense subset D of Q is a set of conditions so that any condition in Q can be refined by at least one condition belonging to this dense subset. One property of dense subsets is that they contain very long (almost 'complete') definitions of things (or sets) on M, because each condition in Q, whatever its 'length', can always be refined by a condition in D. Still, a dense subset contains only constraints so that it is a way to speak of all elements without 'having' one element and speaking of them only in terms of their 'properties'. Illustration: in art, the notion of the 'balance' of the composition of a piece of art could be interpreted as a dense subset defined by conditions such as lines, colours, and masses. The set of conditions leading to a balance is dense in the set of all conditions because, whatever a sequence of conditions (a partially defined piece), it is always possible to identify additional conditions with which to speak of the 'balance' of this partially defined object. In engineering design, usual 'integrative' dimensions such as cost or weight, energy consumption or reliability can be considered as dense subsets. Whatever the level of definition of the machine at stake, there will always be a constraint that refines this level of definition and is related to, for instance, cost (or energy consumption, reliability, and so on). For instance, the issue of cost can be discussed when only functional constraints are added or it can be discussed much later in the design process when a detailed design is produced.
4) The fourth element (and core idea) of forcing is the formation of a generic filter G, made of constraints of Q (hence from M), which step by step completely defines a new set. The exciting result of forcing is that, under certain conditions to be explained below, this new set defined by G is not in M. How is it possible to jump out of the box M? Forcing uses a very general technique in that it creates an object that has a property that no other object of M can have. Technically, a generic filter is defined as a filter that intersects all dense subsets. In general (see condition 1 below), this generic filter defines a new set that is not in M but is still defined by conditions from Q, defined on M. We can interpret G as a collector of all information available in M in order to create something new not in M. Illustration: in the case of industrial or engineering design, a new piece is only a filter (a series of constraints (i.e., lines, colours, and material), functions, technologies, organs, and dimensions). There is no guarantee that a series of constraints builds a generic filter; i.e., there is no guarantee that the series intersects all dense subsets and follows condition 1 below. There is thus no guarantee that the new piece is 'out-of-the-box'. However, conversely, as soon as the series meets condition 1 and intersects all dense subsets, one designs a new object that is made from the known constraints and is different from all the known objects.
5) The fifth element of forcing is the construction method for the extended model N. The new set G is used as the foundation stone for the generation of new sets combining systematically G with other sets of M (usually denoted M(G)). The union of M and M(G) is the extension model N. Illustration: in the case of industrial design, a new object can embody a new style, and this new style can be used to redesign the whole set of known products, services, fonts and so on. A known example is the 'streamline' style that was used to redesign all kinds of products in the 1920s and 1930s (from aircraft to buildings, hairdryers, toasters and advertisement typography) [START_REF] Engler | Streamlined, A Metaphor for Progress, The Esthetics of Minimized Drag[END_REF]. In the case of engineering design, the development of a new machine is not supposed to lead to a revisit and redesign of the whole range of machines. Still, this can happen for so-called generic technologies; e.g., the development of electric motors and digital control systems led to the redesign of many systems and machine tools. This leads us to the first powerful result of the mathematical model: it enables us to characterize 'generic' generativity. Let's explain this first point. Forcing creates a new set G that is built on M, and is, in general, different from all elements of M and is still coherent with the rules of M. Therefore, this set G is precisely 'generically' generative in that it is different from all elements of M but coherent and able to lead to the design of a whole collection of new entities, M(G). This 'generic generativity' can be distinguished from another type of generativity. Suppose that one distinguishes in M the elements made only with 'usual' constraints and the elements made with at least one 'original' (i.e., rarely used) constraint. The latter constraints might be said to be creative in the sense that they are original, since they use a 'rarely used constraints'. However, these elements are in M. This is a form of 'frequency' generativity, which is non-generic. Note that an 'exploration' logic in a complex search space leads to 'frequency' generativity; i.e., the new solution will rely on a rarely used routine (constraint) but this solution is still in the initial space of potential solutions.
If the set is in M, then the 'composition' (union, intersection, and so on of all operations allowed by Zermelo-Fraenkel axioms) of this set with sets of M is still in M; i.e., it is not 'new'. By contrast, if the set is not in M, then the composition of this 'new' set with sets of M is also a new set. Hence, there is the process of extending M to N = M(G). In summary, in the case of 'frequency' generativity, one stays in the box (i.e., the generativity is simply related to the fact that one uses an 'original', low-frequency constraint from the box M), and the new entity does not require the redesign of other entities. In the case of generic generativity, one uses constraints from the box M to go out of the box (G is not in M) and this leads to the design of all-new objects created from the combinations of the new entity G and the known entities in M.
This formal model clarifies two very different forms of generativity and leads to the first research hypothesis in our study of creative designers: H1: creative design aims at generic generativity. By contrast, designers who don't claim creative design rather rely on non-generic generativity.
Conditions of generativity: splitting condition and countable dense subsets
Forcing models are a powerful form of generativity-a form that seems to correspond to phenomena of strong generativity, such as the design of a new style in industrial design, the design of a generic technology in the realm of technical objects, or even the design (discovery) of new scientific principles in the realm of science (see the emergence of relativity theory or quantum theory in physics for instance).
Forcing also clarifies some conditions of this generativity. Note that this is not intuitive in that one tends to consider that there are only psychological limits to generativity, such as fixations. Forcing theory provides us with a characterization of the formal conditions associated to generic generativity. In technical terms, forcing clarifies the conditions required for a filter to be a generic filter that goes out of M.
There are two conditions sufficient to create a 'generic filter': the splitting condition and countability condition.
Condition 1: splitting condition (necessary condition) A generic filter does not necessarily go out of M. It has been shown that G is not in M as soon as Q follows the splitting condition; i.e., for every constraint p, there are two constraints q and q' that refine p but are incompatible (where the term 'incompatible' means that there is no constraint that refines q and q'). 3This formal expression corresponds to deep and general properties of the knowledge base of a designer (where we remember that M can be assimilated to the knowledge base of a designer and Q to the structure of this knowledge base). Let's clarify what the splitting condition means. It is easier to understand what a non-splitting knowledge base is. A knowledge base is non-splitting in two cases.
1-Deterministic rule: the knowledge base is non-splitting if there is one constraint p such that there is only one single series of constraints q 1 , q 2 … that refines p (see figure 1). This means that p determines immediately the set of constraints that follows. p is a deterministic rule that determines the entity. If there is such a deterministic rule, then the generic filter that contains p does not go out of M.
This kind of deterministic rule can be found when the designer relies on one specific know-how or considers that he or she applies scientific rules and principles. In both cases, the designer follows a unique predefined series of constraints after p. As a consequence, design can be generically generative only if the designer does not only rely on know-how.
2-Modularity: the knowledge base is non-splitting if there is one constraint p such that there are refinements q and q' of p such that there is a constraint r that refines q and q'. This means that q and q' are modules that can be added to the entity without making any difference to the following constraint r. r is insensitive to the choice between q and q'. q and q' are modular; i.e., they are interchangeable.
This kind of modularity can be found when the designer relies on building blocks that are interchangeable, such as Lego blocks. As a consequence, design can be generically generative only if the designer is not relying only on building blocks. As a consequence, generic generativity can be obtained only with a knowledge structure without determinism and modularity. Conversely, a knowledge structure with determinism and modularity prevents generic generativity. Hence, this formal model provides us with a clear hypothesis with which to analyse creative design:
H2: creative designers (aiming at generic generativity) will rely on a splitting knowledge base.
Conversely, in the case of non-generic generativity, the designer relies on a nonsplitting knowledge base.
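For readers who prefer a compact notation, the splitting condition discussed above can be written as follows; this merely restates the definitions given in the text, with q < p read as 'q refines p'.

```latex
% Splitting condition on the partially ordered set of constraints (Q, <):
% every constraint admits two incompatible refinements.
\forall p \in Q \;\; \exists\, q, q' < p \; : \; \neg\, \exists r \in Q \; (r < q \wedge r < q')

% Its negation (a non-splitting knowledge base): at some constraint p,
% every pair of refinements remains compatible.
\exists p \in Q \;\; \forall q, q' < p \;\; \exists r \in Q \; (r < q \wedge r < q')

% Both the deterministic case (a single chain of refinements below p) and the
% modular case (interchangeable refinements sharing a common refinement r)
% satisfy this negation.
```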
Condition 2: countable condition (sufficient condition)
How can one build a generic filter? There is no single way. However, there is an interesting sufficient condition: if M is countable, then the collection of dense subsets of M is countable and there exists a generic filter on Q (in fact, there exists a generic filter G for every p* of Q such that p* is in G) 4.
This second condition corresponds to a constructive procedure that creates a generic filter. Because the dense subsets of M are countable, they can be ordered D1, D2… Beginning at constraint p0, the designer can always find a constraint in D1 that refines p0 (because D1 is dense); he or she takes p1 and can then always find a constraint p2 in D2 that refines p1 (because D2 is dense), and so on. The sequence of constraints creates a generic filter G. If the knowledge base initially met the splitting condition, then the filter is not in M. This means that the design process is determined by the dense subsets and the countability logic that allows the classification of the dense subsets.
By contrast, what is the design process associated with a knowledge structure that does not meet the splitting condition? It can be shown that the generic filter is determined by the conditions where there is determinism and modularity 5. The design process in the case of non-splitting conditions is not determined by the dense subsets but is structured by the constraints where the knowledge base is non-splitting; i.e., where determinism and modularity begin. One would then expect a design process based on constraints (deterministic or modular) in non-generic generativity and a process based on dense subsets in generic generativity.
4 Demonstration (see [START_REF] Jech | Set Theory[END_REF], p. 203): Let D1, D2… be the dense subsets of Q. Let p0 = p*, a constraint in Q. For each n, let pn be such that pn < pn-1 and pn is in Dn. The set G = {q ∈ Q / q > pn for some n ∈ N} is then a generic filter acting on Q and p* is in G.
5 Demonstration: If Q is non-splitting, then there exists p0 such that whatever q and q' are refining p0, there is r such that r < q and r < q'. We show that if p0 is in G, then G refines all conditions stronger than p0. We want to show that, whatever q < p0, there is r in G that refines q. To this end, we introduce Dq = {p in Q / p is not refined by p0 or p < q}. Dq is dense: for every p in Q, either p is not refined by p0 and it is in Dq, or p < p0; we know that q < p0 and Q is non-splitting, and hence, there is r < p and r < q. Dq is therefore dense. G therefore intersects Dq. Hence, for every q that refines p0, there is an r in Dq. Moreover, we know that p0 is in G, and hence, r in Dq necessarily refines p0. Therefore, every constraint stronger than p0 is refined by a constraint in G. Hence, every constraint stronger than p0 is in G. Hence, G is determined by p0. Note that the splitting condition is sufficient but not necessary. A non-splitting knowledge base Q can be used to create a generic filter G not in M, which is a consequence of the theorem above that states that G must "avoid" all p0 where modularity or determinism begins.
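The enumeration argument of footnote 4 can be summarized symbolically as follows (a restatement of the construction above, not an additional result):

```latex
% M countable => the dense subsets of (Q, <) can be enumerated D_1, D_2, \dots
% Starting from p_0 = p^{*}, pick at each step a refinement inside the next dense subset:
p_n \in D_n, \qquad p_n < p_{n-1} \qquad (n \geq 1)

% The upward closure of this chain is a generic filter containing p^{*}:
G = \{\, q \in Q \; : \; \exists n \in \mathbb{N}, \; q > p_n \,\}
```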
Hence, the formal model provides a clear hypothesis with which to analyse creative design:
H3: creative designers (aiming at generic generativity) can follow a design process defined by the order of the dense subsets.
Conversely, in non-generic generativity, design will rely on constraints that are modular or deterministic.
Part 2: Research questions and method
Research questions
In brief, based on formal models of design like forcing, we formulate the following research hypotheses regarding creative design. H1: creative design aims at generic generativity; i.e., the design of an entity that is not in the initial knowledge base and that requires the reordering of the knowledge base by including all combinations of the newly designed entity and the previously known entities. H2: creative design relies on a splitting knowledge base to get generic generativity; hence, learning creative design should involve gaining the ability to create a splitting knowledge base. H3: the creative design process can follow a design process defined by the order of the dense subsets; hence, learning creative design should involve ordering dense subsets.
Said differently, formal design theory predicts that there are conditions that need to be met to realize generic generativity. This is intriguing. To check these conditions, it is interesting to analyse expert designers who are famous for their generativity, so as to check that their generativity can be considered a form of generic generativity, and then to analyse whether their knowledge base meets the conditions predicted by formal design theory.
Methods-material and analytical framework
To empirically study generic generativity and its conditions, we need an empirical situation where generic generativity is most likely (to check H1) and we need to be able to characterize the knowledge base of the designer. This second condition is particularly hard to meet; i.e., how can one access the designer's knowledge base? Our research method involves studying courses offered at design schools. The study of courses provides direct p 0 is refined by a constraint in G. Hence, every constraint stronger than p 0 is in G. Hence, G is determined by p 0 . Note that the splitting condition is sufficient but not necessary. A nonsplitting knowledge base Q can be used to create a generic filter G not in M, which is a consequence of the theorem above that states that G must "avoid" all p 0 where modularity or determinism begins. access to the knowledge acquired by the designer at school and hence, specifically, the knowledge structure built to do his/her designer task. We focus on courses offered at Bauhaus for two reasons. 1) Bauhaus is famous for its powerful generativity. Although it requires further investigation, there is a good chance that H1 holds true for Bauhaus designers. 2) Bauhaus is famous for its formal teaching, which provides us with an impressive corpus with which to study the knowledge structure and design processes invented by famous professors to meet the challenge of creative design.
Material: Itten and Klee courses
This paper does not address all aspects of Bauhaus teaching but focuses on the courses given by Klee and Itten. This corpus, often criticized to be too formal and 'scientific' to meet generativity challenges, will nevertheless provide strong elements for our research. Itten (1888Itten ( -1967) ) was invited by Walter Gropius to teach an introductory course at Bauhaus. Itten taught this course from 1919 to 1922 (i.e., the very first years of Bauhaus). He considered that 'imagination and creative ability must first of all be liberated and strengthened' and he proposed to do this by providing specific knowledge on the 'objective laws of form and colour', with the idea that it would 'help to strengthen a person's powers and to expand his creative gift' [START_REF] Itten | Design and Form, the Basic Course at the Bauhaus and Later[END_REF]. His theory of contrast had to 'open a new world to students'. His famous theory of colours intended to 'liberate the study of colours harmony from associations with forms' and to help discover 'expressive quality of the colours contrasts' [START_REF] Itten | The art of color[END_REF]. Hence, this course will be particularly helpful for our study of the kind of knowledge structure that can improve generic generativity.
We can go one step further to sharpen our analysis. It is interesting to note that the idea of providing knowledge to improve design capability was not new. Vitruvius had already (in the first century) insisted on the necessity for architects to master a large corpus of knowledge [START_REF] Vitruvius | Ten Books on Architecture[END_REF]. When Itten taught his courses, engineers in Germany learnt engineering design by learning machine elements and engineering sciences [START_REF] Heymann | Kunst" und Wissenchsaft in der Technik des 20[END_REF]. Still, machine elements or engineering sciences are not necessarily seen as sources of generativity. What is the difference between the kind of knowledge and learning capacities as taught by Itten and the machine elements and engineering sciences as taught in German machine construction courses at the same time? Klee (1879Klee ( -1940) ) was invited by Itten and Gropius in 1921 to teach at Bauhaus, where he remained as a professor for 10 years. His course 'Contribution to a pictorial theory of form' is described by Herbert Read as 'the most complete presentation of the principles of design ever made by a modern artist' (p. 186) (Read 1959). As he explains in the retrospective of his course (lesson 10), 'any work is never a work that is, it is first of all a genesis, a work that becomes. Any work begins somewhere close to the motive and grows beyond the organs to become an organism. Construction, our goal here, is not beforehand but is developed from internal or external motives to become a whole' [START_REF] Klee | Contributions à la théorie de la forme picturale[END_REF] [our translation]. His intention is hence to teach a process that creates an organism, a whole, which unfolds step by step. With Klee, it is particularly relevant to study design processes leading to generic generativity.
Here again we can go one step further. We know of design processes that ensure that a coherent whole will emerge step by step. For instance, systematic design [START_REF] Pahl | Engineering design, a systematic approach[END_REF] prescribes developing a product through four main steps (functional requirements, then conceptual design, embodiment design and detailed design). Again, such a process is not particularly well known for its creative aspects, or more precisely, its capacity to break design rules. Hence, what is the difference between the Klee design process and a classical engineering design process?
Sources
To study the courses, we rely on primary sources [START_REF] Gropius | The Theory and Organization of the Bauhaus[END_REF][START_REF] Gropius | Neue Bauhauswerkstätten[END_REF][START_REF] Itten | Design and Form, the Basic Course at the Bauhaus and Later[END_REF][START_REF] Itten | The art of color[END_REF][START_REF] Kandinsky | Cours du Bauhaus[END_REF][START_REF] Klee | Beiträge zur bildnerischen Formlehre ('contribution to a pictorial theory of form[END_REF][START_REF] Klee | Contributions à la théorie de la forme picturale[END_REF][START_REF] Klee | On modern Art[END_REF] and secondary sources (Wick 2000; Whitford 1984; [START_REF] Droste | Bauhaus[END_REF]; Schwartz 1996; [START_REF] Campbell | The German Werkbund[END_REF]; [START_REF] Friedewald | Life and Work[END_REF]). Note that the quality of primary sources is excellent. In particular, Klee said he was stressed by teaching so he wrote in his notebooks all the details of his courses, including sketches made during courses.
Analytical framework
In each case, we first present the courses, as described by the teacher and confirmed by former students. We then analyse the design logic in teaching from two perspectives: i) how does the teaching process affect (or attempt to affect) the knowledge structure of the students, and can this knowledge structure be related to the splitting condition (in particular, we will have to identify the 'constraints' for Bauhaus students, and the structure of these constraints) and ii) how does the course help the student learn a specific design process, and is this specific design process related to the countability of dense subsets? (In particular, we will identify dense subsets for Bauhaus students and analyse how they relate to each other, so that they can be considered 'countable'.)
To analyse the evolution of knowledge structures and the design process implied by design courses, we coded several Itten and Klee exercises with C-K design theory [START_REF] Hatchuel | C-K design theory: an advanced formulation[END_REF]. The theory provides us with an analytical framework that we can use to follow knowledge expansion resulting from design courses. In each case, we coded in K the knowledge acquired during the past courses, and in C the terms of the exercise. We then coded the answers to the exercises (i.e., the answer given by students when available, or the answer given by the professor) and the associated knowledge examples.
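As a purely bookkeeping illustration of this coding scheme, the record kept for each exercise can be sketched as follows. The class and field names are ours (introduced only for this sketch), not part of C-K notation, and the example entry paraphrases the texture-montage exercise analysed later in the paper.

```python
# Illustration only: a minimal record for coding one exercise in C-K terms.
# The class and field names are ours (bookkeeping), not C-K notation.

from dataclasses import dataclass, field
from typing import List

@dataclass
class CKCoding:
    concept: str                                                # C: the brief of the exercise
    knowledge_before: List[str] = field(default_factory=list)   # K at the start of the exercise
    answers: List[str] = field(default_factory=list)            # coded answers (students or professor)
    knowledge_after: List[str] = field(default_factory=list)    # K expansions observed afterwards

# Example entry for the texture-montage exercise discussed in Part 4.
itten_texture_montage = CKCoding(
    concept="texture montages of contrasting materials, bound by rhythmic forms",
    knowledge_before=["materials", "textures", "rhythmic forms",
                      "deterministic link material -> texture",
                      "texture treated as modular with respect to form"],
    answers=["'fantastic structures with completely novel effects' (Itten)"],
    knowledge_after=["material -> texture link relaxed (non-determinism)",
                     "texture now shapes the whole form (non-modularity)"],
)
```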
Part 3: H1: style creation and generic generativity at Bauhaus
Before analysing Bauhaus courses, we first need to discuss the logic of generic generativity at Bauhaus. We show that generic generativity at Bauhaus corresponds to a logic of teaching style creation. We establish this point in two steps. First, we review works on teaching in industrial design, showing that there has long been a tension between teaching style and teaching style creation, with style creation being a form of generic generativity. We then show how Bauhaus clearly took a position in favour of teaching style creation.
Tension between teaching style and teaching style creation
A look at the history of industrial design education reveals recurring tensions about what should be taught.
1) United States and Germany, early 20th century. At the end of the nineteenth century, countries such as Germany and the United States decided to deeply reform their teaching of fine art, in particular as a pragmatic consequence of the World Fairs where German and American products exhibited poor quality (e.g., see the reception of German products described by Reuleaux [START_REF] Reuleaux | Constructionslehre für den Maschinenbau, erster Band : die Construction der Maschinentheile. Fridriech Vieweh und Sohn, Braunschweig Rhodes M[END_REF]) and the poor reception of American applied arts at the 1889 Paris Exposition [START_REF] Jaffee | Before the New Bauhaus: From Industrial Drawing to Art and Design Education in Chicago[END_REF]). This decision also corresponded to a more utopian focus on 'art as an arena of social improvement' (Jaffee 2005, p. 41) and the use of applied art as a way to recreate culture and communities in an industrial era (Schwartz 1996).
The teaching of fine art was then reorganized to be more like that of the Art Institute of Chicago and its school [START_REF] Jaffee | Before the New Bauhaus: From Industrial Drawing to Art and Design Education in Chicago[END_REF]. Jaffee explains that the basis of the new teaching is twofold. On the one hand, a 'vigorous technical component' (e.g., ornamental design, woodcarving, frescoing, mosaicking and the use of stained glass) was added to the offering of traditional fine arts (e.g., drawing and anatomy), in a tendency to address 'all types of works of house decoration and industrial arts, including the "modern arts" of illustration and advertising'. On the other hand, the teaching tended to be based on scientific principles: 'many American educators believed that abstract laws or principles of arts existed which, once stabilized, would not only facilitate the production of art but raise it to a higher level' (Jaffee 2005) (p. 44). These principles ranged from Ross's works [START_REF] Ross | A Theory of Pure Design: Harmony, Balance, Rhythm[END_REF]) to develop a rational, scientific theory of the aesthetic of perception to Dow's principles of composition [START_REF] Dow | Composition: A Series of Exercises in Art Structure for the Use of Students and Teachers[END_REF].
For some professors like Sargent, a leading figure of design teaching at the University of Chicago Department of Arts, such a program could support the creation of new styles: 'after the war, said Sargent in 1918 (cited by Jaffee), the United States will have to depend upon its own resources more than in the past, not only for designers but also for styles of design'. Still, these methods were rather principles for addressing a higher, well-established, scientifically grounded 'quality'. Hence, there was an ambiguity: industrial design teaching was not really addressing the creation of new styles but rather intended to teach students existing styles so that they could improve product quality. As Jaffee concludes, this kind of teaching finally led to an extended vision of styles, as characterized in the famous book of Gardner, a former student of Sargent at the University of Chicago, Art through the ages [START_REF] Gardner | Art Through the Ages -An Introduction to its History and Significance[END_REF]. Gardner presented a world panorama of styles, guided by the idea that 'it was the universal values in design that made it possible for art to have a history' and providing clear methods for their appreciation and understanding.
2) France, end of the 19th century. Some decades earlier, in 1877 the old French school Ecole Gratuite de Dessin et de Mathématiques (created in 1766) was renamed Ecole des Arts Décoratifs, to signify a new logic in teaching. The new director, Louvrier de Lajolais (director from 1877 to 1906), explained that the school did not aim to teach technical skills (which were taught at another school, the Conservatoire des Arts et Métiers) or teach academic bases (which were taught at the Ecole des Beaux Arts) but aimed at educating a new generation of artists who were to master a large scope of technical knowledge (involving, for example, textile, ceramic, wood, and metal), with increased capacity to adapt to new tastes and to provide original models to industry. From this perspective, teaching had to consider interior design as a whole, with a 'style unity' that includes decorative painting as well as interior architecture, furniture, and so on [START_REF] Raynaud | Histoire de l'Ecole nationale supérieure des arts décoratfs 1ère partie (1766-1941)[END_REF].
How is it possible to build this style unity? As explained by Froissart-Pezone (2004), since the 1870s, style unity was based on the idea that there is 'a logical relationship that links material, function and form, structure and ornament, following the courses and theories of Eugene Viollet Leduc', who taught at the school in the 1850s and was the professor who taught many school professors at the end of the 19th century (e.g., Victor Rupricht-Robert, Eugène Train, Charles Genuys, and Hector Guimard) [START_REF] Leniaud | Viollet-le-Duc ou les délires du système. Menges, Paris Lindemann U[END_REF]. According to [START_REF] Raynaud | Histoire de l'Ecole nationale supérieure des arts décoratfs 1ère partie (1766-1941)[END_REF][START_REF] Froissart-Pezone | L'Ecole à la recherche d'une identité entre art et industrie (1877-1914)[END_REF], this education program finally met with great success in the early 1900s and, above all, in the years following the First World War, when the Ecole des Arts Décoratifs reached a peak, embodied by the art déco style, a unique style with well-identified standards. Hence, the school was able to invent and teach one new style.
3) Germany, mid-20th century. Some decades later, the tension between teaching style and teaching style creation was also at the heart of the debate that occurred at Ulm Hochschule für Gestaltung (Institute for Design) between the first director Max Bill and his successor Tomas Maldonado (Betts 1998). For Maldonado, 'Bill's venerable "good form" itself becomes just another design style among many'. Here again the idea was to avoid relying on past styles. Rejecting art-based heritage, Maldonado insisted on the capacity of the designer to 'coordinate in close collaboration with a large number of specialists, the most varied requirements of product fabrication and usage' [START_REF] Maldonado | New developments in Industry and the Training of Designers[END_REF]. Teaching had to be based on system analysis and new product management. Relying on Peirce's semiotics and Max Bense's teachings, the curriculum intended to 'replace cultural judgement (taste, beauty, morality) with more scientific evaluation criteria' (Betts 1998, p. 79). As Betts summarizes, Bill and his colleagues tried to 'develop a critical theory of modern consumer culture untainted by Madison Avenue machinations' (p. 80); they looked for a more 'ethically-based critical semiotics' to address the relationship between people and (consumable) things. For Bense, the issue was to 'follow the lead of the modern physicist who studies the "objective world" not by analysing its objects but rather its interactive semiotics effects' ([START_REF] Bense | Science, semiotics and society: The Ulm Hochschule für Gestaltung in Retrospect[END_REF], cited by Betts 1998, p. 79). Still, this could also be interpreted as an extension of the logic of style to the interaction between the object and its environment. At the end of the 1960s, 'even the supposedly anti-aesthetic ethos of functionalism had become just another supermarket style, as the Braun design story attested' (Betts 1998). Here again the tension between style teaching and teaching style creation was a critical issue.
Interestingly, the extension from style to meaning also directly led to the famous proposition of Klaus Krippendorff, who graduated as a diplom-designer from Ulm, that 'design is making sense of things' or is a creation of meaning [START_REF] Krippendorff | On the Essential Contexts of Artifacts or on the Proposition That "Design Is Making Sense (Of Things)[END_REF]). However, the paper of Krippendorff precisely exhibits the same tension. In the first part, Krippendorff insists on the design ambition to be a capacity to create meaning, whereas in the second part (from p. 16), meaning creation is reduced to a referential of contexts (i.e., operational context, sociolinguistic context, context of genesis, and ecological context) that an engineer would consider a good list of functional requirements.
These elements give us two insights into the issue of design teaching. First, over time, there was a progressive extension from the design of objects (e.g., domestic objects and applied-art pieces) to multiple objects (e.g., trademarks, advertisements, and shop windows) and to styles and meaning (e.g., new icons, symbols, signs, new forms of interaction between objects and people and even today 'semiotic ideologies' [START_REF] Keane | Semiotics and the social analysis of material things[END_REF]). A similar evolution can be seen in the historiography of design [START_REF] Riccini | History from things: Notes on the history of industrial design[END_REF]. Second, the teaching of styles (or meaning) is a source of tension between two approaches: teaching (past and new) styles and teaching the creation of style(s).
We can now better characterize this tension. Teaching past and new styles can be characterized as teaching the values (or what engineering would call 'the functional requirements') of existing styles and the ways and means to acquire them (e.g., mastering drawing, composition laws, and material techniques such as woodcarving, frescoing, mosaicking, and the use of stained glass), whereas style creation (or even 'meaning creation') consists of creating an original culture that encompasses new 'objects' as well as new interactive receptions by people. Hence, a clear challenge for the new style is that it has to be 'significantly' original and new (i.e., removed from past styles) yet still has to be 'meaningful' to the (occasionally lay) 'user(s)', who should be able to 'make sense' of the new by relating it to the known. The new meaning is both original and strongly related to all of what is already known. The style has to be new and will affect a very large range of artefact types (e.g., techniques, objects, environments, uses, individuals and social references). This is precisely generic generativity: new on many facets and leading to the revision of a whole world of objects, uses, and ways of life.
Teaching style creation, a challenge at the roots of Bauhaus
The tension between teaching style and teaching style creation was at the root of Bauhaus. This was illustrated by Schwartz (1996) in his study of the German Werkbund, the melting pot of the debates that would later shift to Bauhaus. From the 1890s onwards, the members of the Kunstgewerbe Bewegung and later the Werkbund (500 people at the Werkbund's creation in 1907 and 2000 in 1914, among them Hermann Muthesius, Peter Behrens, Henry Van de Velde, Richard Riemerschmid, and Werner Sombart) launched wide discussions and initiatives on German applied arts. They rejected the use of 'historical styles' (as used in Fachverbände, professional associations) and promoted the direct involvement of artists in the production of objects of everyday life, taking into account the industrial conditions of production and trade. The works of Peter Behrens at AEG illustrate the contrast between the 'historical style' approach and the Werkbund approach (see Figure 2b). They also show that designers like Behrens not only coped with objects but with the complete environment (e.g., AEG trademarks, retail shop windows, product catalogues, and even the factory itself). As shown by Schwartz (1996), one of the great issues facing the Werkbund was to create 'the style of our age', the so-called 'Sachlichkeit'. Sachlichkeit was not the aesthetic payoff of the functional form (and functionalism as such was widely discussed and rejected in the Werkbund) but rather the avoidance of form as Fashion (see Muthesius, 1902; Loos, the ornament as crime, 1910; [START_REF] Gropius | The Theory and Organization of the Bauhaus[END_REF]). Werkbund members remembered the story of Jugendstil: Van de Velde, Riemerschmid and others proposed a new style that was finally transformed into inconsistent fashionable ornaments (see Figure 3). In the social tensions created by the industrial revolutions in Germany, and following Tönnies' works on the new Gemeinschaft (community) that counterbalanced the complexity of contemporary Gesellschaft (society) or Sombart on Kunstgewerbe and Kultur, they wanted to organize to create a new style; i.e., a new culture and new communities created through designed objects. Once again, this ambition was trapped by the debate between style and style creation. In 1914, the Werkbund was split between the Muthesius party of Typisierung arguing for the standardization of production and distribution of objects (protected by copyright) that would embody the new style (of the new society), and Van de Velde (supported among others by Gropius and Osthaus), who advocated a free capacity for designers to create their own 'style'.
Werkbund and the 1914 crisis laid the intellectual foundations of Bauhaus. 1) The designer should not subordinate himself to the law of any style, nor should he just make use of motifs (like the Jugendstil motifs) in designing fashionable products. 2) What has to be designed? Not a product, but a whole range of commodity products including trademarks, advertisement, shop windows, and catalogues so as to create the 'style of the age'. 3) This style creation is not reserved to a few happy designers protected by copyrights or standardized but should be made accessible to many designers through teaching.
In conclusion, we have established that Bauhaus aimed to convey to students a capacity for generic generativity. Bauhaus is thus a case in which creative design consists of generic generativity (H1). This also supports our methodological choice: because teaching was considered a way to convey this capacity for generic generativity, the analysis of courses is critical for testing hypotheses H2 and H3. Does the knowledge structure promoted by Bauhaus courses correspond to the structure predicted by design theory?
Part 4: Results: knowledge structure and design process for generic generativity (H2 and H3)
We now present the results of the analysis of the Bauhaus courses. We first analyse the Itten course and then the Klee course. For each course, we give a brief description, analyse it according to design theory, and present the results for hypotheses H2 and H3. Finally, we underline the differences between the two courses and apparently similar courses in engineering design.
Itten: a 'contrast'-based knowledge structure that better opens holes
Brief description of the Itten course
The Itten course is based on means of classical expression and has a chapter on each of lines and points, form, colour, material, and texture.
We focus on the chapter on texture as an example and analyse the series of exercises proposed by Itten to learn about textures [START_REF] Itten | Design and Form, the Basic Course at the Bauhaus and Later[END_REF]. In a first phase, students are told to draw a lemon. Beginning with the representation of an object, Itten wants the students to go from 'the geometrical problems of form' to the 'essence of the lemon in the drawing.' This is an 'unfixing' exercise, helping the students to avoid assimilating the object with a geometrical form.
In a second phase, the students are asked to touch several types of textures, to 'improve their tactile assessment, their sense of touch.' This is a learning phase in which students 'sharpen observation and enhance perception.' [START_REF] Itten | Design and Form, the Basic Course at the Bauhaus and Later[END_REF] In a third phase, students build 'texture montages in contrasting materials' (see figure 4). During this exercise, students begin to use textures as a means of design. The constraint (design only by contrasting textures) helps students learn about textures (i.e., to explore the contrasting dimensions of different textures and to improve their ability to distinguish between them). It also means that students are able to explore the intrinsic generative power of textures; i.e., the superimposition of textures that should create something new, such as 'roughly smooth', 'gaseous fibrous', 'dull shiny', and 'transparent opaque'. Moreover, students begin to learn the relationship between texture and a complete work, a composition, in contrast to the idea that texture could be secondary and 'optional', chosen independently of the rest of the piece. The exercise thus makes textures a critical part determining the whole. The fourth phase could be qualified as 'research'. As the students are by then more sensitive to the variety of attributes of a texture, they can 'go out' to find 'rare textures in plants.' It is interesting to underline that Itten does not begin with this phase. He begins by strengthening the students' capacity to recognize new things, just as a botanical researcher has first to learn the plant classification system and to discriminate features before being able to identify a new specimen. In particular, students are told to find new textures for a given material (see the figure 5 in which all textures are made from the same wood). Once again, this is an exercise of disentangling texture from other fixing facets (i.e., materials in this case). Note that, in this step, Itten does not teach a pre-formatted catalogue of textures but teaches the student how to learn textures, thereby building their personal 'palette'. The fifth phase consists of representing textures. Itten stipulates that students have to represent 'by heart', 'from their personal sensation', to go from 'imitation' to 'interpretation'. Instead of being an exercise of objective 'representation', this exercise is intended as a design exercise, as students had to combine textures with their own personality. Just as phase 4 aims at creating something new from the superimposition of contrasting textures, the idea in this phase is that the new should emerge from the superimposition of texture and the individual 'heart'. The phase is also intended to help improve sensitivity.
The sixth and final phase consists of characterizing environmental phenomena as textures. For instance, the figure shows a marketplace painted as a patchwork blanket. Itten urges students to use texture as an autonomous means of expression and not to just produce a 'constrained' ornament. By combining their enriched algebra of textures and the algebra of scenes, students can create new 'textured scenes' that are more than the scenes and more than the textures. As Itten [START_REF] Itten | Design and Form, the Basic Course at the Bauhaus and Later[END_REF]) explains, 'It stimulates the students to detach themselves from the natural subject, and search for and reproduce new formal relations'. We could repeat this analysis for other aspects of Itten's teaching (e.g., lines and points, form, and colour).
Analysis of the Itten course from a design perspective
We now turn to the analysis of the Itten course. We first need to underline one critical point: Itten does not teach a stabilized knowledge base (or a stabilized style associated with it) but rather teaches students how to build their own knowledge base (to create their own style). In all cases, one finds that Itten improves three facets of his students' design capabilities. a-Self-evidently, students extend their knowledge base for the notion of interest (e.g., texture), knowing more about (texture) materials, (texture) descriptive languages, (texture) perception, and (texture) building techniques. In terms of colour, Itten teaches students to increase their capacity to perceive 'distinct differences between two compared effects' and to 'intensify or weaken (colour) effects by contrast'. In that sense, there is no great difference from an engineer learning machine elements, their production processes, and their functionalities; i.e., learning what design theorists would call design parameters and functional requirements. In both cases, seen from this perspective, the knowledge structure appears as a well-ordered catalogue of recipes. Still, the knowledge structure is a highly complex one, for which only a few combinations have been explored. b-Students are ready to learn about the notion of interest. They know parts of what they don't know: the contrasts, the materials, the process, the perception and sensations they have tried to convey and those they could not yet convey, involving unavailable materials, new combinations, and sharper sensations. As Itten writes, 'a theory of harmony does not tend to fetter the imagination but on the contrary provides a guide to discovery of new and different means of colour expression' [START_REF] Itten | The art of color[END_REF]. The industrial design students know the limit of what they know and the way to learn beyond. They not only know the state of the (their) art but also the state of the non (yet) art. The knowledge structure is closer to that of a very smart scientist-engineer, who not only knows the engineering sciences but also knows their limits and is ready to follow the advances they make.
At this point, we can already underline that this knowledge structure enables a designer to extend his or her own design rules. It is closer to style creation than to teaching the design parameters and functional requirements of pre-given styles. c-Beyond rules and the learning of rules, students are able to deal originally with briefs or to give themselves original briefs. This is the key logic of contrasts. Itten does not teach colours, forms, and textures but teaches the contrast between colours, forms, and textures. The juxtaposition provokes surprise; it creates 'holes' in the knowledge base, which have to be explored by the designer. A contrast does not correspond to a unique meaning with a one-to-one correspondence but instead paves the way to multiple elaborations. With Itten, students learn to formulate exercises (briefs) that can be oriented to explore new textures, new texture montages, and new texture contrasts. These briefs can also be oriented towards creating original works using textures (or colours or forms) in a unique way. In that sense, the teaching of Itten is much closer to educating a senior scientist, who has not only to answer exogenous research questions but has also to be able to construct his or her own, original, research program.
Up to this point, we understand that Itten's teaching is sophisticated, much more than just teaching the elements of an existing style or teaching a new technique or relying on a kind of 'project-based learning'. We now have to clarify how this kind of teaching can help deal with generic generativity.
It should first be noted that, despite apparent knowledge expansion, the knowledge base relies on classical motives (e.g., drawing, colour, material, and texture). Therefore, if there is generativity, it is not based on the use of radically new means. At the time, there were transformations in expression means, and Bauhaus was aware of them. For instance, photography was considered an applied art, as evidenced by a book published by Meurer [START_REF] Meurer | Die Ursprungsformen des griechischen Akanthusornamentes und ihre natürlichen Vorbilder[END_REF] and photographs published by Karl Blossfeldt [START_REF] Stoots | Karl Blossfeldt, Indisputably Modern[END_REF][START_REF] Blossfeldt | Urformen der Kunst: photographische Pflanzenbilder[END_REF]. Bauhaus participated in this movement through the teachings and book of Moholy Nagy (Moholy-Nagy 1938). Bauhaus is also famous for the works done on new typography. However, Itten did not teach these new means and relied on a known set of means (e.g., textures and colours). Hence, generativity will not come from new means but from the combination of known means. Still, a combination is not necessarily creative, and it does not necessarily follow that the knowledge base meets the splitting condition (H2). We therefore ask, how does the knowledge base enabled by the Itten course meet the splitting condition? To this end, we made an in-depth analysis of the design reasoning in Itten's exercises, to analyse how they lead to changes in the knowledge base of the students. We illustrate this analysis for one case, taken from the texture lesson (see figure 7). The exercise brief is given in C: 'texture montages of contrasting materials, bound by rhythmic forms'. In K, there is the knowledge acquired by students during the first courses, related to Itten's exercise: knowledge about materials, textures, and rhythmic forms. According to Itten, the exercise leads to 'fantastic structures with completely novel effects' (see two examples in the figure above), and hence a form of generativity ('fantastic') that might be said to be generic in the sense that it is not the structure but the 'effects' that are new. The exercise creates new effects and not only a new structure.
The consequence of the exercise for the students' knowledge is summarized in K in the figure above. In this particular case, the expression means (which correspond to the language of constraints in forcing) are unchanged. The exercise uses knowledge on materials, texture and forms gathered in the previous exercises (i.e., the lemon exercise, tactile assessment exercise and montage lesson). However, the structure of the relationship among them (which corresponds to the partial order of constraints in forcing) has strongly evolved. In the initial state, the relationship between material and texture is deterministic; e.g., wood implies fibrous texture. Additionally, the relationship between texture and form is modular, in that whatever the form, it is possible to add texture 1 or texture 2 without there being major changes to the final result. After the exercise, these two properties are changed. In the example, the material 'wicker' is related to shiny, smooth, and dry properties. Hence, the deterministic law is relaxed. Meanwhile, the form is made of and by textures, and it appears that there are new relationships between some textures and some form properties. A texture will reinforce slenderness or lightness or angularity. Therefore, a form with texture 1 will now differ from a form with texture 2. In this particular case, one exercise leads to the revision of the relationship between expression means (i.e., a partial order of constraints), resulting in two specific properties of the knowledge base: non-determinism and non-modularity. C-K analysis of the other exercises confirms this transformation. The knowledge structure built through Itten's teaching can be characterized by two properties.
-Non-determinism: when confronted by a concept, the student cannot use a deterministic law. Because of the variety of contrasts, there is no law that links one colour to one material to one texture to one effect. At each step, the designer can always explore multiple paths. Itten fights against 'laws of harmony' or 'clichés' that tend to impose relations (e.g., warm fibrous wood or cold smooth shiny metal).
He wrote in his book on colours that we should 'liberate the study of colours' harmony from associations with forms.' For instance, the 'cliché' deterministically associates wood with a fibrous property, while Itten's teaching opens the way to smooth wood, which will differentiate the designer's work from all previous work using wood as a fibrous material. -Non-independence: not all attributes and not all combinations are equivalent. Itten does not advocate relativism. On the contrary, he states that 'subjective taste cannot suffice to all colour problems'. Relativism deletes the valued differences. If texture is only a 'secondary', 'modular' property, then all works with wood are similar; i.e., a work with smooth wood is indistinguishable from a work with fibrous wood. Against 'relativism', Itten teaches that one does not add a texture independently of the other aspects; if a scene or montage can be made of and by texture, then a scene or a sculpture is not 'insensitive' to the choice of texture. For Itten, each attribute (e.g., texture, colour, or material) affects the whole work and propagates to all other aspects. Here again, the notion of contrast is critical in that each juxtaposition is a source of meaningful contrast that has to be amplified, tamed, or counterbalanced by another. To conclude on Itten's teaching, non-determinism and non-independence are two critical properties of the knowledge structure it provides. As a consequence, H2 is confirmed for the Itten course: a splitting knowledge base is a condition for generic generativity (a small illustrative sketch of these two properties is given below).
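To make the two properties concrete, the following toy sketch checks determinism and modularity on a miniature knowledge base before and after a texture exercise. It is purely illustrative: the attribute names and combinations are invented for this sketch, not taken from Itten's corpus, and the two checks are only simple proxies for the forcing notions discussed above.

```python
# Purely illustrative toy model (attribute names invented): a knowledge base
# is a set of admissible attribute combinations; two structural checks.

def is_deterministic(kb, key, value):
    """True if each `key` value admits exactly one `value` value (a 'law')."""
    seen = {}
    for combo in kb:
        seen.setdefault(combo[key], set()).add(combo[value])
    return all(len(v) == 1 for v in seen.values())

def is_modular(kb, attr, outcome):
    """True if the reachable `outcome`s do not depend on the choice of `attr`
    (the attribute can be swapped without changing what the piece can be)."""
    by_attr = {}
    for combo in kb:
        by_attr.setdefault(combo[attr], set()).add(combo[outcome])
    values = list(by_attr.values())
    return all(v == values[0] for v in values)

# 'Cliché' base before the exercise: wood is always fibrous, metal always
# smooth, and the texture does not change the overall effect of the piece.
before = [
    {"material": "wood",   "texture": "fibrous", "effect": "warm"},
    {"material": "metal",  "texture": "smooth",  "effect": "warm"},
]

# After the texture-montage exercise: one material admits several textures
# (non-determinism) and the texture now shapes the effect (non-modularity).
after = [
    {"material": "wood",   "texture": "fibrous", "effect": "warm"},
    {"material": "wood",   "texture": "smooth",  "effect": "light"},
    {"material": "wicker", "texture": "shiny",   "effect": "angular"},
]

for name, kb in (("before", before), ("after", after)):
    print(name,
          "| deterministic (material -> texture):", is_deterministic(kb, "material", "texture"),
          "| modular (texture vs effect):", is_modular(kb, "texture", "effect"))
```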
Comment on the Itten course: similarities and differences with engineering design approaches
Let's underline that the two properties stated above are much different from the logic of classical engineering design. Formally, we can associate the knowledge of expression means to machine elements [START_REF] Kesselring | Die "starke" Konstruktion, Gedanken zu einer Gestaltungslehre[END_REF][START_REF] Pahl | Engineering design, a systematic approach[END_REF][START_REF] Reuleaux | Constructionslehre für den Maschinenbau, erster Band : die Construction der Maschinentheile. Fridriech Vieweh und Sohn, Braunschweig Rhodes M[END_REF][START_REF] Bach | Die Maschinen-Elemente, ihre Berechnung und Konstruktion : Mit Rücksicht auf die neueren Versuche 5[END_REF][START_REF] Bach | Die Maschinen-Elemente, ihre Berechnung und Konstruktion : Mit Rücksicht auf die neueren Versuche 5[END_REF]Findeneisen 1950;[START_REF] Laudien K ; ) Maschinenelemente | Design Theory: history, state of the arts and advancements[END_REF][START_REF] Rötscher | Die Maschinenelemente[END_REF]) (these are 'constraints'); we can say that engineering design consists of combining machine elements just as industrial design consists of combining expression means, and we can associate the knowledge of the laws of contrast to engineering science (Rodenacker 1970;[START_REF] Hubka | Theory of technical systems. A total Concept Theory for Engineering Design[END_REF][START_REF] Dorst | John Gero's Function-Behaviour-Structure model of designing: a critical analysis[END_REF], in the sense that some laws determine the design parameters to be used.
This comparison reveals strong differences in the structure of constraints. 1) Modularity: we have seen that Itten teaches the student to combine expression means in a non-modular way, with each expression means being in strong relationship with all previous means, amplifying and expanding them. By contrast, in engineering design, machine elements are made to be modular. For instance, machine elements that have to meet a similar set of requirements are substitutable; or it is possible to use one machine element for one functional domain, independently of the type of object or the type of user. As soon as there is a rotating rod, it is possible to use a ball bearing, be it for a car or a power plant.
2) Determinism: Itten teaches the laws of contrasts and the laws of colours, with the idea of showing that there is no determinism and that there is a multiplicity of possibilities: there are seven types of contrasts and no rule that links colours in one single way. By contrast, engineering design tends to use laws to determine design parameters. Employing scientific laws, it is possible to use the set of requirements to determine the technology to be used. Ideally, it is expected that knowledge of engineering science will be rich and precise enough to immediately determine one object for each list of requirements. These two contrasting structures of knowledge lead to contrasting forms of generativity. There is generativity in engineering (Lindemann 2010) that consists of, for instance, finding a new technique with which to address previously unmet requirements (e.g., energy harvesting in microelectronics would benefit from using energy dissipated by microprocessors). This generativity improves some aspects of the final design but keeps the others unchanged (e.g., the microprocessor with energy harvesting is a microprocessor that has one additional property in that, for instance, it still computes). It follows a modular logic and the knowledge base of the engineering designer remains non-splitting. As a consequence, the new object will be immediately compatible with other objects, without requiring the redesign of a whole set of entities.
By contrast, Itten's teaching enables students to build a splitting knowledge base. The newly designed entity will hence intersect all types of attributes. In the texture exercise, the creative effort finally implies material attributes (e.g., wood or wicker), texture attributes and form attributes. The newly designed entity paves the way to the redesign of complete sets of entities. Once a new style has been created, all existing objects could be redesigned in this new style.
Of course, as we will discuss in the conclusion, one can certainly find design made by engineers today that is generically creative and, conversely, design made by industrial designers that is not generically creative. Our result is not at the level of the professions but at the level of the structure of the knowledge base conveyed by Itten's teaching and by the teaching of machine elements and engineering science.
In summary, Itten teaches students how to build their own knowledge base meeting the splitting condition (i.e., non-determinism and non-modularity). By contrast, classical engineering design enables students to build a knowledge base that is non-splitting.
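One way to phrase this contrast in the forcing vocabulary of the theoretical part is sketched below. It assumes the standard forcing reading of the splitting condition (every constraint admits two incompatible refinements) and is a sketch of the argument made in the text, not a formal proof.

```latex
% Schematic restatement of the contrast; a sketch, not a proof.
\begin{itemize}
  \item \textbf{Machine elements and engineering science.} Determinism: for a
        constraint $p$ and a given dimension (a requirement), at most one
        admissible refinement $q \leq p$ exists. Modularity: refinements
        along different dimensions are mutually compatible. Hence no $p$ has
        two incompatible refinements, and the base is non-splitting.
  \item \textbf{Contrast-based base (Itten).} Non-determinism: a given
        material still admits several textures, so $p$ has several distinct
        refinements. Non-modularity: choosing one texture changes the whole
        and excludes the compositions reachable with another, so these
        refinements are incompatible. Hence the base splits.
\end{itemize}
```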
Klee: composition as a genesis process, leading to out-of-the-box design
Brief description of the Klee course
We now study the Klee courses. We present three facets of the courses.
1-Even more so than Itten, Klee provides an extended language of the design object.
Beginning with 'lines', Klee introduces the notions of the active (vs passive) line, free line, and line 'with a delay' (befristet in German) (see figure 9). After lines, Klee addresses notions such as the rhythm of a piece, the spine of the piece, the piece as a weighing scale, the form as movement, the kinetic equilibrium, the organs and the organism. In particular, Klee proposes new languages for perception, considered as a 'moved form' with specific kinetics, ranging from pasturage to predation (see figure 10). 2-Each chapter of Klee's teaching not only investigates one dimension of the work (as did Itten for lines, surfaces, colour, textures, and so on) but discusses how one 'part' relates to the 'whole'. For instance, the 'line' is related to the 'perspective' of the whole piece, the 'weight' of each element is related to the 'balance' of the whole piece, the 'elemental structural rhythms' of the piece are related to the 'individual' that integrates all these rhythms, the 'joints' between elements are related to the 'whole organism', and the 'moved forms' are related to the 'kinetic equilibrium' of the received piece. This part-whole logic leads to a renewed logic of composition. In several exercises, Klee teaches composition. See figures 11-12 for examples. Note that the composition criteria are not 'external' or stable evaluation criteria. They are enriched by the work. See, for instance, the example of 'balance' (figure 12). Klee considers that the 'balance' is a composition criterion, represented by the vertical cross (i.e., a balance with a vertical column and a beam). The superimposition of imbalanced situations creates a balance but this balance is not the initial vertical one but a 'cross-like' balance. The composition criteria create dense subsets of constraints. They are 'dense' in the sense that each composition has a balance. In the example of figure 12, a weight is added (left) instead of the previously mentioned unbalancing weight being removed; a new 'balance' then emerges, which is no longer like a scale but like the superimposition of two imbalanced situations, hence another cross. As underlined by Klee, 'what is new is the cross, we don't go back to the initial balance but we create a new balance.'
3-Klee also teaches how to shift from one aspect to another. One example is given in his second chapter. Teaching the 'weight' and balance of a piece, Klee shows that the imbalance of surfaces (see figure 13) calls for a new 'weight' to be balanced (e.g., the imbalance of surfaces is balanced by a colour). However, the introduction of coloured surfaces leads to a new imbalance. The scale thus 'oscillates' and creates rhythms in the whole. This is a shift from weight and balance to scales and rhythm, which creates the 'spine' of the piece (see figure 13). This transition is mediated through music, in which 'weights' and 'balances' correspond to rhythms, tempi and bars.
The Klee teaching structure corresponds to the presentation of transitions: from perspective to weight (via gravity), from balance to rhythm (via scales, space and music), from individual to joints (via physiology), from joined individuals to organisms and organs, and from organism to 'moved form' (from the eye's perception).
Formally speaking, this corresponds to the passage from one dense subset to another, and is hence a form of 'countability'.
Analysis of the Klee course from a design perspective
How does Klee improve the design capabilities of the students? Let's first confirm that Klee's teaching can be related to teaching style creation.
To begin, let's underline that, just like Itten, Klee does not teach radically new expression means. The expression means discussed in Klee's teaching are reduced to drawing and painting (and do not even address texture, material or shape). Building on this reduced set of means, Klee rather teaches how to enrich them in that he provides students with a new language for lines, forms, motives, and 'joints'. Does Klee teach a pre-existing style? Just like Itten, Klee does not follow the usual categories of applied art teaching or beaux art teaching (e.g., landscape, mythological scenes, and still life). He introduces a new language with which to speak of the composition and style of a piece of art: balance, rhythm, 'organic discussion', and 'kinetic equilibrium'. This language helps the artist raise questions about how to organize an 'organic discussion' between a line and a circle, how to build an organism that combines given organs (see the waterwheel exercise above), and how to provoke a predefined 'kinetic equilibrium' (i.e., not the work 'as such' but the work as seen by the viewer ('moved forms')); that is, to integrate this 'moved form' into the composition of the fixed form. In all these exercises (and particularly the last example), the notion of style creation is at the heart of the teaching.
We thus confirm that Klee's courses deal with a form of generic generativity. Let's now analyse the kind of design capabilities taught by Klee to improve generic generativity. To this end, we conduct an in-depth analysis of the design reasoning in Klee's exercises. We illustrate this analysis for one case, taken from the lesson on joints and composition of an individual with structural motives. For the initial state (figure 14), K contains the knowledge acquired during the lesson on joints and motives, while C contains the brief given at the end of the lesson as homework. For the final state (figure 15), in the following course, Klee goes through the students' proposals with them. His remarks are coded as C or K expansions.
This case reveals the following aspects of Klee's teaching. 1-The exercise is limited to one type of composition issue (hence one dense subset) with one type of expression means (the constraints of the dense subset). Using 'joints', the artist is supposed to realize a composition via a discussion between the individual and a structural motive. Initially, the apprentice designer knows about two types of joints (rigid or loose) and about the composition of an individual based on structural motives (the previous lesson in the Klee course). The student explores how to create an 'individual' using rigid and loose joints. In his course, Klee discusses two alternatives, represented in C-space in figure 15: on the left side (in C-space, extreme far left solution), there are rigid joints between lines; on the right side (in C-space), there is an individual based on the 'discussion' between a line and a circle. The first answer ('rigid joints between lines') is said to be correct. Klee explains that there are rigid and loose joints and the articulation of rigid joints (between lines) and loose joints (in the variation of the lengths of the lines) creates an 'individual'. He proposes a variation, based on circles, where there are rigid joints between circles and loose joints in terms of the variation of circle diameters, and a variation of the variation, where, with a bolder line, Klee underlines the rigid joint between the circle and the loose joint and improves the composition. Hence, even with these very limited means, it is possible to create a rigorous composition of one individual based on structural motives. 2-The exercise leads to an expansion of knowledge on expression means and composition criteria. Working on the 'incorrect answer', Klee explains that 'there is no discussion between the line and the circle'; i.e., the play on joints does not create an individual with structural motives. Still, Klee shows that it is possible to evolve the drawing to get a correct answer. In so doing, Klee expands the expression means in that rigid and loose joints result from 'a stick seen through glasses like bottle lenses or glass bowls' or they result from the 'fight between the line and the circle', which leads to 'a line that is no more a line' and 'a circle that is no more a circle'. These 'lines', 'circles', 'stick and glasses' are new expression means for rigid and loose joints. Meanwhile, the composition criteria are enriched in that the relationship individual unity/structural motive is now 'a more or less intensive fight', or a 'friendship or reciprocal or unilateral relationship'. The individual can be a battle, or a friendship, with various criteria (e.g., intensive and reciprocal criteria). Hence, the exercise leads to the enrichment of the expression means and the composition criteria. However, this is not a form of 'densification' because the type of expression means is the same (joint) and the type of composition dimension is also the same (individual vs structural motives). Nevertheless, knowledge of these two types is denser. 3-The exercise creates a shift to another dimension in composition. The 'fight between the line and the circle' is not only a structural motive that creates an individual but also a male/female relationship that creates an 'organic discussion'. This notion of 'organic' is a new type of composition criterion in that it is not on the level of 'individual unity/structural motives' but on the level of the 'organic body/organs'.
These three aspects are more or less present in all of Klee's exercises and contribute to the important issue of Klee's courses: teaching a design process that helps the student to be generically creative. Let's underline these three features. 1-First, Klee always focuses on the genesis of the whole, in a constantly refined part-whole relationship. Even if each step of teaching seems to address only one partial aspect of the final piece (e.g., perspective or balance), each of these aspects has to be consistent in itself at the level of the piece taken as a whole. In each step, Klee's teaching tends to validate a consistent part-whole relationship. Klee's lessons show that certain types of elements (e.g., lines, 'weights', rhythm, joints, and organs) are in deep correspondence with one aspect of the final piece (e.g., the perspective, balance, individual, and organism). Each lesson consists of working on the relationship between one type of language (e.g., the language of lines or 'weights') and the aspect of the whole related to that language (e.g., the perspective or balance). This is the generalization of the exercises where Itten proposed to work on a whole montage only based on textures. Klee always teaches the whole, even if it is the whole related to its parts. In each step, Klee teaches the whole piece as expressed by one type of language (i.e., the work is seen as a perspective/lines; the work is seen as a balance/'weights'; or the work is seen as an organism/the organs and joints). One can consider this as a logic of robustness. By working in each step on the part-whole relationship, Klee ensures that each of the languages (e.g., the language of perspective or balance) expressed by specific means (e.g., lines or 'weights') is 'present' in the final piece. The languages are applicable to all known pieces and form a frame of reference. Additionally, Klee ensures that the new piece that emerges can be understood in all these languages, in this frame of reference. Formally speaking, each type of language (in one step) appears as a dense subset, and this type of language (e.g., the language of perspective or balance) applies to all known pieces and each type of language corresponds to certain types of constraints (e.g., lines or weights). 2-The part-whole relationship is not a one-to-one relationship. Instead, work on the part-whole relationship expands the language of parts (involving new types of joints, line circles, and so on) and the language of the whole (involving new forms for the relationship between the individual unity and structural motives). Hence, each step of the process is also a step of creative expansion. Formally speaking, it means that Klee does not teach dense subsets as such but teaches the capacity to create dense subsets. 3-Klee proposes a logic of transitions between the process steps. Let's analyse some of these transitions. The first language is the language of lines (part) and perspective (whole). Klee suggests that these lines and perspective define horizontal and vertical and relate those to the physical notion of gravity. Having introduced that notion of gravity, lines and perspective lead to a second language, based on weights (parts) and balance (whole).
In this new language, the emerging object inherits the dimensions designed with lines to build perspective (i.e., hopefully original ways to treat lines and perspective) and the heritage will be expanded in the new language (where the original lines and perspective will give birth to original treatments of weights and balance). Klee then shifts from this language of weights and balance to the language of structural rhythms and the paced individual by showing that a series of weights and imbalances and balances creates forms of music. After physics and music, the third transition is based on physiology (where the rhythms and the paced individuals are animated by joints that build an organism). These transitions appear arbitrary, and they certainly are. However, they ensure that the designer can shift from one language to the following one so that the genesis process leads to the accumulation of a growing number of languages on the object. These transitions contribute to increasing the genericity of the final piece. Certainly, a master designer would not need such codified transitions and could invent his or her own. However, the designer should not neglect to invent such transitions, otherwise the genesis of his or her pieces would be limited to a (too) small number of languages, hence losing genericity. Formally speaking, this logic of transition from one language to another corresponds to a logic of countability of dense subsets. Klee teaches how to organize and walk the sequence of dense subsets. Finally, these three features show that Klee teaches a design process where each step makes a clear contribution to the final result (feature 1), where each step can be expansive (feature 2) and the steps are linked together to form a linear evolution (feature 3). Klee teaches a process that ensures that the apprentice designer can accumulate many general languages for his or her piece, hence improving the genericity. This accumulation is based on two principles. The first is a constant concern with the 'whole', caught by dense subsets. Even if each step of the genesis addresses 'parts', each step also addresses an aspect that is valid at the level of the whole (e.g., perspective or balance). Hence, each step leads to the 'validation' of one dimension of the 'whole' piece. The second principle is a process of accumulation that is based on neither deterministic laws nor independence principles (as in the case of systematic design) but is based on transitions between languages that keep the possibility of originality at each level (i.e., multiple paths open) and propagate the originality won at one level to the following level (i.e., there is no modularity). These transitions ensure that the genesis will accumulate as many contrasting (and still coherent) languages as possible on the emerging piece, while keeping and increasing the generativity. This explains why this process is a generic creative design process.
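The forcing counterpart of this genesis can be sketched as follows. This is a schematic rendering only: the identification of Klee's successive languages with dense subsets is the one argued for above, and the construction itself is the classical one for a countable family of dense sets.

```latex
% Schematic rendering of the genesis process in forcing terms (a sketch).
Let $D_1, D_2, D_3, \dots$ be the dense subsets associated with the successive
languages (lines/perspective, weights/balance, rhythms, joints/organism,
moved form, \dots). A genesis is a decreasing sequence of constraints
\[
  p_1 \;\geq\; p_2 \;\geq\; p_3 \;\geq\; \dots, \qquad p_n \in D_n ,
\]
where each step refines the emerging piece in the $n$-th language while
keeping all earlier choices. Density guarantees that such a refinement exists
at every step, whatever was chosen before; countability guarantees that the
sequence can walk through all the languages, so the filter generated by
$\{p_n\}$ meets every $D_n$ (the classical Rasiowa--Sikorski argument). If,
moreover, the base splits, several incompatible refinements are available at
each step, so the resulting piece is not determined in advance.
```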
Formally speaking, H2 and H3 are confirmed for Klee's teaching: generic generativity can rely on countable dense subsets.
Comment on Klee's teaching: similarities and differences with engineering design approaches
Returning to engineering design, we can only be struck by the fact that the languages of the engineering design process can precisely appear as languages of the part-whole relationship. For instance, systematic design [START_REF] Pahl | Engineering design, a systematic approach[END_REF] relies on four well-identified languages: functional, conceptual, embodiment, detailed. Validating a list of requirements finally consists of checking the consistency of the emerging object on the functional dimensions. The parts are functions, while the whole is the functionality of the final object. The part-whole relationship is acceptable when the list of functions corresponds to a functional object. The same holds at the conceptual level (where the consistent combination of technical principles is supposed to address the conceptual design of the product), at the embodiment design level (where the consistent arrangement of organs is supposed to build a coherent organism) and at the detailed design level (where the fine adaptation of industrial components builds an industrially feasible product).
Still, there is one major difference between the two processes. In the logic of systematic design, designers work with a knowledge base that is structured by determinism (i.e., engineering science laws) and independences (i.e., modules). In this case, the interactions between the levels are simplified and purely driven by the deterministic laws (because the relationship between the languages is either a pure determinism or an independence, in that either a function determines a technical principle or, by contrast, whatever the function, one technical principle can be used, namely modularity). If the knowledge base is non-deterministic and non-independent, then the transition from one language to another is no longer defined by the deterministic rules. And Klee, just like Itten, builds a knowledge base that is non-deterministic and non-independent. We find that Klee makes the same effort to always propose multiple paths (i.e., there are no deterministic rules and no single solution to an exercise given by Klee) and to always show that the attributes and the effects created at any moment in the genesis affect the rest of the design process. If there are no deterministic rules with which to structure the design process, then how is it possible to shift from one type of language to the next language, and what is the order of the process steps? The magic of Klee might lie precisely here: the invention of a logic of transitions, based on a specific language (e.g., the language of physics, music, or physiology) that might appear far from the genesis of the object but provides at least one possible order to approach many different facets of a composition.
Part 5: Conclusion-discussion and further research
We can now conclude our work and answer our research questions.
1-The courses of Itten and Klee did not only aim at teaching past styles or a new style. They also aimed at increasing students' creative design capabilities and even, more precisely, at providing them with techniques with which to create their own style, in the sense of being able to be generically creative. We thus confirm H1: creative design corresponds to generic generativity. 2-The analyses of the two courses identify two features critical to having a generic creative design capability. a. A knowledge structure that is characterized by non-determinism and non-independence. Hence, we confirm H2: a splitting knowledge base is required for generic generativity. b. A genesis process that helps to progressively 'accumulate' languages on the object in a robust way. This accumulation is based on step-by-step work on part-whole relationships and a series of transitions from one language to another. Hence, we confirm H3: the countability of dense subsets can define a design process.
We thus confirm for Bauhaus courses the propositions that were predicted by theory. This is all the more interesting in that the propositions were not necessarily self-evident. At a time when one tends to assume that creative design is related to ideation and the birth of original ideas, design theory predicted that the knowledge structure plays an important role in generativity.
This work has an impact on several domains. 1-Regarding Bauhaus, this analysis, based on advances in design theory that today provide a unified analytical framework, helps underline that Bauhaus was neither a school that taught a particular style nor a school that taught design techniques but fundamentally a school that taught how to systematically invent new styles. From the perspective of style creation, we can discuss the role of technique and taste (i.e., new social trends) and their place in teaching. Surprisingly, neither Itten's nor Klee's teaching places strong emphasis on new techniques or new tastes. They more deeply focus on the reasoning logic that helps to create a new style without even relying on new techniques or new 'tastes' or social trends. It was as if they were trying to teach in the 'worst case' situation. The rest of the Bauhaus program taught students how to deal with new techniques or new social trends. Based on the introductory courses, it was certainly easier to think of style creation in terms of a 'techno-push', i.e., relying on a newly invented technique (see the work on texture, which students could freely extend to photography or, today, to new digital imaging), or in terms of 'market-pull', i.e., relying on new composition dimensions as an artist working today on 'sustainability' or 'transparency' would do. More generally, this work provides a deeper understanding of the relationship between art and technique in design. The use of 'texture' or, more generally, of 'expression means' is just a technique. However, techniques are not in themselves splitting or non-splitting. The art of designers is not limited to making use of a technique to design an object. More generally, design consists of mobilizing a technique to build a knowledge base that is splitting or not.
2-This work provides results for engineering design. The comparison helps show that systematic design is precisely characterized by knowledge structures that prevent the splitting condition and that are characterized by independence (modularity) and determinism (engineering science). This clarifies one critical aspect of systematic design, namely avoiding 'going out of the box'; i.e., avoiding generic generativity. Modular and deterministic generativity might be encouraged, as long as they create a knowledge base that remains non-splitting. From this perspective, we can wonder whether compatibility with the splitting condition could characterize professions. We should insist here that the logic of designing with (respectively without) the splitting condition is not intrinsically the logic of engineering design (respectively industrial design). Engineering design can also be driven by a logic of innovative design. Several works have long underlined a logic of breakthrough and unknown exploration in engineering design [START_REF] Kroll | Design theory and conceptual design: contrasting functional decomposition and morphology with parameter analysis[END_REF][START_REF] Kroll | Steepest-first exploration with learning-based path evaluation: uncovering the design strategy of parameter analysis with C-K theory[END_REF][START_REF] Shai | Creativity and scientific discovery with infused design and its analysis with C-K theory[END_REF][START_REF] Taura | Concept Generation for Design Creativity: A Systematized Theory and Methodology[END_REF]. This is deeply coherent with the results of this paper: in innovative design, engineers reverse the logic: they use engineering science and engineering techniques to build a knowledge base that follows the splitting condition (see in particular the analysis of breakthrough projects in military weapons published by [START_REF] Lenfle | Using design theory to characterize various forms of breakthrough R&D projects and their management: revisting manhattan and polaris[END_REF], 2015).
Conversely, generic generativity might not necessarily be the logic of industrial design. In some cases, industrial design might favour the elaboration of knowledge bases that are non-splitting. An interesting illustration of this situation is the very early integration of 'industrial designers' in industrial processes by Wedgwood, the famous earthenware inventor, in the late 18th century (Forty 1986), where designers were actually in charge of inventing the forms of plates that would support several, varied ornaments. Today the talent of designers might precisely be to create knowledge bases that are locally splitting and non-splitting.
3-This work contributes to the debate on the relationship between engineering design and industrial design and their respective roles in the design processes. It underlines that the critical activity is not only the creation of a new artefact but also the moment when designers 'prepare' their knowledge base to 'split' it (or to 'unsplit' it). Both actions (splitting and unsplitting) are important. It might be that industrial design could help engineers split their knowledge base, if necessary, to open paths to innovative design. Conversely, engineers might help industrial designers to 'unsplit' their knowledge base to facilitate rule-based design (see [START_REF] Brun | Analyzing the generative effects of sketches with design theory: sketching to foster knowledge reordering[END_REF]).
4-Finally, this work contributes to design theory. We began the paper with a condition on generativity. This appears as a 'negative' result of the theory: whereas we tend to think that the only limit to generativity is fixation and imagination capacity, design theory predicts that there is also a condition on the structure of knowledge used in the design process-the knowledge base has to meet the splitting condition. The work on Bauhaus leads to the positive interpretation of this condition in that it shows that teachers in the field of design are actually able to help students build a knowledge base that meets the splitting condition. Teaching design (for generic generativity) finally consists of enabling the splitting condition. Hence, our study on Bauhaus teaching also raises a question on design education: does design education today (be it engineering design education or industrial design education) teach 'splitting knowledge' or, even more, does it provide students with the capacity to themselves acquire and create new knowledge to meet the splitting condition?
Figure 1: Splitting condition-left: constraints that follow the splitting condition; middle: a deterministic constraint p (non-splitting knowledge base); right: q and q' are interchangeable modules (non-splitting knowledge base)
Figure 2: 'Historical styles' vs Behrens works at AEG in the 1900s-1910s. Left: one or multiple existing styles are used to design objects (a museum and a clock). Right: Behrens creates a new style coherent with many new objects (a clock, kettles, and new AEG domestic electric appliances) but also with a work environment (a factory), a retail environment (shop window) and a marketing environment (brands). (Source: adapted from (Schwartz 1996))
Figure 3: Jugendstil-inventing a new style (left) or just a fashionable ornament (right)? (Source: adapted from (Schwartz 1996))
Figure 4: Texture montage exercise (source: (Itten 1975))
Figure 5: Several textures of the same material (source: (Itten 1975))
Figure 6: Characterization of environmental phenomena as textures (source: (Itten 1975))
Figure 7: C-K analysis of one Itten exercise ('texture montage')-initial state
Figure 8: C-K analysis of one Itten exercise ('texture montage')-final state (sources for the pictures: (Max Bronstein 1921))
Figure 9: A new language for lines (source: (Klee 2005))
Figure 13: Shifting from one aspect to the following one-the case of balance and rhythms (source: adapted from (Klee 2005))
Figure 14: C-K analysis of one Klee exercise ('joints and the individual')-initial state
As suggested by an anonymous reviewer (whom we warmly thank), we provide here complementary references on forcing; these sources explore forcing historically: [START_REF] Kanamori | Cohen and Set Theory[END_REF][START_REF] Moore | The origins of forcing (Logic colloquium '86)[END_REF]; the reader can also refer to [START_REF] Chow | A beginner's guide fo Forcing[END_REF]. [START_REF] Dickman | Mathematical Creativity, Cohen Forcing, and Evolving Systems: Elements for a Case Study on Paul Cohen[END_REF] is a case study of creativity in science applied to the discovery of Forcing. 2 In forcing theory, one uses interchangeably the terms "forcing constraint" and "forcing condition". In this paper, we favor the term "forcing constraint" to avoid confusion with the "splitting condition" that will be presented below.
Demonstration (see (Jech 2002), exercise 14.6, p. 223): Suppose that G is in M and consider D = Q \ G. For any p in Q, the splitting condition implies that there are q and q' that refine p and are incompatible; one of the two is therefore not in G and thus is in D. Hence, any condition of Q is refined by an element of D. Hence, D is dense. Therefore, G is not generic.
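In standard forcing notation (with ≤ denoting refinement and ⊥ incompatibility), the property used in the note above can be restated compactly; the display is our own paraphrase of the cited definitions, not a quotation from the sources:

$$ \text{(Splitting condition)}\qquad \forall p \in Q,\ \exists\, q, q' \in Q:\ q \leq p,\ q' \leq p,\ q \perp q'. $$

Under this condition, if a filter G ⊆ Q belongs to the model M, then D = Q \ G is a dense subset of Q that G does not meet, so G cannot be generic over M.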
They sponsored lectures, exhibitions (Köln 1914), and publications (Werkbund Jahrbücher), helped found a museum of applied arts and were involved in Dürerbund-Werkbund Genossenschaft (publishing a catalogue of exemplary mass-produced goods 1915), linked to Werkstättenbewegung (Riemerschmid, Naumann). In parallel, they made great efforts to establish a theoretical basis, and Werkbund was a forum for discussion, with a wide cultural, economic, social and political audience.
can thus help characterize all possible objects). This balance is obtained through different forms of expression means (constraints). NOT "back" to balance but creation of a "new balance" by superimposing two imbalanced situations | 106,623 | [ "1111", "3386", "1099" ] | [ "39111", "39111", "39111" ] |
01481110 | en | [ "phys", "spi", "info" ] | 2024/03/04 23:41:48 | 2018 | https://hal.science/hal-01481110/file/OpenQBMM_PBE_Final.pdf | Alberto Passalacqua
email: [email protected]
Frédérique Laurent
E Madadi-Kandjani
J C Heylmun
Rodney O Fox
A Passalacqua
An open-source quadrature-based population balance solver for OpenFOAM
Keywords: Extended quadrature method of moments, log-normal kernel density function, population balance equation, aggregation and breakage, OpenFOAM, OpenQBMM
come
allow the addition of other kernel density functions defined on R + , and arbitrary kernels to describe physical phenomena involved in the evolution of the number density function. The implementation is verified with a set of zerodimensional cases involving aggregation and breakage problems. Comparison to the rigorous solution of the PBE provides validation for these cases. The
Introduction
The spatial and temporal evolution of a discrete population of particles can be described by a population balance equation (PBE) [START_REF] Ramkrishna | Population Balances: Theory and Applications to Particulate Systems in Engineering[END_REF], which is an evolution equation for the number density function (NDF) associated to the particle population. The NDF can evolve due to discontinuous phenomena such as nucleation, aggregation, breakage and evaporation, and due to continuous phenomena such as convection and diffusion. Examples of industrial processes, involving the evolution of a particle population include, but certainly are not limited to, precipitation, polymerization and combustion [START_REF] Becker | Coupled population balance-CFD simulation of droplet breakup in a high pressure homogenizer[END_REF], sprays [START_REF] Laurent | Multi-fluid Modeling of Laminar Polydispersed Spray Flames: Origin, Assumptions and Comparison of the Sectional and Sampling Methods[END_REF] and aerosols [START_REF] Mcgraw | Description of aerosol dynamics by the quadrature method of moments[END_REF][START_REF] Mcgraw | Chemically resolved aerosol dynamics for internal mixtures by the quadrature method of moments[END_REF].
In this work, we concentrate on the case of a NDF with only one internal coordinate, representing the particle size. Approximate solutions of the corresponding PBE can be determined using several approaches, including Monte-Carlo methods [START_REF] Lin | Solution of the population balance 29 equation using constant-number Monte Carlo[END_REF][START_REF] Meimaroglou | Monte Carlo simulation for the solution of the bi-variate dynamic population balance equation in batch particulate systems[END_REF][START_REF] Rosner | MC simulation of aerosol aggregation and simultaneous spheroidization[END_REF][START_REF] Smith | Constant-number Monte Carlo simulation of population balances[END_REF][START_REF] Zhao | Analysis of four Monte Carlo methods for the solution of population balances in dispersed systems[END_REF][START_REF] Zhao | A population balance-Monte Carlo method for particle coagulation in spatially inhomogeneous systems[END_REF], which, however, present challenges in practical applications due to their computational cost. Some authors introduce a discretization along the size variable, leading to the sectional method or the method of classes [START_REF] Alopaeus | Solution of population balances with breakage and agglomeration by high-order momentconserving method of classes[END_REF][START_REF] Balakin | Coupling STAR-CD with a population-balance technique based on the classes method[END_REF][START_REF] Bannari | Threedimensional mathematical modeling of dispersed two-phase flow using class method of population balance in bubble columns[END_REF][START_REF] Becker | Investigation of discrete population balance models and breakage kernels for dilute emulsification systems[END_REF][START_REF] Hounslow | Discretized population balance for nucleation, growth, and aggregation[END_REF][START_REF] Hounslow | A discretized population balance for continuous systems at steady state[END_REF]Kumar and Ramkrishna, 1996a,b;[START_REF] Muhr | Crystallization and precipitation engineering-VI. Solving population balance in the case of the precipitation of silver bromide crystals with high primary nucleation rates by using the first order upwind differentiation[END_REF][START_REF] Puel | Simulation and analysis of industrial crystallization processes through multidimensional population balance equations. part 1: a resolution algorithm based on the method of classes[END_REF][START_REF] Vanni | Approximate population balance equations for aggregationbreakage processes[END_REF]. Similarly to Monte Carlo methods, this approach is often computationally too demanding when applied to large-scale problems of industrial interest, as observed by Marchisio et al. (2003b). To overcome this issue, hybrid methods between sectional and moment methods are developed (see Nguyen et al. (2016) and references therein), but they are not the subject of this paper.
A widely adopted and sufficiently accurate approach to find approximate solutions of the PBE for engineering applications is the quadrature method of moments (QMOM), originally introduced by [START_REF] Mcgraw | Description of aerosol dynamics by the quadrature method of moments[END_REF], and extensively applied to several problems in chemical engineering (see [START_REF] Gavi | CFD modelling of nano-particle precipitation in confined impinging jet reactors[END_REF]; [START_REF] Marchisio | Computational Models for Polydisperse Particulate and Multiphase Systems[END_REF]; [START_REF] Petitti | Bubble size distribution modeling in stirred gas-liquid reactors with QMOM augmented by a new correction algorithm[END_REF] for examples). The QMOM approach considers a discrete set of moments of the NDF, constituted by an even number of moments. The NDF is then approximated with a discrete weighted sum of Dirac delta functions, uniquely determined by means of a moment inversion algorithm [START_REF] Gautschi | Orthogonal Polynomials: Computation and Approximation[END_REF][START_REF] Gordon | Error bounds in equilibrium statistical mechanics[END_REF][START_REF] Wheeler | Modified moments and Gaussian quadratures[END_REF]. The extended QMOM (EQMOM) [START_REF] Yuan | An extended quadrature method of moments for population balance equations[END_REF] introduced the capability of using a basis of non-negative kernel density functions (KDF) to approximate the NDF in place of Dirac delta functions. This development allows some of the limitations of QMOM, which appear when dealing with problems that require the evaluation of the NDF at a particular value of the internal coordinate (i.e. problems involving an evaporation term or any other term continuously decreasing the size [START_REF] Massot | A robust moment method for evaluation of the disappearance rate of evaporating sprays[END_REF]), to be addressed. [START_REF] Yuan | An extended quadrature method of moments for population balance equations[END_REF] proposed the EQMOM procedure for β and Γ KDF, while Madadi-Kandjani and Passalacqua (2015) considered log-normal KDF. The EQMOM reconstruction can be done for every realizable moment set (i.e. moments of a positive NDF), possibly without exactly reproducing the last moment. In particular, it can deal with the degenerate cases, encountered when the moments are not strictly realizable: the only possible representation of the NDF is then a sum of weighted Dirac delta functions, thus describing a population of particles of only one or a few sizes, as in the case of nucleation. Numerically, this possibility was achieved with the moment inversion algorithm of Nguyen et al. (2016).
In this work we discuss the implementation of the EQMOM approach into the open-source toolbox for computational fluid dynamics (CFD) OpenFOAM® (OpenFOAM, 2015), as part of the OpenQBMM (2016a) project. We limit our attention to a univariate PBE, where the internal coordinate of the NDF is the particle size. We describe the implementation of EQMOM with log-normal KDF as an example, but without loss of generality in the presentation of the computational framework, which was designed to accommodate any KDF defined on the set of positive real numbers R^+. We then discuss the implementation of realizable kinetic fluxes for advection, which guarantee the transported moments are realizable if the step used for time integration satisfies a realizability condition similar to the Courant-Friedrichs-Lewy condition. Particular attention is paid to detailing the implementation of the procedure used to determine the approximate NDF, which always ensures that the maximum possible number of moments is conserved (Nguyen et al., 2016). The PBE solver is then verified considering aggregation and breakage problems studied by [START_REF] Vanni | Approximate population balance equations for aggregationbreakage processes[END_REF], comparing the predicted results with both the rigorous solution from [START_REF] Vanni | Approximate population balance equations for aggregationbreakage processes[END_REF] and the numerical solution obtained with EQMOM by [START_REF] Madadi-Kandjani | An extended quadraturebased moment method with log-normal kernel density functions[END_REF]. Finally, a case involving spatial transport is considered for validation purposes, which consists of an aggregation and breakage problem in a Taylor-Couette reactor.
The system was experimentally studied by Serra and Casamitjana (1998a,b); [START_REF] Serra | Aggregation and breakup of particles in a shear flow[END_REF], considering the same test case discussed in Marchisio et al. (2003a). Numerical results obtained with the CFD-PBE solver developed as part of the present work are compared to experiments, showing satisfactory results.
The population balance equation
The PBE (Marchisio et al., 2003a;[START_REF] Marchisio | Computational Models for Polydisperse Particulate and Multiphase Systems[END_REF][START_REF] Ramkrishna | Population Balances: Theory and Applications to Particulate Systems in Engineering[END_REF] accounting for the evolution of a univariate NDF with internal coordinate ξ, representing the particle size, is
$$ \frac{\partial n(\xi, \mathbf{x}, t)}{\partial t} + \nabla_{\mathbf{x}} \cdot \left[ n(\xi, \mathbf{x}, t)\, \mathbf{U} \right] - \nabla_{\mathbf{x}} \cdot \left[ \Gamma \nabla_{\mathbf{x}} n(\xi, \mathbf{x}, t) \right] + \nabla_{\xi} \cdot \left[ G(\xi)\, n(\xi, \mathbf{x}, t) \right] = B^a(\xi, \mathbf{x}, t) - D^a(\xi, \mathbf{x}, t) + B^b(\xi, \mathbf{x}, t) - D^b(\xi, \mathbf{x}, t) + N(\xi, \mathbf{x}, t) \qquad (1) $$
where n(ξ, x, t) is the NDF, U is the velocity of the carrier fluid, Γ is the diffusivity, G(ξ) is the growth rate, B^a(ξ, x, t), D^a(ξ, x, t), B^b(ξ, x, t) and D^b(ξ, x, t) are, respectively, the rates of change of n due to birth and death, in the aggregation process when the exponent a is present and in the breakage process when the exponent b is present, and N(ξ, x, t) is the rate of change due to nucleation. Let us notice that we assumed the particle size is sufficiently small to have negligible influence on the carrier fluid. This allows the velocity U to be assumed equal to the local fluid velocity, and independent of the particle size.
The diffusivity Γ, assumed to be independent of the particle size ξ, is defined as the sum of a laminar and a turbulent contribution: Γ = Γ_l + Γ_t.
The turbulent diffusivity is calculated as the ratio of the turbulent viscosity µ t and the turbulent Schmidt number σ t : Γ t = µ t /σ t .
Following [START_REF] Marchisio | Computational Models for Polydisperse Particulate and Multiphase Systems[END_REF]; Marchisio et al. (2003b); [START_REF] Randolph | Theory of Particulate Processes: Analysis and Techniques of Continuous Crystallization[END_REF], the terms describing aggregation and breakage phenomena are written in continuous form as:
$$ B^a(\xi, \mathbf{x}, t) = \frac{\xi^2}{2} \int_0^{\xi} \frac{\beta\!\left( (\xi^3 - \xi'^3)^{1/3}, \xi' \right)}{(\xi^3 - \xi'^3)^{2/3}}\; n\!\left( (\xi^3 - \xi'^3)^{1/3}, \mathbf{x}, t \right) n(\xi', \mathbf{x}, t)\, \mathrm{d}\xi', \qquad (2) $$
$$ D^a(\xi, \mathbf{x}, t) = n(\xi, \mathbf{x}, t) \int_0^{\infty} \beta(\xi, \xi')\, n(\xi', \mathbf{x}, t)\, \mathrm{d}\xi', \qquad (3) $$
$$ B^b(\xi, \mathbf{x}, t) = \int_{\xi}^{\infty} a(\xi')\, b(\xi \mid \xi')\, n(\xi', \mathbf{x}, t)\, \mathrm{d}\xi', \qquad (4) $$
$$ D^b(\xi, \mathbf{x}, t) = a(\xi)\, n(\xi, \mathbf{x}, t). \qquad (5) $$
Growth and nucleation terms were not considered in the example applications presented in this work to verify and validate the implementation of the EQMOM procedure, however they have been implemented in the PBE solver, and their testing is left to future work. These terms and their numerical treatment are kept in the description of the theory presented in this work for completeness and as documentation of the code implementation for the interested reader.
The extended quadrature method of moments
The approximate solution of the PBE of Eq. ( 1) is obtained in this work by solving transport equations for a finite set of the moments of the NDF. In the case of a univariate NDF, the moments are defined as:
$$ M_k(\mathbf{x}, t) = \int_0^{+\infty} n(\xi, \mathbf{x}, t)\, \xi^k\, \mathrm{d}\xi. \qquad (6) $$
The transport equation for the moment of order k is obtained by multiplying the PBE (Eq. (1)) by ξ^k and integrating over [0, +∞[. Under the previously discussed assumptions on the velocity and the diffusivity, this transport equation is
$$ \frac{\partial M_k(\mathbf{x}, t)}{\partial t} + \nabla_{\mathbf{x}} \cdot \left[ M_k(\mathbf{x}, t)\, \mathbf{U} \right] - \nabla_{\mathbf{x}} \cdot \left[ \Gamma \nabla M_k(\mathbf{x}, t) \right] + \bar{G}_k(\mathbf{x}, t) = \bar{B}^a_k(\mathbf{x}, t) - \bar{D}^a_k(\mathbf{x}, t) + \bar{B}^b_k(\mathbf{x}, t) - \bar{D}^b_k(\mathbf{x}, t) + \bar{N}_k(\mathbf{x}, t). \qquad (7) $$
The evaluation of the growth, aggregation and breakup terms in Eq. ( 7)
requires the NDF to be known, in addition to the kernel functions a and β. However, because the NDF evolves in space and time, it is not known a priori, and it has to be approximated from the transported moments.
This can be achieved by writing an approximant of the NDF as a weighted sum of non-negative KDF δ σ (ξ, ξ α ) [START_REF] Yuan | An extended quadrature method of moments for population balance equations[END_REF])
$$ n(\xi) \approx p_N(\xi) = \sum_{\alpha=1}^{N} w_\alpha\, \delta_\sigma(\xi, \xi_\alpha), \qquad (8) $$
where N is the number of KDF used to approximate the NDF, the KDF δ_σ(ξ, ξ_α) is chosen to formally tend to the Dirac delta function δ(ξ - ξ_α) when σ tends to zero, w_α are the non-negative weights of each KDF (primary quadrature weights), and ξ_α are the corresponding quadrature abscissae (primary abscissae). These parameters (w_α, ξ_α), α = 1, ..., N, and σ have to be computed from (M_k), k = 0, ..., 2N, in such a way that this moment set represents the moments of the reconstructed NDF. We use log-normal KDF [START_REF] Madadi-Kandjani | An extended quadraturebased moment method with log-normal kernel density functions[END_REF] in all the problems considered in this work because the support of the NDF in these problems is R^+:
$$ \delta_\sigma(\xi, \xi_\alpha) = \frac{1}{\xi \sigma \sqrt{2\pi}}\, \exp\!\left( -\frac{(\ln \xi - \ln \xi_\alpha)^2}{2\sigma^2} \right), \qquad \xi, \xi_\alpha, \sigma \in \mathbb{R}^+, \qquad (9) $$
however also the Γ KDF discussed in [START_REF] Yuan | An extended quadrature method of moments for population balance equations[END_REF] is implemented in the OpenQBMM PBE solver. The primary quadrature is determined with the algorithm described in Nguyen et al. (2016), whose implementation is detailed in Sec. 4.4. Once the NDF in Eq. ( 8) is reconstructed, it is used to calculate integrals that appear in source terms for the moment transport equations, as described in [START_REF] Yuan | An extended quadrature method of moments for population balance equations[END_REF]. To this purpose, a secondary quadrature, with weights w αβ and abscissae ξ αβ , is determined by considering the recurrence relation that defines the family of orthogonal polynomials with respect to the measure defined from the KDF, and solving the eigenvalue problem associated to the Jacobi matrix defined by this relationship [START_REF] Gautschi | Orthogonal Polynomials: Computation and Approximation[END_REF]. In performing this calculation, the Stieltjes-Wigert quadrature is considered for log-normal KDF [START_REF] Weisstein | CRC Concise Encyclopedia of Mathematics[END_REF][START_REF] Wilck | A general approximation method for solving integrals containing a lognormal weighting function[END_REF], while Laguerre quadrature is used for gamma KDF [START_REF] Gautschi | Orthogonal Polynomials: Computation and Approximation[END_REF]. While, in principle, the case of a log-normal KDF could be addressed with a change of variable, allowing Hermite quadrature to be adopted (Madadi-Kandjani and Passalacqua, 2015), it is worth noticing that only the Stieltjes-Wigert quadrature guarantees the correct preservation of the moments of the NDF, and as such it is used in this work.
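As a small numerical illustration of Eqs. (8) and (9), the sketch below evaluates the moments of a log-normal EQMOM reconstruction in closed form, using the fact that the k-th moment of the kernel δ_σ(ξ, ξ_α) is ξ_α^k e^{k²σ²/2}. This is an independent Python sketch with our own function names, not code from the OpenQBMM library.

```python
import numpy as np

def lognormal_eqmom_moments(w, xi, sigma, n_moments):
    """Moments M_0..M_{n_moments-1} of the reconstruction in Eq. (8) with
    log-normal kernels, Eq. (9): each kernel contributes
    w_alpha * xi_alpha**k * exp(k**2 * sigma**2 / 2) to M_k."""
    w, xi = np.asarray(w, float), np.asarray(xi, float)
    k = np.arange(n_moments)
    # k * ln(xi_alpha) plus the variance contribution of the kernel
    log_terms = np.outer(k, np.log(xi)) + 0.5 * (k**2)[:, None] * sigma**2
    return np.exp(log_terms) @ w

# Example: two log-normal kernels with common spread sigma = 0.2
print(lognormal_eqmom_moments(w=[0.6, 0.4], xi=[1.0, 3.0], sigma=0.2, n_moments=5))
```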
The term describing molecular growth is given by
$$ \bar{G}_k(\mathbf{x}, t) = -k \int_0^{\infty} \xi^{k-1} G(\xi)\, n(\xi, \mathbf{x}, t)\, \mathrm{d}\xi \approx -k \sum_{\alpha=1}^{N} w_\alpha \sum_{\beta=1}^{N_\alpha} w_{\alpha\beta}\, \xi_{\alpha\beta}^{k-1}\, G(\xi_{\alpha\beta}, \mathbf{x}, t). \qquad (10) $$
The source terms due to birth and death of particles because of aggregation and breakage are
$$ \bar{B}^a_k(\mathbf{x}, t) = \frac{1}{2} \int_0^{\infty} n(\xi', \mathbf{x}, t) \int_0^{\infty} \beta(\xi, \xi') \left( \xi^3 + \xi'^3 \right)^{k/3} n(\xi, \mathbf{x}, t)\, \mathrm{d}\xi\, \mathrm{d}\xi' \approx \frac{1}{2} \sum_{\alpha_1=1}^{N} w_{\alpha_1} \sum_{\beta_1=1}^{N_\alpha} w_{\alpha_1\beta_1} \sum_{\alpha_2=1}^{N} w_{\alpha_2} \sum_{\beta_2=1}^{N_\alpha} w_{\alpha_2\beta_2} \left( \xi_{\alpha_1\beta_1}^3 + \xi_{\alpha_2\beta_2}^3 \right)^{k/3} \beta_{\alpha_1\beta_1\alpha_2\beta_2} \qquad (11) $$
$$ \bar{D}^a_k(\mathbf{x}, t) = \int_0^{\infty} \xi^k n(\xi, \mathbf{x}, t) \int_0^{\infty} \beta(\xi, \xi')\, n(\xi', \mathbf{x}, t)\, \mathrm{d}\xi'\, \mathrm{d}\xi \approx \sum_{\alpha_1=1}^{N} w_{\alpha_1} \sum_{\beta_1=1}^{N_\alpha} w_{\alpha_1\beta_1} \xi_{\alpha_1\beta_1}^k \sum_{\alpha_2=1}^{N} w_{\alpha_2} \sum_{\beta_2=1}^{N_\alpha} w_{\alpha_2\beta_2} \beta_{\alpha_1\beta_1\alpha_2\beta_2} \qquad (12) $$
$$ \bar{B}^b_k(\mathbf{x}, t) = \int_0^{\infty} \xi^k \int_0^{\infty} a(\xi')\, b(\xi \mid \xi')\, n(\xi', \mathbf{x}, t)\, \mathrm{d}\xi'\, \mathrm{d}\xi \approx \sum_{\alpha=1}^{N} w_\alpha \sum_{\beta=1}^{N_\alpha} w_{\alpha\beta}\, a_{\alpha\beta}\, \bar{b}^k_{\alpha\beta} \qquad (13) $$
$$ \bar{D}^b_k(\mathbf{x}, t) = \int_0^{\infty} \xi^k a(\xi)\, n(\xi, \mathbf{x}, t)\, \mathrm{d}\xi \approx \sum_{\alpha=1}^{N} w_\alpha \sum_{\beta=1}^{N_\alpha} w_{\alpha\beta}\, \xi_{\alpha\beta}^k\, a_{\alpha\beta}. \qquad (14) $$
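A direct transcription of the quadrature sums in Eqs. (11)-(14) is shown below; the secondary nodes are passed in flattened form (weights w_α w_αβ and abscissae ξ_αβ), and the kernel β, the breakage rate a and the daughter-distribution moments are user-supplied vectorized callables. This is an illustrative Python sketch with hypothetical names, not the OpenFOAM implementation.

```python
import numpy as np

def moment_source_terms(k, W, X, beta, a, bbar_k):
    """Quadrature form of Eqs. (11)-(14) for the moment of order k.
    W, X   : flattened secondary weights (w_alpha * w_alpha_beta) and abscissae
    beta   : aggregation kernel beta(xi, xi'), vectorized over arrays
    a      : breakage kernel a(xi), vectorized over arrays
    bbar_k : k-th moment of the daughter distribution, as a function of xi."""
    W, X = np.asarray(W, float), np.asarray(X, float)
    X1, X2 = np.meshgrid(X, X, indexing="ij")
    pair_w = np.outer(W, W)
    beta_mat = beta(X1, X2)
    birth_agg = 0.5 * np.sum(pair_w * (X1**3 + X2**3) ** (k / 3.0) * beta_mat)  # Eq. (11)
    death_agg = np.sum(pair_w * X1**k * beta_mat)                               # Eq. (12)
    birth_brk = np.sum(W * a(X) * bbar_k(X))                                    # Eq. (13)
    death_brk = np.sum(W * X**k * a(X))                                         # Eq. (14)
    return birth_agg - death_agg + birth_brk - death_brk

# Example: constant aggregation kernel, linear breakage rate, binary breakage (k = 0)
src = moment_source_terms(
    k=0, W=[0.5, 0.5], X=[1.0, 2.0],
    beta=lambda x, y: np.ones_like(x),
    a=lambda x: 0.1 * x,
    bbar_k=lambda x: 2.0 * np.ones_like(x),
)
print(src)
```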
The numerical solution procedure
The transport equations for the moments (Eq. ( 7)) are discretized using a finite-volume procedure [START_REF] Ferziger | Computational Methods for Fluid Dynamics[END_REF][START_REF] Leveque | Finite Volume Methods for Hyperbolic Problems[END_REF], with kinetic-based spatial fluxes [START_REF] Desjardins | A quadrature-based moment method for dilute fluid-particle flows[END_REF][START_REF] Perthame | Boltzmann type schemes for compressible Euler equations in one and two space dimensions[END_REF] to ensure moment realizability.
Spatial fluxes
The discretization of the moment advection term is key to ensure moment realizability, and achieve a robust solution procedure. The problem of moment realizability caused by convection schemes is well known in the literature [START_REF] Desjardins | A quadrature-based moment method for dilute fluid-particle flows[END_REF][START_REF] Wright | Numerical advection of moments of the particle size distribution in eulerian models[END_REF], and it appears when using conventional numerical schemes of order higher than one, since the application of limiters to the individual transported moments is not a sufficient condition to ensure the realizability of the transported moment set. [START_REF] Vikas | Realizable high-order finite-volume schemes for quadrature-based moment methods[END_REF][START_REF] Vikas | Realizable high-order finite-volume schemes for quadrature-based moment methods applied to diffusion population balance equations[END_REF] proposed to implement kinetic schemes [START_REF] Perthame | Boltzmann type schemes for compressible Euler equations in one and two space dimensions[END_REF] where the convection flux for each moment is computed from the quadrature approximation of the transported moment set. In these schemes, the high-order reconstruction is applied only to the quadrature weights, while the reconstructed quadrature abscissae are constant in the cell, hence the name of quasi high-order schemes. This approach allows the realizability of the moment set to be preserved if a constraint on the time-step size is satisfied. This realizability constraint becomes the Courant-Friedrichs-Lewy (CFL) condition, if weights and abscissae are reconstructed on cell faces with a first-order scheme, which, therefore, always ensures moment realizability if the CFL condition is satisfied.
The numerical scheme used to compute face values of a variable in OpenFOAM can be selected independently for any variable, by specifying the scheme as an argument of the fvc::interpolate function provided by the OpenFOAM framework. As a consequence, the details of the implementation of high-order reconstruction schemes are not discussed here, and our description is limited to the general structure of kinetic-based schemes for quadrature-based moment methods.
We assume the flow velocity vector U is known at cell centers. We indicate with U own and U nei the interpolated face values of such velocity on the owner and neighbour side of the cell face, respectively. Similarly, we indicate with M k,own and M k,nei the moments computed on the owner and neighbour side of the cell face (see Fig. 1), defined as
$$ M_{k,\mathrm{own}} = \sum_{\alpha=1}^{N_f} w_{\alpha,\mathrm{own}}\, \xi_{\alpha,\mathrm{own}}^k, \qquad M_{k,\mathrm{nei}} = \sum_{\alpha=1}^{N_f} w_{\alpha,\mathrm{nei}}\, \xi_{\alpha,\mathrm{nei}}^k, \qquad (15) $$
where N_f is the number of quadrature nodes used to define the quadrature approximation of the NDF used to compute the spatial fluxes, w_{α,own} and w_{α,nei} are the face values of the quadrature weights, and ξ_{α,own}, ξ_{α,nei} the face values of the quadrature abscissae. Following [START_REF] Laurent | Realizable high-order finite-volume schemes for the advection of moment sets of the particle size distribution[END_REF], the spatial fluxes are calculated as a function of a different quadrature approximation with respect to the one determined with the EQMOM procedure. If an odd number of moments is realizable, this quadrature approximation is obtained using an N+1 point Gauss-Radau quadrature formula, which allows the 2N+1 moments to be preserved, with one quadrature abscissa set to zero. However, if the number of realizable moments is even, the regular Gauss quadrature is used to define the face flux. This approach is independent of the reconstruction of the NDF, and allows the fluxes to be defined without relying on the secondary quadrature approximation, leading to a more robust formulation of the numerical scheme, with the additional benefit of a slightly lower computational cost due to the reduced number of terms involved in the summations in Eq. (16) compared to EQMOM. Details on the rationale of the definition of the advection schemes for moment methods are provided in [START_REF] Laurent | Realizable high-order finite-volume schemes for the advection of moment sets of the particle size distribution[END_REF], where high-order schemes for moment methods are also developed.
The convective flux of the moment M k is then found computing
$$ \varphi_{M_k} = M_{k,\mathrm{nei}} \min(\mathbf{U}_{\mathrm{nei}} \cdot \mathbf{n}, 0) + M_{k,\mathrm{own}} \max(\mathbf{U}_{\mathrm{own}} \cdot \mathbf{n}, 0), \qquad (16) $$
and finding the rate of change of the moment M k by integrating ϕ M k over the cell faces.
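Equation (16) amounts to a node-wise upwind selection of the face moments; a minimal Python sketch (our own, not the solver's C++ code) is:

```python
import numpy as np

def moment_face_flux(m_own, m_nei, u_own, u_nei, face_normal):
    """Kinetic-based upwind flux of Eq. (16) for a set of moments on one face.
    m_own, m_nei : face moments reconstructed from the owner and neighbour
                   sides (Eq. 15); u_own, u_nei : face velocities."""
    un_own = np.dot(u_own, face_normal)
    un_nei = np.dot(u_nei, face_normal)
    return np.asarray(m_nei, float) * min(un_nei, 0.0) + \
           np.asarray(m_own, float) * max(un_own, 0.0)

# Example: two moments on a face whose outward normal points along x
print(moment_face_flux([1.0, 2.0], [1.1, 2.2],
                       u_own=[0.5, 0.0, 0.0], u_nei=[0.4, 0.0, 0.0],
                       face_normal=[1.0, 0.0, 0.0]))
```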
Diffusion term
The discretization of the diffusion term in the moment transport equations does not present particular problems when a traditional second-order scheme [START_REF] Ferziger | Computational Methods for Fluid Dynamics[END_REF] is used, as explained in [START_REF] Vikas | Realizable high-order finite-volume schemes for quadrature-based moment methods applied to diffusion population balance equations[END_REF]. As a consequence, no special treatment is adopted to ensure moment realizability when the diffusion term is discretized, because second-order accuracy is considered sufficient for most practical applications. The diffusion term in the moment transport equations is therefore discretized with the fvm::laplacian(gamma, Mk) operator provided by OpenFOAM, where gamma is the diffusivity and Mk is the moment of order k. It is worth observing that this approach suffices for size-independent diffusion; however, if size-dependent diffusion needs to be considered, the diffusion term will have to be computed as a function of the quadrature approximation, and treated as an explicit source term in the transport equations for the moments, as discussed in [START_REF] Vikas | Realizable high-order finite-volume schemes for quadrature-based moment methods applied to diffusion population balance equations[END_REF].
Source terms
The source terms for aggregation, breakage and nucleation are introduced as explicit source terms in the moment transport equations, using the standard mechanism available in OpenFOAM. Moreover, to ensure realizability, explicit Euler methods with small enough time steps can be used, as well as any convex combination of Euler explicit time steps (Nguyen et al., 2016), such as the strong-stability-preserving (SSP) explicit Runge-Kutta methods [START_REF] Gottlieb | Strong Stability Preserving Runge-Kutta and Multistep Time Discretizations[END_REF], which are high-order ODE solvers. An example of multi-step adaptive scheme that takes advantage of this property is described and demonstrated in Nguyen et al. (2016).
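As an illustration of why such schemes preserve realizability, the two-stage SSP Runge-Kutta update can be written explicitly as a convex combination of forward-Euler steps. The Python sketch below is generic and does not reproduce the adaptive multi-step scheme of Nguyen et al. (2016).

```python
import numpy as np

def ssp_rk2_step(moments, rhs, dt):
    """One strong-stability-preserving RK2 step written as a convex
    combination of forward-Euler steps, so it preserves moment realizability
    whenever a single Euler step with time step dt does."""
    m = np.asarray(moments, float)
    m1 = m + dt * rhs(m)                         # first Euler stage
    return 0.5 * m + 0.5 * (m1 + dt * rhs(m1))   # convex combination of Euler steps
```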
Moment inversion
A key aspect of the numerical solution procedure is the moment inversion to find the primary quadrature weights w_α, abscissae ξ_α and the parameter σ of the KDF, in such a way that the moments of the reconstruction defined by Eq. (8) are the considered moment set. The approach implemented in the OpenQBMM-PBE solver is based on the work of Nguyen et al. (2016), which represents an improvement of the method proposed in [START_REF] Yuan | An extended quadrature method of moments for population balance equations[END_REF] and also used in [START_REF] Madadi-Kandjani | An extended quadraturebased moment method with log-normal kernel density functions[END_REF]. The key difference between the method in [START_REF] Yuan | An extended quadrature method of moments for population balance equations[END_REF] and the new one proposed in Nguyen et al. (2016) consists in an efficient procedure to check the strict realizability of the moment set, and the conservation of the largest possible subset of transported moments, which is not always achieved using the approach of [START_REF] Yuan | An extended quadrature method of moments for population balance equations[END_REF]. The algorithm is based on the possibility to use an efficient quadrature algorithm based, for example, on the [START_REF] Wheeler | Modified moments and Gaussian quadratures[END_REF] algorithm. Introducing the variables M*_k = Σ_{α=1}^N w_α ξ_α^k, the KDF is chosen in such a way that, for a fixed value of σ, there exists a linear relation between the moments M_K = (M_0, ..., M_K) of the reconstruction and M*_K = (M*_0, ..., M*_K):
$$ \mathbf{M}_K = A_K(\sigma)\, \mathbf{M}^*_K. $$
For a log-normal KDF, it is given by
$$ M^*_k = M_k\, e^{-k^2 \sigma^2 / 2}, \qquad (17) $$
in such a way that the matrix A_K(σ) is diagonal. Then, from the transported moment vector M_2N = (M_0, ..., M_2N), one can define the function M̃_2N(σ) in the following way: if
$$ \mathbf{M}^*_{2N-1}(\sigma) = A_{2N-1}(\sigma)^{-1}\, \mathbf{M}_{2N-1} $$
is strictly realizable, which corresponds to σ < σ_max,2N, then one computes the corresponding quadrature weights and abscissae (w_α(σ), ξ_α(σ)), α = 1, ..., N, and M̃_2N(σ) is the moment of order 2N of the reconstructed NDF defined by Eq. (8) with the parameters σ and (w_α(σ), ξ_α(σ)), α = 1, ..., N. When σ ≥ σ_max,2N, the function M̃_2N(σ) is set to a large value² L. For a log-normal KDF, this function is given by
$$ \widetilde{M}_{2N}(\sigma) = \begin{cases} \displaystyle\sum_{\alpha=1}^{N} w_\alpha(\sigma)\, \xi_\alpha(\sigma)^{2N}\, z^{(2N)^2} & \sigma < \sigma_{\max,2N} \\ L & \sigma \ge \sigma_{\max,2N}, \end{cases} \qquad (18) $$
where z = e^{σ²/2}. This helper function is used to define the target function J_2N, whose roots are the values of the parameter σ we seek, with
$$ J_{2N}(\sigma) = \frac{M_{2N} - \widetilde{M}_{2N}(\sigma)}{M_{2N}}. \qquad (19) $$
If Eq. 19 has no root, then the last moment will not be reproduced by the reconstructed NDF. One then will have to minimize the error on it, by an optimal choice of σ. In any case, at least all moments of orders 0 to 2N -1 will be well reproduced by the reconstruction.
It can happen that the transported moment vector M_2N is not strictly realizable (degenerate case), when it corresponds to a NDF that is necessarily a sum of a few weighted Dirac delta functions. This is the case, for example, in problems involving nucleation, and at the beginning of aggregation processes. Only one representation is then possible and the moment inversion has to be adapted in this case. The first step of the moment inversion procedure is then to study the realizability of the moment vector. Differently from what is done in Madadi-Kandjani and Passalacqua (2015); Yuan et al. (2012), which directly evaluated the Hankel determinants associated with the set of transported moments, this is achieved by computing the values of the ζ_k quantities defined as in [START_REF] Dette | The Theory of Canonical Moments with Applications in Statistics, Probability and Analysis[END_REF], which allow the size N_r ≤ 2N + 1 of the largest subset of strictly realizable moments M_{N_r-1} to be found. This approach is more computationally efficient than evaluating the Hankel determinants directly, as briefly discussed later, and explained in detail in Nguyen et al. (2016). The quantities ζ_k are directly related to the Hankel determinants
$$ \underline{H}_{2k} = \begin{vmatrix} M_0 & \cdots & M_k \\ \vdots & & \vdots \\ M_k & \cdots & M_{2k} \end{vmatrix}, \qquad \underline{H}_{2k+1} = \begin{vmatrix} M_1 & \cdots & M_{k+1} \\ \vdots & & \vdots \\ M_{k+1} & \cdots & M_{2k+1} \end{vmatrix}, \qquad (20) $$
as follows³
$$ \zeta_k = \frac{\underline{H}_{k+1}\, \underline{H}_{k-2}}{\underline{H}_k\, \underline{H}_{k-1}}, \qquad k = 0, 1, \ldots, \qquad (21) $$
with H_{-2} = H_{-1} = H_0 = 1. They are also related to the coefficients of the recurrence relation that defines the family of orthogonal polynomials (P_k), k ≥ 0, with respect to a measure associated to the moments:
$$ P_{k+1}(X) = (X - a_k)\, P_k(X) - b_k\, P_{k-1}(X), \qquad (22) $$
with a_0 obtained from ζ_0 and ζ_1 and, for k ≥ 1, b_k = ζ_{2k-1} ζ_{2k} and a_k = ζ_{2k} + ζ_{2k+1}. Thanks to this relation, the values of ζ_k are computed using the Q-D algorithm described in [START_REF] Dette | The Theory of Canonical Moments with Applications in Statistics, Probability and Analysis[END_REF], or, more effectively, with a modified version of the Chebyshev algorithm [START_REF] Wheeler | Modified moments and Gaussian quadratures[END_REF] to include the calculation of ζ_k (Nguyen et al., 2016). Following [START_REF] Dette | The Theory of Canonical Moments with Applications in Statistics, Probability and Analysis[END_REF], the positivity of ζ_k ensures the strict realizability of the moment set. Based on the largest value of k for which ζ_k > 0, it is then possible to identify the subset of strictly realizable moments in the transported moment set.
Depending on the number of strictly realizable moments N_r, the following cases are possible:
1. If N r < 2, the procedure terminates because the number of strictly realizable moments is insufficient to define a quadrature formula.
2. If N_r is an even integer, the [START_REF] Wheeler | Modified moments and Gaussian quadratures[END_REF] algorithm is applied to the N_r strictly realizable moments, and N_r/2 primary quadrature weights and abscissae are calculated. In this case, the parameter σ is set to zero, and no attempt is made to reconstruct a continuous distribution because the direct application of the standard QMOM procedure, relying on Dirac delta functions, ensures the conservation of the N_r strictly realizable moments.
3. If N_r is an odd integer, the complete EQMOM procedure is applied to the first N_r moments, i.e. to the moment vector M_{N_r-1}. This allows the (N_r-1)/2 primary quadrature weights and abscissae, the parameter σ, and the secondary quadrature to be determined. The objective of the procedure is to determine the value of σ ∈ R^+ as a root of the non-linear function J_{N_r-1}, thus ensuring the conservation of the entire M_{N_r-1} when such a root exists, or to minimize the error on the last moment of the set, whenever a value of σ that guarantees its exact conservation cannot be found. The procedure is iterative, and operates as follows:
(a) An initial interval in which to search for the value of σ is determined considering the value of the σ parameter that ensures that the moment set M*_3(σ) is strictly realizable. We will indicate this value of σ as σ_max,4, and consider I_0 = ]0, σ_max,4[ as the initial interval. For a log-normal KDF, we have
$$ \sigma_{\max,4} = \min\left( \sqrt{2 \log\frac{\sqrt{M_0 M_2}}{M_1}},\; \sqrt{2 \log\frac{\sqrt{M_1 M_3}}{M_2}} \right). \qquad (23) $$
It is important to observe, however, that σ max,4 is not a valid value for the σ parameter, because the condition on the strict realizability of M * k (σ) for k ≥ 4 would further reduce the right bound of the interval of σ. This however does not represent a problem in our procedure, since the interval we determine is only used to initially start the search for σ.
(b) The function described in Eq. ( 19) changes sign in I 0 by construction, as a consequence of the definition of M2N (σ) in Eq. ( 18).
We then apply Ridder's algorithm [START_REF] Ridders | A new algorithm for computing a single root of a real continuous function[END_REF] to the function J Nr-1 (σ) to find a root for the target function. If this root represents an actual solution that allows the moment set to be preserved, then the value of σ found from Ridder's algorithm will be used. If the computed value of σ does not preserve the moment set, a minimization procedure based on the golden search algorithm [START_REF] Press | Numerical Recipes: The Art of Scientific Computing[END_REF] is applied to J Nr-1 (σ) 2 , in order to find the value of σ which minimizes the distance of J Nr-1 (σ) from zero.
(c) Once the value of the parameter σ is determined, the secondary quadrature weights and abscissae are computed by means of the standard eigenvalue problem used to calculate the roots of polynomials orthogonal to the chosen KDF [START_REF] Gautschi | Orthogonal Polynomials: Computation and Approximation[END_REF].
The steps of the EQMOM inversion algorithm are schematically represented in Fig. 2.
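To make the procedure above concrete, the following self-contained Python sketch reproduces its structure for a log-normal KDF: the moments are rescaled as in Eq. (17), inverted with a compact version of the Wheeler (1974) algorithm, and the σ parameter is bracketed through the target function of Eq. (19). It is only an illustrative sketch (plain bisection instead of Ridder's method and the golden-section fallback, and no handling of degenerate sets beyond σ = 0), not the OpenQBMM C++ implementation; all function names are ours.

```python
import numpy as np

def wheeler(moments):
    """Gauss quadrature (weights, abscissae) from an even number of moments
    via the Wheeler (1974) algorithm; returns None when the set is not
    strictly realizable (a recurrence coefficient b_k becomes non-positive)."""
    m = np.asarray(moments, float)
    m = m[: 2 * (len(m) // 2)]            # use an even number of moments
    n = len(m) // 2
    sig = np.zeros((n + 1, 2 * n))
    sig[1, :] = m
    a, b = np.zeros(n), np.zeros(n)
    a[0] = m[1] / m[0]
    for k in range(2, n + 1):
        for l in range(k - 1, 2 * n - k + 1):
            sig[k, l] = (sig[k - 1, l + 1] - a[k - 2] * sig[k - 1, l]
                         - b[k - 2] * sig[k - 2, l])
        a[k - 1] = sig[k, k] / sig[k, k - 1] - sig[k - 1, k - 1] / sig[k - 1, k - 2]
        b[k - 1] = sig[k, k - 1] / sig[k - 1, k - 2]
    if m[0] <= 0.0 or not np.all(np.isfinite(a)) or np.any(b[1:] <= 0.0):
        return None
    jacobi = np.diag(a) + np.diag(np.sqrt(b[1:]), 1) + np.diag(np.sqrt(b[1:]), -1)
    eigval, eigvec = np.linalg.eigh(jacobi)
    return m[0] * eigvec[0, :] ** 2, eigval        # weights, abscissae

def lognormal_eqmom(moments, big=1.0e16):
    """Log-normal EQMOM inversion sketch for an odd, strictly realizable
    moment set (M_0, ..., M_2N) with 2N >= 4: bracket a root of J(sigma),
    Eq. (19), in ]0, sigma_max,4[ (Eq. 23) and bisect."""
    m = np.asarray(moments, float)
    two_n = len(m) - 1
    k = np.arange(len(m))

    def m_tilde(sigma):                            # Eq. (18)
        quad = wheeler(m[:-1] * np.exp(-0.5 * k[:-1] ** 2 * sigma ** 2))
        if quad is None:
            return big
        w, xi = quad
        return np.sum(w * xi ** two_n) * np.exp(0.5 * two_n ** 2 * sigma ** 2)

    def J(sigma):                                  # Eq. (19)
        return (m[-1] - m_tilde(sigma)) / m[-1]

    s_max4 = np.sqrt(2.0 * min(np.log(np.sqrt(m[0] * m[2]) / m[1]),
                               np.log(np.sqrt(m[1] * m[3]) / m[2])))
    lo, hi = 1.0e-12, s_max4 * (1.0 - 1.0e-12)
    if J(lo) <= 0.0:                               # no spread: sigma = 0 fallback
        w, xi = wheeler(m[:-1])
        return 0.0, w, xi
    # Plain bisection on J; if J never changes sign the loop drifts towards
    # sigma_max,4 (the actual solver then minimises |J| by golden section).
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if J(mid) > 0.0 else (lo, mid)
    sigma = lo                                     # last value with J > 0
    w, xi = wheeler(m[:-1] * np.exp(-0.5 * k[:-1] ** 2 * sigma ** 2))
    return sigma, w, xi

# Example: the moments of a single log-normal with median 1 and sigma = 0.3
sig0 = 0.3
mom = [np.exp(0.5 * j ** 2 * sig0 ** 2) for j in range(5)]
print(lognormal_eqmom(mom)[0])                     # approximately 0.3
```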
Code structure
The solution procedure for the univariate PBE was implemented into OpenFOAM creating a general data and implementation structure which allows the extension to multivariate problems, and the straightforward addition of sub-models such as kernels describing new physics. Additionally, the functionality of OpenFOAM was leveraged to couple the implementation of the PBE solver with fluid solvers, re-using the existing thermal and turbulence models. In the following subsections we provide a brief description of the code structures and implementation choices made during the development of OpenQBMM-PBE.
Basic moment inversion
We call basic moment inversion the process of calculating N quadrature weights and abscissae from a moment vector of dimension 2N. This process is achieved by means of the algorithm proposed by [START_REF] Wheeler | Modified moments and Gaussian quadratures[END_REF].
A specific C++ object, called univariateMomentSet, was derived from the uni-dimensional array of scalars object scalarDiagonalMatrix provided by the OpenFOAM library. The univariateMomentSet object combines the use of ζ_k to verify the strict realizability of the moment set, with the Wheeler (1974) algorithm to perform the moment inversion. The procedure is schematically represented in Fig. 3. For each moment set that has to be inverted, the maximum even number of strictly realizable moments, which we call number of invertible moments⁴ N_I, is determined by checking the positivity of ζ_k and taking the largest even integer not exceeding the number of strictly realizable moments N_r. Once N_I is found, the procedure verifies that N_I ≥ 2, which is required to perform the inversion. If this condition is satisfied, the Jacobi matrix (see Appendix A) is constructed and the associated eigenvalue problem is solved to determine quadrature weights and abscissae.
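The realizability check itself is simple to emulate. The Python sketch below tests strict realizability through the positive-definiteness of the two Hankel matrices of Eq. (20), which is equivalent to the positivity of the ζ_k of Eq. (21); this is a stand-in used only for illustration, not the modified Chebyshev recursion of univariateMomentSet, and the function names are ours.

```python
import numpy as np

def _hankel(m, start, size):
    """Hankel matrix (m[start + i + j]) of the given size."""
    return np.array([[m[start + i + j] for j in range(size)] for i in range(size)])

def is_strictly_realizable(moments):
    """True if (m_0, ..., m_K) lies in the interior of the Stieltjes moment
    space on R+, i.e. both the plain and the shifted Hankel matrices of
    Eq. (20) are positive definite (equivalently, all zeta_k > 0)."""
    m = np.asarray(moments, float)
    K = len(m) - 1
    def posdef(mat):
        return bool(np.all(np.linalg.eigvalsh(mat) > 0.0))
    if not posdef(_hankel(m, 0, K // 2 + 1)):
        return False
    return K == 0 or posdef(_hankel(m, 1, (K - 1) // 2 + 1))

def number_of_realizable_moments(moments):
    """Largest N_r such that the first N_r moments are strictly realizable,
    mimicking the role of the zeta_k test in univariateMomentSet."""
    n_r = 0
    for n in range(1, len(moments) + 1):
        if not is_strictly_realizable(moments[:n]):
            break
        n_r = n
    return n_r

# Moments of a single particle size (Dirac delta at xi = 2): degenerate beyond N_r = 2
print(number_of_realizable_moments([1.0, 2.0, 4.0, 8.0, 16.0]))  # prints 2
```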
Extended moment inversion
The extended moment inversion procedure is implemented in a general form, in order to accommodate different types of KDF with support on R + . The type of KDF can be selected by the user at run-time as conventionally done in OpenFOAM for model selection, by means of a keyword in an input file.
A base class named extendedMomentInversion implements the algorithm to find the value of the parameter σ, and the general parts of the procedure to define the secondary quadrature (Fig. 2). Specialized classes derived from extendedMomentInversion implement the part of the algorithm specific to a KDF such as the expression of the KDF itself, the relationship between M k and M * k , and the upper extremum σ max of the interval I 0 .
Population balance solver
The PBE is solved in a separate class, which also uses the run-time selection mechanism, allowing future extensions to multivariate problems to be implemented. A base class called populationBalanceModel implements the generic structure required for run-time selection of the type of PBE to be solved. The actual implementation of the PBE solver is contained in the derived class univariatePopulationBalance, which provides methods to calculate the convection, diffusion and source terms for the specific PBE.
Verification of the PBE solver
The PBE solver was verified considering a set of zero-dimensional problems involving aggregation and breakage [START_REF] Vanni | Approximate population balance equations for aggregationbreakage processes[END_REF]. The solutions obtained in these cases were compared to the numerical solution obtained with a MATLAB code implementing EQMOM in Madadi-Kandjani and Passalacqua (2015), which relied on the solution approach detailed in Yuan et al.
(2012) extended to log-normal KDF, and to the rigorous solution of the same problems reported by [START_REF] Vanni | Approximate population balance equations for aggregationbreakage processes[END_REF].
In this situation, Eq. ( 1) reduces to Eq. ( 24), where only the source terms due to aggregation and breakup remain, while diffusion and advection are not considered:
$$ \frac{\partial n(\xi, \mathbf{x}, t)}{\partial t} = B^a(\xi, \mathbf{x}, t) - D^a(\xi, \mathbf{x}, t) + B^b(\xi, \mathbf{x}, t) - D^b(\xi, \mathbf{x}, t). \qquad (24) $$
The cases considered in this validation study are reported in Tab. 1, and correspond to those examined in [START_REF] Madadi-Kandjani | An extended quadraturebased moment method with log-normal kernel density functions[END_REF].
The table also indicates the number of primary and secondary quadrature nodes used in each case. The aggregation and breakage kernels used in the study are reported in Tab. 2 and 3, respectively. The expressions for the daughter distribution that appear in the breakage kernel are summarized in Tab. 4. The simulation results shown in Figs. 4-8 show the time evolution of the value of M_0 and of the average particle size d_43 = M_4/M_3. The agreement between the numerical solution obtained with MATLAB (Madadi-Kandjani and Passalacqua, 2015), and the one obtained with the OpenFOAM implementation of the EQMOM procedure is excellent: in all the cases under examination, the two solutions match. This comparison serves as verification of the EQMOM procedure implemented into OpenFOAM. The comparison to the rigorous solution of [START_REF] Vanni | Approximate population balance equations for aggregationbreakage processes[END_REF] shows good agreement in all the cases under consideration, with the exception of case 3 (Fig. 6), in which some deviation of the numerical solution from the rigorous solution can be observed for t ∈ [1, 4] s.
Validation of CFD-PBE solver
The solution of the full PBE with an inhomogeneous velocity field is tested and validated using the case studied by Marchisio et al. (2003a) in a Taylor-Couette reactor made of two concentric cylinders with diameters D_1 = 193 mm and D_2 = 160 mm. The height of the system is H = 360 mm.
Kernels for the PBE
The aggregation kernel used in this case is given by the sum of the kernel for Brownian motion (Eq. (25)) [START_REF] Smoluchowski | Versuch einer mathematischen theorie der koagulationskinetik kolloider losunger[END_REF], combined with the kernel proposed by [START_REF] Adachi | Kinetics of turbulent coagulation studied by means of end-over-end rotation[END_REF] for particles whose size is smaller than the local Kolmogorov microscale (Eq. (26)):
$$ \beta(\xi, \xi') = \frac{2 k_B T}{3\mu} \frac{(\xi + \xi')^2}{\xi \xi'}, \qquad (25) $$
$$ \beta(\xi, \xi') = \frac{4}{3} \sqrt{\frac{3\pi\varepsilon}{10\nu}}\, (\xi + \xi')^3. \qquad (26) $$
The breakage kernel of [START_REF] Luo | Theoretical model for drop and bubble breakup in turbulent dispersions[END_REF], which depends on the kinematic viscosity ν, the turbulent dissipation rate ε and the particle size ξ, was used:
$$ a(\xi) = c_{Br}\, \nu^p\, \varepsilon^q\, \xi^r. \qquad (27) $$
Following Marchisio et al. (2003a), we set p = 3/4, q = -5/4 and r = 1, and we adopt the symmetric fragmentation daughter distribution function (Tab. 4). The coefficient c_Br was set equal to 6.0 × 10^-4 (Marchisio et al., 2003a).
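For reference, the kernels of Eqs. (25)-(27) translate directly into code. The Python sketch below uses our own function names; the temperature and viscosity defaults are placeholders to be replaced by the actual fluid properties, and the temperature factor in the Brownian kernel follows the usual Smoluchowski form.

```python
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant [J/K]

def beta_brownian(xi1, xi2, T=293.0, mu=1.0e-3):
    """Brownian aggregation kernel, Eq. (25); T and mu are placeholder
    fluid properties (temperature in K, dynamic viscosity in Pa s)."""
    return 2.0 * K_B * T / (3.0 * mu) * (xi1 + xi2) ** 2 / (xi1 * xi2)

def beta_turbulent(xi1, xi2, epsilon, nu):
    """Turbulent aggregation kernel of Adachi et al., Eq. (26)."""
    return (4.0 / 3.0) * np.sqrt(3.0 * np.pi * epsilon / (10.0 * nu)) * (xi1 + xi2) ** 3

def breakage_rate(xi, epsilon, nu, c_br=6.0e-4, p=0.75, q=-1.25, r=1.0):
    """Power-law breakage kernel, Eq. (27), with the exponents and constant
    adopted for the Taylor-Couette case."""
    return c_br * nu**p * epsilon**q * xi**r
```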
Case setup
The Taylor-Couette reactor was modeled assuming symmetry, reducing the computational domain to a rectangular region whose width is (D_1 - D_2)/2 = 16.5 mm, and its height is H = 180 mm, as schematized in Fig. 9. This domain was discretized using 18 cells in the horizontal direction, and 180 in the vertical direction (Marchisio et al., 2003a), leading to a total of 3240 computational cells. The initial particle size distribution is assumed to be a Dirac delta function (uniform size), with a mean particle volume fraction α_p = 2.5 × 10^-5, and an initial particle size ξ_0 = 2 µm (Marchisio et al., 2003a). The first five moments of the NDF are considered and, as a consequence, N = 2 primary quadrature nodes are used. The number of secondary quadrature nodes was set equal to N_α = 10. It is worth observing that the initial condition leads to a quadrature representation with only one quadrature node, and a null value of the σ parameter. Therefore, the moment inversion procedure will use the standard [START_REF] Wheeler | Modified moments and Gaussian quadratures[END_REF] algorithm to perform the inversion. However, the EQMOM procedure is used as soon as the particle size distribution is not a Dirac delta anymore, and some variance in the particle size appears. The transition between the two algorithms is managed automatically, based on the realizability of the moment set and of the number of moments required to define the quadrature representation of the NDF. Marchisio et al. (2003a) considered four rotational speeds (75, 125, 165 and 211 RPM), however we limit our study to two rotational speeds for brevity, since the purpose of our investigation is to verify the robustness and accuracy of the computational framework. We consider then ω_1 = 75 RPM, and ω_2 = 165 RPM.
Results and discussion
The velocity field of the fluid phase in the Taylor-Couette reactor is reported in Fig. 10. Fig. 11(a) shows that the numerical simulation provides a good estimate of the steady-state value of d_43 for the case with rotational speed of 75 RPM, which is underpredicted by the results obtained with the standard QMOM procedure in Marchisio et al. (2003a). Both the EQMOM and the QMOM approach predict that the particle diameter reaches its steady-state value significantly earlier (t ≈ 2000 s) than what is observed in the experiments (t ≈ 12000 s). The EQMOM simulation slightly over-predicts the steady-state value of the mean normalized particle diameter (≈ 11) compared to the experimental results (≈ 10) in the case with rotational speed of 165 RPM. The steady-state value of the particle size is reached after about 1000 s from the beginning of the simulation; however, experimental data show a slower evolution of the value of d_43, with a time required to reach steady state of about 2000 s.
Conclusions
The EQMOM was implemented into OpenFOAM to solve univariate population balance equations. The implementation was successfully verified against the numerical results of Madadi-Kandjani and Passalacqua (2015) involving aggregation and breakage problems in a homogeneous system. The results were validated against the rigorous solutions obtained by [START_REF] Vanni | Approximate population balance equations for aggregationbreakage processes[END_REF].
Figure 1: Schematic representation of a computational cell and its neighbour cells, to illustrate the concept of owner and neighbour used in the calculation of moment spatial fluxes.
Figure 5: Time evolution of M_0 and of d_43 in case 2.
Figure 9: Schematic representation of the Taylor-Couette reactor and of the computational domain used in the simulations.
A value of 1.0 × 10^16 was used in the implementation, which corresponds to the definition of the GREAT constant in OpenFOAM.
Note that ζ k is here re-defined to start the numbering from zero rather than from one, for consistency with the implementation.
The number of invertible moments coincides with the number of strictly realizable moments, if this is even, and it is equal to N r -1, if the N r is odd.
Acknowledgments
The authors would like to gratefully acknowledge the support of the US National Science Foundation under the SI 2 -SSE award NSF-ACI 1440443 and the support of the French National Research Agency (ANR) under grant ANR-13-TDMO-02 ASMAPE for the ASMAPE project.
Appendix A. Inversion algorithm
We briefly summarize here the moment inversion algorithm, based on the work of [START_REF] Wheeler | Modified moments and Gaussian quadratures[END_REF], to perform the moment inversion using the same notation adopted in the implementation. We assume the number of invertible moments is N I . Let us denote with a i and b i , i ∈ {0, 1, . . . , N I /2} the coefficients of the recurrence relation that define the family of orthogonal polynomials with respect to a measure associated to the moments. We also introduce the matrix S ∈ R 2N I +1,2N I +1 , which is defined as follows in the code:
The matrix z ∈ R N I /2,N I /2 is then defined as
Quadrature weights and abscissae are then found by solving the eigenvalue problem associated with the matrix z, as in the Wheeler (1974) algorithm.
Table 1: Cases examined for the aggregation and breakage process.
| 45,710 | [ "12895" ] | [ "416", "208994", "302747", "302747", "302747" ] |
01362839 | en | [ "math" ] | 2024/03/04 23:41:48 | 2019 | https://hal.science/hal-01362839v2/file/gardes_stupfler_final_version_HAL.pdf | Laurent Gardes
Gilles Stupfler
An integrated functional Weissman estimator for conditional extreme quantiles
Keywords: AMS Subject Classifications: 62G05, 62G20, 62G30, 62G32 Heavy-tailed distribution, functional random covariate, extreme quantile, tail index, asymptotic normality
It is well-known that estimating extreme quantiles, namely, quantiles lying beyond the range of the available data, is a nontrivial problem that involves the analysis of tail behavior through the estimation of the extreme-value index. For heavy-tailed distributions, on which this paper focuses, the extreme-value index is often called the tail index and extreme quantile estimation typically involves an extrapolation procedure. Besides, in various applications, the random variable of interest can be linked to a random covariate. In such a situation, extreme quantiles and the tail index are functions of the covariate and are referred to as conditional extreme quantiles and the conditional tail index, respectively. The goal of this paper is to provide classes of estimators of these quantities when there is a functional (i.e. possibly infinite-dimensional) covariate. Our estimators are obtained by combining regression techniques with a generalization of a classical extrapolation formula. We analyze the asymptotic properties of these estimators, and we illustrate the finite-sample performance of our conditional extreme quantile estimator on a simulation study and on a real chemometric data set.
INTRODUCTION
Studying extreme events is relevant in numerous fields of statistical applications. In hydrology, for example, it is of interest to estimate the maximum level reached by seawater along a coast over a given period, or to study extreme rainfall at a given location; in actuarial science, a major problem for an insurance firm is to estimate the probability that a claim is filed which is so large that it threatens the firm's solvency.
When analyzing the extremes of a random variable, a central issue is that the straightforward empirical estimator of the quantile function is not consistent at extreme levels; in other words, direct estimation of a quantile exceeding the range covered by the available data is impossible, and this is of course an obstacle to meaningful estimation results in practice.
In many of the aforementioned applications, the problem can be accurately modeled using univariate heavy-tailed distributions, thus providing an extrapolation method to estimate extreme quantiles.
Roughly speaking, a distribution is said to be heavy-tailed if and only if its related survival function decays like a power function with negative exponent at infinity; its so-called tail index γ is then the parameter which controls its rate of convergence to 0 at infinity. If Q denotes the underlying quantile function, this translates into: Q(δ) ≈ [(1 -β)/(1 -δ)] γ Q(β) when β and δ are close to 1. The quantile function at an arbitrarily high extreme level can then be consistently deduced from its value at a typically much smaller level provided γ can be consistently estimated. This procedure, suggested by Weissman [START_REF] Weissman | Estimation of parameters and large quantiles based on the k largest observations[END_REF], is one of the simplest and most popular devices as far as extreme quantile estimation is concerned.
The estimation of the tail index γ, an excellent overview of which is given in the recent monographs by Beirlant et al. [START_REF] Beirlant | Statistics of extremes -Theory and applications[END_REF] and de Haan and Ferreira [START_REF] De Haan | Extreme value theory: An introduction[END_REF], is therefore a crucial step to gain understanding of the extremes of a random variable whose distribution is heavy-tailed. In practical applications, the variable of interest Y can often be linked to a covariate X. For instance, the value of rainfall at a given location depends on its geographical coordinates; in actuarial science, the claim size depends on the sum insured by the policy. In this situation, the tail index and quantiles of the random variable Y given X = x are functions of x to which we shall refer as the conditional tail index and conditional quantile functions.
Their estimation has been considered first in the "fixed design" case, namely when the covariates are nonrandom. Smith [START_REF] Smith | Extreme value analysis of environmental time series: an application to trend detection in ground-level ozone (with discussion)[END_REF] and Davison and Smith [START_REF] Davison | Models for exceedances over high thresholds[END_REF] considered a regression model while Hall and Tajvidi [START_REF] Hall | Nonparametric analysis of temporal trend when fitting parametric models to extreme-value data[END_REF] used a semi-parametric approach to estimate the conditional tail index. Fully nonparametric methods have been developed using splines (see Chavez-Demoulin and Davison [START_REF] Chavez-Demoulin | Generalized additive modelling of sample extremes[END_REF]), local polynomials (see Davison and Ramesh [11]), a moving window approach (see Gardes and Girard [START_REF] Gardes | A moving window approach for nonparametric estimation of the conditional tail index[END_REF]) and a nearest neighbor approach (see Gardes and Girard [START_REF] Gardes | Conditional extremes from heavy-tailed distributions: an application to the estimation of extreme rainfall return levels[END_REF]), among others.
Despite the great interest in practice, the study of the random covariate case has been initiated only recently. We refer to the works of Wang and Tsai [START_REF] Wang | Tail index regression[END_REF], based on a maximum likelihood approach, Daouia et al. [START_REF] Daouia | Kernel estimators of extreme level curves[END_REF] who used a fixed number of non parametric conditional quantile estimators to estimate the conditional tail index, later generalized in Daouia et al. [START_REF] Daouia | On kernel smoothing for extremal quantile regression[END_REF] to a regression context with conditional response distributions belonging to the general max-domain of attraction, Gardes and Girard [START_REF] Gardes | Functional kernel estimators of large conditional quantiles[END_REF] who introduced a local generalized Pickands-type estimator (see Pickands [START_REF] Pickands | Statistical inference using extreme order statistics[END_REF]), Goegebeur et al. [START_REF] Goegebeur | Nonparametric regression estimation of conditional tails -the random covariate case[END_REF], who studied a nonparametric regression estimator whose strong uniform properties are examined in Goege-beur et al. [START_REF] Goegebeur | Uniform asymptotic properties of a nonparametric regression estimator of conditional tails[END_REF]. Some generalizations of the popular moment estimator of Dekkers et al. [START_REF] Dekkers | A moment estimator for the index of an extreme-value distribution[END_REF] have been proposed by Gardes [START_REF] Gardes | A general estimator for the extreme value index: applications to conditional and heteroscedastic extremes[END_REF], Goegebeur et al. [START_REF] Goegebeur | A local moment type estimator for the extreme value index in regression with random covariates[END_REF][START_REF] Goegebeur | A local moment type estimator for an extreme quantile in regression with random covariates[END_REF] and Stupfler [START_REF] Stupfler | A moment estimator for the conditional extreme-value index[END_REF][START_REF] Stupfler | Estimating the conditional extreme-value index under random right-censoring[END_REF]. In an attempt to obtain an estimator behaving better in finite-sample situations, Gardes and Stupfler [START_REF] Gardes | Estimation of the conditional tail index using a smoothed local Hill estimator[END_REF] worked on a smoothed local Hill estimator (see Hill [START_REF] Hill | A simple general approach to inference about the tail of a distribution[END_REF]) related to the work of Resnick and Stȃricȃ [START_REF] Resnick | Smoothing the Hill estimator[END_REF]. A different approach, that has been successful in recent years, is to combine extreme value theory and quantile regression: the pioneering paper is Chernozhukov [START_REF] Chernozhukov | Extremal quantile regression[END_REF], and we also refer to the subsequent papers by Chernozhukov and Du [START_REF] Chernozhukov | Extremal quantiles and Value-at-Risk, The New Palgrave Dictionary of Economics[END_REF], Wang et al. [START_REF] Wang | Estimation of high conditional quantiles for heavy-tailed distributions[END_REF] and Wang and Li [START_REF] Wang | Estimation of extreme conditional quantiles through power transformation[END_REF].
The goal of this paper is to introduce integrated estimators of conditional extreme quantiles and of the conditional tail index for random, possibly infinite-dimensional, covariates. In particular, our estimator of the conditional tail index, based on the integration of a conditional log-quantile estimator, is somewhat related to the one of Gardes and Girard [START_REF] Gardes | A moving window approach for nonparametric estimation of the conditional tail index[END_REF]. Our aim is to examine the asymptotic properties of our estimators, as well as to examine the applicability of our conditional extreme quantile estimator on numerical examples and on real data. Our paper is organized as follows: we define our estimators in Section 2. Their asymptotic properties are stated in Section 3. A simulation study is provided in Section 4 and we revisit a set of real chemometric data in Section 5. All the auxiliary results and proofs are deferred to the Appendix.
FUNCTIONAL EXTREME QUANTILE: DEFINITION AND ESTIMATION
Let (X 1 , Y 1 ), . . . , (X n , Y n ) be n independent copies of a random pair (X, Y ) taking its values in E × R + where (E, d) is a (not necessarily finite-dimensional) Polish space endowed with a semi-metric d. For instance, E can be the standard p-dimensional space R p , a space of continuous functions over a compact metric space, or a Lebesgue space L p (R), to name a few. For y > 0, we denote by S(y|X) a regular version of the conditional probability P(Y > y|X). Note that since E is a Polish space, such conditional probabilities always exist, see Jiřina [START_REF] Jiřina | On regular conditional probabilities[END_REF].
In this paper, we focus on the situation where the conditional distribution of Y given X is heavy-tailed.
More precisely, we assume that there exists a positive function γ(·), called the conditional tail index, such that
$$\lim_{y\to\infty} \frac{S(\lambda y|x)}{S(y|x)} = \lambda^{-1/\gamma(x)}, \qquad (1)$$
for all x ∈ E and all λ > 0. This is the adaptation of the standard extreme-value framework of heavytailed distributions to the case when there is a covariate. The conditional quantile function of Y given
X = x is then defined for x ∈ E by Q(α|x) := inf {y > 0 | S(y|x) ≤ α}. If x ∈ E is fixed, our final aim is
to estimate the conditional extreme quantile Q(β n |x) of order β n → 0. As we will show below, this does in fact require estimating the conditional tail index γ(x) first.
Estimation of a functional extreme quantile
Recall that we are interested in the estimation of Q(β n |x) when β n → 0 as the sample size increases. The natural empirical estimator of this quantity is given by
$$\hat{Q}_n(\beta_n|x) := \inf\{y > 0 \mid \hat{S}_n(y|x) \le \beta_n\}, \qquad (2)$$
where
$$\hat{S}_n(y|x) = \frac{\sum_{i=1}^{n} \mathbb{I}\{Y_i > y\}\,\mathbb{I}\{d(x, X_i)\le h\}}{\sum_{i=1}^{n} \mathbb{I}\{d(x, X_i)\le h\}}$$
and where h = h(n) is a nonrandom sequence converging to 0 as n → ∞. Unfortunately, denoting by m x (h) := nP(d(x, X) ≤ h) the average number of observations whose covariates belong to the ball
B(x, h) = {x ∈ E | d(x, x
) ≤ h} with center x and radius h, it can be shown (see Proposition 1) that the
condition m x (h)β n → ∞ is required to obtain the consistency of Q n (β n |x).
This means that, at the same time, sufficiently many observations should belong to the ball B(x, h) and β_n should not be too small, so that the quantile Q(β_n|x) is covered by the range of this data; the order β_n of the functional extreme quantile therefore cannot be chosen as small as we would like. We thus need to propose another estimator adapted to this case. To this end, we start by remarking (see Bingham et al. [4, Theorem 1.5.12]) that (1) is equivalent to
$$\lim_{\alpha\to 0} \frac{Q(\lambda\alpha|x)}{Q(\alpha|x)} = \lambda^{-\gamma(x)}, \qquad (3)$$
for all λ > 0. Hence, for 0 < β < α with α small enough, we obtain the extrapolation formula $Q(\beta|x) \approx Q(\alpha|x)(\alpha/\beta)^{\gamma(x)}$, which is at the heart of Weissman's extrapolation method [START_REF] Weissman | Estimation of parameters and large quantiles based on the k largest observations[END_REF]. In order to borrow more strength from the available information in the sample, we note that, if µ is a probability measure on the interval [0, 1], another similar, heuristic approximation holds:
$$Q(\beta|x) \approx \int_{[0,1]} Q(\alpha|x)\left(\frac{\alpha}{\beta}\right)^{\gamma(x)} \mu(d\alpha).$$
If we have at our disposal a consistent estimator γ n (x) of γ(x) (an example of such an estimator is given in Section 2.2), an idea is to estimate Q(β n |x) by:
$$\hat{Q}_n(\beta_n|x) = \int_{[0,1]} \hat{Q}_n(\alpha|x)\left(\frac{\alpha}{\beta_n}\right)^{\hat\gamma_n(x)} \mu(d\alpha). \qquad (4)$$
In order to obtain a consistent estimator of the extreme conditional quantile, the support of the measure µ, denoted by supp(µ), should be located around 0. To be more specific, we assume in what follows that supp(µ) ⊂ [τ u, u] for some τ ∈ (0, 1] and u ∈ (0, 1) small enough. For instance, taking µ to be the Dirac
measure at u leads to $\hat{Q}_n(\beta_n|x) = \hat{Q}_n(u|x)\,(u/\beta_n)^{\hat\gamma_n(x)}$, which is a straightforward adaptation to our conditional setting of the classical Weissman estimator [START_REF] Weissman | Estimation of parameters and large quantiles based on the k largest observations[END_REF]. If on the contrary µ is absolutely continuous, estimator (4) is a properly integrated and weighted version of Weissman's estimator. Because it takes more of the available data into account, we can expect such an estimator to perform better than the simple adaptation of Weissman's estimator, a claim we investigate in our finite-sample study in Section 4.
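To make the construction concrete, here is a minimal Python sketch of estimator (4) when µ is uniform on [τu, u]. This is not the authors' code: the helper names are ours and the µ-integral is approximated by an average over an evenly spaced grid of α values; `y_local` collects the responses whose covariates fall in B(x, h), and `gamma_hat` is any consistent estimate of γ(x) such as the one sketched in Section 2.2.

```python
import numpy as np

def local_quantile(y_local, alpha):
    """Empirical conditional quantile (2): the (floor(alpha*M)+1)-th largest
    response among the M observations falling in the ball B(x, h)."""
    y_desc = np.sort(y_local)[::-1]
    m = len(y_desc)
    return y_desc[min(int(np.floor(alpha * m)), m - 1)]

def integrated_weissman(y_local, beta, u, gamma_hat, tau=0.9, n_grid=200):
    """Estimator (4) with mu uniform on [tau*u, u]; the probability integral
    is approximated by an average over a grid of alpha values."""
    alphas = np.linspace(tau * u, u, n_grid)
    q = np.array([local_quantile(y_local, a) for a in alphas])
    return float(np.mean(q * (alphas / beta) ** gamma_hat))
```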
Estimation of the functional tail index
To provide an estimator of the functional tail index γ(x), we note that equation (3) warrants the approximation γ(x) ≈ log[Q(α|x)/Q(u|x)] / log(u/α) for 0 < α < u when u is small enough. Let Ψ(·, u) be a measurable function defined on (0, u) such that $0 < \int_0^u \log(u/\alpha)\Psi(\alpha, u)\,d\alpha < \infty$. Multiplying the aforementioned approximation by Ψ(·, u), integrating over (0, u) and replacing Q(·|x) by the classical estimator $\hat{Q}_n(\cdot|x)$ defined in (2) leads to the estimator:
$$\hat\gamma_n(x, u) := \int_0^u \Psi(\alpha, u)\,\log\frac{\hat{Q}_n(\alpha|x)}{\hat{Q}_n(u|x)}\,d\alpha \Big/ \int_0^u \log(u/\alpha)\,\Psi(\alpha, u)\,d\alpha. \qquad (5)$$
Without loss of generality, we shall assume in what follows that $\int_0^u \log(u/\alpha)\,\Psi(\alpha, u)\,d\alpha = 1$.
Particular choices of the function Ψ(·, u) actually yield generalizations of some well-known tail index estimators to the conditional framework. Let $k_x := \lfloor u M_x(h)\rfloor$, where M_x(h) is the total number of covariates whose distance to x is not greater than h:
$$M_x(h) = \sum_{i=1}^{n} \mathbb{I}\{d(x, X_i) \le h\}.$$
The choice Ψ(•, u) = 1/u leads to the estimator:
$$\hat\gamma^{H}_n(x) = \frac{1}{k_x}\sum_{i=1}^{k_x} \log\frac{\hat{Q}_n((i-1)/M_x(h)\,|\,x)}{\hat{Q}_n(k_x/M_x(h)\,|\,x)}, \qquad (6)$$
which is the straightforward conditional adaptation of the classical Hill estimator (see Hill [START_REF] Hill | A simple general approach to inference about the tail of a distribution[END_REF]). Now, taking $\Psi(\cdot, u) = u^{-1}(\log(u/\cdot) - 1)$ leads, after some algebra, to the estimator:
$$\hat\gamma^{Z}_n(x) = \frac{1}{k_x}\sum_{i=1}^{k_x} i\,\log\left(\frac{k_x}{i}\right)\log\frac{\hat{Q}_n((i-1)/M_x(h)\,|\,x)}{\hat{Q}_n(i/M_x(h)\,|\,x)}.$$
This estimator can be seen as a generalization of the Zipf estimator (see Kratz and Resnick [31], Schultze and Steinebach [START_REF] Schultze | On least squares estimates of an exponential tail coefficient[END_REF]).
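For the Hill choice Ψ(·, u) = 1/u, estimator (6) boils down to a classical Hill estimator computed on the responses whose covariates fall in B(x, h). A short sketch (names ours, reusing numpy as above):

```python
import numpy as np

def conditional_hill(y_local, u):
    """Hill-type estimate (6) of gamma(x) from the k_x = floor(u*M) largest
    responses in the ball B(x, h)."""
    y_desc = np.sort(y_local)[::-1]
    k = int(np.floor(u * len(y_desc)))
    if k < 1:
        raise ValueError("u is too small for the local sample size")
    return float(np.mean(np.log(y_desc[:k]) - np.log(y_desc[k])))
```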
MAIN RESULTS
Our aim is now to establish asymptotic results for our estimators. We assume in all what follows that Q(•|x) is continuous and decreasing. Particular consequences of this condition include that S(Q(α|x)|x) = α for any α ∈ (0, 1) and that given X = x, Y has an absolutely continuous distribution with probability density function f (•|x).
Recall that under (1), or equivalently (3), the conditional quantile function may be written for all t > 1 as follows:
$$Q(t^{-1}|x) = c(t|x)\exp\left(\int_1^t \frac{\gamma(x) + \Delta(v|x)}{v}\,dv\right),$$
where c(·|x) is a positive function converging to a finite positive limit and Δ(·|x) is a measurable function converging to 0 at infinity. The second-order condition (H_SO) further requires that |Δ(·|x)| be regularly varying at infinity with index ρ(x) < 0, that is, for all λ > 0,
$$\lim_{y\to\infty} \frac{\Delta(\lambda y|x)}{\Delta(y|x)} = \lambda^{\rho(x)}.$$
The constant ρ(x) is called the conditional second-order parameter of the distribution. These conditions on the function Δ(·|x) are commonly used when studying tail index estimators and make it possible to control the error term in convergence (3). In particular, it is straightforward to see that for all z > 0,
$$\lim_{t\to\infty} \frac{1}{\Delta(t|x)}\left(\frac{Q((tz)^{-1}|x)}{Q(t^{-1}|x)} - z^{\gamma(x)}\right) = z^{\gamma(x)}\,\frac{z^{\rho(x)} - 1}{\rho(x)}, \qquad (7)$$
which is the conditional analogue of the second-order condition of de Haan and Ferreira [START_REF] De Haan | Extreme value theory: An introduction[END_REF] for heavytailed distributions, see Theorem 2.3.9 therein.
Finally, for 0 < α 1 < α 2 < 1, we introduce the quantity:
$$\omega(\alpha_1, \alpha_2, x, h) = \sup_{\alpha\in[\alpha_1,\alpha_2]}\ \sup_{x'\in B(x,h)} \left|\log\frac{Q(\alpha|x')}{Q(\alpha|x)}\right|,$$
which is the uniform oscillation of the log-quantile function in its second argument. Such a quantity is also studied in Gardes and Stupfler [START_REF] Gardes | Estimation of the conditional tail index using a smoothed local Hill estimator[END_REF], for instance. It acts as a measure of how close conditional distributions are for two neighboring values of the covariate.
These elements make it possible to state an asymptotic result for our conditional extreme quantile estimator:
Theorem 1. Assume that conditions (3) and (H_SO) are satisfied and let u_{n,x} ∈ (0, 1) be a sequence converging to 0 and such that supp(µ) ⊂ [τ u_{n,x}, u_{n,x}] with τ ∈ (0, 1]. Assume also that m_x(h) → ∞ and that there exists a(x) ∈ (0, 1) such that
$$c_1 \le \liminf_{n\to\infty} u_{n,x}[m_x(h)]^{a(x)} \le \limsup_{n\to\infty} u_{n,x}[m_x(h)]^{a(x)} \le c_2 \qquad (8)$$
for some constants 0 < c_1 ≤ c_2, that $z^{1-a(x)}\Delta^2(z^{a(x)}|x) \to \lambda(x) \in \mathbb{R}$ as z → ∞, and that
$$[m_x(h)]^{1-a(x)}\,\omega^2\big([m_x(h)]^{-1-\delta},\, 1 - [m_x(h)]^{-1-\delta},\, x, h\big) \to 0 \qquad (9)$$
for some δ > 0. If moreover $[m_x(h)]^{(1-a(x))/2}(\hat\gamma_n(x) - \gamma(x)) \stackrel{d}{\longrightarrow} \Gamma$ with Γ a non-degenerate distribution, then, provided $\beta_n[m_x(h)]^{a(x)} \to 0$ and $[m_x(h)]^{a(x)-1}\log^2([m_x(h)]^{-a(x)}/\beta_n) \to 0$, it holds that
$$\frac{[m_x(h)]^{(1-a(x))/2}}{\log([m_x(h)]^{-a(x)}/\beta_n)}\left(\frac{\hat{Q}_n(\beta_n|x)}{Q(\beta_n|x)} - 1\right) \stackrel{d}{\longrightarrow} \Gamma.$$
Note that the rate $[m_x(h)]^{1-a(x)} \to \infty$ depends on the average number of available data points that can be used to compute the estimator. More precisely, under condition (8), this quantity is essentially proportional to u_{n,x} m_x(h), which is the average number of data points actually used in the estimation. In particular, the conditions in Theorem 1 are analogues of the classical hypotheses in the estimation of an extreme quantile. Besides, condition (9) ensures that the distribution of Y given X = x is close enough to that of Y given X = x′ when x′ is in a sufficiently small neighborhood of x. Finally, taking µ to be the Dirac measure at u_{n,x} makes it possible to obtain the asymptotic properties of the functional adaptation of the standard Weissman extreme quantile estimator. In particular, as in the unconditional univariate case, the asymptotic distribution of the conditional extrapolated estimator depends crucially on the asymptotic properties of the conditional tail index estimator used.
We proceed by stating the asymptotic normality of the estimator $\hat\gamma_n(x, u)$ defined in (5). To this end, an additional hypothesis on the weighting function Ψ(·, u) is required.
(H Ψ ) The function Ψ(•, u) satisfies for all u ∈ (0, 1] and β ∈ (0, u]:
$$\frac{u}{\beta}\int_0^{\beta} \Psi(\alpha, u)\,d\alpha = \Phi(\beta/u) \quad\text{and}\quad \sup_{0<\upsilon\le 1/2}\int_0^{\upsilon}|\Psi(\alpha, \upsilon)|\,d\alpha < \infty,$$
where Φ is a nonincreasing probability density function on (0, 1) such that $\Phi^{2+\kappa}$ is integrable for some κ > 0. In addition, there exists a positive continuous function g defined on (0, 1) such that for any k > 1 and i ∈ {1, 2, . . . , k},
|iΦ (i/k) -(i -1)Φ ((i -1)/k)| ≤ g (i/(k + 1)) , (10)
and the function g(•) max(log(1/•), 1) is integrable on (0, 1).
Note that for all t ∈ (0, 1), $0 \le t\Phi(t) \le \int_0^{t/2} |\Psi(\alpha, 1/2)|\,d\alpha$. Since the right-hand side converges to 0 as t ↓ 0, we may extend the definition of the map t ↦ tΦ(t) by setting it to 0 at t = 0. Hence, inequality (10) is meaningful even when i = 1.
Condition (H Ψ ) on the weighting function Ψ(•, u) is similar in spirit to a condition introduced in Beirlant et al. [START_REF] Beirlant | On exponential representations of logspacings of extreme order statistics[END_REF]. This condition is satisfied for instance by the functions Ψ(•, u) = u -1 and Ψ(•, u) = u -1 (log(u/•) -1) with g(•) = 1 for the first one and, for the second one, g(•) = 1 -log(•). In particular, our results shall then hold for the adaptations of the Hill and Zipf estimators mentioned at the end of Section 2.2.
The asymptotic normality of our family of estimators of γ(x) is established in the following theorem.
Theorem 2. Assume that conditions (3), (H_SO) and (H_Ψ) are satisfied, that m_x(h) → ∞ and u = u_{n,x} → 0. Assume that there exists a(x) ∈ (0, 1) such that $z^{1-a(x)}\Delta^2(z^{a(x)}|x) \to \lambda(x) \in \mathbb{R}$ as z → ∞, that condition (9) holds, and that there are two ultimately decreasing functions $\varphi_1 \le \varphi_2$ such that $z^{1-a(x)}\varphi_2^2(z) \to 0$ as z → ∞ and $\varphi_1(m_x(h)) \le u_{n,x}[m_x(h)]^{a(x)} - 1 \le \varphi_2(m_x(h))$. Then
$$[m_x(h)]^{(1-a(x))/2}\big(\hat\gamma_n(x, u_{n,x}) - \gamma(x)\big) \stackrel{d}{\longrightarrow} \mathcal{N}\left(\lambda(x)\int_0^1 \Phi(\alpha)\,\alpha^{-\rho(x)}\,d\alpha,\ \gamma^2(x)\int_0^1 \Phi^2(\alpha)\,d\alpha\right).$$
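As a quick illustration of the limiting law (this computation is ours, not taken from the text): the choice $\Psi(\cdot, u) = u^{-1}$ gives $\Phi \equiv 1$, so that the limit is $\mathcal{N}\big(\lambda(x)/(1-\rho(x)),\ \gamma^2(x)\big)$, exactly as for the classical Hill estimator; the Zipf weight $\Psi(\cdot, u) = u^{-1}(\log(u/\cdot) - 1)$ gives $\Phi(t) = \log(1/t)$, so that $\int_0^1 \Phi^2(\alpha)\,d\alpha = 2$ and the limit becomes $\mathcal{N}\big(\lambda(x)/(1-\rho(x))^2,\ 2\gamma^2(x)\big)$.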
Our asymptotic normality result thus holds under generalizations of the common hypotheses on the standard univariate model, provided the conditional distributions of Y at two neighboring points are sufficiently close. We close this section by pointing out that our main results are also similar in spirit to results obtained in the literature for other conditional tail index or conditional extreme-value index estimators, see e.g. Gardes and Stupfler [START_REF] Gardes | Estimation of the conditional tail index using a smoothed local Hill estimator[END_REF] and Stupfler [START_REF] Stupfler | A moment estimator for the conditional extreme-value index[END_REF][START_REF] Stupfler | Estimating the conditional extreme-value index under random right-censoring[END_REF].
SIMULATION STUDY
Hyperparameters selection
The aim of this paragraph is to propose a selection procedure of the hyperparameters involved in the estimator Q n (β n |x) of the extreme conditional quantile and in the estimator γ n (x, u) of the functional tail index. Assuming that the measure µ used in ( 4) is such that supp(µ) ⊂ [τ u, u] for some τ ∈ (0, 1) fixed by the user (a discussion of the performance of the estimator as a function of τ is included in Section 4.2 below), these hyperparameters are: the bandwidth h controlling the smoothness of the estimators and the value u ∈ (0, 1) which selects the part of the tail distribution considered in the estimation procedure. The criterion used in our selection procedure is based on the following remark: for any positive and integrable
weight function W : [0, 1] → [0, ∞),
$$E_W := \mathbb{E}\left[\int_0^1 W(\alpha)\big(\mathbb{I}\{Y > Q(\alpha|X)\} - \alpha\big)^2\,d\alpha\right] = \int_0^1 W(\alpha)\,\alpha(1-\alpha)\,d\alpha.$$
The sample analogue of E W is given by
$$\frac{1}{n}\sum_{i=1}^{n}\int_0^1 W(\alpha)\big(\mathbb{I}\{Y_i > Q(\alpha|X_i)\} - \alpha\big)^2\,d\alpha,$$
and for a good choice of h and u, this quantity should of course be close to the known quantity E_W. Let then $W^{(1)}_n$ and $W^{(2)}_n$ be two positive and integrable weight functions. Replacing the unobserved variable Q(α|X_i) by the statistic $\tilde{Q}_{n,i}(\alpha|X_i)$, which is the estimator (2) computed without the observation (X_i, Y_i), leads to the following estimator of $E_{W^{(1)}_n}$:
$$\hat{E}^{(1)}_{W^{(1)}_n}(h) := \frac{1}{n}\sum_{i=1}^{n}\int_0^1 W^{(1)}_n(\alpha)\big(\mathbb{I}\{Y_i > \tilde{Q}_{n,i}(\alpha|X_i)\} - \alpha\big)^2\,d\alpha.$$
Note that $\hat{E}^{(1)}_{W^{(1)}_n}(h)$ only depends on the hyperparameter h. In the same way, one can also replace Q(α|X_i) by the statistic $\hat{Q}_{n,i}(\alpha|X_i)$, which is the estimator (4) computed without the observation (X_i, Y_i). An estimator of $E_{W^{(2)}_n}$ is then given by:
$$\hat{E}^{(2)}_{W^{(2)}_n}(u, h) := \frac{1}{n}\sum_{i=1}^{n}\int_0^1 W^{(2)}_n(\alpha)\big(\mathbb{I}\{Y_i > \hat{Q}_{n,i}(\alpha|X_i)\} - \alpha\big)^2\,d\alpha.$$
Obviously, this last quantity depends both on u and h. We propose the following two-stage procedure to choose the hyperparameters u and h. First, we compute our selected bandwidth h_opt by minimizing with respect to h the function
$$CV^{(1)}(h) := \left(\hat{E}^{(1)}_{W^{(1)}_n}(h) - \int_0^1 W^{(1)}_n(\alpha)\,\alpha(1-\alpha)\,d\alpha\right)^2.$$
Next, our selected sample fraction u_opt is obtained by minimizing with respect to u the function CV^{(2)}(u, h_opt), where
$$CV^{(2)}(u, h) := \left(\hat{E}^{(2)}_{W^{(2)}_n}(u, h) - \int_0^1 W^{(2)}_n(\alpha)\,\alpha(1-\alpha)\,d\alpha\right)^2.$$
Note that the functions CV (1) and CV (2) can be seen as adaptations to the problem of conditional extreme quantile estimation of the cross-validation function introduced in Li et al. [START_REF] Li | Optimal bandwidth selection for nonparametric conditional distribution and quantile functions[END_REF].
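A possible implementation of the first stage, reusing `local_quantile` from the sketch in Section 2 (here `d` is the semi-metric, `w` a vectorized version of W^{(1)}_n and `alphas` a grid covering its support; all names are ours and the integrals are crude Riemann approximations):

```python
import numpy as np

def cv1(h, X, Y, d, alphas, w):
    """Cross-validation criterion CV^(1)(h), built on the leave-one-out
    version of the local empirical quantile (2)."""
    n = len(Y)
    width = alphas[-1] - alphas[0]
    emp = 0.0
    for i in range(n):
        idx = [j for j in range(n) if j != i and d(X[i], X[j]) <= h]
        if not idx:
            return np.inf                     # empty neighbourhood: reject this h
        q = np.array([local_quantile(Y[idx], a) for a in alphas])
        emp += np.mean(w(alphas) * ((Y[i] > q).astype(float) - alphas) ** 2) * width
    emp /= n
    target = np.mean(w(alphas) * alphas * (1 - alphas)) * width
    return (emp - target) ** 2

# h_opt = min(h_grid, key=lambda h: cv1(h, X, Y, d, alphas, w))
# CV^(2)(u, h_opt) is built in the same way, with the leave-one-out integrated
# Weissman quantile in place of the local empirical quantile.
```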
Results
The behavior of the extreme conditional quantile estimator (4), when the estimator (5) of the functional tail index is used together with our selection procedure of the hyperparameters, is tested on some random
pairs (X, Y ) ∈ C 1 [-1, 1] × (0, ∞), where C 1 [-1, 1]
is the space of continuously differentiable functions on [-1, 1]. We generate n = 1000 independent copies (X 1 , Y 1 ), . . . , (X n , Y n ) of (X, Y ) where X is the random curve defined for all t ∈ [-1, 1] by X(t) := sin[2πtU ] + (V + 2π)t + W , where U , V and W are independent random variables drawn from a standard uniform distribution. Note that this random covariate was used for instance in Ferraty et al. [START_REF] Ferraty | Nonparametric regression on functional data: inference and practical aspects[END_REF]. Regarding the conditional distribution of Y given
X = x, x ∈ C 1 [-1, 1]
, two distributions are considered. The first one is the Fréchet distribution, for which the conditional quantile is given for all α ∈ (0, 1) by $Q(\alpha|x) = [-\log(1-\alpha)]^{-\gamma(x)}$. The second one is the Burr distribution with parameter r > 0, for which $Q(\alpha|x) = (\alpha^{-r\gamma(x)} - 1)^{1/r}$. For these distributions, letting x′ denote the first derivative of x and
$$z(x) = \frac{2}{3}\int_{-1}^{1} x'(t)\,[1 - \cos(\pi t)]\,dt - \frac{23}{2},$$
the functional tail index is given by
$$\gamma(x) = \exp\left(-\frac{\log 3}{9}\,z^2(x)\right)\mathbb{I}\{|z(x)| < 3\} + \frac{1}{3}\,\mathbb{I}\{|z(x)| \ge 3\}.$$
In this setup, it is straightforward to show that z(x) ∈ [−3.14, 3.07] approximately. The space C^1[−1, 1] is endowed with the semi-metric d given for all x_1, x_2 by
$$d(x_1, x_2) = \left(\int_{-1}^{1}\big(x_1'(t) - x_2'(t)\big)^2\,dt\right)^{1/2},$$
i.e. the L²-distance between first derivatives. To compute $\hat\gamma_n(x, u)$, we use the weight function $\Psi(\cdot, u) = u^{-1}(\log(u/\cdot) - 1)$, and the measure µ used in the integrated conditional quantile estimator is assumed to be absolutely continuous with respect to the Lebesgue measure, with density
$$p_{\tau,u}(\alpha) = \frac{1}{u(1-\tau)}\,\mathbb{I}\{\alpha\in[\tau u, u]\}.$$
In what follows, this estimator is referred to as the Integrated Weissman Estimator (IWE). Other absolutely continuous measures µ, with different densities with respect to the Lebesgue measure, have been tested, with different values of τ . It appears that the impact of the choice of the parameter τ is more important than the one of the measure µ. We thus decided to present in this simulation study the results
for the aforementioned value of the measure µ only, but with different tested values for τ .
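The simulation design can be reproduced along the following lines (a sketch with our own discretization choices; the integrals are simple Riemann sums on a grid of 201 time points):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-1.0, 1.0, 201)

def sample_curve():
    U, V, W = rng.uniform(size=3)
    return np.sin(2 * np.pi * t * U) + (V + 2 * np.pi) * t + W

def z_of(x):
    xp = np.gradient(x, t)                                    # first derivative
    return 2 / 3 * np.sum(xp * (1 - np.cos(np.pi * t))) * (t[1] - t[0]) - 23 / 2

def gamma_of(x):
    z = z_of(x)
    return np.exp(-np.log(3) / 9 * z ** 2) if abs(z) < 3 else 1 / 3

def semi_metric(x1, x2):
    diff = np.gradient(x1, t) - np.gradient(x2, t)            # L2 distance of derivatives
    return np.sqrt(np.sum(diff ** 2) * (t[1] - t[0]))

def sample_response(x, law="frechet", r=2.0):
    a = rng.uniform()                                         # Y = Q(a|x), a ~ U(0,1)
    g = gamma_of(x)
    if law == "frechet":
        return (-np.log(1 - a)) ** (-g)
    return (a ** (-r * g) - 1) ** (1 / r)                     # Burr(r)
```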
The hyperparameters are selected using the procedure described in Section 4.1. Since we are interested in the tail of the conditional distribution, the supports of the weight functions $W^{(1)}_n$ and $W^{(2)}_n$ should be located around 0. More specifically, for i ∈ {1, 2}, we take
$$W^{(i)}_n(\alpha) := \log\left(\frac{\alpha}{\beta^{(i)}_{n,1}}\right)\mathbb{I}\{\alpha\in[\beta^{(i)}_{n,1}, \beta^{(i)}_{n,2}]\},$$
where $\beta^{(1)}_{n,1} = 2\sqrt{n}\,\log n/n$, $\beta^{(1)}_{n,2} = 3\sqrt{n}\,\log n/n$, $\beta^{(2)}_{n,1} = 5\log n/n$ and $\beta^{(2)}_{n,2} = 10\log n/n$. The cross-validation function CV^{(1)}(h) is minimized over a grid H of 20 points evenly spaced between 1/2 and 10 to obtain the optimal value h_opt, while the value u_opt is obtained by minimizing the function CV^{(2)}(u, h_opt) over a grid U of 26 points evenly spaced between 0.005 and 0.255.
For the Fréchet distribution and two Burr distributions (one with r = 2 and one with r = 1/20), the conditional extreme quantile estimator (4) is computed with the values u opt and h opt obtained by our selection procedure. The quality of the estimator is measured by the Integrated Squared Error given by:
$$ISE := \frac{1}{n}\sum_{i=1}^{n}\int_{\beta^{(2)}_{n,1}}^{\beta^{(2)}_{n,2}} \log^2\left(\frac{\hat{Q}_{n,i}(\alpha|X_i)}{Q(\alpha|X_i)}\right)d\alpha.$$
This procedure is repeated N = 100 times. To give a graphical idea of the behavior of our estimator (4), we first depict, in Figure 1, boxplots of the N obtained replications of this estimator, computed with τ = 9/10, for the Fréchet distribution and for some values of the quantile order β n and of the covariate
x. More precisely, we take here
β n ∈ {β (2) n,1 , (β (2)
n,1 + β (2) n,2 )/2, β (2)
n,2 } and three values of the covariate are considered: x = x 1 with z(x 1 ) = -2 (and then γ(x 1 ) ≈ 0.64), x = x 2 with z(x 2 ) = 0 (giving γ(x 2 ) = 1) and x = x 3 with z(x 3 ) = 2 (which entails γ(x 3 ) ≈ 0.64). As expected, the quality of the estimation is strongly impacted by the quantile order β n but also by the actual position of the covariate and, of course, by the value of the true conditional tail index γ(x).
Next, the median and the first and third quartiles of the N values of the Integrated Squared Error are gathered in Table 1. The proposed estimator is compared to the adaptation of the Weissman estimator obtained by taking for the measure µ in (4) the Dirac measure at u. This estimator is referred to as the Weissman Estimator (WE) in Table 1. In the WE estimator, the functional tail index γ(x) is estimated either by (6) or by the generalized Hill-type estimator of Gardes and Girard [START_REF] Gardes | Functional kernel estimators of large conditional quantiles[END_REF]: for J ≥ 2, this estimator is given by
$$\hat\gamma^{GG}(x, u) = \frac{\sum_{j=1}^{J}\big(\log \hat{Q}_n(u/j^2|x) - \log \hat{Q}_n(u|x)\big)}{\sum_{j=1}^{J}\log(j^2)}.$$
Following their advice, we set J = 10. Again, the median and the first and third quartiles of the N values of the Integrated Squared Error of these two estimators are given in Table 1. It appears that the IWEs outperform the two WEs in the case of the Fréchet and Burr (with r = 1/20) distributions. It also seems that the choice of τ has some influence on the quality of the estimator but, unfortunately, an optimal choice of τ apparently depends on the unknown underlying distribution. It is interesting though to note that the optimal IWE estimator among the three tested here always enjoys a smaller variability than the WE estimator: for instance, in the case of the Burr distribution with r = 2, even though the IWE with τ = 9/10 does not outperform the WE (with γ(x) estimated by (6)) in terms of median ISE, the interquartile range of the ISE is 27.7% lower for the IWE compared to what it is for the WE. Finally, as expected, the value of ρ(x) has a strong impact on the estimation procedure: a value of ρ(x) close to 0 leads to large values of the Integrated Squared Error.
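For completeness, the competitor $\hat\gamma^{GG}$ can be sketched with the same local-quantile helper as before (J = 10 as advised; names ours):

```python
import numpy as np

def gamma_gg(y_local, u, J=10):
    """Hill-type estimator of Gardes and Girard, built on local_quantile."""
    logs = [np.log(local_quantile(y_local, u / j ** 2))
            - np.log(local_quantile(y_local, u)) for j in range(1, J + 1)]
    return sum(logs) / sum(np.log(j ** 2) for j in range(1, J + 1))
```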
REAL DATA EXAMPLE
In this section, we showcase our extreme quantile Integrated Weissman Estimator on functional chemometric data. This data, obtained by considering n = 215 pieces of finely chopped meat, consists of pairs of observations (x n , z n ), where x i is the absorbance curve of the ith piece of meat, obtained at 100 regularly spaced wavelengths between 850 and 1050 nanometers (this is also called the spectrometric curve), and z i is the percentage of fat content in this piece of meat. The data, openly available at http://lib.stat.cmu.edu/datasets/tecator, is for instance considered in Ferraty and Vieu [START_REF] Ferraty | The functional nonparametric model and application to spectrometric data[END_REF][START_REF] Ferraty | Nonparametric Functional Data Analysis: Theory and Practice[END_REF].
Figure 2 is a graph of all 215 absorbance curves.
Because the percentage of fat content z i obviously belongs to [0, 100], it has a finite-right endpoint and therefore cannot be conditionally heavy-tailed as required by model (1). We thus consider the "inverse fat content" y i = 100/z i in this analysis. The top left panel of Figure 3 shows the Hill plot of the sample (y 1 , . . . , y n ) without integrating covariate information. It can be seen in this figure that the Hill plot seems to be stabilizing near the value 0.4 for a sizeable portion of the left of the graph, thus indicating the plausible presence of a heavy right tail in the data (y 1 , . . . , y n ), see for instance Theorem 3.2.4 in de Haan and Ferreira [START_REF] De Haan | Extreme value theory: An introduction[END_REF]. The other panels in Figure 3 show exponential QQ-plots for the log-data points whose covariates lie in a fixed-size neighborhood of certain pre-specified points in the covariate space.
It is seen in these subfigures that these plots are indeed roughly linear towards their right ends, which supports our conditional heavy tails assumption.
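Such diagnostic plots are easy to reproduce; a sketch of the Hill-plot values used in the top-left panel, assuming the inverse fat contents are stored in a numpy array `y`:

```python
import numpy as np

def hill_plot_values(y, k_max=150):
    """gamma_hat(k) for k = 1, ..., k_max largest observations."""
    y_desc = np.sort(y)[::-1]
    return [float(np.mean(np.log(y_desc[:k]) - np.log(y_desc[k])))
            for k in range(1, min(k_max, len(y_desc) - 1) + 1)]
```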
On these grounds, we therefore would like to analyze the influence of the covariate information, which is the absorbance curve, upon the inverse fat content. While of course the absorbance curves obtained are in reality made of discrete data because of the discretization of this curve, the precision of this discretization arguably makes it possible to consider our data as in fact functional. This, in our opinion, fully warrants the use of our estimator in this case.
Because the covariate space is functional, one has to wonder about how to measure the influence of the covariate and then about how to represent the results. A nice account of the problem of how to represent results when considering functional data is given in Ferraty and Vieu [START_REF] Ferraty | Nonparametric Functional Data Analysis: Theory and Practice[END_REF]. Here, we look at the variation of extreme quantile estimates in two different directions of the covariate space. To this end, we consider the semi-metric
$$d(x_1, x_2) = \left(\int_{850}^{1050}\big(x_1''(t) - x_2''(t)\big)^2\,dt\right)^{1/2}$$
, also advised by Ferraty and Vieu [START_REF] Ferraty | The functional nonparametric model and application to spectrometric data[END_REF], and we compute:
• a typical pair of covariates, i.e. a pair $(x_1^{med}, x_2^{med})$ such that $d(x_1^{med}, x_2^{med}) = \mathrm{median}\{d(x_i, x_j),\ 1 \le i, j \le n,\ i \ne j\}$;
• a pair of covariates farthest from each other, i.e. a pair $(x_1^{max}, x_2^{max})$ such that $d(x_1^{max}, x_2^{max}) = \max\{d(x_i, x_j),\ 1 \le i, j \le n,\ i \ne j\}$.
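In practice both pairs can be found by brute force over all pairwise distances; a small sketch, assuming the curves are stored in a list `x_curves` and `semi_metric` implements the L²-distance between second derivatives:

```python
from itertools import combinations
import numpy as np

dists = {(i, j): semi_metric(x_curves[i], x_curves[j])
         for i, j in combinations(range(len(x_curves)), 2)}
med = np.median(list(dists.values()))
pair_med = min(dists, key=lambda p: abs(dists[p] - med))   # a "typical" pair
pair_max = max(dists, key=dists.get)                        # farthest pair
x_bar = np.mean(np.stack(x_curves), axis=0)                 # average covariate
```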
For the purpose of comparison, we also compute the "average covariate" $\bar{x} = n^{-1}\sum_{i=1}^{n} x_i$. In particular, we represent on Figure 4 our two pairs of covariates together with the average covariate, the same scale being used on the y-axis in both figures. Recall that since the semi-metric d is the L²-distance between second-order derivatives, it acts as a measure of how much the shapes of two covariate curves are different, rather than measuring how far apart they are.
We compute our conditional extreme quantile estimator at the levels 5/n and 1/n, using the methodology given in Section 4.2. In particular, the selection parameters $\beta^{(1)}_{n,1}$, $\beta^{(1)}_{n,2}$, $\beta^{(2)}_{n,1}$ and $\beta^{(2)}_{n,2}$ used in the cross-validation methodology were the exact same ones used in the simulation study, namely 0.437, 0.655, 0.035 and 0.069, respectively. The bandwidth h is selected in the interval [0.00316, 0.0116], the lower bound of this interval corresponding to the median of all distances d(x_i, x_j) (i ≠ j) and the upper bound corresponding to 90% of the maximum of all distances d(x_i, x_j), for a final selected value of 0.00717. The value of the parameter u is selected exactly as in the simulation study, and the selection procedure gives the value 0.185. Finally, we set τ = 0.9 in our Integrated Weissman Estimator.
Results are given in Figure 5; namely, we compute the extreme quantile estimates $\hat{Q}_n(\beta|x)$, for β ∈ {5/n, 1/n}, and x belonging either to the line $[x_1^{med}, x_2^{med}]$ joining the typical pair of covariates or to the line $[x_1^{max}, x_2^{max}]$ joining the pair of covariates farthest from each other. While roughly stable over 60% of the line $[x_1^{max}, x_2^{max}]$ and approximately equal to the value of the estimated quantiles at the average covariate, the estimates very sharply drop afterwards, the reduction factor being close to 10 from the beginning of the line to its end in the case β = 5/n. This conclusion suggests that while in typical directions of the covariate space the tail behavior of the fat content is very stable, there may be certain directions in which this is not the case. In particular, there appear to be certain values of the covariate for which thresholds for the detection of unusual levels of fat should differ from those of more standard cases.
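Combining the earlier sketches, estimates along such a segment could be computed as follows; h_opt, u_opt, the endpoints x1_max and x2_max and the semi-metric d are assumed to be available, and all names are ours:

```python
import numpy as np

def extreme_quantile_at(x, X, Y, h, u, beta, d, tau=0.9):
    """Local sample, Hill-type gamma estimate, then the integrated Weissman (4)."""
    y_loc = Y[[i for i in range(len(Y)) if d(x, X[i]) <= h]]
    g = conditional_hill(y_loc, u)
    return integrated_weissman(y_loc, beta, u, g, tau=tau)

ts = np.linspace(0.0, 1.0, 21)
line_max = [s * x1_max + (1 - s) * x2_max for s in ts]
est_5n = [extreme_quantile_at(x, X, Y, h_opt, u_opt, 5 / len(Y), d) for x in line_max]
```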
PROOFS OF THE MAIN RESULTS
Before proving the main results, we recall two useful facts. The first one is a classical equivalent of
M x (h) := n i=1 I{d(X i , x) ≤ h}. If m x (h) → ∞ as n → ∞ then, for any δ ∈ (0, 1): [m x (h)] (1-δ)/2 M x (h) m x (h) -1 P -→ 0 as n → ∞, (11)
see Lemma 1 in Stupfler [START_REF] Stupfler | A moment estimator for the conditional extreme-value index[END_REF]. For the second one, let {Y * i , i = 1, . . . , M x (h)} be the response variables whose associated covariates {X * i , i = 1, . . . , M x (h)} are such that d(X * i , x) ≤ h. Lemma 4 in Gardes and Stupfler [START_REF] Gardes | Estimation of the conditional tail index using a smoothed local Hill estimator[END_REF] shows that the random variables
V i = 1 -F (Y * i |X * i ) are such that, for all u 1 , . . . , u p ∈ [0, 1], P p i=1 {V i ≤ u i }|M x (h) = p = u 1 . . . u p , (12)
i.e. they are independent standard uniform random variables given M x (h).
Proof of Theorem 1
The following proposition is a uniform consistency result for the estimator Q n (β n |x) when β n goes to 0 at a moderate rate.
Proposition 1. Assume that conditions (3), (H_SO), (8) and (9) are satisfied. If m_x(h) → ∞, then
$$\sup_{\alpha\in[\tau u_{n,x},\,u_{n,x}]}\left|\frac{\hat{Q}_n(\alpha|x)}{Q(\alpha|x)} - 1\right| = O_P\big([m_x(h)]^{(a(x)-1)/2}\big).$$
Proof. Let M n := M x (h), {U i , i ≥ 1} be independent standard uniform random variables,
V i := S(Y * i |X * i ) and Z n (x) := sup α∈[τ un,x,un,x] Q n (α|x) Q(α|x) -1 .
We start with the following inequality:
Z n (x) ≤ T n (x) + R (Q) n (x), with T n (x) := sup α∈[τ un,x,un,x] Q(V αMn +1,Mn |x) Q(α|x) -1 (13)
and
R (Q) n (x) := sup α∈[τ un,x,un,x] Q n (α|x) -Q(V αMn +1,Mn |x) Q(α|x) . (14)
Let us first focus on the term T n (x). For any t > 0,
P(v n,x T n (x) > t) = n j=0 P(v n,x T n (x) > t|M n = j)P(M n = j),
where v n,x := [m x (h)] (1-a(x))/2 . From [START_REF] Davison | Local likelihood smoothing of sample extremes[END_REF], letting
I n := [m x (h)(1 -[m x (h)] [a(x)/4]-1/2 ), m x (h)(1 + [m x (h)] [a(x)/4]-1/2 )], (15)
one has P(M n / ∈ I n ) → 0 as n → ∞. Hence,
P(v n,x T n (x) > t) ≤ sup p∈In P(v n,x, T n (x) > t|M n = p) + o(1).
Using Lemma 1,
sup p∈In P(v n,x T n (x) > t|M n = p) = sup p∈In P(v n,x T p (x) > t),
where
T p (x) := sup α∈[τ un,x,un,x] Q(U pα +1,p |x) Q(α|x) -1 .
Using condition [START_REF] Chernozhukov | Extremal quantiles and Value-at-Risk, The New Palgrave Dictionary of Economics[END_REF], it is clear that there are constants d 1 , d 2 > 0 with d 1 < d 2 such that for n large enough, we have for all p ∈ I n :
T p (x) ≤ sup α∈[d1p -a(x) ,d2p -a(x) ] Q(U pα +1,p |x) Q(α|x) -1 .
Thus, for all t > 0, P(v n,x T n (x) > t) is bounded above by
sup p∈In P v n,x sup α∈[d1p -a(x) ,d2p -a(x) ] Q(U pα +1,p |x) Q(α|x) -1 > t + o(1).
Furthermore, for n large enough, there exists κ > 0 such that for all p ∈ I n , v n,x ≤ κp (1-a(x))/2 and thus, for all t > 0, P(v n,x T n (x) > t) is bounded above by
sup p∈In P κp (1-a(x))/2 sup α∈[d1p -a(x) ,d2p -a(x) ] Q(U pα +1,p |x) Q(α|x) -1 > t + o(1). Since p (1-a(x))/2 sup α∈[d1p -a(x) ,d2p -a(x) ] Q(U pα +1,p |x) Q(α|x) -1 = O P (1),
(see Lemma 2 for a proof), it now becomes clear that T n (x) = O P (v -1 n,x ). Let us now focus on the term R (Q) n (x). As before, one can show that for all t > 0,
P(v n,x R (Q) n (x) > t) ≤ sup p∈In P(v n,x R (Q) n (x) > t|M n = p) + o(1).
Lemma 1 and condition ( 9) yield for any t > 0 and n large enough:
sup p∈In P(v n,x R (Q) n (x) > t|M n = p) ≤ sup p∈In P(v n,x ω(U 1,p , U p,p , x, h) exp(ω(U 1,p , U p,p , x, h))(1 + T p (x)) > t) ≤ sup p∈In P(p (1-a(x))/2 ω(U 1,p , U p,p , x, h) exp(ω(U 1,p , U p,p , x, h))(1 + T p (x)) > t/κ) ≤ sup p∈In P(U 1,p < [m x (h)] -1-δ ) + P(U p,p > 1 -[m x (h)] -1-δ ) .
Since for n large enough
sup p∈In P(U 1,p < [m x (h)] -1-δ ) + P(U p,p > 1 -[m x (h)] -1-δ ) (16) = 2 sup p∈In 1 -[1 -[m x (h)] -1-δ ] p ≤ 2 1 -[1 -[m x (h)] -1-δ ] 2mx(h) → 0, we thus have proven that R (Q) n (x) = o P (v -1 n,x
) and the proof is complete.
Proof of Theorem 1. The key point is to write
Q n (β n |x) = un,x τ un,x Q(α|x) α β n γ(x) Q n (α|x) Q(α|x) α β n γn(x)-γ(x)
µ(dα).
Now, by assumption
v n,x ( γ n (x) -γ(x)) d -→ Γ where v n,x := [m x (h)] (1-a(x))/2 . Since β n /u n,
log α β n γn(x)-γ(x) ≤ | γ n (x) -γ(x)| log u n,x β n = o P (1),
since by assumption v -1 n,x log(u n,x /β n ) → 0. A Taylor expansion for the exponential function thus yields
α β n γn(x)-γ(x) -1 -log(α/β n )( γ n (x) -γ(x)) = O P v -1 n,x log 2 (u n,x /β n ) , uniformly in α ∈ [τ u n,x , u n,x ].
We then obtain
Q n (β n |x) = un,x τ un,x Q(α|x) α β n γ(x) G n,x (α)µ(dα)
where
G n,x (α) := Q n (α|x) Q(α|x) 1 + log(α/β n )( γ n (x) -γ(x)) + O P v -1 n,x log 2 (u n,x /β n ) . By Proposition 1, sup α∈[τ un,x,un,x] Q n (α|x) Q(α|x) -1 = O P (v -1 n,x ),
and therefore:
G n,x (α) = 1 + log(α/β n )( γ n (x) -γ(x)) + O P v -1 n,x log 2 (u n,x /β n ) . (17) By Lemma 3, sup α∈[τ un,x,un,x]
Q(α|x) Q(β n |x) α β n γ(x) -1 = O ∆(u -1 n,x |x) , (18)
and thus, ( 17) and ( 18) lead to
Q (β n |x) Q(β n |x) -1 = ( γ n (x) -γ(x)) un,x τ un,x log(α/β n )µ(dα) 1 + O ∆(u -1 n,x |x) + O ∆(u -1 n,x |x) + O P v -1 n,x log 2 (u n,x /β n ) . Since u n,x /β n → 0 and µ([τ u n,x , u n,x ]) = 1, one has un,x τ un,x log(α/β n )µ(dα) = un,x τ un,x [log(u n,x /β n ) + log(α/u n,x )] µ(dα) = log(u n,x /β n )(1 + o(1)),
and thus
Q (β n |x) Q(β n |x) -1 = ( γ n (x) -γ(x)) log(u n,x /β n ) [1 + o(1)] + O ∆(u -1 n,x |x) + O P v -1 n,x log 2 (u n,x /β n ) .
Using the convergence in distribution of γ n (x) completes the proof.
Proof of Theorem 2
For the sake of brevity, let
v n,x = [m x (h)] (1-a(x))/2 , M n = M x (h) and K n = u n,x M n . The cu-
mulative distribution function of a normal distribution with mean λ(x)
1 0 Φ(α)α -ρ(x)
dα and variance
γ 2 (x) 1 0 Φ 2 (α)dα is denoted by H x in what follows. Let t ∈ R and ε > 0. Denoting by E n (t) the event {v n,x ( γ n (x, u n,x ) -γ(x)) ≤ t}, one has |P [E n (t)] -H x (t)| ≤ n p=0 P(M n = p) |P [E n (t)|M n = p] -H x (t)| .
Recall that from [START_REF] Davison | Local likelihood smoothing of sample extremes[END_REF], P(M n / ∈ I n ) → 0 as n → ∞ where I n is defined in [START_REF] Ferraty | Nonparametric Functional Data Analysis: Theory and Practice[END_REF]. Hence, for n large enough,
|P [E n (t)] -H x (t)| ≤ sup p∈In |P [E n (t)|M n = p] -H x (t)| + ε 8 . (19)
Now, using the notation
V i := S(Y * i |X * i ) for i = 1, .
. . , M n , let us introduce the statistics:
γ n (x, u n,x ) := Kn i=1 W i,n (u n,x , M n ) log Q(V i,Mn |x) Q(V Kn +1,Mn |x) and R (γ) n (x) := γ n (x, u n,x ) -γ n (x, u n,x ), (20)
where
W i,n (u n,x , M n ) := i/Mn (i-1)/Mn Ψ(α, u n,x )dα. (21)
It is straightforward that for all κ > 0,
sup p∈In |P [E n (t)|M n = p] -H x (t)| ≤ T (1) n,x + T (2) n,x , (22)
where
T (1) n,x := sup p∈In P E n (t) ∩ v n,x |R (γ) n (x)| ≤ κ |M n = p -H x (t)
and T (2) n,x := sup
p∈In P v n,x |R (γ) n (x)| > κ|M n = p .
Let us first focus on the term T
n,x . Let E n (t) := {v n,x ( γ n (x, u n,x ) -γ(x)) ≤ t}. For all p ∈ I n , P[E n (t) ∩ {v n,x |R (γ) n (x)| ≤ κ}|M n = p] ≤ P[ E n (t + κ)|M n = p] and P E n (t) ∩ v n,x |R (γ) n (x)| ≤ κ |M n = p ≥ P E n (t -κ) ∩ v n,x |R (γ) n (x)| ≤ κ |M n = p ≥ P E n (t -κ)|M n = p -P v n,x |R (γ) n (x)| > κ|M n = p . (1)
Using the inequality |x| ≤ |a| + |b| which holds for all x ∈ [a, b], it is then clear that for all κ > 0,
T
P E n (t + κ)|M n = p -H x (t + κ) + sup p∈In P E n (t -κ)|M n = p -H x (t -κ) + |H x (t) -H x (t + κ)| + |H x (t) -H x (t -κ)| + T (2) n,x . (1) n,x ≤ sup p∈In
Since H x is continuous, we can actually choose κ > 0 so small that
|H x (t) -H x (t + κ)| ≤ ε 8 and |H x (t) -H x (t -κ)| ≤ ε 8
and therefore
T (1) n,x ≤ sup p∈In P E n (t + κ)|M n = p -H x (t + κ) (23)
+ sup
p∈In P E n (t -κ)|M n = p -H x (t -κ) + T (2) n,x + ε 4 .
We now focus on the two first terms in the left-hand side of the previous inequality. From Lemma 4, the distribution of γ n (x, u n,x ) given M n = p is that of
γ p (x, u n,x ) := 1 pu n,x pun,x i=1 Φ i pu n,x i log Q(U i,p |x) Q(U i+1,p |x) .
Hence, for all s ∈ R and
p ∈ I n , P[ E n (s)|M n = p] = P[v n,x (γ p (x, u n,x ) -γ(x)) ≤ s]. Furthermore, for n large enough we have p/2 ≤ p 1 + [m x (h)] [a(x)/4]-1/2 ≤ m x (h) ≤ p 1 -[m x (h)] [a(x)/4]-1/2 ≤ 2p
for all p ∈ I n , so that for n large enough:
ξ (+) (p) ≤ m x (h) ≤ ξ (-) (p), (24)
with
ξ (+) (p) := p[1 + (2p) [a(x)/4]-1/2 ] -1 and ξ (-) (p) := p[1 -(p/2) [a(x)/4]-1/2 ] -1 .
Under our assumptions on the sequence u n,x , the previous inequalities lead to
k 1 (p) ≤ pu n,x ≤ k 2 (p) where k 1 (p) := p[ξ (-) (p)] -a(x) [1 + ϕ 1 (ξ (-) (p))] and k 2 (p) := p[ξ (+) (p)] -a(x) [1 + ϕ 2 (ξ (+) (p))].
Since Φ is a nonincreasing function on (0, 1), we then get that:
γ p (x, u n,x ) ≤ 1 k 1 (p) pun,x i=1 Φ i k 2 (p) + 1 i log Q(U i,p |x) Q(U i+1,p |x) ≤ 1 k 1 (p) k2(p) +1 i=1 Φ i k 2 (p) + 1 i log Q(U i,p |x) Q(U i+1,p |x) = γ p (x, k 1 (p), k 2 (p)) with γ p (x, k, k ) := 1 k k i=1 Φ i k + 1 i log Q(U i,p |x) Q(U i+1,p |x) . (25)
A similar lower bound applies and thus shown that for all s ∈ R,
γ p (x, k 2 (p), k 1 (p) -1) ≤ γ p (x, u n,x ) ≤ γ p (x, k 1 (p), k 2 (p))
sup p∈In P E n (s)|M n = p -H x (s) ≤ sup p∈In |P [v n,x (γ p (x, k 1 (p), k 2 (p)) -γ(x)) ≤ s] -H x (s)| + sup p∈In |P [v n,x (γ p (x, k 2 (p), k 1 (p) -1) -γ(x)) ≤ s] -H x (s)| .
Since from [START_REF] Goegebeur | A local moment type estimator for an extreme quantile in regression with random covariates[END_REF], [ξ
(+) (p)] (1-a(x))/2 ≤ v n,x ≤ [ξ (-) (p)]
(1-a(x))/2 for all p ∈ I n and since by assumption on the ϕ i ,
k 1 (p) k 2 (p) = 1 + O p [a(x)/4]-1/2 + O ϕ 1 (ξ (+) (p)) + O ϕ 2 (ξ (+) (p)) = 1 + o(p (a(x)-1)/2 ),
one can apply Lemmas 6 and 7 to show that for n large enough
sup p∈In P E n (t + κ)|M n = p -H x (t + κ) (26)
+ sup
p∈In P E n (t -κ)|M n = p -H x (t -κ) ≤ ε 2 .
It remains to study the term T and thus for n large enough, using (16):
T (2) n,x ≤ sup p∈In P v n,x ω(U 1,p , U p,p , x, h) > κ 4C ≤ 2 1 -[1 -[m x (h)] -1-δ ] 2mx(h) ≤ ε 8 . (27)
Collecting ( 19), ( 22), ( 23), ( 26) and ( 27) concludes the proof.
and, given M
x (h) = p, R (Q) n (x) is bounded from above by ω(U 1,p , U p,p , x, h) exp[ω(U 1,p , U p,p , x, h)] 1 + T p (x) .
Proof. Recall the notation M n := M x (h) and
V i := S(Y * i |X * i ). First, given M n = p, equation (12) entails that {V i , 1 ≤ i ≤ M n }|{M n = p} d = {U i , 1 ≤ i ≤ p}
where U 1 , . . . , U p are independent standard uniform variables. It thus holds that
Q(V αMn +1,Mn |x), α ∈ [0, 1) |{M n = p} d = Q(U pα +1,p |x), α ∈ [0, 1) .
As a direct consequence
T n (x)|{M n = p} d = T p (x). ( 28
)
Let us now focus on the term R
(Q) n (x). Since Q(•|x) is continuous and decreasing, one has, for i = 1, . . . , M n , log Q(V i |x) -ω(V 1,Mn , V Mn,Mn , x, h) ≤ log Y * i = log Q(V i |X * i ) ≤ log Q(V i |x) + ω(V 1,Mn , V Mn,Mn , x, h).
It follows from Lemma 1 in Gardes and Stupfler [START_REF] Gardes | Estimation of the conditional tail index using a smoothed local Hill estimator[END_REF] that for all i ∈ {1, . . . , M n }, log Y * Mn-i+1,Mn -log Q(V i,Mn |x) ≤ ω(V 1,Mn , V Mn,Mn , x, h).
Since Q n (α|x) = Y * Mn-i+1,Mn for all α ∈ [(i -1)/M n , i/M n ), the mean value theorem leads to sup α∈[τ un,x,un,x]
Q n (α|x) Q(V αMn +1,Mn |x) -1 ≤ ω(V 1,Mn , V Mn,Mn , x, h) exp [ω(V 1,Mn , V Mn,Mn , x, h)] .
Hence,
R (Q) n (x) = sup α∈[τ un,x,un,x] Q n (α|x) Q(V αMn +1,Mn |x) -1 Q(V αMn +1,Mn |x) Q(α|x) ≤ ω(V 1,Mn , V Mn,Mn , x, h) exp [ω(V 1,Mn , V Mn,Mn , x, h)] (1 + T n (x)).
Use finally [START_REF] Davison | Models for exceedances over high thresholds[END_REF] and [START_REF] Hall | Nonparametric analysis of temporal trend when fitting parametric models to extreme-value data[END_REF] to complete the proof.
The next lemma examines the convergence of T n (x), defined in the above lemma, given M x (h).
Lemma 2 Let U 1 , . . . , U p be independent standard uniform variables. Assume that (3) and (H SO ) hold.
If a(x) ∈ (0, 1) is such that p 1-a(x) ∆ 2 (p a(x) |x) → λ ∈ R as p → ∞ then, for all
d 1 , d 2 > 0 with d 1 < d 2 ,
we have:
p (1-a(x))/2 sup α∈[d1p -a(x) ,d2p -a(x) ] Q(U pα +1,p |x) Q(α|x) -1 = O P (1).
Proof. Recall that (H SO ) entails that [START_REF] Chernozhukov | Extremal quantile regression[END_REF] holds. Then, one can apply [START_REF] De Haan | Extreme value theory: An introduction[END_REF]Theorem 2.4.8] to the independent random variables {Q(U i |x), i = 1, . . . , p} distributed from the conditional survival function
S(•|x): because inf α∈[d1p -a(x) ,d2p -a(x) ] α d 2 p -a(x) = d 1 d 2 > 0, it holds that p (1-a(x))/2 sup α∈[d1p -a(x) ,d2p -a(x) ] Q(U pα +1,p |x) Q(d 2 p -a(x) |x) - αp a(x) d 2 -γ(x) = O P (1). ( 30
)
Since [START_REF] Chernozhukov | Extremal quantile regression[END_REF] must in fact hold locally uniformly in z > 0 (see [START_REF] De Haan | Extreme value theory: An introduction[END_REF]Theorem B.2.9]) and [d 1 , d 2 ] is a compact interval, it is clear that
p (1-a(x))/2 sup α∈[d1p -a(x) ,d2p -a(x) ] Q(α|x) Q(d 2 p -a(x) |x) - αp a(x) d 2 -γ(x) = O(1). (31)
Combine ( 30) and [START_REF] Kratz | The QQ-estimator and heavy tails[END_REF] to conclude the proof.
Lemma 3 below controls a bias term appearing in the proof of Theorem 1.
Q(α|x) Q(β n |x) α β n γ(x) -1 = O P ∆(u -1 n,x |x) .
Proof. Recall
α γ(x) Q(α|x) = c(x) exp α -1 1 ∆(v|x) v dv ,
and therefore
Q(α|x) Q(β n |x) α β n γ(x) = exp α -1 β -1 n ∆(v|x) v dv .
Furthermore, since α ≤ u n,x ,
α -1 β -1 n ∆(v|x) v dv ≤ |∆(u -1 n,x |x)| ∞ 1 ∆(yu -1 n,x |x) ∆(u -1 n,x |x) dy y .
As the function y → y -1 ∆(y|x) is regularly varying with index ρ(x) -1 < -1, we may write, according to [4, Theorem 1.5.2],
α -1 β -1 n ∆(v|x) v dv ≤ 2|∆(u -1 n,x |x)| ∞ 1 y ρ(x)-1 dy = - 2 ρ(x) |∆(u -1 n,x |x)|.
Since the right-hand side converges to 0 and does not depend on α, it follows by a Taylor expansion of the exponential function that sup α∈(τ un,x,un,x]
Q(α|x) Q(β n |x) α β n γ(x) -1 = O P ∆(u -1 n,x |x) ,
which is the required conclusion.
The next result is dedicated to the statistics γ n (x, u n,x ) and R (γ) n (x) introduced in the proof of Theorem 2, equation [START_REF] Gardes | Conditional extremes from heavy-tailed distributions: an application to the estimation of extreme rainfall return levels[END_REF].
Lemma 4 Let U i , i ≥ 1 be independent standard uniform random variables. For any x ∈ E such that m x (h) > 0, the conditional distribution of γ n (x, u n,x ) given M x (h) = p is that of
γ p (x, u n,x ) = 1 pu n,x pun,x i=1 Φ i pu n,x i log Q(U i,p |x) Q(U i+1,p |x) , and given M x (h) = p, R (γ)
n (x) is bounded from above by
2ω(U 1,p , U p,p , x, h) un,x 0 |Ψ(α, u n,x )|dα.
Proof. Set again M n = M x (h). Equation ( 12) entails that the conditional distribution of γ n (x, u n,x )
given
M n = p is that of pun,x i=1 W i,n (u n,x , p) log Q(U i,p |x) Q(U pun,x +1,p |x) = pun,x i=1 W i,n (u n,x , p) pun,x j=i log Q(U j,p |x) Q(U j+1,p |x) ,
where {U i , i ≥ 1} are independent standard uniform random variables, and this is equal to γ p (x, u n,x ) by switching the summation order and using assumption (H Ψ ). Now, since Q n (α|x) = Y * Mn-i+1,Mn for all α ∈ [(i -1)/M n , i/M n ), one has
γ n (x, u n,x ) = un,xMn i=1 W i,n (u n,x , M n ) log Y * Mn-i+1,Mn Y * Mn-un,xMn ,Mn
, where W i,n (u n,x , M n ) was defined in [START_REF] Gardes | Functional kernel estimators of large conditional quantiles[END_REF]. Hence the identity
R (γ) n (x) = un,xMn i=1 W i,n (u n,x , M n ) log Q(V un,xMn +1,Mn |x) Q(V i,Mn |x) Y * Mn-i+1,Mn Y * Mn-un,xMn ,Mn
.
Using the bound (29) yields to
R (γ) n (x) ≤ 2ω(V 1,Mn , V Mn,Mn , x, h) un,xMn i=1 |W i,n (u n,x , M n )| ≤ 2ω(V 1,Mn , V Mn,Mn , x, h) un,x 0 |Ψ(α, u n,x )|dα.
Using equation ( 12) completes the proof.
Our next result studies some particular Riemann sums. It shall prove useful when examining the convergence of γ n (x, u n,x ) given M x (h), see Lemma 6.
Lemma 5 Let f be an integrable function on (0, 1). Assume that f is nonnegative and nonincreasing.
For any nonnegative continuous function g on [0, 1] we have that:
lim m→∞ 1 m -1 m-1 i=1 f i m g i m = 1 0 f (t)g(t)dt.
If moreover f is square-integrable then:
lim m→∞ √ m 1 m -1 m-1 i=1 f i m - 1 0 f (t)dt = 0.
Proof. To prove the first statement, it suffices to show that |S m (f, g) -S(f, g)| → 0 as m → ∞ where
S m (f, g) := 1 m m-1 i=1 f i m g i m and S(f, g) := 1 0 f (t)g(t)dt.
Note first that:
|S(f, g) -S m (f, g)| ≤ m-1 i=1 i/m (i-1)/m f (t)g(t) -f i m g i m dt + 1 (m-1)/m f (t)g(t)dt. Since g is nonnegative on [0, 1] and f is nonincreasing, it is straightforward that for all t ∈ [(i-1)/m, i/m) |f (t)g(t) -f (i/m)g(i/m)| ≤ f (t) sup |s-s |≤1/m |g(s) -g(s )| + g ∞ (f (t) -f (i/m n )) ,
where g ∞ is the finite supremum of g on [0, 1]. The fact that f is nonincreasing yields f (t) -f (i/m) ≤ f ((i -1)/m) -f (i/m) for all i = 2, . . . , m and thus the previous inequality leads to
|S(f, g) -S m (f, g)| ≤ 1 0 f (t)dt sup |s-s |≤1/m |g(s) -g(s )| + g ∞ 1/m 0 f (t)dt - f (1) m + g ∞ 1 (m-1)/m f (t)dt → 0 ( 32
)
by the uniform continuity of g on [0, 1] and the fact that f is an integrable function. This proves the first statement of the result. To prove the second one, remark that:
√ m 1 m -1 m-1 i=1 f i m - 1 0 f (t)dt ≤ √ m m -1 S m (f, 1) + √ m|S(f, 1) -S m (f, 1)|.
Using the first statement with g = 1 entails that the first term of the left-hand side converges to 0 as m → ∞. Now, taking g = 1 in (32) leads to
√ m|S(f, 1) -S m (f, 1)| ≤ √ m 1/m 0 f (t)dt + √ m 1 (m-1)/m f (t)dt.
By the Cauchy-Schwarz inequality,
√ m 1/m 0 f (t)dt ≤ 1/m 0 f 2 (t)dt 1/2 → 0 and √ m 1 (m-1)/m f (t)dt ≤ 1 (m-1)/m f 2 (t)dt 1/2 → 0
since f 2 is integrable on (0, 1). The proof is complete.
The next lemma establishes the asymptotic normality of the random variable γ p (x, k, k ) introduced in the proof of Theorem 2, equation [START_REF] Goegebeur | Nonparametric regression estimation of conditional tails -the random covariate case[END_REF].
Lemma 6 Assume that conditions (3), (H SO ) and (H Ψ ) are satisfied. Let k(p) and k (p) be two sequences satisfying, for some a(x) ∈ (0, 1), p a(x)-1 k(p) → 1 and p The final lemma is a technical tool we shall need to bridge the gap between the convergence of our estimators and that of their conditional versions. Hence, for all κ > 0, the inequality:
(1-a(x))/2 [k(p)/k (p) -1] → 0 as p → ∞. Let U 1 , . . , U p be independent standard uniform random variables. If p 1-a(x) ∆ 2 (p a(x) |x) → λ(x) ∈ R, then the random variable γ p (x, k(p), k (p)) := 1 k(p) k (p) i=1 Φ i k (p) + 1 i log Q(U i,p |x) Q(U i+1,p |x) is such that p (1-a(x))/2 (γ p (x, k(p), k (p)
D n,
14, 3.07] approximately, and therefore the range of values of γ(x) is the full interval [1/3, 1]. Let us also mention that the second order parameter ρ(x) appearing in condition (H SO ) is then ρ(x) = -1 for the Fréchet distribution and ρ(x) = -rγ(x) for the Burr distribution; in the latter case, the range of values of ρ(x) is therefore [-r, -r/3].
1 , x med 2 ] = {tx med 1 + ( 1 1 , x max 2 ] 1 , x max 2
12111212 -t)x med 2 , t ∈ [0, 1]} or to the line [x max . It can be seen in these figures that the estimates in the direction of a typical pair of covariates are remarkably stable; they are actually essentially indistinguishable from the estimates at the average covariate, which are 42.41 for β = 5/n and 93.86 for β = 1/n. By contrast, the estimates on the line [x max
for all p ∈ I n . As a first conclusion, using the inequality |x| ≤ |a| + |b| which holds for all x ∈ [a, b], we have
PFrom
2v n,x ω(U 1,p , U p,p , x, h) un,x 0 |Ψ(α, u n,x )|dα > κ . , u)|dα = C < ∞
Lemma 3
3 Assume that conditions (3) and (H SO ) are satisfied. If m x (h) → ∞ and β n /u n,x → 0 we have that: sup α∈[τ un,x,un,x]
1 0 1 0converges to a centered normal distribution with variance σ 2 Φ := γ 2 (x) 1 0 Φ 2
11212 ) -γ(x)) converges in distribution to a normal distribution with mean λ(x) Φ(α)α -ρ(x) dα and variance γ 2 (x) Φ 2 (α)dα.Proof. For the sake of brevity, let γ p (x) := γ p (x, k(p), k (p)). Let v p := p (1-a(x))/2 and for j ∈ {1, . . . , k (p)} ∆j (p|x) := ∆ p + 1 k (p) + 1 x j k (p) + 1 -ρ(x)Under conditions (3), (H SO ) and (H Ψ ), one can apply Theorem 3.1 in Beirlant et al.[START_REF] Beirlant | On exponential representations of logspacings of extreme order statistics[END_REF] to prove that v p (α)dα. As a direct consequence of Lemma 5, the previous convergence can be rewrittenv p k(p) k (p) γ p (x) -γ(x) v p [γ p (x) -γ(x)] = v p k (p) k(p) -1 k(p) k (p) γ p (x) + v p k(p) k (p) γ p (x) -γ(x) ,a combination of convergence[START_REF] Pickands | Statistical inference using extreme order statistics[END_REF] and of the fact that v p [k(p)/k (p) -1] → 0 as p → ∞ concludes the proof.
Furthermore, since ξ 1 6 for
16 (p) ≤ a n ≤ ξ 2 (p) for any p ∈ I n , using the inequality |x| ≤ |a| + |b| which holds for all x ∈ [a, b], one has for all p ∈ I n that |a n -1| ≤ |ξ 1 (p) -1| + |ξ 2 (p) -1|; besides, since Z p = O P (1) and ξ 1 , ξ 2 converge to 1 at infinity, we have |ξ 1 (p) -1|Z p = o P (1) and |ξ 2 (p) -1|Z p = o P (1). Therefore, for all ε > 0, sup p∈In P(|(a n -1)Z p | > κ) ≤ sup p∈In P(|ξ 1 (p) -1||Z p | + |ξ 2 (p) -1||Z p | > κ) ≤ ε n large enough. Now remark that for all p ∈ I n , P({a n Z p ≤ t} ∩ {|(a n -1)Z p | ≤ κ}) ≤ P(Z p ≤ t + κ) and that P({a n Z p ≤ t} ∩ {|(a n -1)Z p | ≤ κ}) ≥ P({Z p ≤ t -κ} ∩ {|(a n -1)Z p | ≤ κ}) ≥ P(Z p ≤ t -κ) -P(|(a n -1)Z p | > κ).
Figure 2: Spectrometric curves for the data.
Figure 3: Top left: Hill plot for the sample (y_1, . . . , y_n). On the x-axis at the top of the panel is the value of the lower threshold for the computation of the Hill estimator, i.e. the lowest order statistic. Top right, bottom left and bottom right: local exponential QQ-plots for the log-data points whose covariates belong to a neighborhood of certain pre-specified points in the covariate space.
Figure 4: Top picture, solid lines: a pair of typical covariates. Bottom picture, solid lines: the pair of covariates farthest from each other. In both pictures the dotted line is the average covariate.
Figure 5: Solid line: extreme quantile estimate in the direction of a typical pair of covariates; dashed line: extreme quantile estimate in the direction of a pair of covariates farthest from each other. Top picture: case β = 5/n; bottom picture: β = 1/n.
Table 1: Comparison of the Integrated Squared Errors of the following extreme conditional quantile estimators: IWE with τ ∈ {1/10, 1/2, 9/10} (lines 1 to 3), WE when γ(x) is estimated by (6) (line 4) and WE when γ(x) is estimated by the Hill-type estimator (line 5). Results are given in the following form: [first quartile, median, third quartile]. In each column, optimal median errors among the five tested estimators are marked in boldface characters.

                   Fréchet dist.              Burr dist. (r = 2)
IWE (τ = 1/10)     [0.0060 0.0077 0.0132]     [0.0063 0.0099 0.0147]
IWE (τ = 1/2)      [0.0060 0.0077 0.0112]     [0.0058 0.0095 0.0128]
IWE (τ = 9/10)     [0.0058 0.0076 0.0107]     [0.0059 0.0093 0.0119]
WE (with (6))      [0.0054 0.0078 0.0115]     [0.0054 0.0088 0.0137]
WE (Hill-type)     [0.0068 0.0094 0.0120]     [0.0071 0.0103 0.0137]

                   Burr dist. (r = 1/20)
IWE (τ = 1/10)     [0.6427 0.9504 1.3982]
IWE (τ = 1/2)      [0.6040 0.8343 1.2018]
IWE (τ = 9/10)     [0.8010 1.0870 1.2725]
WE (with (6))      [0.5848 0.8909 1.3372]
WE (Hill-type)     [0.7679 1.1314 1.4599]
Lemma 7. Let {Z_p, p ∈ N} be a sequence of random variables such that for all t ∈ R, P(Z_p ≤ t) → H(t), where H is a continuous cumulative distribution function. For n ∈ N, let I_n := [u_n, v_n] where u_n → ∞ as n → ∞, and let (a_n) be a sequence such that there exist two functions ξ_1 and ξ_2 converging to 1 at infinity with
sup_{p ∈ I_n} ξ_1(p)/a_n ≤ 1 ≤ inf_{p ∈ I_n} ξ_2(p)/a_n.
Then, for all t ∈ R,
lim_{n→∞} sup_{p ∈ I_n} |P(a_n Z_p ≤ t) - H(t)| = 0.
Proof. We start by remarking that for all κ > 0,
sup_{p ∈ I_n} |P(a_n Z_p ≤ t) - H(t)| ≤ D_{n,p} + sup_{p ∈ I_n} P(|(a_n - 1) Z_p| > κ),
where D_{n,p} := sup_{p ∈ I_n} |P({a_n Z_p ≤ t} ∩ {|(a_n - 1) Z_p| ≤ κ}) - H(t)|.
Now, since H is continuous, there exists κ > 0 such that for n large enough, |H(t) - H(t + κ)| ≤ ε/6 and |H(t) - H(t - κ)| ≤ ε/6.
APPENDIX
The first lemma is dedicated to the statistics T_n(x) and R_n^(Q)(x) defined in the proof of Proposition 1, equations (13) and (14).
Lemma 1. Let {U_i, i ≥ 1} be independent standard uniform random variables. For x ∈ E such that m_x(h) > 0, the conditional distribution of T_n(x) given M_x(h) = p is that of
By assumption, for n large enough:
It is now straightforward to conclude the proof.
n,1 (top left), n,2 (bottom). In each picture, the covariate x is respectively (from left to right) such that z(x) = -2 (γ(x) ≈ 0.64), z(x) = 0 (γ(x) = 1) and z(x) = 2 (γ(x) ≈ 0.64). In each case, the true value of the conditional quantile to be estimated is represented by a cross.
| 61,976 | [ "911294", "741320" ] | [ "93707", "209503" ] |
01481216 | en | [ "info" ] | 2024/03/04 23:41:48 | 2011 | https://hal.science/hal-01481216/file/masschroq_article.pdf
email: [email protected]
Olivier Langella
Edlira Nano
Michel Zivy
Benoit Valot
Dr Benoît Valot
MassChroQ : A versatile tool for mass spectrometry quantification
Abbreviations: MassChroQ, Mass Chromatogram Quantification; HR, High Resolution; LR, Low Resolution; RT, Retention Time; XIC, eXtracted Ion Chromatogram. Keywords: Bioinformatics, Extracted Ion Chromatogram, Mass spectrometry, Proteomics, Quantification, Software
In the last years, the continuous improvement of quantitative mass spectrometry methods has opened new perspectives in proteomics. The amount and complexity of the data to be processed has grown, evidencing the need for new automatic computational tools. While spectral counting has been used for semi-quantitative analysis, quantitative experiments are mostly based on quantification of the MS signal. Several tools using various algorithms have been developed (e.g [START_REF] America | Comparative LC-MS: a landscape of peaks and valleys[END_REF][START_REF] Mueller | An assessment of software solutions for the analysis of mass spectrometry based quantitative proteomics data[END_REF][START_REF] Neilson | Less label, more free: approaches in label-free quantitative mass spectrometry[END_REF]). These tools handle the different steps of the analysis : signal denoising, peak detection, peak area measurement, de-isotoping, LC-MS runs alignment, etc. However, most of them deal only with a specific problem or type of data, and are for example restricted to high-resolution (HR) spectrometers, to isotope labelling or to label-free quantification. In addition, they often present platform specificities, proprietary data format dependencies and do not allow integration in proteomics pipelines like TPP [START_REF] Deutsch | A guided tour of the Trans-Proteomic Pipeline[END_REF] or TOPP [START_REF] Kohlbacher | TOPP--the OpenMS proteomics pipeline[END_REF].
We have developed MassChroQ (which stands for Mass Chromatogram Quantification) with the aim of being as experiment-independent as possible, while being able to take into account complex experimental designs. MassChroQ processes quantification data from their rough state to a form ready to be used by statistical software. It is fully configurable and every step of the analysis is traceable. MassChroQ allows the user to: i) process data obtained from spectrometers with various levels of resolution; ii) analyse label-free as well as isotopic labelling experiments; iii) analyse experiments in which samples were fractionated prior to LC-MS analysis (as in SDS-PAGE, SCX, etc.); iv) time-efficiently process a large number of samples.
Low-resolution (LR) instruments (e.g. LTQ ion traps) can provide valuable quantitative data from samples of low or medium complexity. In order to be able to quantify data obtained from LR as well as from HR instruments (e.g. Orbitrap), we chose a quantification method based on eXtracted Ion Chromatograms (XIC) rather than on feature detection on the virtual 2D image (e.g algorithms derived from 2D gels analysis or "peak picking" [START_REF] Li | A software suite for the generation and comparison of peptide arrays from sets of data collected by liquid chromatography-mass spectrometry[END_REF][START_REF] Bellew | A suite of algorithms for the comprehensive analysis of complex protein mixtures using high-resolution LC-MS[END_REF][START_REF] Sturm | OpenMS -an open-source software framework for mass spectrometry[END_REF][START_REF] Mueller | SuperHirn -a novel tool for high resolution LC-MS-based peptide/protein profiling[END_REF][START_REF] Palagi | MSight: an image analysis software for liquid chromatography-mass spectrometry[END_REF]). Indeed, the latter needs high resolution in MS mode to identify isotopic profiles. By contrast, quantification based on XICs is obtained by extracting the intensity corresponding to the m/z of the selected peptides along the LC-MS run, and by integrating the peak area at their retention time (RT). This strategy can be used with LR as well as HR mass spectrometers by adapting the window size of XIC extraction. It can be used with labelfree (e.g. [START_REF] Mortensen | MSQuant, an open source platform for mass spectrometry-based quantitative proteomics[END_REF][START_REF] Tsou | IDEAL-Q, an Automated Tool for Label-free Quantitation Analysis Using an Efficient Peptide Alignment Approach and Spectral Data Validation[END_REF] as well as isotopic methods (e.g. SILAC, ICAT, N 15, [START_REF] Li | Automated statistical analysis of protein abundance ratios from data generated by stable-isotope dilution and tandem mass spectrometry[END_REF][START_REF] Bouyssié | Mascot file parsing and quantification (MFPaQ), a new software to parse, validate, and quantify proteomics data generated by ICAT and SILAC mass spectrometric analyses: application to the proteomics study of membrane proteins from primary human endothelial cells[END_REF][START_REF] Cox | A practical guide to the MaxQuant computational platform for SILAC-based quantitative proteomics[END_REF]).
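As a rough, self-contained illustration of this XIC-based strategy (the sketch below is ours and not part of MassChroQ; the in-memory layout of the scans is an assumption), an extracted ion chromatogram can be built by summing, in each MS survey scan, the intensities falling into an m/z window around the target, an absolute window (e.g. 0.3 Th) for LR data or a relative one (e.g. 10 ppm) for HR data:

import numpy as np

def extract_xic(scans, target_mz, window, ppm=False):
    """Build an XIC for one peptide.

    scans : list of (rt, mz_array, intensity_array) tuples for the MS survey scans.
    target_mz : m/z of the peptide of interest.
    window : half-width of the extraction window, in Th if ppm=False, in ppm otherwise.
    Returns (rt, xic) arrays; intensities inside the window are summed (TIC-type XIC).
    """
    half = target_mz * window * 1e-6 if ppm else window
    rts, xic = [], []
    for rt, mz, inten in scans:
        mask = np.abs(np.asarray(mz) - target_mz) <= half
        rts.append(rt)
        xic.append(np.asarray(inten)[mask].sum())
    return np.array(rts), np.array(xic)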
The main features of MassChroQ are: i) Determination of peptides to be quantified. If an experiment includes MS/MS acquisition for identification, the identified peptides and protein descriptions can be provided to MassChroQ. It will then automatically quantify them in all samples, including those where the peptides were not identified. Peptides can also be specified by providing a list of m/z or m/z-RT values. If isotopic labelling was performed, the different labels can be described by specifying the modified sites (e.g. amino acids, peptide N- or C-terminal) and the mass shifts. ii) XIC extraction, peak detection and quantification. XICs of the peptides of interest are extracted from the original data file, and filters are used to correct baselines or to remove artefactual spikes (Fig. S1). XICs are then smoothed with an average filter before performing a closing and an opening mathematical morphology operation with a small flat structuring element [START_REF] Serra | Image analysis and mathematical morphology[END_REF]. The closing operation eliminates thin valleys and conserves the intensity of local maxima, while the opening operation eliminates thin peaks (i.e. remaining spikes) and conserves the intensity of local minima.
Hence, detection of peak positions is performed on the closed profile, and the opened profile is used to eliminate remaining spikes (Fig. S2). The peak boundaries are searched on the closed profile, and the peak area (i.e. the quantification value) is computed on the unaltered XIC, by integrating the intensity between these boundaries.
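The detection scheme described above (see also Fig. S2) can be sketched as follows in Python. This is our illustration and not the MassChroQ C++ code; the default thresholds simply mirror the values used in the supplementary masschroqML example, and the smoothing and structuring-element sizes are arbitrary placeholders.

import numpy as np
from scipy.ndimage import uniform_filter1d, grey_closing, grey_opening

def detect_peaks(xic, smooth_half=3, close_size=5, open_size=7,
                 thr_on_close=5000.0, thr_on_open=3000.0):
    """Peak detection on an XIC following the closing/opening scheme described above.

    Returns a list of (left, apex, right, area) tuples; the area is integrated on the
    unaltered XIC between the detected boundaries.
    """
    smoothed = uniform_filter1d(xic.astype(float), size=2 * smooth_half + 1)
    closed = grey_closing(smoothed, size=close_size)   # fills thin valleys, keeps maxima
    opened = grey_opening(smoothed, size=open_size)    # removes thin spikes, keeps minima
    peaks = []
    for i in range(1, len(closed) - 1):
        is_max = closed[i] >= closed[i - 1] and closed[i] > closed[i + 1]
        if is_max and closed[i] >= thr_on_close and opened[i] >= thr_on_open:
            l = i
            while l > 0 and closed[l - 1] <= closed[l]:   # walk down-left on the closed profile
                l -= 1
            r = i
            while r < len(closed) - 1 and closed[r + 1] <= closed[r]:  # walk down-right
                r += 1
            peaks.append((l, i, r, float(np.trapz(xic[l:r + 1]))))
    return peaks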
iii) Alignment and peak matching. As distortions can occur between LC-MS runs, MS RTs must be aligned before matching. Two alignment methods are proposed in MassChroQ: the OBI-Warp (Ordered Bijective Interpolated Warping) alignment method [START_REF] Prince | Chromatographic alignment of ESI-LC-MS proteomics data sets by ordered bijective interpolated warping[END_REF] which is based on MS data only, and an in-house MS/MS alignment method. The latter uses common MS/MS identifications as landmarks to evaluate time deviation (Δ MS ) along the chromatography. More precisely, the Δ MS/MS difference between the MS/MS RT in the run to align and the MS/MS RT in the run chosen as a reference is computed for each common peptide. Then, for each MS RT of the run to align, Δ MS is evaluated by linear interpolation between the Δ MS/MS values of its two closest surrounding MS/MS RTs. Both Δ MS/MS and Δ MS data are smoothed before use (with average and median filters) to eliminate low-scale RT heterogeneities. After alignment, peak matching is performed as follows: the quantitative value of a peak is assigned to an identified peptide if and only if the MS/MS RT of this peptide is within the boundaries of this peak. Of course, only similar LC-MS runs should be aligned. For example, if samples were fractionated by SCX, only LC-MS runs from the same SCX fraction should be aligned. To take this into account the user defines groups of LC-MS runs that can be compared to each other. Alignment and peak matching will be performed only within these groups. If necessary, alignment and quantification methods can be specifically defined for each group.
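A minimal sketch of this MS/MS-based alignment is given below. It is ours and deliberately simplified: it assumes the landmark RT pairs have already been matched by peptide identity, and the smoothing window sizes are free parameters.

import numpy as np
from scipy.ndimage import median_filter, uniform_filter1d

def align_rt(ms_rts, landmark_rt_run, landmark_rt_ref, median_half=10, mean_half=10):
    """Correct MS retention times of one run towards a reference run.

    landmark_rt_run / landmark_rt_ref : RTs of peptides identified by MS/MS in both runs.
    The deviation measured at the landmarks is smoothed, interpolated at every MS RT
    and subtracted from the original retention times.
    """
    order = np.argsort(landmark_rt_run)
    rt_run = np.asarray(landmark_rt_run, float)[order]
    delta = rt_run - np.asarray(landmark_rt_ref, float)[order]      # delta_MS/MS
    delta = median_filter(delta, size=2 * median_half + 1)          # remove outlier landmarks
    delta = uniform_filter1d(delta, size=2 * mean_half + 1)         # smooth the tendency
    delta_ms = np.interp(ms_rts, rt_run, delta)                     # delta_MS by interpolation
    return np.asarray(ms_rts, float) - delta_ms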
To evaluate MassChroQ performances, we prepared 6 samples each made of 700 ng of the same total protein digest of Saccharomyces cerevisiae, spiked with 6 different amounts of BSA digest (4.5, 15, 45, 105, 450 and 1500 fmol). These samples were analysed with an LR and an HR system (respectively a Thermo-Fisher LTQ XL coupled to an Eksigent 2D-ultra-nanoLC, and a Thermo-Fisher Orbitrap Discovery coupled to a Dionex U3000 nanoLC; see supplementary materials). All runs included MS and MS/MS acquisition. Two groups of runs were defined in MassChroQ to separate the six LR runs from the six HR runs. The alignment was performed by using the MS/MS alignment method (Fig. 1A). Since spectrometers do not trigger MS/MS at the exact RT of peptide peaks, MS/MS RTs showed a non-negligible dispersion. However, data points were numerous enough to allow the computation of the tendency deviation curve along the reference LC, which was used by the alignment algorithm to correct RTs. The standard deviation of peptide RT was clearly reduced by the alignment in both LR and HR systems (Fig. 1B and 1C), showing its efficiency. Although LC-MS runs showed small deviations before alignment, the alignment significantly impacted data quality, affecting the matching of 5% of the peaks (data not shown).
XIC extraction was performed with an m/z window of 0.3 Th for LR data and 10 ppm for HR data.
All identified peptides were selected for quantification. Combining all LR and HR LC-MS/MS runs, 5831 different peptide sequences allowed the identification of 556 proteins (with a false discovery rate of 0.3%), distributed in 492 groups of proteins sharing at least one peptide. A total of 5936 and 2467 XICs were extracted from respectively LR and HR LC-MS runs. Almost all detected peptides were found reproducible (i.e. detected in at least five of the six replicates) in the HR system (97%), against 67% in the LR system (Fig. 2A). Peptide reproducibility was clearly correlated to peptide intensity in LR data (Fig. S3), most probably due to noisy XICs. Altogether, 418 of the 492 identified proteins were represented by at least one reproducible peptide.
After normalization and log 10 -transformation (see supplement materials), the mean coefficient of variation of peptide quantitative values was 1.31% in HR and 1.40% in LR data (Fig. 2B,2C). This small technical variation is similar to other reported data (see [START_REF] America | Comparative LC-MS: a landscape of peaks and valleys[END_REF]) and attests the accuracy of the detection/quantification process. Moreover, a correlation of 0.89 between the mean intensity of peptides common to LR and HR data (1179 peptides, Fig. 2D) showed that the quantification process extracted similar results from both systems, despite a high sample complexity not favourable to LR analysis. The few high coefficients observed for abundant peptides in the HR data were mostly due to a poor determination of the ends of smearing peaks.
Twenty-five and fourteen BSA peptides were quantified in at least three samples in the LR and HR systems, respectively. All HR peptide intensities except one were highly correlated and linearly related to injected BSA quantities, with a mean coefficient of correlation greater than 0.98 over three orders of magnitude. This exception was due to a single datapoint (Fig. 3A). Nineteen of the twenty-five LR peptides responded linearly to BSA quantities with a mean coefficient of correlation higher than 0.98 over two orders of magnitude (Fig. 3B). The lower correlation observed for the six remaining peptides was mainly due to mis-assignments at low BSA quantities (<45 fmol): the BSA peptide peak was contaminated by a peak of the yeast digest of similar m/z and RT values (Fig. S4). Thus, quantification performances were lower with the LR than with the HR system, mainly because of mismatches caused by the high complexity of the yeast lysate. This confirms that accurate measurements can be expected with LR systems only when analysing peptide samples of lower complexity. Nevertheless, the observed correlations between peptide intensity and protein quantity were globally similar to those obtained by other software [START_REF] Deutsch | A guided tour of the Trans-Proteomic Pipeline[END_REF][START_REF] Kohlbacher | TOPP--the OpenMS proteomics pipeline[END_REF][START_REF] Tsou | IDEAL-Q, an Automated Tool for Label-free Quantitation Analysis Using an Efficient Peptide Alignment Approach and Spectral Data Validation[END_REF].
MassChroQ is written in C++ with Qt and runs both on Linux and Windows platforms. It is a command-line standalone program and it comes with a library for integration in proteomics pipelines.
MassChroQ is fully configurable via an XML input file (in masschroqML format) where the user indicates the chosen processing steps, parameters and data files to analyse (see example on Fig. S5). This file can be automatically generated by any XML editor by using the provided schema, or manually by using a text editor. Parameters of XIC creation, filtering and detection, which depend on the type, precision and noise level of the spectrometer, can all be configured in the masschroqML file. Templates for several experiment scenarios are provided in the documentation. LC-MS data input files can be in mzXML [START_REF] Pedrioli | A common open representation of mass spectrometry data and its application to proteomics research[END_REF] or mzML format [START_REF] Martens | mzML -a Community Standard for Mass Spectrometry Data[END_REF]. If X!Tandem [START_REF] Craig | TANDEM: matching proteins with tandem mass spectra[END_REF] is used for protein identification, a complete masschroqML input file containing identified peptides and protein descriptions can be automatically generated via our X!Tandem pipeline tool (http://pappso.inra.fr/bioinfo/xtandempipeline/ ). If another identification engine is used, identified peptides to be quantified can be provided to MassChroQ via TSV or CSV text files (Tab or Comma Separated Values). MassChroQ results can be exported in TSV, gnumeric spreadsheet or masschroqML XML format. TSV and spreadsheet formats allow direct import of data to statistical software and the XML format allows their upload in proteomics databases like PROTICdb [START_REF] Ferry-Dumazet | PROTICdb: a web-based application to store, track, query, and compare plant proteome data[END_REF].
XICs can also be exported for visualization.
Computation time depends on data size and on the number of extracted XICs. In the present study, the processing of the twelve LC-MS runs (6GB) where more than 5000 different peptide XICs were extracted took 1 hour with a 2.93 GHz CPU on a Linux platform. Most of that time was spent analysing non-centroid data from the LR system.
In conclusion, we showed that MassChroQ efficiently aligns and quantifies LR and HR LC-MS data. Low coefficients of variation and high coefficients of correlation to protein quantity attested the quality of the quantification measurements. MassChroQ is currently being successfully used in our laboratory on both isotopic and label-free large experiments (data not shown). Its very modular structure facilitates implementation of new algorithms and integration in other pipelines. Future developments will focus on handling SRM data, developing a graphical user interface for parameter adjustment with XIC visualization and computation time optimisation. This program is licensed under the GNU General Public License v3.0 (http://www.gnu.org/licenses/gpl.html). The source code is available at https://sourcesup.cru.fr/projects/masschroq. Compiled binaries for Linux and Windows platforms and documentation (including data and results of this test set) can be found at http://pappso.inra.fr/bioinfo/masschroq/.
Figure 1. Evaluation of alignment. (A) Example of RT correction along an HR LC-MS run. Points correspond to peptides identified by MS/MS in both the reference LC-MS run and the LC-MS run being aligned. The line is the computed time deviation used for alignment of MS data. (B, C) Standard deviation of peptide RT before and after alignment respectively in HR and LR experiments.
Figure 2. Evaluation of reproducibility. (A) Number of peptides detected in 1 to 6 replicates in LR and HR LC-MS experiments. (B, C) Influence of mean peptide intensity on peptide coefficient of variation (all reproducible peptides) in HR and LR systems. (D) Correlation between HR and LR mean values of peptides identified in both systems (1179 peptides, r = 0.89).
Figure 3. Linear relation between BSA peptide intensity and BSA quantity. (A) and (B): respectively HR (25 peptides) and LR (14 peptides) experiments. Each line corresponds to a different peptide. Only peptides detected in at least 3 samples are shown. Dashed line: peptides showing a correlation to BSA quantity lower than 0.98.
Figure S1. XIC creation, filtering and peak detection: example for peptide LVNELTEFAK (45 fmol of BSA digest spiked in yeast total digest) in LC-MS runs from LR (A) and HR (B) experiments. Filtering involves baseline correction for LR experiments and spike removing for HR ones. Red circles: detected peaks; blue circles: detected peak matching the peptide's RT.
Figure S2. Signal treatment for peak detection. (A) The original XIC contains high frequency noise that is partly eliminated by smoothing (average and/or median filters). (B) Morphological closing (blue profile) and opening (red profile) by a small flat structuring element are performed on the smoothed XIC (gray profile). The closing operation eliminates many noisy peaks by filling small valleys and preserves the actual position of the remaining peaks. Hence peaks are detected on this profile if they are greater than a threshold (blue line). Only peaks that are thick compared to the structuring element stay high in the opened profile. Then, to avoid detection of thin artifactual spikes, peaks detected on the closed profile are filtered according to the intensity at the same position in the opened profile: intensity in the opened profile must be greater than a second threshold (red line).
Figure S3. Relation between peptide intensity and detection reproducibility in the LR experiment.
Acknowledgements
This work was partially supported by "Infrastructures en Biologie Santé et Agronomie (IbiSA)" (E N salary). The authors want to thank Mélisande Blein for the yeast extracts and SourceSup for their subversion repository support.
Conflict of interest statement
The authors have declared no conflicts of interest.
Figure S4. Peak assignment error in LR data analysis: example of the BSA peptide HLVDEPQNLIK. The BSA digest quantity is indicated on the top-left corner of each graph. Red circles: detected peaks. Blue circles: peaks matching the peptide's RT. A yeast peak at RT and m/z values very close to those of HLVDEPQNLIK induced a mismatch at 15 fmol. In addition, peaks assigned to HLVDEPQNLIK in other samples partly contained the intensity that should have been assigned to the yeast peptides.
SUPPLEMENTAL FIGURE S5
<!--Example of MassChroQ processing file--> <?xml version="1.0" encoding="UTF-8" standalone="no"?> <masschroq> <!--List of LC-MS run files in open format : mzXML or mzML --> <rawdata> <data_file id="samp0" format="mzxml" path="Labelling-light-heavy.mzXML" type="centroid"/> <data_file id="samp1" format="mzxml" path="Label-free-samp1.mzXML" type="centroid"/> <data_file id="samp2" format="mzxml" path="Label-free-samp2.mzXML" type="centroid"/> <data_file id="samp3" format="mzxml" path="MS-data-without_identification-samp1.mzXML" type="profile"/> <data_file id="samp4" format="mzxml" path="MS-data-without_identification-samp2.mzXML" type="profile"/> </rawdata> <groups> <!--Grouping of LC-MS runs. Within a group:
-all LC-MS runs are aligned (with the same alignment method); -peptides observed in at least one LC-MS run are quantified in all LC-MS runs of this group (using the same quantification method) --> <group data_ids="samp0" id="G1"/> <group data_ids="samp1 samp2" id="G2"/> <group data_ids="samp3 samp4" id="G3"/> </groups> <!--The peptide features list can be defined in two ways : in separate spreadsheets containing identified peptides for each LC-MS run as follows :--> <peptide_files_list> <peptide_file data="samp0" path="labelling_peptide_list.txt"/> </peptide_files_list> <!--directly in this file with proteins/peptides list as follows : --> <protein_list> <protein desc="conta|P02769|ALBU_BOVIN SERUM ALBUMIN PRECURSOR." id="P1.1"/> </protein_list> <peptide_list> <peptide id="pep0" mh="1463.626" mods="114.08" prot_ids="P1.1" seq="TCVADESHAGCEK"> <observed_in data="samp2" scan="755" z="2"/> <observed_in data="samp3" scan="798" z="2"/> </peptide> <peptide id="pep1" mh="1103.461" mods="57.04" prot_ids="P1.1" seq="ADESHAGCEK"> <observed_in data="samp2" scan="663" z="2"/> </peptide> </peptide_list> <!--Definition of different labels for isotopic experiments.
Example with a dimethylation of primary amine --> <isotope_label_list> <isotope_label id="light"> <mod at="Nter" value="28.0"/> <mod at="K" value="28.0"/> </isotope_label> <isotope_label id="heavy"> <mod at="Nter" value="32.0"/> <mod at="K" value="32.0"/> </isotope_label> </isotope_label_list> <!--Definition of different alignment methods. Two alignment algorithms are implemented : MS/MS alignment and OBI-Warp alignment --> <alignments> <alignment_methods> <alignment_method id="ms2_1"> <ms2> <ms2_tendency_halfwindow>10</ms2_tendency_halfwindow> <ms2_smoothing_halfwindow>5</ms2_smoothing_halfwindow> <ms1_smoothing_halfwindow>0</ms1_smoothing_halfwindow> </ms2> </alignment_method> <alignment_method id="obiwarp1"> <obiwarp> <lmat_precision>1</lmat_precision> <mz_start>500</mz_start> <mz_stop>1200</mz_stop> </obiwarp> </alignment_method> </alignment_methods> <!--Perform alignment on each group using the above defined methods.
A reference LC-MS run should be defined for each group. All other runs of the group will be aligned towards this reference run --> <align group_id="G2" method_id="ms2_1" reference_data_id="samp1"/> <align group_id="G3" method_id="obiwarp1" reference_data_id="samp3"/> </alignments> <!--Definition of different quantification methods and parameters for XIC creation, XIC filtering and peak detection --> <quantification_methods> <quantification_method id="quanti1"> <!--XIC creation on mz or ppm range using TIC (sum) or basepeak (max) --> <xic_extraction xic_type="sum"> <ppm_range max="10" min="10"/> </xic_extraction> <!--XIC filtering with spike removing, baseline correction or smoothing --> <xic_filters> <anti_spike half="5"/> <background half_mediane="5" half_min_max="15"/> <smoothing half="3"/> </xic_filters> <!--XIC detection with threshold --> <peak_detection> <detection_zivy> <mean_filter_half_edge>1</mean_filter_half_edge> <minmax_half_edge>3</minmax_half_edge> <maxmin_half_edge>2</maxmin_half_edge> <detection_threshold_on_max>5000</detection_threshold_on_max> <detection_threshold_on_min>3000</detection_threshold_on_min> </detection_zivy> </peak_detection> </quantification_method> </quantification_methods> <!--Quantification area --> <quantification> <!--Definition of the export files and formats for the quantification results : spreadsheet formats (tsv, gnumeric) or xml format --> <quantification_results> <quantification_result output_file="results" format ="tsv"/> </quantification_results> <!--Definition of the export files for the XIC traces in spreadsheet (tsv) format :
one can trace all the XICs and/or a list of given peptides and/or a list of given m/z values, and/or a list of m/z-rt values --> <quantification_traces> <all_xics_traces output_dir="all_xics_traces" format="tsv"/> <peptide_traces peptide_ids="pep0 pep1" output_dir="peplist_xics_traces" format="tsv"/> </quantification_traces> <!--For each group, start quantification on : --> <!--all the peptides --> <quantify withingroup="G1" quantification_method_id="quanti1"> <peptides_in_peptide_list mode="real_or_mean" isotope_label_refs="light heavy"/> </quantify> <quantify withingroup="G2" quantification_method_id="quanti1" id="q1"> <peptides_in_peptide_list mode="real_or_mean"/> </quantify> <!--and/or on a list of given m/z values --> <quantify withingroup="G3" quantification_method_id="quanti1" id="q2"> <mz_list>732.317 449.754 552.234 464.251 381.577 569.771 575.256</mz_list> </quantify> <!--Quantification can also be performed on a list of m/z-rt values --> </quantification> </masschroq>
Supplementary materials and methods 1 Protein extraction and digestion
The proteins were extracted from yeast cell pellets using a TCA/acetone method: proteins were precipitated in 10% TCA, 0.07% 2-Mercaptoethanol in acetone and the pellet was rinsed in 0.07% 2-Mercaptoethanol in acetone. Precipitated proteins were resuspended in urea 8 M, thiourea 2 M, CHAPS 2%. Protein concentration was determined by using the PlusOne 2-D Quant Kit (GE Healthcare), and 10 µg of proteins were reduced and alkylated with respectively 10 mM of DTT and 55 mM of iodoacetamide. Proteins were diluted 10 times with 50 mM ammonium bicarbonate and digested with 1/50 (w/w) trypsin (Promega) at 37 °C overnight. Digestion was stopped by acidification with TFA. Bovin Serum Albumin (initial fractionation by heat shock, Sigma) was digested similarly.
LC-MS/MS analysis
Low resolution analysis
HPLC was performed on a NanoLC-Ultra system (Eksigent). Seven-hundred ng of protein digest were loaded at 7.5 µL/min -1 on a precolumn cartridge (stationary phase: C18 PepMap 100, 5 µm; column: 100 µm i.d., 1 cm; Dionex) and desalted with 0.1% HCOOH. After 3 min, the precolumn cartridge was connected to the separating PepMap C18 column (stationary phase: C18 PepMap 100, 3 µm; column: 75 µm i.d., 150 mm; Dionex). Buffers were 0.1% HCOOH in water (A) and 0.1% HCOOH in ACN (B). The peptide separation was achieved with a linear gradient from 5 to 30% B for 60 min at 300 nL/min -1 . Including the regeneration step at 95% B and the equilibration step at 95% A, one run took 90 min.
Eluted peptides were analysed on-line with a LTQ XL ion trap (Thermo Electron) using a nanoelectrospray interface. Ionization (1.5 kV ionization potential) was performed with liquid junction and a non-coated capillary probe (10 µm i.d.; New Objective). Peptide ions were analysed using Xcalibur 2.0.7 with the following data-dependent acquisition steps: (1) full MS scan (m/z of 300 to 1300, enhanced profile mode) and ( 2) MS/MS (qz = 0.25, activation time = 30 ms, and collision energy = 35%; centroid mode). Step 2 was repeated for the three major ions detected in step 1. Dynamic exclusion was set to 45 s.
High resolution (HR) analysis
HPLC was performed on an Ultimate 3000 LC system (Dionex). Seven-hundred ng of protein digest were loaded at 7.5 µL/min -1 on a precolumn cartridge (stationary phase: C18 PepMap 100, 5 µm; column: 300 µm i.d., 5 mm; Dionex) and desalted with 0.08% TFA and 2% ACN. After 3 minutes, the precolumn cartridge was connected to the separating PepMap C18 column (stationary phase: C18 PepMap 100, 3 µm; column: 75 µm i.d., 150 mm; Dionex). Buffers were 0.1% HCOOH, 3% ACN (A) and 0.1% HCOOH, 80% ACN (B). The peptide separation was achieved with a linear gradient from 4 to 36% B for 60 min at 300 nL/min -1 . Including the regeneration step at 100% B and the equilibration step at 100% A, one run took 90 min.
Eluted peptides were analysed on-line with a LTQ-Orbitrap Discovery (Thermo Electron) using a nanoelectrospray interface. Ionization (1.3 ionization potential) was performed with liquid junction and a non-coated capillary probe (10 µm i.d.; New Objective). Peptide ions were analysed using Xcalibur 2.0.7 with the following data-dependent acquisition steps: (1) FTMS scan on Orbitrap (m/z of 300 to 1300, 15000 resolution, profile mode), (2) MS/MS on the LTQ (qz = 0.25, activation time = 30 ms, and collision energy = 35%; centroid mode). Step 2 was repeated for the two major ions detected in step 1. Dynamic exclusion was set to 90s. Identified proteins were filtered and sorted by using the XTandem pipeline (http://pappso.inra.fr/bioinfo/xtandempipeline/). Criteria used for protein identification were (1) at least two different peptides identified with an E-value smaller than 0.05, (2) a protein E-value (product of unique peptide E-values) smaller than 10 -4 . These criteria led to a False Discovery Rate of 0.3% for peptide and protein identification. To take redundancy into account (i.e. the fact that the same peptide sequence can be found in several proteins), proteins with at least one peptide in common were grouped. Grouped proteins had similar functions. Within each group, proteins with at least one specific peptide relatively to other members of the group were reported as sub-groups.
Statistical analysis
In order to take into account possible global quantitative variations between LC-MS runs, normalization was performed. For each LC-MS run, the ratio of all peptide values to their value in the chosen reference LC-MS run was computed. Normalization was performed by dividing peptide values by the median value of peptide ratios. Subsequent statistical analyses were performed on log 10 -transformed normalized data.
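A minimal sketch of this normalization step is given below (ours, not part of the original supplement; it assumes a peptides × runs intensity matrix with the reference run in a given column):

import numpy as np

def normalize_runs(intensity, ref_col=0):
    """Median-ratio normalization followed by log10 transform.

    intensity : 2-D array, rows = peptides, columns = LC-MS runs (NaN for missing values).
    For each run, peptide ratios to the reference run are computed and the run is divided
    by the median ratio before log10-transformation.
    """
    intensity = np.asarray(intensity, float)
    ref = intensity[:, ref_col]
    normalized = np.empty_like(intensity)
    for j in range(intensity.shape[1]):
        ratios = intensity[:, j] / ref
        normalized[:, j] = intensity[:, j] / np.nanmedian(ratios)
    return np.log10(normalized)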
BSA peptides showed linear relationships to protein quantity, but the slopes were peptide-specific. These peptide-specific global effects were estimated by a two-way ANOVA on normalized data, where the two factors were peptide and BSA quantity. This enabled the estimation of the specific effect of each BSA peptide, which was removed from normalized data in Figure 3. This allowed the representation of all peptides on the same scale, but had no effect on curve shapes. | 31,052 | [
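For a balanced design, removing the peptide-specific effect estimated by such an ANOVA amounts to centring each peptide on the common level; a schematic version (ours, illustrative only) is:

import numpy as np

def remove_peptide_effect(log_intensity):
    """Subtract the peptide main effect (row mean) and restore the grand mean,
    so that all peptides can be displayed on the same scale.

    log_intensity : 2-D array, rows = peptides, columns = BSA quantities.
    """
    log_intensity = np.asarray(log_intensity, float)
    peptide_effect = np.nanmean(log_intensity, axis=1, keepdims=True)
    grand_mean = np.nanmean(log_intensity)
    return log_intensity - peptide_effect + grand_mean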
"16640",
"740636",
"754429",
"740633"
] | [
"135779",
"135779",
"135779",
"135779"
] |
01481393 | en | [
"math"
] | 2024/03/04 23:41:48 | 2018 | https://hal.science/hal-01481393/file/1701.08861.pdf | Romuald Elie
email: [email protected].
Ludovic Moreau
email: [email protected]
Dylan Possamaï
email: [email protected]
On a class of path-dependent singular stochastic control problems
Keywords: singular control, constrained BSDEs, path-dependent PDEs, viscosity solutions, transaction costs, regularity
This paper studies a class of non-Markovian singular stochastic control problems, for which we provide a novel probabilistic representation. The solution of such control problem is proved to identify with the solution of a Z-constrained BSDE, with dynamics associated to a non singular underlying forward process. Due to the non-Markovian environment, our main argumentation relies on the use of comparison arguments for path dependent PDEs. Our representation allows in particular to quantify the regularity of the solution to the singular stochastic control problem in terms of the space and time initial data. Our framework also extends to the consideration of degenerate diffusions, leading to the representation of the solution as the infimum of solutions to Z-constrained BSDEs. As an application, we study the utility maximization problem with transaction costs for non-Markovian dynamics.
Introduction
The study of singular stochastic problems initiated by Chernoff [START_REF] Chernoff | Optimal stochastic control[END_REF] and Bather and Chernoff [START_REF] Bather | Sequential decisions in the control of a spaceship[END_REF][START_REF] Bather | Sequential decisions in the control of a space-ship (finite fuel)[END_REF] gave rise to a large literature, mostly motivated by its large scope of applications in economics or mathematics. This includes in particular the well-known monotone follower problem, see for instance Karatzas [START_REF] Karatzas | A class of singular stochastic control problems[END_REF], real options decision modeling related to optimal investment issues, see Davis, Dempster, Sethi and Vermes [START_REF] Davis | Optimal capacity expansion under uncertainty[END_REF], Dynkin games, see Karatzas and Wang [START_REF] Karatzas | Connections between bounded-variation control and Dynkin games[END_REF] or Boetius [START_REF] Boetius | Bounded variation singular stochastic control and Dynkin game[END_REF], optimal stopping problems, see Karatzas and Shreve [START_REF] Karatzas | Connections between optimal stopping and singular stochastic control i. Monotone follower problems[END_REF][START_REF] Karatzas | Connections between optimal stopping and singular stochastic control ii. Reflected follower problems[END_REF], Boetius and Kohlmann [START_REF] Boetius | Connections between optimal stopping and singular stochastic control[END_REF], or Benth and Reikvam [START_REF] Benth | A connection between singular stochastic control and optimal stopping[END_REF], as well as optimal switching problems, see Guo and Tomecek [START_REF] Guo | Connections between singular control and optimal switching[END_REF]. For all these questions, the problem of interest is naturally modeled under the form of a singular control of finite variation problem. In this abundant literature, it is quite striking that very few studies take into consideration this type of questions in a non-Markovian framework. One of the purpose of his paper is to try to fill this gap 1 .
In a Markovian environment, the solution to nice singular stochastic control problems characterizes typically as the unique weak solution to a variational inequality, where the linear part of the dynamics is combined with a constraint on the gradient of the solution. For example, when modeling the optimization of dividend flow for a firm, as initiated in continuous time by Jeanblanc-Picqué and Shiryaev [START_REF] Jeanblanc-Picqué | Optimization of the flow of dividends[END_REF], the optimal singular actions identify to paying dividends as soon as the underlying wealth process hits a free boundary, where the gradient of the value function reaches its upper-bound value 1. This example naturally suggests a connection between singular stochastic control problems and stochastic processes with gradient constraints, and more precisely backward stochastic differential equations (BSDEs) with constraints on the gain process, as introduced by Cvitanić, Karatzas and Soner [START_REF] Cvitanić | Backward stochastic differential equations with constraints on the gains-process[END_REF]. The main initial motivation for the introduction of such type of equation was the super-hedging of claims with portfolio constraint. We establish in this paper that the solution to such BSDEs provides a nice probabilistic representation for solution to singular stochastic control problems.
More precisely, the class of non-Markovian singular stochastic control problems of interest is of the form
v_sing : (t, x) ↦ sup_{K ∈ U^t_sing} E^{P^t_0}[ U( x ⊗_t X^{t,x,K} ) ], 0 ≤ t ≤ T,
where U^t_sing denotes the set of multidimensional càdlàg, non-decreasing, F^t-adapted processes starting from 0 and in L^p, for p ≥ 1. The controlled underlying process X has the following non-Markovian singular dynamics
X^{t,x,K} = x(t) + ∫_t^· µ^{t,x}_s(X^{t,x,K}) ds + ∫_t^· f_s dK_s + ∫_t^· σ^{t,x}_s(X^{t,x,K}) dB^t_s,
where µ and σ are functional maps satisfying usual conditions, as detailed in Assumption 2.1 below. By a density argument, it is worth noticing that we may also reduce to taking the supremum over the subset of absolutely continuous controls. We can also consider a weak version of this control problem.
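To fix ideas, these controlled dynamics can be simulated on a time grid with a straightforward Euler scheme. The sketch below is only illustrative: the coefficient functions, the direction matrix f and the control increments dK are user-supplied placeholders, and path dependence is handled by passing the discretized past of the trajectory.

import numpy as np

def euler_singular_sde(x0, t_grid, mu, sigma, f, dK, rng):
    """Euler scheme for dX_s = mu_s(X) ds + f_s dK_s + sigma_s(X) dB_s in dimension d.

    mu(s, past) -> R^d, sigma(s, past) -> R^{d x d}, f(s) -> R^{d x d} and
    dK : array of shape (n_steps, d) of non-negative control increments.
    The functionals receive the discretized past path (shape (i+1, d)) so that
    path-dependent coefficients are allowed.
    """
    d = len(x0)
    n = len(t_grid) - 1
    X = np.empty((n + 1, d))
    X[0] = x0
    for i in range(n):
        dt = t_grid[i + 1] - t_grid[i]
        dB = rng.normal(scale=np.sqrt(dt), size=d)
        past = X[: i + 1]
        X[i + 1] = (X[i] + mu(t_grid[i], past) * dt
                    + f(t_grid[i]) @ dK[i]
                    + sigma(t_grid[i], past) @ dB)
    return X

# Toy example: d = 1, geometric-type coefficients and the absolutely continuous control K_s = 0.1 s.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 101)
path = euler_singular_sde(np.array([1.0]), t,
                          mu=lambda s, p: 0.05 * p[-1],
                          sigma=lambda s, p: 0.2 * np.eye(1) * p[-1],
                          f=lambda s: np.eye(1),
                          dK=np.full((100, 1), 0.1 * 0.01),
                          rng=rng)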
After a well-suited Girsanov-type probability transform, we rewrite v_sing(t, x) in a form that looks similar to a face-lift type transformation of the terminal reward. This provides the intuition behind the representation of v_sing(t, x) as the solution to a BSDE with a Z-constraint. The convex constraint imposed on the integrand process Z is induced by the directions f and represented via the convex set
K_t := { q ∈ R^d : (f_t^⊤ q) · e_i ≤ 0 for all i ∈ {1, ..., d} }.
For any time t, we verify that v_sing(t, x) identifies with Y^{t,x}, where (Y^{t,x}, Z^{t,x}) is the minimal solution to the constrained BSDE
Y^{t,x}_· ≥ U^{t,x}(X^{t,x}) - ∫_·^T Z^{t,x}_s · dB^t_s,   ((σ^{t,x}_s)^⊤)^{-1}(X^{t,x}) Z^{t,x}_s ∈ K_s.
The line of proof relies on the observation that penalized versions of both the singular problem and BSDE are solutions of the same path-dependent partial differential equation (PPDE for short), for which we are able to provide a comparison theorem. As far as we know, it is the first time that the newly introduced theory of viscosity solutions of path-dependent PDEs (see the works of Ekren, Keller, Ren, Touzi and Zhang [START_REF] Ekren | On viscosity solutions of path dependent PDEs[END_REF][START_REF] Ekren | Viscosity solutions of fully nonlinear parabolic path dependent PDEs: part i[END_REF][START_REF] Ekren | Viscosity solutions of fully nonlinear parabolic path dependent PDEs: part ii[END_REF][START_REF] Ren | Comparison of viscosity solutions of semi-linear path-dependent PDEs[END_REF][START_REF] Ren | An overview of viscosity solutions of path-dependent PDEs[END_REF][START_REF] Ren | Comparison of viscosity solutions of fully nonlinear degenerate parabolic path-dependent PDEs[END_REF]) is used to prove such a representation. Even though the ideas are reminiscent of the approach that one could have used in the Markovian case (see for instance Peng and Xu [START_REF] Peng | Constrained BSDE and viscosity solutions of variation inequalities[END_REF] for probabilistic interpretation of classical variational inequalities through BSDEs with constraints), we believe that using it successfully in a non-Markovian setting will open the door to many new potential applications of the PPDE theory.
Our main motivation for investigating such singular stochastic control problem, was the problem of utility maximization faced by an investor in a market presenting both transaction costs and non-Markovian dynamics. From the point of view of applications, having non-Markovian dynamics can be seen as a very desirable effect, as this case encompasses stochastic volatility models for instance.
However, in such a framework our previous representation does not apply directly, since it requires nondegeneracy of the diffusion matrix of X, and due to transaction costs, the dynamics of wealth needs to be viewed as a bi-dimensional stochastic process driven by a one dimensional noise. We therefore extend our representation in order to include degenerate volatility coefficients. Our line of proof relies on compactness properties together with convex order ordering arguments as in Pagès [START_REF] Pagès | Convex order for path-dependent derivatives: a dynamic programming approach[END_REF]. In such degenerate context, the solution to the singular control problem identifies with the infimum of a family of constrained BSDEs.
As a by-product, the probabilistic representation of v sing in terms of a BSDE solution allows to derive insightful properties on the singular stochastic control problem. First, it automatically provides a dynamic programming principle for such problem. Second, this representation allows us to quantify the regularity of v sing in terms of the initial data points. We observe that v sing is Lipschitz in space as well as 1/2-Holder in time. Obtaining such results for singular control problems is in general a very hard task (see the discussion in [START_REF] Bouchard | A general Doob-Meyer-Mertens decomposition for g-supermartingale systems[END_REF]Section 4.2] for instance), and our approach could be one potential and promising solution.
The paper is organized as follows: Section 2 presents the class of singular control problems of interest and derives alternative mathematical representations. Section 3 presents the corresponding constrained BSDE representation. The connection is derived in Section 4 via path-dependent PDE arguments. The consideration of degenerate volatility, together with the transaction costs example, is discussed in Section 5. Finally, Section 6 provides applications of this representation in terms of dynamic programming and regularity of the solution.
Notations: For any d ∈ N\{0}, and for every vector x ∈ R d , we will denote its entries by x i , 1 ≤ i ≤ d. For any p ≥ l and (x l , x l+1 , . . . , x p ) ∈ (R d ) p-l+1 , we will also sometimes use the notation x l:p := (x l , x l+1 , . . . , x p ).
2 The singular control problem
Preliminaries
We fix throughout the paper a time horizon T > 0. For any (t, x) ∈ [0, T ] × R d , we denote by Λ t,x the space of continuous functions x on [0, T ], satisfying x t = x, B t,x the corresponding canonical process and F t,x,o := (F t,x,o s ) t≤s≤T the (raw) natural filtration of B t,x . It is classical result that the σ-algebra F t,x,o T coincides with the Borel σ-algebra on Λ t,x , for the topology of uniform convergence. We will simplify notations when x = 0 by setting Ω t := Λ t,0 , Ω := Ω 0 , B t := B t,0 , B := B 0 , F t,o := F t,0,o and F o := F 0,o . Besides, we will denote generically by C t the space of continuous functions on [t, T ], without any reference to their values at time t. Moreover, P t
x will denote the Wiener measure on (Λ t,x , F t,x,o T ), that is the unique measure on this space which makes the canonical process B t,x a Brownian motion on [t, T ], starting from x at time t. We will often make use of the completed natural filtration of F t,x,o under the measure P t
x , which we denote F t,x := (F t,x s ) t≤s≤T . Again we simplify notations by setting F t := F t,0 and F := F 0 , and we emphasize that all these filtrations satisfy the usual assumptions of completeness and right-continuity. For any t ∈ [0, T ], any s ∈ [t, T ] and any x ∈ C t , we will abuse notations and denote x ∞,s := sup For any (t, s) ∈ [0, T ] × [t, T ] and any x ∈ C t , we define x s ∈ C s by
x s (r) := x(r), r ∈ [s, T ].
We also define the following concatenation operation on continuous paths. For any
0 ≤ t < t ′ ≤ s ≤ T , for any (x, x ′ ) ∈ R d × R d and any (x, x ′ ) ∈ Λ t,x × Λ t ′ ,x ′ , we let x ⊗ s x ′ ∈ Λ t,
x be defined as
(x ⊗ s x ′ )(r) := x(r)1 t≤r≤s + x ′ (r) + x(s) -x ′ (s) 1 s<r≤T .
Let us consider some
(t, x, s, x) ∈ [0, T ] × R d × [t, T ] × C t .
We will also denote, for simplicity, by x ⊗ s x the concatenation between the constant path equal to x on [0, T ] and x. That being said, for any map g : [0, T ] × C 0 and for any (t, x) ∈ [0, T ] × C 0 , we will denote by g t,x the map from [t, T ] × C t defined by
g t,x (s, x ′ ) := g(s, x ⊗ t x ′ ).
Furthermore, we also use the following (pseudo)distance, defined for any (t, t ′ , s, s
′ ) ∈ [0, T ] 4 , any (x, x ′ ) ∈ R d × R d and any (x, x ′ ) ∈ Λ s,x × Λ s ′ ,x ′ by d ∞ (t, x), (t ′ , x ′ ) := |t ′ -t| + sup 0≤r≤T (x ⊗ s x)(r ∧ t) -(x ′ ⊗ s ′ x ′ )(r ∧ t ′ ) .
A first version of the control problem
The first set of control processes that we will consider will be typical of singular stochastic control.
More precisely, we define We next consider the following maps
U^t_sing := (K_s)_{t≤s≤T}, which are càdlàg, R^d-valued, F^t-adapted, non-decreasing, starting from 0 and in L^p, for p ≥ 1.
µ : [0, T ] × C 0 -→ R d and σ : [0, T ] × C 0 -→ S d ,
where S d is the set of d × d matrices (which we endow with the operator norm associated to • , which we still denote • for simplicity) as well as a bounded map f : [0, T ] -→ S d .
The following assumption will be in force throughout the paper Assumption 2.1. (i) The maps µ and σ are progressively measurable, in the sense that for any (x, x ′ ) ∈ C 0 × C 0 and any t ∈ [0, T ], we have for ϕ = µ, σ
x(s) = x ′ (s), for all s ∈ [0, t] ⇒ ϕ(s, x) = ϕ(s, x ′ ), for all s ∈ [0, t].
(ii) µ and σ have linear growth in x, uniformly in t, that is there exists a constant C > 0 such that for every
(t, x) ∈ [0, T ] × C 0 µ t (x) + σ t (x) ≤ C 1 + x ∞,t .
(iii) µ and σ are uniformly Lipschitz continuous in x, that is there exists a constant C > 0 such that for any
(t, x, x ′ ) ∈ [0, T ] × C 0 × C 0 we have µ t (x) -µ t (x ′ ) + σ t (x) -σ t (x ′ ) ≤ C x -x ′ ∞,t .
(iv) For any (t, x) ∈ [0, T ] × C, σ t (x) is an invertible matrix and the matrix σ -1 t (x)f is uniformly bounded in (t, x).
For any (t, x) ∈ [0, T ] × C 0 and K ∈ U t sing , we denote respectively by X t,x and X t,x,K the unique strong solutions on (Ω t , F t,o T , P t 0 ) of the following SDEs (existence and uniqueness under Assumption 2.1 are classical results which can be found for instance in [START_REF] Jacod | Calcul stochastique et problèmes de martingales[END_REF], see Theorems 14.18 and 14.21)
X t,x = x(t) + • t µ t,x s X t,x ds + • t σ t,x s X t,x dB t s , P t 0 -a.s., (2.1)
X t,x,K = x(t) + • t µ t,x s X t,x,K ds + • t f s dK s + • t σ t,x s X t,
; K, K ′ ) ∈ [0, T ] × [t, T ] × Λ 2 × (U t sing ) 2 E P t 0 sup t≤s≤t ′ X t,x,K s -x(t) p ≤ C p (t ′ -t) 1 2 1 + x p ∞,t + 1 + (t ′ -t) 1 2 E P t 0 K t ′ -K t p , (2.2)
E P t 0 sup t≤s≤T X t,x,K s p ≤ C p 1 + x p ∞,t + E P t 0 K T -K t p , (2.3)
E P t 0 sup t≤s≤T X t,x,K s -X t,x ′ ,K ′ s p ≤ C p x -x ′ p ∞,t + E P t 0 T t d K s -K ′ s p . (2.4)
The stochastic control problem we are interested in is then
v sing (t, x) := sup K∈U t sing E P t 0 U x ⊗ t X t,x,K , (2.5)
where the reward function U : C 0 -→ R is assumed to satisfy Assumption 2.3. For any (x, x ′ ) ∈ C 0 × C 0 , we have for some C > 0 and some r ≥ 0
U (x) -U (x ′ ) ≤ C x -x ′ ∞,T 1 + x r ∞,T + x ′ r ∞,T .
Notice that it is clear from (2.3) that under Assumption 2.3, we have
v sing (t, x) < +∞, for any (t, x) ∈ [0, T ] × C 0 .
A first simplification
Let us consider for any t ∈ [0, T ] the following subset U t of U t sing consisting of controls which are absolutely continuous with respect to the Lebesgue measure on [t, T ]
U t := K ∈ U t sing , K s = s t
ν r dr, P t 0 -a.s., with (ν s ) t≤s≤T , F t -predictable and (R + ) d -valued .
For any K ∈ U t , it will be simpler for us to consider the corresponding process ν, so that we define Then, for any (t, x) ∈ [0, T ] × C 0 and ν ∈ U t , we denote by X t,x,ν the unique strong solution on (Ω t , F t,o T , P t 0 ) of the following SDE
U t := (ν s ) t≤s≤T , (R + ) d -valued, F t -
X t,x,ν = x(t) + • t µ t,x s X t,x,ν ds + • t f s ν s ds + • t σ t,x s X t,x,ν dB t s , P t 0 -a.s. (2.6)
We can then define v(t, x) := sup
ν∈U t E P t 0 U x ⊗ t X t,x,ν . (2.7)
Our first result is that the maximization under U t sing and U t actually lead to the same value function. Notice that for such a result to hold, the continuity assumptions that we made on the functions intervening in our problem are crucial. Indeed, as shown by Heinricher and Mizel [START_REF] Heinricher | A stochastic control problem with different value functions for singular and absolutely continuous control[END_REF], such an approximation result does not always hold. Proposition 2.4. Under Assumptions 2.1 and 2.3, we have for any
(t, x) ∈ [0, T ] × C 0 v sing (t, x) = v(t, x).
Proof. First of all, it is a classical result that U t is dense in U t sing in the sense that for any K ∈ U t sing , there is some sequence (ν n ) n≥0 ⊂ U t such that
E P t 0 sup t≤s≤T K s - s t ν n r dr 2 -→ n→+∞ 0. (2.8)
It is also clear from Assumption 2.3, (2.3) and (2.4) with x = x ′ that for any (t, x; K, K ′ ) ∈ [0, T ] × C 0 × (U t sing ) 2 , we have for some constant C 0 > 0 which may vary from line to line
E P t 0 U x ⊗ t X t,x,K -E P t 0 U x ⊗ t X t,x,K ′ ≤ C 0 E P t 0 x ⊗ t X t,x,K -X t,x,K ′ 2 ∞,T 1 2 1 + E P t 0 x ⊗ t X t,x,K 2r ∞,T 1 2 + E P t 0 x ⊗ t X t,x,K ′ 2r ∞,T 1 2 ≤ C 0 E P t 0 T t d K s -K ′ s 2 1 2 1 + x r ∞,t + E P t 0 T t dK s 2r 1 2 + E P t 0 T t dK ′ s 2r 1 2
.
We thus deduce immediately that the map K -→ E P t 0 U x ⊗ t X t,x,K is continuous with respect to the convergence in (2.8). Hence the result. ✷
In the rest of the paper, we will therefore focus on the value function v instead of v sing .
Weak formulation for v
For any (t, x) ∈ [0, T ] × C 0 and ν ∈ U t , we now define the following P t 0 -equivalent measure
dP t,x,ν dP t 0 = E • t (σ t,x s ) -1 X t,x f s ν s • dB t s .
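Numerically, this density is the Doléans-Dade stochastic exponential of the integrand (σ^{t,x}_s)^{-1}(X^{t,x}) f_s ν_s; on a discrete time grid it can be sketched as follows (our illustration, with theta standing for that integrand evaluated along a simulated path):

import numpy as np

def stochastic_exponential(theta, dB, dt):
    """Discrete-time Doleans-Dade exponential E(int theta . dB)_T
       = exp( sum_i theta_i . dB_i - 0.5 * sum_i |theta_i|^2 dt_i ).

    theta : array (n_steps, d) of integrand values on the grid,
    dB    : array (n_steps, d) of Brownian increments,
    dt    : array (n_steps,) of time steps.
    """
    theta = np.asarray(theta, float)
    dB = np.asarray(dB, float)
    stoch_int = np.sum(theta * dB)
    quad_var = 0.5 * np.sum(np.sum(theta ** 2, axis=1) * np.asarray(dt, float))
    return np.exp(stoch_int - quad_var)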
The weak formulation of the control problem (2.7) is defined as v weak (t, x) := sup ν∈U t E P t,x,ν U t,x X t,x , for any (t, x) ∈ [0, T ] × C 0 .
(2.9)
The following result gives the equivalence between the two formulations of the control problem. It is a classical result and we refer the reader to Proposition 4.1 in [START_REF] Bouchard | Regularity of BSDEs with a convex constraint on the gains-process[END_REF] or Theorem 4.5 of [START_REF] Karoui | Capacities, measurable selection and dynamic programming part II: application in stochastic control problems[END_REF].
Proposition 2.5. Under Assumptions 2.1 and 2.3, we have for any
(t, x) ∈ [0, T ] × C 0 v weak (t, x) = v(t, x).
A canonical weak formulation
In this section we introduce yet another interpretation of the value function v, which will be particularly well suited when we will use the theory of viscosity solutions for path-dependent PDEs.
For any (t, x) ∈ [0, T ] × C 0 , let us define the following probability measure on
(Λ t,xt , F t,xt,o T ) P t,x 0 := P t 0 • X t,x -1 .
Since σ is assumed to be invertible, it is a classical result that we have
F t,o = F X t,x
, and
F t = F X t,x P t 0 , (2.10)
where F X t,x denotes the raw natural filtration of X t,x and F X t,x P t 0 its completion under P t 0 . As an immediate consequence, all these filtrations satisfy the Blumenthal 0 -1 law as well as the predictable martingale representation property. This implies that the process below is a Brownian motion on (Λ Indeed, by definition of P t,x 0 , we know that the law of B t,xt under P t,x 0 = the law of X t,x under P t 0 .
(2.11)
Hence, since we do have
B t = • t σ t,x s -1 X t,x dX t,x s -µ t,x s X t,
:= E • t σ t,x s -1 B t,xt f s ν s W t,x dW t,x s T
, where it is understood that we interpret ν as a (Borel) map from C t to R d . We have the following simple result Lemma 2.6. We have for all t ∈ [0, T ]
E P t,x,ν U t,x X t,x = E P t,x ν U t,x B t,x , for every ν ∈ U t .
Proof. Fix some t ∈ [0, T ] and some ν ∈ U t . We have, using successively the definition of P t,x ν , (2.11), (2.12) and the definition of P t,x,ν
E P t,x ν U t,x B t,xt = E P t,x 0 E • t σ t,x s -1 B t,xt f s ν s W t,x dW t,x s T U t,x B t,xt = E P t 0 E • t σ t,x s -1 X t,x f s ν s B t dB t s T U t,x X t,x = E P t,x,ν U t,x X t,x .
✷ As a consequence of Lemma 2.6, we deduce immediately that for any
(t, x) ∈ [0, T ] × C 0 v(t, x) = sup ν∈U t E P t,x ν U t,x B t,xt .
(2.13)
Approximating the value function
To obtain our main probabilistic representation result for the value function v (and thus for v sing ), we will use, as mentioned before, the theory of viscosity solutions of path-dependent PDEs. However, in order to do so we will have to make a small detour, and first approximate v.
For any integer n > 0 and any t ∈ [0, T ], we let U t,n denote the subset of U t consisting of processes ν such that 0 ≤ ν i s ≤ n, for i = 1, . . . , d, for Lebesgue almost every s ∈ [t, T ]. We then define the approximating value function for all (t, x)
∈ [0, T ] × C 0 v n (t, x) := sup ν∈U t,n E P t 0 U t,x X t,x,ν = sup ν∈U t,n E P t,x ν U t,x (B t,x ) , (2.14)
where the second equality can be proved exactly as for v in Lemma 2.6.
We have the following simple result.
Lemma 2.7. Under Assumptions 2.1 and 2.3, for every (t, x) ∈ [0, T ] × C 0 , we have that the sequence (v n (t, x)) n≥1 is non-decreasing and
v n (t, x) -→ n→+∞ v(t, x).
Proof. It is clear that the sequence is non-decreasing, as the sequence of sets U .,n is. Moreover, since the elements of U t have by definition moments of any order, it is clear that ∪ n≥1 U t,n is dense in U t , in the sense that for any ν ∈ U t , there exists a sequence (ν m ) m≥1 such that for any m ≥ 1, ν m ∈ U t,m and
E P t 0 T t ν r -ν m r 2 dr -→ m→+∞ 0. (2.15)
By the same arguments as in the proof of Proposition 2.4, we deduce that
v(t, x) = sup ν∈∪ n≥1 U t,n E P t 0 U t,x X t,x,ν = lim n→+∞ v n (t, x),
since the sets U t,n are non-decreasing with respect to n. ✷
3 The corresponding constrained BSDEs
Spaces and norms
We now define the following family of convex sets, for any t ∈ [0, T ]:
K t := q ∈ R d s.t. (f ⊤ t q) • e i ≤ 0 for all i ∈ {1, . . . , d}
where (e i ) 1≤i≤d denotes the usual canonical basis of R d , and where for any M ∈ S d , M ⊤ denotes its usual transposition.
Remark 3.1. This form of constraint that one wishes intuitively to impose on the gradient of the value function v is quite natural according to representation (2.13). Recall that f describes the direction in which the underlying forward process is pushed in case of singular action.
We next introduce for any p ≥ 1 the following spaces
$$\mathbb S^p_t:=\Big\{(Y_s)_{t\le s\le T},\ \mathbb R\text{-valued},\ \mathbb F^t\text{-progressively measurable},\ \|Y\|_{\mathbb S^p_t}<+\infty\Big\},\quad\text{where }\|Y\|^p_{\mathbb S^p_t}:=\mathbb E^{\mathbb P^t_0}\Big[\sup_{t\le s\le T}|Y_s|^p\Big],$$
$$\mathbb H^p_t:=\Big\{(Z_s)_{t\le s\le T},\ \mathbb R^d\text{-valued},\ \mathbb F^t\text{-predictable},\ \|Z\|_{\mathbb H^p_t}<+\infty\Big\},\quad\text{where }\|Z\|^p_{\mathbb H^p_t}:=\mathbb E^{\mathbb P^t_0}\bigg[\Big(\int_t^T\|Z_s\|^2ds\Big)^{\frac p2}\bigg],$$
as well as their counterparts $\mathbb S^p_{t,x}$ and $\mathbb H^p_{t,x}$, defined analogously on $(\Lambda^{t,x_t},\mathcal F^{t,x_t}_T,\mathbb P^{t,x}_0)$ with the filtration $\mathbb F^{t,x_t}$.
Strong formulation for the BSDE
For any (t, x) ∈ [0, T ] × C 0 , we would like to solve the K-constrained BSDE with generator 0 and terminal condition U t,x X t,x , that is to say we want to find a pair
$(Y^{t,x},Z^{t,x})\in\mathbb S^2_t\times\mathbb H^2_t$ such that
$$Y^{t,x}_\cdot\ \ge\ U^{t,x}(X^{t,x})-\int_\cdot^TZ^{t,x}_s\cdot dB^t_s,\quad\mathbb P^t_0\text{-a.s.},\qquad(3.1)$$
$$\big((\sigma^{t,x}_s)^\top\big)^{-1}(X^{t,x})\,Z^{t,x}_s\in K_s,\quad ds\otimes d\mathbb P^t_0\text{-a.e.},\qquad(3.2)$$
and such that if there is another pair $(\tilde Y^{t,x},\tilde Z^{t,x})\in\mathbb S^2_t\times\mathbb H^2_t$ satisfying (3.1) and (3.2), then we have $Y^{t,x}\le\tilde Y^{t,x}$, $\mathbb P^t_0$-a.s. When it exists, the pair $(Y^{t,x},Z^{t,x})$ is called the minimal solution of the $K$-constrained BSDE.
Such constrained BSDEs have been studied in the literature, first in [START_REF] Cvitanić | Backward stochastic differential equations with constraints on the gains-process[END_REF] and then by Peng in [START_REF] Peng | Monotonic limit theorem of BSDE and nonlinear decomposition theorem of Doob-Meyer's type[END_REF]. However, all these existence results rely on the assumption that there is at least one solution (which does not have to be the minimal one) to the problem. This forces us to adopt the following assumption,
Assumption 3.2. For every $(t,x)\in[0,T]\times\mathcal C_0$, there exists a pair $(Y^{t,x},Z^{t,x})\in\mathbb S^2_t\times\mathbb H^2_t$ such that
$$Y^{t,x}_\cdot\ge U^{t,x}(X^{t,x})-\int_\cdot^TZ^{t,x}_s\cdot dB^t_s,\ \mathbb P^t_0\text{-a.s.},\qquad\big((\sigma^{t,x}_s)^\top\big)^{-1}(X^{t,x})\,Z^{t,x}_s\in K_s,\ ds\otimes d\mathbb P^t_0\text{-a.e.}$$
Remark 3.3. This assumption simply indicates that it is indeed possible to satisfy the Z-constraint as well as solve the BSDE. Such constrained BSDEs have been first introduced in order to find the super-hedging price of a claim under portfolio constraints, and such condition in this framework simply indicates that one can find an admissible portfolio strategy that indeed super-hedges the claim of interest.
We then have immediately from [START_REF] Cvitanić | Backward stochastic differential equations with constraints on the gains-process[END_REF] the following.
Proposition 3.4. Let Assumption 3.2, Assumption 2.1 and Assumption 2.3 hold. Then, the minimal solution of the K-constrained BSDE (3.1) exists.
Proof. Since it is clear by Assumption 2.3 and (2.3) that $\mathbb E^{\mathbb P^t_0}\big[|U^{t,x}(X^{t,x})|^2\big]<+\infty$, the result is an immediate consequence of the main result in [START_REF] Cvitanić | Backward stochastic differential equations with constraints on the gains-process[END_REF]. ✷
Since Assumption 3.2 is rather implicit, let us discuss some sufficient conditions under which it holds. We refer the reader to Assumption 7.1 in [START_REF] Cvitanić | Backward stochastic differential equations with constraints on the gains-process[END_REF] for the proof of their sufficiency.
Lemma 3.5. Fix some $(t,x)\in[0,T]\times\mathcal C_0$. If there exist a constant $C\in\mathbb R$ and a process $\varphi\in\mathbb H^2_t$ such that $\big((\sigma^{t,x}_s)^\top\big)^{-1}(X^{t,x})\varphi_s\in K_s$, $ds\otimes d\mathbb P^t_0$-a.e., and such that
$$U^{t,x}(X^{t,x})\le C+\int_t^T\varphi_s\cdot dB^t_s,\quad\mathbb P^t_0\text{-a.s.},\qquad(3.3)$$
then Assumption 3.2 is satisfied. Moreover, (3.3) holds if for instance U is bounded.
Proof. We only have to prove that (3.3) holds when U is bounded. It suffices to notice that in this case we can take C to be a bound for U and ϕ = 0 since 0 ∈ K s for every s ∈ [0, T ]. ✷
Weak formulation for the BSDE
It will also be important for us to look at the weak version of the constrained BSDE, where for any
$(t,x)\in[0,T]\times\mathcal C_0$, we now look for a pair $(\mathcal Y^{t,x},\mathcal Z^{t,x})\in\mathbb S^2_{t,x}\times\mathbb H^2_{t,x}$ such that
$$\mathcal Y^{t,x}_\cdot\ge U^{t,x}(B^{t,x_t})-\int_\cdot^T\mathcal Z^{t,x}_s\cdot dW^{t,x}_s,\quad\mathbb P^{t,x}_0\text{-a.s.},\qquad(3.4)$$
$$\big((\sigma^{t,x}_s)^\top\big)^{-1}(B^{t,x_t})\,\mathcal Z^{t,x}_s\in K_s,\quad ds\otimes d\mathbb P^{t,x}_0\text{-a.e.},\qquad(3.5)$$
and such that if there is another pair $(\tilde{\mathcal Y}^{t,x},\tilde{\mathcal Z}^{t,x})\in\mathbb S^2_{t,x}\times\mathbb H^2_{t,x}$ satisfying (3.4) and (3.5), then we have $\mathcal Y^{t,x}\le\tilde{\mathcal Y}^{t,x}$, $\mathbb P^{t,x}_0$-a.s. Again, we need an assumption in order to ensure the existence of the minimal solution.
Assumption 3.6. For every $(t,x)\in[0,T]\times\mathcal C_0$, there exists a pair $(\mathcal Y^{t,x},\mathcal Z^{t,x})\in\mathbb S^2_{t,x}\times\mathbb H^2_{t,x}$ such that
$$\mathcal Y^{t,x}_\cdot\ge U^{t,x}(B^{t,x_t})-\int_\cdot^T\mathcal Z^{t,x}_s\cdot dW^{t,x}_s,\ \mathbb P^{t,x}_0\text{-a.s.},\qquad\big((\sigma^{t,x}_s)^\top\big)^{-1}(B^{t,x_t})\,\mathcal Z^{t,x}_s\in K_s,\ ds\otimes d\mathbb P^{t,x}_0\text{-a.e.}$$
We then deduce immediately the following.
Proposition 3.7. Let Assumption 3.6, Assumption 2.1 and Assumption 2.3 hold. Then, the minimal solution $(\mathcal Y^{t,x},\mathcal Z^{t,x})$ of the $K$-constrained BSDE (3.4) exists. Moreover, if $Y^{t,x}$ and $\mathcal Y^{t,x}$ both exist, we have that the law of $Y^{t,x}$ under $\mathbb P^t_0$ = the law of $\mathcal Y^{t,x}$ under $\mathbb P^{t,x}_0$.
Remark 3.8. Of course the sufficient conditions of Lemma 3.5 can readily be adapted in this context. In particular, both Assumptions 3.2 and 3.6 hold if $U$ is bounded.
As an immediate consequence of the Blumenthal 0-1 law and Proposition 3.7, we have the following equality for every $(t,x)\in[0,T]\times\mathcal C_0$:
$$Y^{t,x}_t=\mathcal Y^{t,x}_t.$$
The aim of this paper is to show that for all $(t,x)\in[0,T]\times\mathcal C_0$, we have
$$v(t,x)=Y^{t,x}_t=\mathcal Y^{t,x}_t.$$
The penalized BSDEs
Exactly as we have approximated the value function v by v n defined in (2.14), it will be useful for us to consider approximations of the K-constrained BSDEs introduced in the previous section. It is actually a very well-known problem, which already appeared in [START_REF] Cvitanić | Backward stochastic differential equations with constraints on the gains-process[END_REF], and which can be solved by considering the so-called penalized BSDEs associated to the K-constrained BSDE. Before doing so, we need to introduce, for any (t, x) ∈ [0, T ] × C 0 , the map
$$\rho:q\in\mathbb R^d\longmapsto q^+\cdot\mathbf 1_d,\qquad(3.6)$$
where for each $q:=(q_1,\dots,q_d)^\top\in\mathbb R^d$ we have used the notation $q^+:=(q^+_1,\dots,q^+_d)^\top$. Under Assumptions 2.1 and 2.3, we can then define for any
$(t,x,n)\in[0,T]\times\mathcal C_0\times\mathbb N^*$, $(Y^{t,x,n},Z^{t,x,n})\in\mathbb S^2_t\times\mathbb H^2_t$ as the unique solution of the following BSDE
$$Y^{t,x,n}_\cdot=U^{t,x}(X^{t,x})+\int_\cdot^Tn\rho\Big(f^\top_s\big((\sigma^{t,x}_s)^\top\big)^{-1}(X^{t,x})Z^{t,x,n}_s\Big)ds-\int_\cdot^TZ^{t,x,n}_s\cdot dB^t_s,\quad\mathbb P^t_0\text{-a.s.}\qquad(3.7)$$
Notice that existence and uniqueness hold using for instance the results in [START_REF] Karoui | Backward stochastic differential equations in finance[END_REF], since under Assumptions 2.1 and 2.3, the terminal condition is obviously square-integrable, the generator z -→ ρ(f ⊤ s ((σ t,x ) ⊤ ) -1 (X t,x )z) is null at 0 and uniformly Lipschitz continuous in z (we remind the reader that σ -1 f is bounded, so its transpose is bounded as well).
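To convey the structure of the penalization in (3.7), here is a minimal sketch of the standard explicit backward scheme for a penalized BSDE in a one-dimensional Markovian toy model, with conditional expectations approximated by least-squares regression on a polynomial basis. The coefficients, reward and basis are hypothetical, and the sketch does not reproduce the path-dependent setting of this section; it only illustrates the generator $z\mapsto n\rho(f z/\sigma)$.
```python
import numpy as np

def rho(q):
    """Penalization map rho(q): sum of positive parts (here d = 1, so just q^+)."""
    return np.maximum(q, 0.0)

def penalized_bsde(n_pen, x0=0.0, T=1.0, n_steps=50, n_paths=20_000, seed=1,
                   sigma=1.0, f=1.0, U=lambda x: -np.abs(x)):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # forward Euler paths of dX = sigma dB (hypothetical driftless dynamics)
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    X = np.column_stack([np.full(n_paths, x0), x0 + sigma * np.cumsum(dB, axis=1)])
    Y = U(X[:, -1])
    for k in range(n_steps - 1, -1, -1):
        basis = np.column_stack([np.ones(n_paths), X[:, k], X[:, k] ** 2])
        # Z_k ~ E_k[Y_{k+1} dB_k] / dt and E_k[Y_{k+1}] via regression
        Z = basis @ np.linalg.lstsq(basis, Y * dB[:, k] / dt, rcond=None)[0]
        cond = basis @ np.linalg.lstsq(basis, Y, rcond=None)[0]
        Y = cond + n_pen * rho(f * Z / sigma) * dt
    return Y.mean()   # approximation of the penalized value at time 0
```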
It is then a classical result (see [START_REF] Cvitanić | Backward stochastic differential equations with constraints on the gains-process[END_REF] or [START_REF] Peng | Monotonic limit theorem of BSDE and nonlinear decomposition theorem of Doob-Meyer's type[END_REF]) that under Assumption 3.2, for any $(t,x)\in[0,T]\times\mathcal C_0$,
$$Y^{t,x,n}_s\underset{n\to+\infty}{\uparrow}Y^{t,x}_s,\ \text{for any }s\in[t,T],\ \mathbb P^t_0\text{-a.s., and}\qquad\big\|Y^{t,x}-Y^{t,x,n}\big\|_{\mathbb H^2_t}\underset{n\to+\infty}{\longrightarrow}0.\qquad(3.8)$$
Alternatively, we may consider the penalized BSDEs in weak formulation,
$$\mathcal Y^{t,x,n}_\cdot=U^{t,x}(B^{t,x_t})+\int_\cdot^Tn\rho\Big(f^\top_s\big((\sigma^{t,x}_s)^\top\big)^{-1}(B^{t,x_t})\mathcal Z^{t,x,n}_s\Big)ds-\int_\cdot^T\mathcal Z^{t,x,n}_s\cdot dW^{t,x}_s,\quad\mathbb P^{t,x}_0\text{-a.s.},$$
for which, under Assumption 3.6, the analogue of (3.8) holds with $(\mathcal Y^{t,x},\mathcal Z^{t,x})$ in place of $(Y^{t,x},Z^{t,x})$. (3.9)
The representation formula
The main goal of this section is to prove the following representation
$$v(t,x)=Y^{t,x}_t=\mathcal Y^{t,x}_t,\quad\text{for every }(t,x)\in[0,T]\times\mathcal C_0.$$
In order to prove this result, we will show that both v n (t, x) defined in (2.14) and u n (t, x) := Y t,x,n t are viscosity solutions of a semi-linear path-dependent PDE, for which a comparison result holds. It will then imply that v n (t, x) = u n (t, x), and the desired result will be obtained by passing to the limit when n goes to +∞, see Lemma 2.7 and (3.9).
A crash course on PPDEs
In this section, we follow closely [START_REF] Ren | Comparison of viscosity solutions of semi-linear path-dependent PDEs[END_REF] to introduce all the notions needed for the definition of viscosity solutions of path-dependent PDEs. Let us start with the notions of regularity we will consider.
Definition 4.1. (i) For any (t, x, x) ∈ [0, T ] × C 0 × C t , any s ∈ [t, T ] and any d ≥ 1, we say that a R d -valued process on (Λ t,xt , F t,xt,o T ) is in C 0 ([s, T ] × Λ s,(x⊗t x)s , R d ) when it is continuous with respect to the distance d ∞ , that is, for any ε > 0, for any (r 1 , r 2 , x 1 , x 2 ) ∈ [s, T ] 2 × C s × C s , there exists δ > 0 such that if d ∞ ((r 1 , x 1 ), (r 2 , x 2 )) ≤ δ, then u s, x (r 1 , x 1 ) -u s, x (r 2 , x 2 ) ≤ ε. (ii) For any (t, x, x) ∈ [0, T ] × C 0 × C t and any s ∈ [t, T ], we say that a R-valued process u on (Λ t,xt , F t,xt,o T ) belongs to C 1,2 ([s, T ] × Λ s,(x⊗t x)s ) if u ∈ C 0 ([s, T ] × Λ s,(x⊗t x)s , R) and if there exists (Z, Γ) ∈ C 0 ([t, T ] × Λ t,xt , R d ) × C 0 ([t, T ] × Λ t,xt , R) such that u s, x z -u s ( x) = z s Γ s, x r dr + z s Z s, x r • dB s,(x⊗t x)s r , z ∈ [s, T ], P s,(x⊗t x) 0 -a.s.
We then denote for any x ∈ C t and any s ∈ [t, T ],
L t,x u(s, x) := ∂ t u s ( x) + µ t,x s ( x) • Du(s, x) + 1 2 Tr[σ t,x s (σ t,x s ) ⊤ ( x)D 2 u(s, x)] := Γ s ( x), Du(s, x) := ((σ t,x s ( x)) ⊤ ) -1 Z s ( x).
Let us then denote, for any $(t,x)\in[0,T]\times\mathcal C_0$, by $\mathcal T^{t,x}$ the set of $\mathbb F^{t,x_t,o}$-stopping times taking values in $[t,T]$, by $\mathcal T^{t,x}_+$ the subset of $\mathcal T^{t,x}$ consisting of the stopping times taking values in $(t,T]$, and for any $\mathrm H\in\mathcal T^{t,x}$, by $\mathcal T^{t,x}_{\mathrm H}$ and $\mathcal T^{t,x}_{\mathrm H,+}$ the subsets of $\mathcal T^{t,x}$ consisting of stopping times taking values respectively in $[t,\mathrm H]$ and $(t,\mathrm H]$. Next, we define for any $N\ge1$
$$\mathcal P^{t,x,N}:=\big\{\mathbb P^{t,x}_\nu,\ \nu\in\mathcal U^{t,N}\big\},\qquad\mathcal M^{t,x,N}:=\Big\{\mathbb Q\ \text{s.t.}\ \frac{d\mathbb Q}{d\mathbb P^{t,x}_0}=\mathcal E\Big(\int_t^Tb_s\cdot dW^{t,x}_s\Big),\ b\ \mathbb F^{t,x_t}\text{-predictable s.t. }\|b\|_\infty\le N\Big\}.$$
For any (t, x) ∈ [0, T ]×C 0 , and for any w ∈ C 0 ([0, T ]×Λ 0,x 0 , R), we now define the sets of test functions for w as
A n w(t, x) := ϕ ∈ C 1,2 ([t, T ] × Λ t,xt ), 0 = ϕ -w t,x (t, x t ) > E n t ϕ -w t,x •, B t,xt τ ∧H
for some H ∈ T t,x and for all τ ∈ T t,x H,+ ,
A n w(t, x) := ϕ ∈ C 1,2 ([t, T ] × Λ t,xt ), 0 = ϕ -w t,x (t, x t ) < E n t ϕ -w t,x •, B t,xt τ ∧H
for some H ∈ T t,x and for all τ ∈ T t,x H,+ ,
where, for all $\mathcal F^{t,x_t}_T$-measurable $\xi$ such that the quantities below are finite,
$$\overline{\mathcal E}^n_t[\xi]:=\sup_{\mathbb Q\in\mathcal M^{t,x,\bar n}}\mathbb E^{\mathbb Q}[\xi],\qquad\underline{\mathcal E}^n_t[\xi]:=\inf_{\mathbb Q\in\mathcal M^{t,x,\bar n}}\mathbb E^{\mathbb Q}[\xi],\qquad\text{with }\bar n:=n\sqrt d\max\big(1;\|\sigma^{-1}f\|_\infty\big).$$
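The sublinear expectation defined above can be approximated from below, for a fixed path functional $\xi$, by tilting Brownian paths with bounded Girsanov drifts. The following sketch restricts itself to constant drifts in dimension one (a hypothetical simplification), so it only produces a lower bound of the supremum over the whole class of bounded predictable drifts.
```python
import numpy as np

def sup_expectation(xi, n_bar, T=1.0, n_steps=100, n_paths=20_000, seed=2):
    """Crude lower bound of sup_{|b| <= n_bar} E^{Q_b}[xi(W)] using constant drifts b (d = 1)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    W = np.column_stack([np.zeros(n_paths), np.cumsum(dW, axis=1)])
    values = []
    for b in np.linspace(-n_bar, n_bar, 21):
        # stochastic exponential E(int b dW)_T = exp(b W_T - 0.5 b^2 T)
        density = np.exp(b * W[:, -1] - 0.5 * b ** 2 * T)
        values.append(np.mean(density * xi(W)))
    return max(values)

# example: xi = terminal value of the path; the answer is close to n_bar * T
print(sup_expectation(lambda W: W[:, -1], n_bar=2.0))
```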
Finally, we define for every $(t,x,\varphi)\in[0,T]\times\mathcal C_0\times C^{1,2}([t,T]\times\Lambda^{t,x_t})$ the following PPDE
$$-\mathcal L^{t,x}\varphi(t,x_t)-n\rho\big(f^\top_tD\varphi(t,x_t)\big)=0.\qquad(4.1)$$
Definition 4.2. Fix some x ∈ R d and let u ∈ C 0 ([0, T ] × Λ 0,x , R). We say that (i) u is a viscosity subsolution of PPDE (4.1) if for any
(t, x, ϕ) ∈ [0, T ) × C 0 × A n u(t, x) -L t,x ϕ(t, x t ) -nρ f ⊤ t Dϕ(t, x t ) ≤ 0. (ii) u is a viscosity supersolution of PPDE (4.1) if for any (t, x, ϕ) ∈ [0, T ) × C 0 × A n u(t, x) -L t,x ϕ(t, x t ) -nρ f ⊤ t Dϕ(t, x t ) ≥ 0.
(iii) u is a viscosity solution of PPDE (4.1) if it is both a sub and a supersolution.
We shall end this section with the following result which will be useful in the PPDE derivation of the value function.
Lemma 4.3. For all (t, x) ∈ [0, T ] × C 0 and τ ∈ T t,x + we have
E n t [τ -t] > 0. Proof. Fix (t, x) ∈ [0, T ] × C 0 and τ ∈ T t,x + and denote by (U, V ) ∈ S 2 t,x × H 2 t,
x the unique solution of the following backward stochastic differential equation on [t, τ ]
U s = τ -t - τ s n |V r | dr - τ s V r • dW t,x r , s ∈ [t, τ ], P t,x 0 -a.s.,
where we remind the reader that in our context τ is F t,x,o τ -measurable, and that we can always consider a P t,x 0 -version of U (resp. V ), which we still denote by U (resp. V ) for simplicity, and which is F t,xt,oprogressively measurable (resp. predictable).
Let µ be an arbitrary F t,xt,o -predictable process satisfying |µ| ≤ n, so that for all z ∈ R d we have -n |z| ≤ µ • z. This implies in particular that Q µ ∈ M t,x,n with
dQ µ := E T t µ s • dW t,x s dP t,x 0 .
Hence, by standard a comparison result for BSDEs (see for instance Theorem 2.2 in [START_REF] Karoui | Backward stochastic differential equations in finance[END_REF]) we have
U t ≤ E Q µ [τ -t].
Hence, the arbitrariness of µ implies that
U t ≤ E n t [τ -t]. (4.2)
On the other hand, let ν n be defined such that
ν n • V = -n |V | and observe that |ν n | ≤ n. Then we have Q n ∈ M t,x,n with dQ n := E T t ν n s • dW t,x s dP t,x 0 .
We hence have, using the fact that U t is a constant by the Blumenthal 0 -1 law
U t = τ -t - τ t V s • dW t,x s -ν n s ds = E Q n [τ -t] ≥ E n t [τ -t],
by definition of E n t . The last inequality together with (4.2) gives that
U t = E n t [τ -t].
Since τ > t, P t,x 0 -a.s., the result follows by strict comparison (see again Theorem 2.2 in [START_REF] Karoui | Backward stochastic differential equations in finance[END_REF]). ✷
The viscosity solution properties
We start with the value function v n .
Proposition 4.4. Under Assumptions 2.1 and 2.3, v n is a viscosity solution of PPDE (4.1).
Proof. We proceed along the lines of [17, Proof of Proposition 4.4] and split the proof into three steps.
Fix (t, x) ∈ [0, T ] × C 0 for the remainder of the proof.
Step 1: We show that v n ∈ C 0 ([0, T ] × Λ 0,x 0 ) and satisfies the dynamic programming principle, for any τ ∈ T t and any θ ∈ T t,x
$$v_n(t,x)=\sup_{\nu\in\mathcal U^{t,n}}\mathbb E^{\mathbb P^t_0}\big[v_n(\tau,x\otimes_tX^{t,x,\nu})\big]=\sup_{\nu\in\mathcal U^{t,n}}\mathbb E^{\mathbb P^{t,x}_\nu}\big[v_n(\theta,x\otimes_tB^{t,x_t})\big].\qquad(4.3)$$
The dynamic programming result is actually classical since the controls ν that we consider here take values in a compact subset of R d . We refer the reader to Proposition 2.5 and Theorem 3.3 in [START_REF] Karoui | Capacities, measurable selection and dynamic programming part II: application in stochastic control problems[END_REF],
where we emphasize that their proof of Theorem 3.3 can immediately be extended to the case of µ and σ Lipschitz continuous with linear growth (instead of Lipschitz continuous and bounded), as they only require to have existence of a strong solution to the SDEs considered.
We next show the continuity of v n . For any (t,
t ′ ; x, x ′ ) ∈ [0, T ] × [t, T ] × C 0 × C 0 , we have v n (t, x) -v n (t ′ , x ′ ) ≤ v n (t, x) -v n (t, x ′ ) + v n (t, x ′ ) -v n (t ′ , x ′ ) .
We now estimate separately the two terms in the right-hand side above. We have first using Assumption 2.3, (2.3), (2.4) and the fact that the controls ν ∈ U t,n are bounded by n
√ d v n (t, x) -v n (t, x ′ ) ≤ sup ν∈U t,n E P t 0 U t,x X t,x,ν -U t,x ′ X t,x ′ ,ν ≤ C sup ν∈U t,n E P t 0 x ⊗ t X t,x,ν -X t,x ′ ,ν 2 ∞,T 1 2 1 + E P t 0 x ⊗ t X t,x,ν 2r ∞,T 1 2 + E P t 0 x ′ ⊗ t X t,x ′ ,ν 2r ∞,T 1 2 ≤ C d n 1∨r x -x ′ ∞,t 1 + x r ∞,t + x ′ r ∞,t ,
where the constant C d does not depend on n.
Next, using (4.3) for τ = t ′ , we compute, using the previous calculation and (2.2)
v n (t, x ′ ) -v n (t ′ , x ′ ) ≤ sup ν∈U t,n E P t 0 v n (t ′ , x ′ ⊗ t X t,x ′ ,ν ) -v n (t ′ , x ′ ) ≤ C d n 1∨r sup ν∈U t,n E P t 0 x ′ ⊗ t X t,x ′ ,ν -x ′ 2 ∞,t ′ 1 2 1 + x ′ r ∞,t ′ + E P t 0 x ′ ⊗ t X t,x ′ ,ν 2r
∞,t ′ 1 2 ≤ C d n 1∨r+1+r (t ′ -t) 1 2 1 + x ′ r+1 ∞,t + x ′ r+1 ∞,t ′ .
By definition of d ∞ , we have thus obtained that
v n (t, x) -v n (t ′ , x ′ ) ≤ C d n 1+r+1∨r d ∞ (t, x), (t ′ , x ′ ) 1 + x r+1 ∞,t + x ′ r+1 ∞,t ′ ,
which proves the continuity of v n with respect to d ∞ .
Step 2: We show that v n is a viscosity subsolution to PPDE (4.1).
Assume to the contrary that there exists $(t,x;\varphi)\in[0,T]\times\mathcal C_0\times\mathcal Av_n(t,x)$ such that, for some $c>0$,
$$-\mathcal L^{t,x}\varphi(t,x_t)-n\rho\big(f^\top_tD\varphi(t,x_t)\big)\ge2c>0.$$
Without loss of generality, we may reduce H in the definition of ϕ ∈ Av n (t, x) so that by continuity of all the above maps, we obtain
-L t,x ϕ(s, B t,xt ) -nρ(f ⊤ s Dϕ(s, B t,xt )) ≥ c, on [t, H], P t,x 0 -a.s. Furthermore, observe that for each s ∈ [t, H] nρ(f ⊤ s Dϕ(s, B t,xt )) = sup u∈[0,n] d u • (f ⊤ s Dϕ(s, B t,xt )),
so that by definition of U t,n we have for all ν ∈ U t,n
$$-\mathcal L^{t,x}\varphi(s,B^{t,x_t})-\nu_s\cdot f^\top_sD\varphi(s,B^{t,x_t})\ge c,\ \text{on }[t,\mathrm H],\ \mathbb P^{t,x}_0\text{-a.s.}\qquad(4.4)$$
+ H t Dϕ(s, B t,xt ) • σ t,x s (B t,xt ) dW t,x s -(σ t,x s ) -1 (B t,xt )f s ν s ds .
Since ν ∈ U t,n , we have P t,x ν ∈ M t,x,n so that by (4.4):
E P t,x ν (ϕ -(v n ) t,x ) H, B t,xt ≤ -cE t [H -t] + ϕ t, x t -E P t,x ν (v n ) t,x H, B t,xt ,
and taking the infimum on P t,x ν ∈ M t,x,n on the left-hand side and recalling that ϕ ∈ Av n (t, x), this gives
0 < E t (ϕ -(v n ) t,x ) H, B t,x ≤ -cE t [H -t] -E P t,x ν (v n ) t,x H, B t,xt -(v n ) t,x t, x t ,
and finally, by the dynamic programming principle (4.3), taking the infimum over ν ∈ U t,n on the right-hand side gives
0 < -cE t [H -t] ,
which is a contradiction since by Lemma 4.3 the right-hand side is negative.
Step 3: We show that v n is a viscosity supersolution to PPDE (4.1).
Assume to the contrary that there exists $(t,x;\varphi)\in[0,T]\times\mathcal C_0\times\mathcal Av_n(t,x)$ such that for some $c>0$
-L t,x ϕ(t, x t ) -nρ(f ⊤ t Dϕ(t, x t )) ≤ -3c < 0.
Observe again that for each s
∈ [t, T ] nρ(f ⊤ s Dϕ(s, B t,x )) = sup u∈[0,n] d u • (f ⊤ s Dϕ(s, B t,x )), so that there is u * n ∈ [0, n] d such that -L t,x ϕ(t, x t ) -u * n • (f ⊤ t Dϕ(t, x t )) ≤ -2c.
Without loss of generality, we may reduce H in the definition of ϕ ∈ Av n (t, x) so that by continuity, we obtain
-L t,x ϕ(t, x t ) -u * n • (f ⊤ s Dϕ(s, B t,xt )) ≤ -c,
+ H t Dϕ(s, B t,xt ) • σ t,x s (B t,xt ) dW t,x s -(σ t,x s ) -1 (B t,xt )f s ν n s ds .
By (4.5), this gives
E P t,x ν n (ϕ -(v n ) t,x ) H, B t,xt ≥ cE P t,x ν n [H -t] + ϕ t, x t -E P t,x ν n (v n ) t,x H, B t,xt .
Since ϕ ∈ Av n (t, x), we have the equality ϕ(t, x t ) = v n (t, x). Moreover, the fact that ν n ∈ U t,n enables us to use the DPP (4.3) to have
E P t,x ν n (ϕ -(v n ) t,x ) H, B t,xt ≥ cE P t,x ν n [H -t] > 0.
Using (again) the fact that ν n ∈ U t,n implies that P t,x ν n ∈ M t,x,n , this contradicts ϕ ∈ Av n (t, x). ✷
Let us now treat the penalized BSDEs.
Proposition 4.5. Under Assumptions 2.1, 2.3, 3.2 and 3.6, $u_n(t,x):=Y^{t,x,n}_t=\mathcal Y^{t,x,n}_t$ is a viscosity solution of PPDE (4.1).
Proof. We proceed along the lines of [17, Proof of Proposition 4.4] and split the proof into two steps. Fix $(t,x)\in[0,T]\times\mathcal C_0$ for the remainder of the proof.
Step 1: We show that u n ∈ C 0 ([0, T ]× Λ 0,x 0 ) and satisfies the following dynamic programming principle, for any τ ∈ T t,x :
Y t,x,n = (u n ) t,x (τ, B t,xt ) + τ • nρ f ⊤ s ((σ t,x s ) ⊤ ) -1 B t,x Z t,x,n s ds - τ • Z t,x,n s • dW t,x s , P t,x 0 -a.s. (4.6)
First of all, since $n\rho$ is Lipschitz-continuous and null at 0, and since Assumption 2.3 holds, by standard stability results on BSDEs (see e.g. [START_REF] Karoui | Backward stochastic differential equations in finance[END_REF]), for any $n\ge1$, there is a constant $C_n$ (which may vary from line to line) such that for all
(t, x, x ′ ) ∈ [0, T ] × (C 0 ) 2 E P t 0 sup t≤s≤T Y t,x,n s 2 + T t Z t,x,n s 2 ds ≤ C n 1 + x 2(r+1) ∞,t , (4.7)
E P t 0 sup t≤s≤T Y t,x,n s -Y t,x ′ ,n s 2 + T t Z t,x,n s -Z t,x ′ ,n s 2 ds ≤ C n x -x ′ 2 ∞,t 1 + x 2r ∞,t + x ′ 2r ∞,t . (4.8)
In particular, this gives the following regularity
|u n (t, x)| ≤ C n 1 + x r+1 ∞,t (4.9)
and
u n (t, x) -u n (t, x ′ ) ≤ C n x -x ′ ∞,t 1 + x r ∞,t + x ′ r ∞,t . (4.10)
By standard arguments in the BSDE theory (this would be simply the tower property for conditional expectations if ρ were equal to 0, and the result can easily be generalized using the fact that solutions to BSDEs with Lipschitz drivers can be obtained via Picard iterations), we have the following dynamic programming principle, for any t < t ′ ≤ T
Y t,x,n = (u n ) t,x (t ′ , B t,xt ) + t ′ • nρ f ⊤ s ((σ t,x s ) ⊤ ) -1 B t,xt Z t,x,n s ds - t ′ • Z t,x,n s • dW t,x s , P t,x 0 -a.s. (4.11) In particular Y t,x,n s = (u n ) t,x (s, B t,xt ) for any s ∈ [t, T ]. It then follows |u n (t, x) -u n (t ′ , x)| = E P t,x 0 Y t,x,n t -Y t,x,n t ′ + (u n ) t,x (t ′ , B t,xt ) -u n (t ′ , x) ≤ E P t,x 0 t ′ t nρ f ⊤ s ((σ t,x s ) ⊤ ) -1 (B t,xt )Z t,x,n s ds + E P t,x 0 (u n ) t,x (t ′ , B t,xt ) -u n (t ′ , x) ≤ E P t,x 0 t ′ t nρ f ⊤ s ((σ t,x s ) ⊤ ) -1 (B t,xt )Z t,x,n s ds + C n sup t≤s≤t ′ x s -x t + E P t,x 0 sup t≤s≤t ′ B t,xt s -x t 2 1 2 1 + x r ∞,t ′ + E P t,x 0 sup t≤s≤t ′ B t,xt s 2 1 2 ≤ E P t,x 0 t ′ t nρ f ⊤ s ((σ t,x s ) ⊤ ) -1 (B t,xt )Z t,x,n s ds + C n d ∞ ((t, x); (t ′ , x)) 1 + x r ∞,t ′ , (4.12)
where the last line follows from (2.2). Observe from (4.7), combined with Assumption 2.1 and the Lipschitz-continuity of $\rho$, that
E P t,x 0 t ′ t nρ f ⊤ s ((σ t,x s ) ⊤ ) -1 B t,xt Z t,x,n s ds = E P t 0 t ′ t nρ f ⊤ s ((σ t,x s ) ⊤ ) -1 B t Z t,x,n s ds ≤ C n 1 + x r+1 ∞,t ′ (t ′ -t) 1 2 .
Plugging the latter into (4.12) and since
√ t ′ -t ≤ d ∞ ((t, x); (t ′ , x)), this gives finally u n (t, x) -u n (t ′ , x) ≤ C n 1 + x r+1 ∞,t ′ d ∞ ((t, x); (t ′ , x)).
Finally, the regularity in time that we just proved, allows us to classically extend the dynamic programming principle in (4.11) to stopping times, giving (4.6) (the result is clear for stopping times taking finitely many values, and the general result follows by the usual approximation of stopping times by decreasing sequences of stopping times with finitely many values).
Step 2: We conclude the proof.
Without loss of generality, we prove only the viscosity subsolution, the supersolution being obtained similarly. Assume to the contrary that there is
(t, x; ϕ) ∈ [0, T ] × C 0 × Au n (t, x) such that 2c := -L t,x ϕ(t, x t ) -nρ f ⊤ t Dϕ(t, x t ) > 0.
Let H be the hitting time corresponding to the definition of ϕ ∈ Au n (t, x). By continuity of ϕ and ρ, reducing H if necessary, we deduce that
-L t,x ϕ(s, B t,xt ) -nρ f s Dϕ(s, B t,xt ) ≥ c > 0, s ∈ [t, H].
By the DPP (4.6) and the smoothness of $\varphi$, we have, under $\mathbb P^{t,x}_0$,
(ϕ -(u n ) t,x ) H (•, B t,xt ) -(ϕ -(u n ) t,x ) t (•, x t ) = H t L t,x ϕ(s, B t,xt )ds + H t σ t,x s B t,xt Dϕ(s, B t,xt ) • dW t,x s + H t nρ f ⊤ s (σ t,x s ) ⊤ -1 B t,xt Z t,x,n s ds - H t Z t,x,n s • dW t,x s ≤ - H t c + n ρ f ⊤ s Dϕ(s, B t,xt ) -ρ f ⊤ s (σ t,x s ) ⊤ -1 B t,xt Z t,x,n s ds + H t σ t,x s B t,xt Dϕ(s, B t,xt ) -σ t,x s -1 B t,xt Z t,x,n s • dW t,x s = -c(H -t) + H t α n s • Dϕ(s, B t,xt ) -σ t,x s -1 B t,xt Z t,x,n s ds + H t σ t,x s B t,xt Dϕ(s, B t,xt ) -σ t,x s -1 B t,xt Z t,x,n s • dW t,x s = c(t -H) + H t Dϕ(s, B t,xt ) -σ t,x s -1 B t,xt Z t,x,n s • σ t,x s B t,xt dW t,x s + α n s ds ,
where |α n | ≤ n||f || ds ⊗ dP t,x 0 -a.e. By Girsanov's Theorem, we then have that there is Q ∈ M t,x,n such that σ t,x s (B t,xt )dW t,x s + α n s ds is a Q-Brownian motion. The above inequality holds then Q n -a.s. so that
(ϕ -(u n ) t,x ) t (•, x t ) ≥ E Q n (ϕ -(u n ) t,x ) H (•, B t,xt ) + c(H -t) > E Q n (ϕ -(u n ) t,x ) H (•, B t,xt ) ,
which is in contradiction with the definition of ϕ ∈ Au n (t, x). ✷
The main result
Define, for any $x\in\mathcal C_0$, the following subset of $C^0([0,T]\times\Lambda^{0,x_0})$:
$$C^0_2([0,T]\times\Lambda^{0,x_0}):=\Big\{u\in C^0([0,T]\times\Lambda^{0,x_0})\ \text{s.t. for any }(t,\tilde x)\in[0,T]\times\mathcal C_0,\ u^{t,\tilde x}\text{ is continuous in time }\mathbb P^{t,\tilde x}_0\text{-a.s. and }u^{t,\tilde x}\in\mathbb S^2_{t,\tilde x}\Big\}.$$
We now recall the following comparison theorem from [START_REF] Ren | Comparison of viscosity solutions of semi-linear path-dependent PDEs[END_REF] (see their Theorem 4.1), adapted to our context.
Theorem 4.6 ([34]). Let $u,v\in C^0_2([0,T]\times\Lambda^{0,x_0})$ be respectively a viscosity subsolution and a viscosity supersolution of PPDE (4.1). If $u(T,\cdot)\le v(T,\cdot)$, then $u\le v$ on $[0,T]\times\mathcal C_0$.
Our first main result is then the following.
Theorem 4.7. Let Assumptions 2.1, 2.3, 3.2 and 3.6 hold. Then, for any $(t,x)\in[0,T]\times\mathcal C_0$, we have
$$v(t,x)=Y^{t,x}_t=\mathcal Y^{t,x}_t.$$
Proof. From Proposition 4.4 and Proposition 4.5, we know that for every $n\ge1$, $v_n$ and $u_n$ are viscosity solutions of PPDE (4.1). Since it is clear by all our estimates that $v_n,u_n\in C^0_2([0,T]\times\Lambda^{0,x_0})$, and since $v_n(T,\cdot)=u_n(T,\cdot)$, by Theorem 4.6 we deduce that
$$v_n(t,x)=Y^{t,x,n}_t=\mathcal Y^{t,x,n}_t.$$
By Lemma 2.7, (3.8) and (3.9), it then suffices to let n go to +∞. ✷
5 Extension to degenerate diffusions
The setting
The result of the previous section is fundamentally based on the non-degeneracy of the diffusion matrix σ. Our main purpose here is to extend our general representation to cases where σ is allowed to degenerate. As will be clear later on, the type of degeneracy we will consider will be rather specific, but it will nonetheless be particularly well-suited for the applications we have in mind. Before stating our results, we need to introduce some notations.
For every $n\in\mathbb N\setminus\{0\}$ and any $t\in[0,T)$, we consider uniform partitions of the interval $[t,T]$ given by $\{t^{t,n}_k:=t+k(T-t)n^{-1},\ k=0,\dots,n\}$. We also define, for every $0\le k\le n$ and every $(s,x_{0:k})\in[t,T]\times(\mathbb R^d)^{k+1}$, the linear interpolator $i_k:(\mathbb R^d)^{k+1}\longrightarrow\mathcal C_0$ by
$$i_k(x_{0:k})(s)=\frac n{T-t}\sum_{i=0}^{k-1}\big((t^{t,n}_{i+1}-s)x_i+(s-t^{t,n}_i)x_{i+1}\big)\mathbf 1_{[t^{t,n}_i,t^{t,n}_{i+1}]}(s).$$
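The interpolator $i_k$ simply joins the points $x_0,\dots,x_k$ linearly along the uniform grid. A direct sketch (the behaviour outside $[t,t^{t,n}_k]$, handled here by clipping, is a convention of this illustration only):
```python
import numpy as np

def uniform_grid(t, T, n):
    """Grid points t_k^{t,n} = t + k (T - t) / n, k = 0, ..., n."""
    return t + np.arange(n + 1) * (T - t) / n

def interpolator(points, t, T, n, s):
    """Evaluate i_k(x_{0:k}) at time s for the points x_0, ..., x_k placed on the uniform grid."""
    points = np.asarray(points, dtype=float)
    k = len(points) - 1
    if k == 0:
        return points[0]
    grid = uniform_grid(t, T, n)[: k + 1]
    s = min(max(s, grid[0]), grid[-1])
    j = min(int(np.searchsorted(grid, s, side="right")) - 1, k - 1)
    w = (s - grid[j]) / (grid[j + 1] - grid[j])
    return (1.0 - w) * points[j] + w * points[j + 1]

# example: three points on the grid of [0, 1] with n = 4
print(interpolator([0.0, 1.0, 0.5], t=0.0, T=1.0, n=4, s=0.375))   # 0.75
```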
Our main assumption now becomes Assumption 5.1. Assumption 2.1(i), (ii), (iii) hold and (iv ′ ) For any p > 0, there exist progressively measurable maps η p : [0, T ] × C 0 -→ R -with linear growth, that is there exists some C > 0 such that
0 ≤ -η p t (x) ≤ C(1 + x ∞,t ),
and a deterministic map M : [0, T ] -→ S d , such that M t is symmetric positive for every t ∈ [0, T ], the maps x -→ η p t (x) are concave for every t ∈ [0, T ], the sequence (η p ) p≥0 is non-decreasing, and such that for any p ≥ 0 and any (t, x) ∈ [0, T ] × C 0 , the matrix σ η p t (x) is an invertible matrix such that (σ η p t ) -1 (x)f is uniformly bounded in (t, x), where
$$\sigma^{\eta_p}_t(x):=\eta^p_t(x)M_t+\sigma_t(x).$$
(v) The matrix M t σ ⊤ t (x) + σ t (x)M t is symmetric negative, for every (t, x) ∈ [0, T ] × C 0 . (vi) The maps U , µ and σ are such that U is concave and for every n ≥ 1, and 0 ≤ k ≤ n -1, for every
(t, x) ∈ [0, T ] × C 0 ,every {(α i,j , β i,j , γ i,j ) ∈ R d × R * + × R d , 1 ≤ i ≤ n -k, 0 ≤ j ≤ n -
k}, and every
(x 0:k , x, ỹ, λ) ∈ R d k+1 × C t × C t × [0, 1], we have U i n x 0:k-1 , w λ (x, ỹ; 0, 0), 1 i=0 w λ (x, ỹ; i, 1), . . . , n-k i=0 w λ (x, ỹ; i, n -k) ≥ U i n x 0:k-1 , z λ (x, ỹ; 0, 0), 1 i=0 z λ (x, ỹ; i, 1), . . . , n-k i=0 z λ (x, ỹ; i, n -k)
where
w λ (x, ỹ; i, ℓ) := α i,ℓ + β i,ℓ µ t,x t t,n k-1+i (λx + (1 -λ)ỹ) + η t,x t t,n k-1+i M t t,n k-1+i + σ t,x t t,n k-1+i (λx + (1 -λ)ỹ)γ i,ℓ , z λ (x, ỹ; i, ℓ) := α i,ℓ + β i,ℓ λµ t,x t t,n k-1+i (x) + (1 -λ)µ t,x t t,n k-1+i (ỹ) + γ i,ℓ λ η t,x t t,n k-1+i M t t,n k-1+i + σ t,x t t,n k-1+i (x) + γ i,ℓ (1 -λ) η t,x t t,n k-1+i M t t,n k-1+i + σ t,x t t,n k-1+i (ỹ) Remark 5.2.
This assumption deserves a certain number of comments.
• (iv ′ ) is here in order to ensure that the degenerate matrix σ becomes invertible when it is suitably perturbed. Of course, our ultimate goal here is to assume that η p converges to 0 and to approximate the solution of our problem with degenerate diffusion as the corresponding limit. We also emphasize that this assumption implies in particular that for any (t, x) ∈ [0, T ] × C 0 and any
p ≥ p ′ σ η p t (x) -σ η p ′ t (x) = (η p t (x) -η p ′ t (x))M t ,
which is a symmetric positive matrix. Hence, the sequence σ η p is non-decreasing for the usual order on symmetric positive matrices.
• (v) and (vi) are actually here mainly so that the results of Lemma 5.3 below hold for a certain function f involving U (see the proof of Proposition 5.4 below). They take a particularly complicated form for two reasons. First, our setting is fully non-Markovian, and second, it is also multidimensional. Indeed, as can be checked directly, if d = 1 and x -→ σ t (x) is linear, then we only need to assume that µ is concave and f non-decreasing for (5.1) below to hold. Similarly, (vi) is somehow a concavity assumption on U , µ and σ. Indeed, if again d = 1 and if U were Markovian, then a sufficient condition for (vi) to hold is that U is non-decreasing, µ is concave and σ and η are linear.
Our strategy of proof here is to start by obtaining a monotonicity result, with respect to the parameter p, for the solution of our control problem with diffusion coefficient σ p . Such a result will be based on convex order type arguments. More precisely, we follow the strategy outlined by Pagès [START_REF] Pagès | Convex order for path-dependent derivatives: a dynamic programming approach[END_REF] and start by proving the result in a discrete-time setting, this is Proposition 5.4, which can then be extended to continuous-time through weak convergence arguments. Though the strategy of proof is the same as in [START_REF] Pagès | Convex order for path-dependent derivatives: a dynamic programming approach[END_REF], our proofs are more involved mainly due to the fact that, unlike in [START_REF] Pagès | Convex order for path-dependent derivatives: a dynamic programming approach[END_REF], our framework is fully non-Markovian and multidimensional.
Lemma 5.3. Let Assumption 5.1 hold. Fix n ≥ 1. For every (t, s, x, x, λ, u)
∈ [0, T ] × [t, T ] × C 0 × C t × R * + × R,
for every k = 0, . . . , n -1, and for every Borel map f : R d -→ R with polynomial growth, define the following operators
Q k+1 t,x,λ (f )(x, u) := E P t 0 f xt t,n k + λ µ t,x t t,n k (x) + f t t,n k ν t t,n k + uM t t,n k + σ t,x t t,n k (x) B t t t,n k+1 -B t t t,n k F t t t,n k .
If f is concave and s.t. for every (t, s, x, x, ỹ, α, β, γ, η)
∈ [0, T ]×[t, T ]×C 0 ×C t ×C t ×R d ×R * + ×R d ×[0, 1], f α + βµ t,x s (ηx + (1 -η)ỹ) + σ t,x s (ηx + (1 -η)ỹ)γ ≥ f α + β(ηµ t,x s (x) + (1 -η)µ t,x s (ỹ)) + (ησ t,x s (x) + (1 -η)σ t,x s (ỹ))γ , (5.1) then the map (x, u) -→ Q k+1 t,x,λ (f )(x, u) is concave, and the map u -→ Q k+1 t,x,λ (f )(x, u) is non-decreasing on R -.
Proof. The fact that the operators Q k+1 t,x,γ are well-defined is clear from the polynomial growth of f , the linear growth of µ and σ, and the fact that B t has moments of any order under P t 0 . Then, the concavity of Q k+1 t,x,γ (f ) is an immediate corollary of the concavity assumptions on f , as well as (5.1). Then, since f has polynomial growth, Feynman-Kac's formula implies that
Q k+1 t,x,λ (f )(x, u) = v(t t,n k , B t t t,n k ),
where v :
[t t,n k , t t,n k+1 ] × R d -→ R is the unique viscosity solution of the PDE -v s -1 2 Tr uM t k + σ t,x t t,n k (x) uM t t,n k + (σ t,x t t,n k ) ⊤ (x) v xx = 0, on [t t,n k , t t,n k+1 ) × R d , v t t,n k+1 , x = f xt t,n k + λ µ t,x t t,n k (x) + f t t,n k ν t t,n k + uM t t,n k + σ t,x t t,n k x , x ∈ R d .
This linear PDE classically satisfies a comparison theorem, and v is concave in x because of the concavity of f . Moreover, the diffusion part of the PDE rewrites, as a quadratic functional of u
u 2 Tr M 2 t t,n k v xx + uTr M t t,n k (σ t,x t t,n k ) ⊤ (x) + σ t,x t t,n k (x)M t t,n k v xx + Tr σ t,x t t,n k (x)(σ t,x t t,n k ) ⊤ (x) .
Since $M^2_t$ is symmetric positive, $M_t\sigma^\top_t(x)+\sigma_t(x)M_t$ is symmetric negative and $v_{xx}$ is symmetric negative as well, the above is actually non-decreasing for $u\in\mathbb R_-$. The same then holds for $Q^{k+1}_{t,x,\lambda}(f)$ by comparison. ✷
Proposition 5.4. Let Assumptions 2.3 and 5.1 hold and fix some $q\ge p>0$. For every $n\in\mathbb N\setminus\{0\}$, $x\in\mathcal C_0$, let us define recursively $(X^{t,x,n}_k)_{0\le k\le n}$ and $(Y^{t,x,n}_k)_{0\le k\le n}$ by $X^{t,x,n}_0=Y^{t,x,n}_0=x_t$, and for $0\le k\le n-1$,
$$X^{t,x,n}_{k+1}=X^{t,x,n}_k+\big(\mu^{t,x}_{t^{t,n}_k}(\bar X^{t,x,n}_k)+f_{t^{t,n}_k}\nu_{t^{t,n}_k}\big)\big(t^{t,n}_{k+1}-t^{t,n}_k\big)+\big(\eta^{p,t,x}_{t^{t,n}_k}(\bar X^{t,x,n}_k)M_{t^{t,n}_k}+\sigma^{t,x}_{t^{t,n}_k}(\bar X^{t,x,n}_k)\big)\big(B^t_{t^{t,n}_{k+1}}-B^t_{t^{t,n}_k}\big),$$
$$Y^{t,x,n}_{k+1}=Y^{t,x,n}_k+\big(\mu^{t,x}_{t^{t,n}_k}(\bar Y^{t,x,n}_k)+f_{t^{t,n}_k}\nu_{t^{t,n}_k}\big)\big(t^{t,n}_{k+1}-t^{t,n}_k\big)+\big(\eta^{q,t,x}_{t^{t,n}_k}(\bar Y^{t,x,n}_k)M_{t^{t,n}_k}+\sigma^{t,x}_{t^{t,n}_k}(\bar Y^{t,x,n}_k)\big)\big(B^t_{t^{t,n}_{k+1}}-B^t_{t^{t,n}_k}\big),$$
where $(\bar X^{t,x,n}_k)_{0\le k\le n}$ and $(\bar Y^{t,x,n}_k)_{0\le k\le n}$ are the piecewise linear interpolations
$$\bar X^{t,x,n}_k:=i_k\big(X^{t,x,n}_{0:k}\big),\qquad\bar Y^{t,x,n}_k:=i_k\big(Y^{t,x,n}_{0:k}\big).$$
Then, we have
$$\mathbb E^{\mathbb P^t_0}\Big[U^{t,x}\big(i_n(X^{t,x,n}_{0:n})\big)\Big]\le\mathbb E^{\mathbb P^t_0}\Big[U^{t,x}\big(i_n(Y^{t,x,n}_{0:n})\big)\Big].$$
Proof. Let $\Delta^{t,n}:=t^{t,n}_{k+1}-t^{t,n}_k=(T-t)/n$, and consider the following martingales, for $0\le k\le n$,
M k := E P t 0 U t,x i n X t,n,x 0:n F t t t,n k , N k := E P t 0 U t,x i n Y t,n,x 0:n F t t t,n k ,
which are well-defined, since U has polynomial growth and we know from Lemma 2.2 that X t,n,x 0:n and Y t,n,x 0:n have moments of any order. For any k = 0, . . . , n, we also define the following sequences of functions from R k+1 to R, for k = 0, . . . , n -1, by backward induction, for any
x 0:k ∈ (R d ) k+1 Φ n := U t,x • i n , Φ k (x 0:k ) := Q k+1 t,x,∆ t,n (Φ k+1 (x 0:k , •)) i k (x 0:k ), η p t t,n k (i k (x 0:k )) , Ψ n := U t,x • i n , Ψ k (x 0:k ) := Q k+1 t,x,∆ t,n (Ψ k+1 (x 0:k , •)) i k (x 0:k ), η q t t,n k (i k (x 0:k )) .
It is immediate by definition of X t,x,n and Y t,x,n that we have for every
0 ≤ k ≤ n M k = Φ k (X t,x,n 0:k ) and N k = Ψ k (Y t,x,n 0:k ).
Let us now show that the maps Φ k and Ψ k are concave for every k = 0, . . . , n, and that they verify that for any
0 ≤ i ≤ k -1, (x 0:n , x0:n , η) ∈ (R d ) n+1 × (R d ) n+1 × [0, 1], for any {(α m,l , β m,l , γ m,l ) ∈ R d × ×R * + × R d , i ≤ m ≤ k -1, 0 ≤ l ≤ k -i -1}, we have for ϕ = Φ, Ψ ϕ k x 0:i , α i,0 + β i,0 µ t,x t t,n i (i i (ηx 0:i + (1 -η)x 0:i ) + σ t,x t t,n i (i i (ηx 0:i + (1 -η)x 0:i )γ i,0 , i+1 j=i α j,1 + β j,1 µ t,x t t,n j (i j (ηx 0:j + (1 -η) x0:j ) + σ t,x t t,n j (i j (ηx 0:j + (1 -η) x0:j )γ j,1 , . . . , k-1 j=i α j,k-i-1 + β j,k-i-1 µ t,x t t,n j (i j (ηx 0:j + (1 -η) x0:j ) + σ t,x t t,n j (i j (ηx 0:j + (1 -η) x0:j )γ j,k-i-1 ≥ ϕ k x 0:i , α i,0 + β i,0 ηµ t,x t t,n i (i i (x 0:i )) + (1 -η)µ t,x t t,n i (i i (x 0:i )) + ησ t,x t t,n i (i i (x 0:i )) + (1 -η)σ t,x t t,n i (i i (x 0:i )) γ i,0 , i+1 j=i α j,1 + β j,1 ηµ t,x t t,n j (i j (x 0:j )) + (1 -η)µ t,x t t,n j (i j ( x0:j )) + ησ t,x t t,n j (i j (x 0:j )) + (1 -η)σ t,x t t,n j (i j ( x0:j )) γ j,1 , . . . , k-1 j=i α j,k-i-1 + β j,k-i-1 ηµ t,x t t,n j (i j (x 0:j )) + (1 -η)µ t,x t t,n j (i j ( x0:j )) + ησ t,x t t,n j (i j (x 0:j )) + (1 -η)σ t,x t t,n j (i j ( x0:j )) γ j,k-i-1 , (5.2)
where x and x are defined recursively, for w := x, x, by
ŵl := w l , 0 ≤ l ≤ i, ŵl+1 := l j=i α j,l-i + β j,l-i µ t,x t t,n j (i j ( ŵ0:j )) + σ t,x t t,n j (i j (x 0:j ))γ j,l-i , i ≤ l ≤ k -1.
We only prove the result for Φ k , the other one being exactly similar. We argue by backward induction. When k = n, the result is obvious since U is concave and Assumption 5.1(vi) holds. Let us assume that the properties holds for Φ k+1 for some k ≤ n -1. Then, let us now show that the map x 0:k -→
Q k+1 t,x,∆ t,n (Φ k+1 (x 0:k , •)) (i k (x 0:k ), u
) is concave for any u ∈ R. We actually have
Q k+1 t,x,∆ t,n (Φ k+1 (x 0:k , •)) (i k (x 0:k ), u) = E P t 0 Φ k+1 x 0:k , x k + ∆ t,n µ t,x t t,n k (i k (x 0:k )) + f t t,n k ν t t,n k + uM t t,n k + σ t,x t t,n k (i k (x 0:k )) B t t t,n k+1 -B t t t,n k F t t t,n k .
Therefore, the concavity is immediate from the induction hypothesis on Φ k+1 (both the concavity and Inequality (5.2))
Now, we know that η p t (•) is concave and non-positive, and, from Lemma 5.3, that the map u -→
Q k+1 t,x,∆ t,n (Φ k+1 (x 0:k , •)) (i k (x 0:k ), u
) is non-decreasing on R -. This therefore proves the concavity of Φ k . Moreover, Φ k inherits (5.2) directly from Φ k+1 by its definition as an expectation of Φ k+1 .
Finally, let us prove, again by backward induction, that for every k = 0, . . . , n, Φ k ≤ Ψ k . The result is obvious by definition for k = n. Assume now that for some k ≤ n -1, we have Φ k+1 ≤ Ψ k+1 . Then, for any
x 0:k ∈ R k+1 Φ k (x 0:k ) = Q k+1 t,x,∆ t,n (Φ k+1 (x 0:k , •)) i k (x 0:k ), η p t t,n k (i k (x 0:k )) ≤ Q k+1 t,x,∆ t,n (Φ k+1 (x 0:k , •)) i k (x 0:k ), η q t t,n k (i k (x 0:k )) ≤ Q k+1 t,x,∆ t,n (Ψ k+1 (x 0:k , •)) i k (x 0:k ), η q t t,n k (i k (x 0:k )) = Ψ k (x 0:k ),
where we have used successively the fact that u
-→ Q k+1 t,x,∆ t,n (Φ k+1 (x 0:k , •)) (i k (x 0:k ), u
) is non-decreasing on R -(remember that η p ≤ η q ) and the induction hypothesis. To conclude, it suffices to take k = 0 to obtain Φ 0 (x t ) ≤ Ψ 0 (x t ), which is equivalent by the martingale property of M and N to
E P t 0 U t,x i n X t,n,x 0:n ≤ E P t 0 U t,x i n Y t,n,x 0:n .
✷
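A quick simulation can illustrate the monotonicity behind Proposition 5.4 in the simplest possible setting: dimension one, no drift, no control, a constant (negative) volatility, a concave reward and $\eta^p=-1/p$ with $M=1$. All numerical values are hypothetical, and the example is only meant to show the expected reward of the Euler scheme increasing with the perturbation level $p$.
```python
import numpy as np

def euler_reward(p, x0=0.0, T=1.0, n=100, n_paths=50_000, seed=3,
                 sigma=-0.2, M=1.0, U=lambda x: -x ** 2):
    """E[U(X_T)] for the Euler scheme with perturbed volatility sigma_p = eta_p * M + sigma, eta_p = -1/p."""
    rng = np.random.default_rng(seed)
    dt = T / n
    sigma_p = -M / p + sigma
    X = np.full(n_paths, x0)
    for _ in range(n):
        X = X + sigma_p * rng.normal(0.0, np.sqrt(dt), size=n_paths)
    return U(X).mean()

print([round(euler_reward(p), 4) for p in (1.0, 2.0, 10.0, 100.0)])   # non-decreasing in p
```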
We can now state the main technical result of this section. Proposition 5.5. Let Assumptions 2.3 and 5.1 hold. For any p > 0, denote by X t,x,ν,p the solution to the SDE (2.6) with diffusion matrix σ p instead of σ, and let
$$v_p(t,x):=\sup_{\nu\in\mathcal U^t}\mathbb E^{\mathbb P^t_0}\big[U\big(x\otimes_tX^{t,x,\nu,p}\big)\big].$$
Then, for any q ≥ p > 0, we have for any
$(t,x)\in[0,T]\times\mathcal C_0$,
$$v_p(t,x)\le v_q(t,x).$$
Proof. By Proposition 5.4, we know that if we replace X t,x,ν,p and X t,x,ν,q by their Euler scheme, then the expectation of U of these Euler schemes are ordered. We can then follow exactly the arguments of the proofs of Lemma 2.2 and Theorem 2.1 in [START_REF] Pagès | Convex order for path-dependent derivatives: a dynamic programming approach[END_REF], using in particular the continuity we have assumed for U , as well as the fact that the genuine Euler scheme for a non-Markovian SDE converges to the solution of the SDE for the uniform topology on C 0 , to extend this result and obtain
E P t 0 U x ⊗ t X t,x,ν,p ≤ E P t 0 U x ⊗ t X t,x,ν,q ,
from which the result is clear. ✷
Our main result is then that with degenerate volatility, the singular stochastic control problem can be represented as an infimum of solution of constrained BSDEs.
Theorem 5.6. Let Assumptions 2.3, 3.2, 3.6 and 5.1 hold, with $\sigma^p$ instead of $\sigma$, and assume in addition that
$$\sup_{(t,x)\in[0,T]\times\mathcal C_0}|\eta^p_t(x)|\underset{p\to+\infty}{\longrightarrow}0.$$
Then, we have
$$v(t,x)=\lim_{p\to+\infty}\uparrow v_p(t,x)=\sup_{p>0}v_p(t,x)=\sup_{p>0}Y^{t,x,p}_t=\sup_{p>0}\mathcal Y^{t,x,p}_t,$$
where $Y^{t,x,p}$ and $\mathcal Y^{t,x,p}$ are defined as $Y^{t,x}$ and $\mathcal Y^{t,x}$, with $\sigma^p$ instead of $\sigma$.
Proof. Since σ p satisfies all the required assumptions, by Theorem 4.7 and Proposition 5.5, the only equality that we have to prove is the first one. But it is a simple consequence of classical estimates for SDEs and the uniform convergence we have assumed for η p . ✷ Remark 5.7. The representation we have just obtained involves a supremum of solutions of constrained BSDEs. Formally speaking, such an object is close in spirit to so-called constrained second order BSDEs, as introduced by Fabre in her Phd thesis [START_REF] Fabre | Some contributions to stochastic control and backward stochastic differential equations in finance[END_REF]. Indeed, the supremum over p could be seen as a supremum over a family of probability measures, such that under these measures the canonical process has the same law as a continuous martingale whose quadratic variation has density σ p (σ p ) ⊤ . To prove such a relationship rigorously is a very interesting problem, which however falls outside the scope of this paper.
Utility maximization with transaction costs for non-Markovian dynamics
The setup
Let us consider the following framework. Let us fix d = 3, and for a given λ > 0
$$f:=\begin{pmatrix}1&-(1+\lambda)&0\\-(1+\lambda)&1&0\\0&0&0\end{pmatrix}.$$
Let us also be given the following bounded and progressively measurable maps r, µ S and σ S from the space of continuous functions (from [0, T ] to R) to R. For any x ∈ C 0 , we let
$$\mu_t(x):=\begin{pmatrix}r_t(x^3)x^1_t\\\mu^S_t(x^3)x^2_t\\0\end{pmatrix},\quad\sigma_t(x):=\begin{pmatrix}0&0&0\\0&0&\sigma^S_t(x^3)x^2_t\\0&0&1\end{pmatrix},\quad M_t:=\begin{pmatrix}1&0&0\\0&1&0\\0&0&0\end{pmatrix},\quad\eta^p_t(x):=-\frac1p.$$
Then, we have
$$(\sigma^p_t)^{-1}(x)=\begin{pmatrix}-p&0&0\\0&-p&p\sigma^S_t(x^3)x^2_t\\0&0&1\end{pmatrix},\quad\text{and}\quad(\sigma^p_t)^{-1}(x)f=-pf.$$
Moreover, we have
$$M_t\sigma_t(x)=\begin{pmatrix}0&0&0\\0&0&0\\0&0&0\end{pmatrix},$$
so that Assumption 5.1(iv'),(v) are satisfied. The dynamics of the 3 coordinates of X t,x,ν,p are then given by
$$X^{t,x,\nu,p,1}_s=x^1_t+\int_t^sr^{t,x^3}_u\big(X^{t,x,\nu,p,3}\big)X^{t,x,\nu,p,1}_u\,du-\frac1pB^{t,1}_s+\int_t^s\big(\nu^1_u-(1+\lambda)\nu^2_u\big)du,$$
$$X^{t,x,\nu,p,2}_s=x^2_t+\int_t^sX^{t,x,\nu,p,2}_u\Big((\mu^S)^{t,x^3}_u\big(X^{t,x,\nu,p,3}\big)du+(\sigma^S)^{t,x^3}_u\big(X^{t,x,\nu,p,3}\big)dB^{t,3}_u\Big)-\frac1pB^{t,2}_s+\int_t^s\big(\nu^2_u-(1+\lambda)\nu^1_u\big)du,$$
$$X^{t,x,\nu,p,3}_s=x^3_t+B^{t,3}_s.$$
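As a sanity check of the algebra above, the following sketch builds the matrices $f$, $M$, $\sigma$ and $\sigma^p$ for hypothetical numerical values of $\lambda$, $p$, $\sigma^S$ and $x^2$, and verifies the displayed identities $(\sigma^p)^{-1}f=-pf$ and $M\sigma^\top+\sigma M=0$.
```python
import numpy as np

lam, p, sigma_S, x2 = 0.1, 4.0, 0.3, 2.0    # hypothetical parameter values

f = np.array([[1.0, -(1.0 + lam), 0.0],
              [-(1.0 + lam), 1.0, 0.0],
              [0.0, 0.0, 0.0]])
M = np.diag([1.0, 1.0, 0.0])
sigma = np.array([[0.0, 0.0, 0.0],
                  [0.0, 0.0, sigma_S * x2],
                  [0.0, 0.0, 1.0]])
sigma_p = (-1.0 / p) * M + sigma            # sigma^{eta_p} with eta_p = -1/p

inv = np.linalg.inv(sigma_p)
assert np.allclose(inv @ f, -p * f)                  # (sigma_p)^{-1} f = -p f
assert np.allclose(M @ sigma.T + sigma @ M, 0.0)     # Assumption 5.1(v) holds (with equality)
```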
Under this form, the limit when p goes to +∞ of the above system is exactly the dynamics of a portfolio position in a market with transaction costs, as in [START_REF] Davis | Portfolio selection with transaction costs[END_REF][START_REF] Shreve | Optimal investment and consumption with transaction costs[END_REF] for instance. More precisely, the financial market considered consists of a riskless asset with (random) short rate r and a risky asset S whose (non-Markovian) dynamics is, under P t 0 ,
$$\frac{dS_u}{S_u}=(\mu^S)^{t,x^3}_u\big(B^{t,3}\big)\,du+(\sigma^S)^{t,x^3}_u\big(B^{t,3}\big)\,dB^{t,3}_u.$$
Moreover, transactions between the risky and the riskless asset incur a proportional transaction cost of size λ. Then, X t,x,ν,p,1 above can be interpreted as the total amount of money invested in the riskless asset by an investor since time t, while X x,ν,p,2 is the total amount of money invested in the risky asset, and the controls ν 1 and ν 2 record respectively the transactions from the risky to the riskless asset and from the riskless to the risky asset. Finally, the only role played by X t,x,ν,p,3 is to represent the Brownian motion driving the randomness in r, µ and σ.
The result
We will actually use the result proved in Theorem 5.6 conditionally on X t,x,ν,p,3 (that is to say that we consider conditional expectations with respect to σ(X t,x,ν,p,3 s , t ≤ s ≤ T ) instead of simple expectations). It can be checked readily that all our arguments still go through in this case. Moreover, the drifts and volatility in the dynamics of X t,x,ν,p,1 and X t,x,ν,p,2 then become (conditionally) linear. We then choose a particular specification for the map U U
(x) =: U (x 3 , ℓ(x 1 T , x 2 T )),
where the so-called liquidation function $\ell$ is defined by
$$\ell(x,y):=x+\frac{y^+}{1+\lambda}-(1+\lambda)y^-,\qquad(x,y)\in\mathbb R^2,$$
and the map U is assumed to be a (random) utility function, which is increasing and strictly concave with respect to its second-variable, as well as locally-Lipschitz continuous with polynomial growth, so that Assumption 2.3 is satisfied. Then, remembering Remark 5.2 above, we know that (conditionally), Assumption 5.1(vi) is also satisfied. Therefore, we can apply our result to obtain that the value function, which corresponds in this case to that of the utility maximization problem in finite horizon with transaction costs can be represented as a supremum of solutions of constrained BSDEs. But there is more. Indeed, in this case the constraint can be read
$$-pf\,Z^{t,x}_s\in K_s,\quad dt\otimes d\mathbb P\text{-a.e.},$$
which, by definition of K and since p > 0, is actually equivalent to -f Z t,x s ∈ K s . Therefore, the solution to the constrained BSDE is actually independent of p. Therefore, the value function can actually be represented as the solution of another BSDE, with a modified constraint as above.
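Returning to the liquidation function $\ell$ defined above (as reconstructed here, a long position is sold at the price discounted by $1+\lambda$, while a short position is bought back at the marked-up price), a direct numerical version reads as follows; the inputs are hypothetical.
```python
def liquidation(x, y, lam):
    """Net wealth l(x, y) after clearing the risky position y, with proportional cost lam."""
    return x + max(y, 0.0) / (1.0 + lam) - (1.0 + lam) * max(-y, 0.0)

print(liquidation(1.0, 2.0, 0.1))    # long position sold net of costs: 1 + 2 / 1.1
print(liquidation(1.0, -2.0, 0.1))   # short position bought back: 1 - 2.2
```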
As far as we know, such a result is completely new in the literature, even in the Markovian case. Moreover, as pointed out in the recent paper [START_REF] Kallsen | Portfolio optimization under small transaction costs: a convex duality approach[END_REF], the non-Markovian case has actually never been studied using stochastic control and PDE tools, the only approach in the literature being convex duality. We thus believe that our approach achieves a first step allowing to tackle this difficult problem. Let us nonetheless point out a gap in our approach. If we wanted to cover completely the problem of transaction costs, we should have added state constraints in our original stochastic control problem. Indeed, those are inherent to the problem of transaction costs, in order to avoid bankruptcy issues (though this is actually a lesser issue when the time horizon is finite, as in our case). We have chosen not to do so so as not to complicate even more our arguments, but we believe that they could be also used in this setting, albeit with possibly important modifications. In particular, the full dynamic programming principle that we used does not seem to be proved in such a general framework in the literature, when state constraints are present (see however [START_REF] Bouchard | Weak dynamic programming for generalized state constraints[END_REF][START_REF] Bouchard | Stochastic target games and dynamic programming via regularized viscosity solutions[END_REF]), and it is not completely clear in which sense the equality between v sing and v will then hold.
6 Applications: DPP and regularity for singular stochastic control
Dynamic programming principle
Notice that in all our study, we never actually proved that the dynamic programming principle actually held for the singular stochastic control problem defining v (or v sing ). However, it is an easy consequence of our main result. Theorem 6.1. For any (t, x) ∈ [0, T ] × C 0 , for any τ ∈ T t and any θ ∈ T t,x , we have
$$v(t,x)=\sup_{\nu\in\mathcal U^t}\mathbb E^{\mathbb P^t_0}\big[v(\tau,x\otimes_tX^{t,x,\nu})\big]=\sup_{\nu\in\mathcal U^t}\mathbb E^{\mathbb P^{t,x}_\nu}\big[v(\theta,x\otimes_tB^{t,x_t})\big].$$
Proof. It is an immediate consequence of the dynamic programming principle satisfied by the penalized BSDEs (4.6) and the convergence of penalized BSDEs to the minimal solution of the constrained BSDE. ✷
Regularity results
In this section, we show how our representation can help to obtain a priori regularity results for the value function of singular stochastic control problems. Such results, at this level of generality, are, as far as we know, the first available in the literature.
The main idea of the proof is that as soon as one knows that the value function of the singular control problem is associated to a constrained BSDE, one can use the fact that such BSDEs are actually linked to another different singular stochastic control problem, which is actually simpler to study. Such a representation is not new and was already the crux of the arguments of Cvitanić, Karatzas and Soner [START_REF] Cvitanić | Backward stochastic differential equations with constraints on the gains-process[END_REF]. It has also been used very recently in [START_REF] Bouchard | Regularity of BSDEs with a convex constraint on the gains-process[END_REF] to obtain the first regularity results in the literature for constrained BSDEs. For the sake of simplicity, and since this is not the heart of our article, we will concentrate on the Markovian set-up for this application, and leave the more general case to future research2 .
Let us define the map δ : [0, T ] × R d -→ R + such that for any t ∈ [0, T ], δ t (•) is the so-called support function of the set K t , that is to say
$$\delta_t(u):=\sup\big\{k\cdot u,\ k\in K_t\big\}.$$
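Since $K_t$ is a closed convex cone, $\delta_t(u)$ is either $0$ (when $u$ lies in the polar cone of $K_t$) or $+\infty$. The following sketch detects which case occurs by solving a linear program over $K_t$ intersected with a large box; the matrix $f$, the test vectors and the box size are hypothetical, and scipy is assumed to be available.
```python
import numpy as np
from scipy.optimize import linprog

def support_function(u, f, big=1e6):
    """delta(u) = sup {k . u : k in K} for the cone K = {q : f^T q <= 0}: 0 or +inf."""
    d = len(u)
    res = linprog(c=-np.asarray(u), A_ub=f.T, b_ub=np.zeros(d),
                  bounds=[(-big, big)] * d, method="highs")
    val = -res.fun
    return 0.0 if val < 1e-6 else np.inf

f = np.array([[1.0, -1.5], [-1.5, 1.0]])            # hypothetical f (d = 2)
print(support_function(np.array([-1.0, -1.0]), f))  # 0.0: u is in the polar cone of K
print(support_function(np.array([1.0, 0.0]), f))    # inf: u.k is unbounded over K
```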
Notice that since the zero vector in $\mathbb R^d$ belongs to $K_t$ for any $t\in[0,T]$, it is clear that $\delta$ is non-negative. This section requires the following additional assumptions. Assumption 6.2. (i) The maps $U$, $\mu$ and $\sigma$ are Markovian, that is to say, abusing notation slightly,
$$U(x)=U(x_T),\quad\mu_t(x)=\mu_t(x_t),\quad\sigma_t(x)=\sigma_t(x_t),\quad\text{for any }x\in\mathcal C_0.$$
(ii) the map t -→ f t does not depend on t, and thus δ as well.
(iii) If one defines the so-called face-lift of $U$ by
$$\hat U(x):=\sup_{u\in\mathbb R^d}\big\{U(x+f_Tu)-\delta(u)\big\},\quad x\in\mathbb R^d,$$
then we have, for some constant $C>0$ and any $(x,x')\in\mathbb R^d\times\mathbb R^d$,
$$\big|\hat U(x)-\hat U(x')\big|\le C\|x-x'\|.$$
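The face-lift can be approximated numerically by maximizing over a finite grid of directions $u$, as in the sketch below. The reward $U$, the matrix $f_T$ and the choice $\delta\equiv0$ are hypothetical placeholders (in general $\delta$ is the support function of $K$), so the output only illustrates the construction and the fact that $\hat U\ge U$.
```python
import numpy as np

def facelift(U, x, f_T, delta, grid):
    """Numerical face-lift: sup_u { U(x + f_T u) - delta(u) } over a finite grid of u's."""
    return max(U(x + f_T @ u) - delta(u) for u in grid)

# hypothetical example in d = 2
U = lambda x: min(x[0], x[1])                  # concave, Lipschitz reward
f_T = np.array([[1.0, -1.1], [-1.1, 1.0]])
delta = lambda u: 0.0                          # placeholder for the true support function
grid = [np.array([a, b]) for a in np.linspace(0, 1, 11) for b in np.linspace(0, 1, 11)]
print(facelift(U, np.array([0.0, 0.0]), f_T, delta, grid))
```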
The main result of this section is the following.
Theorem 6.3. Let Assumptions 2.1, 2.3, 3.2, 3.6 and 6.2 hold. Then, there is a constant $C>0$ such that for any $(t,t',x,x')\in[0,T]\times[t,T]\times\mathcal C_0\times\mathcal C_0$,
$$\big|v(t,x)-v(t,x')\big|\le C\|x_t-x'_t\|,\qquad\big|v(t,x)-\mathbb E^{\mathbb P^t_0}\big[v(t',x)\big]\big|\le C\big(1+\|x_t\|\big)(t'-t)^{1/2}.$$
The remaining of this section is dedicated to the proof of this result. We shall make a strong use of the connection with constrained BSDEs established before.
Another singular control problem
For any $t\in[0,T]$, let us consider the following set of controls
$$\mathcal V^t_b:=\big\{(u_s)_{t\le s\le T},\ \mathbb R^d\text{-valued},\ \mathbb F^t\text{-predictable and bounded}\big\}.$$
For any $(t,x)\in[0,T]\times\mathbb R^d$, we define
$$Y^x_t:=\sup_{u\in\mathcal V^t_b}\mathbb E^{\mathbb P^t_0}\Big[U\big(X^{t,x,u}_T\big)-\int_t^T\delta(u_s)\,ds\Big],$$
where X t,x,u is the unique strong solution on (Ω t , F t,o T , P t 0 ) of the following SDE
$$X^{t,x,u}_\cdot=x+\int_t^\cdot\mu_s\big(X^{t,x,u}_s\big)\,ds+\int_t^\cdot f_su_s\,ds+\int_t^\cdot\sigma_s\big(X^{t,x,u}_s\big)\,dB^t_s,\quad\mathbb P^t_0\text{-a.s.}$$
This value function is always well-defined since u is bounded and δ is non-negative. Our first step is to show that one can actually replace the map U above by its facelift. It is a version of Proposition 3.1 of [START_REF] Bouchard | Regularity of BSDEs with a convex constraint on the gains-process[END_REF] for our setting.
Lemma 6.4. Let Assumptions 2.1, 2.3 and 6.2 hold. Then, for any $t<T$, we have
$$Y^x_t=\sup_{u\in\mathcal V^t_b}\mathbb E^{\mathbb P^t_0}\Big[\hat U\big(X^{t,x,u}_T\big)-\int_t^T\delta(u_s)\,ds\Big].$$
Proof. Clearly, we have U ≥ U , so that one inequality is trivial. Next, fix some u ∈ V t b . For some ε o > 0 small enough and any ε ∈ (0, ε o ), we define the following element of
V t b u ε s := u s 1 [t,T -ε] (s) + ι ε 1 [T -ε,T ] (s),
where ι is any bounded and F t T -εo -measurable random variable. Then, we have by the tower property for expectations and the Markov property for SDEs (see for instance [START_REF] Claisse | A pseudo-Markov property for controlled diffusion processes[END_REF]) that
E P t 0 E P T -ε 0 U X T -ε,X t,x,u T -ε ,(u ε ) T -ε,B t T - T T -ε δ (u ε ) T -ε,B t s ds - T -ε t δ(u s )ds = E P t 0 U (X t,x,u ε T ) - T t δ(u ε s )ds , which implies that Y x t ≥ E P t 0 E P T -ε 0 U X T -ε,X t,x,u T -ε ,(u ε ) T -ε,B t T - T T -ε δ (u ε ) T -ε,B t s ds - T -ε t δ(u s )ds . (6.1)
Next, we claim that, at least along a subsequence, we have
lim ε→0 E P t 0 E P T -ε 0 U X T -ε,X t,x,u T -ε ,(u ε ) T -ε,B t T - T T -ε δ (u ε ) T -ε,B t s ds - T -ε t δ(u s )ds = E P t 0 U X t,x,u T + f T ι -δ(ι) - T t δ(u s )ds . (6.2)
Indeed, by (an easy extension of) (2.4), we first have for any p ≥ 2 Furthermore, by (2.2), we have P t 0 -a.s.
E P t 0 X T -ε,X t,x,u T -ε ,u T -ε,B t T + f ι -X T -ε,X t,x,
E P T -ε 0 X T -ε,X t,x,u T -ε ,u T -ε,B t T -X t,x,u T -ε p ≤ C p ε
E P t 0 X T -ε,X t,x,u T -ε ,u T -ε,B t T -X t,x,u T -ε p ≤ C p ε 1 2 1 + E P t 0 X t,x,u T -ε p + C p E P t 0 T T -ε u s ds p , P t 0 -a.s.
Hence, passing to a subsequence if necessary, and using the continuity of the paths of X t,x,u , we deduce from the above inequalities that X T -ε,X t,x,u ,(u ε ) T -ε,B t T converges to X t,x,u T + f ι, P t 0 -a.s. and in L p (P t 0 ).
By continuity of U , we deduce that the following convergence holds P t 0 -a.s. and in L p (P t 0 )
U X T -ε,X t,x,u T -ε ,(u ε ) T -ε,B t T - T T -ε δ (u ε ) T -ε,B t s ds -→ U (X t,x,u T + f ι) -δ(ι).
Then, this implies by the tower property that the following convergence holds in L 1 (P t 0 )
E P T -ε 0 U X T -ε,X t,x,u T -ε ,(u ε ) T -ε,B t T - T T -ε δ (u ε ) T -ε,B t s ds -→ U (X t,x,u T + f ι) -δ(ι),
which implies the desired claim (6.2).
Then, by (6.1) and (6.2) we deduce that for random variable ι which is bounded and F t T -εo -measurable, we have
Y x t ≥ E P t 0 U X t,x,u T + f ι -δ(ι) - T t δ(u s )ds , (6.3)
and the same statement holds for any ι which is bounded and F t T --measurable by arbitrariness of ε o . Now, since the map (x, ι) -→ U (x + f ι) -δ(ι) is Borel measurable, we can argue as in the proof of Proposition 3.1 in [START_REF] Bouchard | Regularity of BSDEs with a convex constraint on the gains-process[END_REF] to obtain the existence for any ε > 0 of a Borel measurable map x -→ ι ε (x) such that U (X t,x,u T ) ≤ U X t,x,u T + f ι ε (X t,x,u T ) -δ ι ε (X t,x,u T ) + ε.
Then, if we define ι n,ε := ι ε (X Then the required result follows by letting first n go to infinity and dominated convergence (remember that U is Lipschitz and X t,x,u has moments of any order), and then ε go to 0. ✷
The next result is Proposition 3.3 of [START_REF] Bouchard | Regularity of BSDEs with a convex constraint on the gains-process[END_REF] in our framework.
Lemma 6.5. Let Assumptions 2.1, 2.3 and 6.2 hold. We have, for any $(t,x)\in[0,T)\times\mathbb R^d$,
$$Y^x_t=\operatorname*{essup}_{\iota\in\mathbb L^\infty(\mathcal F_t)}\big\{Y^{x+f\iota}_t-\delta(\iota)\big\},\ \text{a.s.},$$
where $\mathbb L^\infty(\mathcal F_t)$ is the set of $\mathbb R^d$-valued, bounded and $\mathcal F_t$-measurable random variables.
Proof. First of all, one inequality is trivial by taking a constant control $\iota=0$. Then, the following dynamic programming principle holds classically for any $0\le t\le s\le T$
$$Y^x_t=\sup_{u\in\mathcal V^t_b}\mathbb E^{\mathbb P^t_0}\Big[Y^{X^{t,x,u}_s}_s-\int_t^s\delta(u_r)\,dr\Big].\qquad(6.4)$$
Fix now some $(t,x)\in[0,T]\times\mathbb R^d$, some $\iota\in\mathbb L^\infty(\mathcal F_t)$, some $\varepsilon>0$ small enough, and define
u ε := 1 ε ι1 [t,t+ε] ∈ V t b .
By the dynamic programming principle, we have, Be definition of u ε , we have, using similar arguments as in the proof of Lemma 6.4, that along a subsequence if necessary, Thus, we deduce from passing to the limit in (6.5) that We can now give our main result of this section. Proposition 6.6. Let Assumptions 2.1, 2.3 and 6.2 hold. Then, there is some constant C > 0 such that for any 0 ≤ t ≤ s < T , any (x,
Y x t ≥ E P t 0 Y X t,
E P t 0 U X
Y x t ≥ E P t 0 U X t,
x ′ ) ∈ R d × R d Y x t -Y x ′ t ≤ C x -x ′ , Y x t -E P t 0 [Y x s ] ≤ C(1 + x )(s -t) 1 2 .
Proof. The first result is an immediate consequence of Lemma 6.4, the fact that $\hat U$ is Lipschitz continuous, and classical estimates on the solution of the SDE satisfied by $X^{t,x,u}$.
Next, we have by (6.4), the regularity in x we just proved and (2.2)
Y x t ≥ E P t 0 Y X t,x,0 s s ≥ E P t 0 [Y x s ] -CE P t 0 X t,x,0 s -x ≥ E P t 0 [Y x s ] -C 1 + x (s -t) 1/2 .
Then, we have for any u ∈ V t b , using the fact that t -→ δ t (•) is non-increasing and sublinear, as well as Lemma 6.5 where we used the fact that in the expression X t,x,u s -f s t u r dr -x the control u actually disappears. By definition of Y x t , this ends the proof. ✷
E P s 0 U X s,X
Weak formulation and the main result
For any (t, x) ∈ [0, T ] × R d and u ∈ V t b , we now define the following P t 0 -equivalent measure
dP t,x,u dP t 0 = E • t (σ s ) -1 X t,x,0 f u s • dB t s .
The weak formulation of the control problem is defined as
$$Y^{w,x}_t:=\sup_{u\in\mathcal V^t_b}\mathbb E^{\mathbb P^{t,x,u}}\Big[U\big(X^{t,x,0}_T\big)-\int_t^T\delta(u_s)\,ds\Big],\quad\text{for any }(t,x)\in[0,T]\times\mathbb R^d.$$
Here, $\|x\|_{\infty,s}:=\sup_{t\le u\le s}\|x_u\|$, where $\|\cdot\|$ is the usual Euclidean norm on $\mathbb R^d$, which we denote simply by $|\cdot|$ when $d=1$; furthermore, the usual inner product on $\mathbb R^d$ is denoted by $x\cdot y$, for any $(x,y)\in\mathbb R^d\times\mathbb R^d$.
u T -ε ,(u ε ) T -ε,B t T p ≤ C p E P t 0
1 2 1 +
1 X t,x,u T -ε p + C p E P T -ε 0 T T -ε u T -ε,B t s ds p ,which implies by the tower property
Lemma 6.4 and the tower property we have for any u ∈ V t b r )dr .
r )dr .
r )dr , which implies by Lemma 6.4 and arbitrariness of u∈ V t b Y x t ≥ Y x+f ι t -δ(ι),hence the desired result. ✷
The following proposition is a simple consequence of Remark 3.8 and Theorem 4.5 of [START_REF] Karoui | Capacities, measurable selection and dynamic programming part II: application in stochastic control problems[END_REF].
Proposition 6.7. For any $(t,x)\in[0,T]\times\mathbb R^d$, we have $Y^x_t=Y^{w,x}_t$.
We can now proceed to the proof of Theorem 6.3.
Proof of Theorem 6.3. By Theorem 4.1 of [START_REF] Bouchard | Regularity of BSDEs with a convex constraint on the gains-process[END_REF], we have, for any $(t,x)\in[0,T]\times\mathcal C_0$, $Y^{w,x(t)}_t=Y^{t,x}_t$, the first component of the minimal solution of the constrained BSDE (3.1)-(3.2). It then suffices to apply Theorem 4.7 together with Proposition 6.6. ✷
ε,n is bounded and F T --measurable by continuity of the paths X t,x,u . Hence by (6.3) we have, using the fact that δ T is null at 0
Y x t ≥ E P t 0 U (X t,x,u T )1 |ιε(X t,x,u T )|≤n + U X t,x,u T 1 |ιε(X t,x,u T )|>n -
t,x,u T )1 |ιε(X t,x,u T )|≤n , ι T t δ s (u s )ds -ε.
.
Then by the tower property, we have, taking expectations under P t 0 on both sides
T t,x,u s ,u s,B t - s T δ(u r )dr - t s δ(u r )dr ≤ Y X t,x,u s s - t s δ(u r )dr
≤ Y X t,x,u s s -δ t s u r dr
≤ Y X t,x,u s s -f s t urdr
E P t 0 U (X t,x,u T ) - t T δ(u r )dr ≤ E P t 0 Y X t,x,u s s -f s t urdr
≤ E P t 0 [Y x s ] + E P t 0 X t,x,u s -f t s u r dr -x
≤ E P t 0 [Y x
s ] + C 1 + x (s -t) 1/2 ,
We would like to point the reader to the recent work in preparation [6], which will extend the results of [START_REF] Bouchard | Regularity of BSDEs with a convex constraint on the gains-process[END_REF] to the non-Markovian case.
Acknowledgements. This work benefited from the support of the ANR project Pacman, ANR-16-CE05-0027.
Thomas Gerber
HEISENBERG ALGEBRA, WEDGES AND CRYSTALS
We explain how the action of the Heisenberg algebra on the space of q-deformed wedges yields the Heisenberg crystal structure on charged multipartitions, by using the boson-fermion correspondence and looking at the action of the Schur functions at q = 0. In addition, we give the explicit formula for computing this crystal in full generality.
Introduction
Categorification of representations of affine quantum groups has proved to be an important tool for understanding many classic objects arising from modular group representation theory, among which Hecke algebras and rational Cherednik algebras of cyclotomic type, and finite classical groups. More precisely, the study of crystals and canonical bases of the level ℓ Fock space representations F s of U ′ q ( sl e ) gives answers to several classical problems in combinatorial terms. In particular, we know that the U ′ q ( sl e )-crystal graph of F s can be categorified in the following ways: -by the parabolic branching rule for modular cyclotomic Hecke algebras [START_REF] Ariki | On the decomposition numbers of the Hecke algebra of G(m, 1, n)[END_REF], when restricting to the connected component containing the empty ℓ-partition, -by Bezrukavnikov and Etingof's parabolic branching rule for cyclotomic rational Cherednik algebras [START_REF] Shan | Crystals of Fock spaces and cyclotomic rational double affine Hecke algebras[END_REF], -by the weak Harish-Chandra modular branching rule on unipotent representations for finite unitary groups [START_REF] Gerber | Harish-Chandra series in finite unitary groups and crystal graphs[END_REF], [START_REF] Dudas | Categorical actions on unipotent representations I[END_REF], for ℓ = 2 and s varying. In each case, the branching rule depends on some parameters that are explicitly determined by the parameters e and s of the Fock space. Recently, there has been some important developments when Shan and Vasserot [START_REF] Shan | Heisenberg algebras and rational double affine Hecke algebras[END_REF] categorified the action of the Heisenberg algebra on a certain direct sum of Fock spaces, in order to prove a conjecture by Etingof [START_REF] Etingof | Supports of irreducible spherical representations of rational Cherednik algebras of finite Coxeter groups[END_REF]. Losev gave in [START_REF] Losev | Supports of simple modules in cyclotomic Cherednik categories O[END_REF] a formulation of Shan and Vasserot's results in terms of crystals, as well as an explicit formula for computing it in a particular case. Independently and using different methods, the author defined a notion of Heisenberg crystal for higher level Fock spaces [START_REF] Gerber | Triple crystal action in Fock spaces[END_REF], that turns out to coincide with Losev's crystal. An explicit formula was also given in another particular case, using level-rank duality. Like the U ′ q ( sl e )-crystal, the Heisenberg crystal gives some information at the categorical level. In particular, it yields a characterisation of -the finite dimensional irreducible modules in the cyclotomic Cherednik category O by [START_REF] Shan | Heisenberg algebras and rational double affine Hecke algebras[END_REF] and [START_REF] Gerber | Triple crystal action in Fock spaces[END_REF], -the usual cuspidal irreducible unipotent modular representations of finite unitary groups [START_REF] Dudas | Categorical actions on unipotent representations of finite classical groups[END_REF]. This paper solves two remaining problems about the Heisenberg crystal. Firstly, even though it originally arises from the study of cyclotomic rational Cherednik algebras (it is determined by combinatorial versions of certain adjoint functors defined on the bounded derived category O), the Heisenberg crystal has an intrinsic existence as shown in [START_REF] Gerber | Triple crystal action in Fock spaces[END_REF]. 
Therefore, it is natural to ask for an algebraic construction of the Heisenberg crystal which would be independent of any categorical interpretation. This is achieved via the boson-fermion correspondence and the use of the Schur functions, acting on Uglov's canonical basis of F s . This gives a new realisation of the Heisenberg crystal, analogous to Kashiwara's crystals for quantum groups. Secondly, we give an explicit formula for computing the Heisenberg crystal in full generality. This generalises and completes the particular results of [START_REF] Losev | Supports of simple modules in cyclotomic Cherednik categories O[END_REF] and [START_REF] Gerber | Triple crystal action in Fock spaces[END_REF]. This is done in the spirit of [START_REF] Foda | Branching functions of A (1) n-1 and Jantzen-Seitz problem for Ariki-Koike algebras[END_REF], where formulas for the U ′ q ( sl e )-crystal were made explicit.
The present paper has the following structure. In Section 2, we start by introducing in detail several combinatorial objects indexing the basis of the wedge space (namely charged multipartitions, abaci and wedges) and the different ways to identify them. Then, we quickly recall some essential facts about the U ′ q ( sl e )-structure of the Fock spaces F s . Section 3 focuses on the Heisenberg action on the wedge space F s , seen as a certain direct sum of level ℓ Fock spaces F s . In particular, we recall the definition of the Heisenberg crystal given in [START_REF] Gerber | Triple crystal action in Fock spaces[END_REF]. Then, we give in Section 4 a solution to the first problem mentioned above. More precisely, we recall the quantum boson-fermion correspondence and fundamental facts about symmetric functions. Inspired by Shan and Vasserot [START_REF] Shan | Heisenberg algebras and rational double affine Hecke algebras[END_REF] and Leclerc and Thibon [START_REF] Leclerc | Littlewood-Richardson coefficients and Kazhdan-Lusztig polynomials[END_REF], we study the action of the Schur functions on the wedge space and use a result of Iijima [START_REF] Iijima | On a higher level extension of Leclerc-Thibon product theorem in q-deformed Fock spaces[END_REF] to construct the Heisenberg crystal as a version of this action at q = 0 (Theorem 4.4), resembling Kashiwara's philosophy of crystal and canonical bases. Most importantly, by doing so, we bypass entirely Shan and Vasserot's original categorical construction. Section 5 is devoted to the explicit computation of the Heisenberg crystal. We introduce level ℓ vertical estrips, as well as the notion of good vertical e-strips by defining an appropriate order. The action of the Heisenberg crystal operators is then given in terms of adding good level ℓ vertical e-strips (Theorem 5.11), which is reminiscent of the explicit formula for the Kashiwara crystal operators first given in [START_REF] Jimbo | Combinatorics of representations of U q ( sl(n)) at q = 0[END_REF] (see also [START_REF] Foda | Branching functions of A (1) n-1 and Jantzen-Seitz problem for Ariki-Koike algebras[END_REF]). We relate this result to other combinatorial procedures in the literature answering in particular a question of Tingley [START_REF] Tingley | Three combinatorial models for sl n crystals, with applications to cylindric plane partitions[END_REF]. In addition, we give several examples of explicit computations. Finally, we recall some useful facts about level-rank duality in appendix, enabling the definition of the Heisenberg crystal.
2. Higher level Fock spaces
2.1. Charged multipartitions and wedges.
2.1.1. Charged multipartitions. Fix once and for all elements e, ℓ ∈ Z ≥2 and s ∈ Z. An ℓ-partition (or simply multipartition) is an ℓ-tuple of partitions λ = (λ 1 , . . . , λ ℓ ). These will be represented using Young diagrams. Denote Π ℓ the set of ℓ-partitions and Π = Π 1 the set of partitions. Partitions will sometimes be denoted multiplicatively for convenience, e.g. (2, 1, 1) = (2.1 2 ) = . An ℓ-charge (or simply (multi)charge) is an ℓ-tuple of integers s = (s 1 , . . . , s ℓ ). We write |s| = ℓ j=1 s j . We call charged ℓ-partition the data consisting of an ℓ-partition λ and an ℓ-charge s, and denote it by |λ, s . For a box γ = (a, b, j) in the Young diagram of λ (where a ∈ Z is the row index of the box, b ∈ Z the column index and j ∈ {1, . . . , ℓ} the coordinate), let c(γ) = ba + s j , the content of γ. We will represent |λ, s by filling the boxes of the Young diagram of λ with their contents.
Example 2.1. Take ℓ = 2, s = (-1, 2) and λ = (2.1, 1 2 ). We have s = -1 + 2 = 1, and |λ, s is represented by filling the Young diagrams of λ with the contents of their boxes: the first component has contents -1, 0 on its first row and -2 on its second row, and the second component has contents 2 and 1.
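For concreteness, the contents can be computed mechanically from c(γ) = b - a + s j ; the short sketch below (function names are chosen freely for illustration) reproduces the fillings of Example 2.1.

```python
def contents(multipartition, charge):
    """Content b - a + s_j of every box (a, b, j) of a charged l-partition."""
    filled = []
    for part, s_j in zip(multipartition, charge):
        filled.append([[b - a + s_j for b in range(1, row + 1)]
                       for a, row in enumerate(part, start=1)])
    return filled

# Example 2.1: lambda = ((2,1), (1,1)), s = (-1, 2)
print(contents([[2, 1], [1, 1]], [-1, 2]))
# [[[-1, 0], [-2]], [[2], [1]]]
```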
In the following, we will only consider multicharges s verifying |s| := s 1 + • • • + s ℓ = s.
For a partition λ, let λ ′ denote its conjugate, that is the partition obtained by swapping rows and columns in the Young diagram of λ. We extend this to charged multipartitions by setting |λ, s ′ = |λ ′ , s ′ where λ ′ = ((λ ℓ ) ′ , . . . , (λ 1 ) ′ ) and s ′ = (-s ℓ , . . . , -s 1 ).
2.1.2. Abaci representation.
A charged ℓ-partition |λ, s can also be represented by the Z-graded ℓ-abacus
A(λ, s) = { ( j, λ j k + s j - k + 1) ; j ∈ {1, . . . , ℓ}, k ∈ Z ≥1 },
where λ j k denotes the k-th part of λ j . We picture A(λ, s) by putting a (black) bead at position ( j, λ j k + s j - k + 1) for each k ∈ Z ≥1 , where j ∈ {1, . . . , ℓ} is the row index (numbered from bottom to top). In the rest of this paper, we will sometimes identify |λ, s with A(λ, s).
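The bead positions can be listed directly from this formula; the following minimal sketch (which truncates the infinite runners at a chosen depth, a convention of ours) recovers the abacus of Example 2.2.

```python
def abacus(multipartition, charge, depth=10):
    """Bead positions (j, lambda^j_k + s_j - k + 1) of the l-abacus A(lambda, s).

    Only the first `depth` beads of each runner are returned, since each
    runner of the abacus is infinite (parts of size zero beyond the last one).
    """
    beads = []
    for j, (part, s_j) in enumerate(zip(multipartition, charge), start=1):
        for k in range(1, depth + 1):
            part_k = part[k - 1] if k <= len(part) else 0
            beads.append((j, part_k + s_j - k + 1))
    return beads

# Example 2.2: lambda = ((2,1), (1,1)), s = (-1, 2)
print(sorted(abacus([[2, 1], [1, 1]], [-1, 2], depth=5)))
# [(1, -5), (1, -4), (1, -3), (1, -1), (1, 1), (2, -2), (2, -1), (2, 0), (2, 2), (2, 3)]
```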
Note that the conjugate of a multipartition can be easily described on the abacus: it suffices to rotate it by 180 degrees around the point of coordinates (1/2, ℓ/2) and swap the roles of the white and black beads.
Using the abaci realisation of charged multipartitions, we define below a bijection
τ : { |λ, s ; λ ∈ Π 1 } -∼→ { |λ, s ; λ ∈ Π ℓ , |s| = s }, A(λ, s) ↦-→ A(λ, s),
which can be seen as a twisted version of taking the ℓ-quotient and the ℓ-core of a partition, see [START_REF] Xavier | Bases canoniques d'espaces de Fock de niveau supérieur[END_REF]Chapter 1]. However, unlike the usual ℓ-quotient and ℓ-core, τ and τ -1 will depend not only on ℓ, but also on e.
Writing down the Euclidean division first by eℓ and then by e, one can decompose any n ∈ Z as n = -z(n)eℓ + (y(n) - 1)e + (x(n) - 1) with z(n) ∈ Z, 1 ≤ y(n) ≤ ℓ and 1 ≤ x(n) ≤ e. We can then associate to each pair (1, c) ∈ {1} × Z the pair τ(1, c) = ( j, d) ∈ {1, . . . , ℓ} × Z where j = y(-c) and d = -(x(-c) - 1) + ez(-c).
In particular, τ sends the bead in position (1, c) into the rectangle z(-c), on the row y(-c) and column x(-c) (numbered from right to left within each rectangle). The map τ is bijective and we can see τ -1 as the following procedure:
(1) Divide the ℓ-abacus into rectangles of size e × ℓ, where the z-th rectangle (z ∈ Z) contains the positions ( j, d) for all 1 ≤ j ≤ ℓ and -e + 1 + ze ≤ d ≤ ze.
(2) Relabel each ( j, d) by the second coordinate of τ -1 ( j, d), see Figure 1 for an example.
(3) Replace the newly indexed beads on a 1-abacus according to this new labeling.
Let |λ, s be a charged ℓ-partition with |s| = s, and let λ = (λ 1 , λ 2 , . . . ) be the level one partition such that τ(|λ, s ) is the given charged ℓ-partition |λ, s . Set β = (β 1 , β 2 , . . . ) where β k = λ k + s - k + 1 for all k ∈ Z ≥1 . In other terms, β is the sequence of integers indexing the beads of A(λ, s).
We clearly have β ∈ P > (s), so we can consider the elementary wedge u β . In fact, we will also identify |λ, s with u β .
To sum up, we will allow the following identifications:
|λ, s ←→ A(λ, s) ←τ→ A(λ, s) ←→ u β ←→ |λ, s .
We will denote B s = { |λ, s ; λ ∈ Π ℓ } for an ℓ-charge s, and B s = { |λ, s ; λ ∈ Π } = ⊔ |s|=s B s .
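The decomposition n = -z(n)eℓ + (y(n) - 1)e + (x(n) - 1) used to define τ is an ordinary Euclidean division, so τ is easy to evaluate; the sketch below is a direct transcription of that formula (the function name and the use of Python's floor division are our own choices).

```python
def tau(c, e, l):
    """Position (j, d) of the bead tau(1, c) in the l-abacus.

    Decomposes n = -c as n = -z*e*l + (y - 1)*e + (x - 1) with 1 <= y <= l
    and 1 <= x <= e, then returns (y, -(x - 1) + e*z) as in the text.
    """
    n = -c
    z = -(n // (e * l))          # chosen so that n + z*e*l lies in {0, ..., e*l - 1}
    r = n % (e * l)
    y, x = r // e + 1, r % e + 1
    return y, -(x - 1) + e * z
```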
2.2. Fock space as U ′ q ( sl e )-module. Let q be an indeterminate.
2.2.1. The JMMO Fock space. Fix an ℓ-charge s. The Fock space associated to s is the Q(q)-vector space F s with basis B s .
Theorem 2.5 (Jimbo-Misra-Miwa-Okado [START_REF] Jimbo | Combinatorics of representations of U q ( sl(n)) at q = 0[END_REF]). The space F s is an integrable U ′ q ( sl e )-module of level ℓ.
The action of the generators of U ′ q ( sl e ) depends on s and e, and is given explicitly in terms of addable/removable boxes, see e.g. [START_REF] Geck | Representations of Hecke Algebras at Roots of Unity[END_REF]Section 6.2]. In turn, this induces a U ′ q ( sl e )-crystal structure (also called Kashiwara crystal) [START_REF] Kashiwara | Global crystal bases of quantum groups[END_REF], usually encoded in a graph B s , whose vertices are the elements of B s , and with colored arrows representing the action of the Kashiwara operators. An explicit (recursive) formula for computing this graph was first given in [START_REF] Jimbo | Combinatorics of representations of U q ( sl(n)) at q = 0[END_REF]: two vertices are joined by an arrow if and only if one is obtained from the other by adding/removing a good box, see [7, Section 6.2] for details.
Figure 2. The beginning of the Kashiwara crystal graph B s of F s for ℓ = 2, e = 3 and s = (0, 1).
It has several connected components, each of which is parametrised by its only source vertex, or highest weight vertex. The decomposition of this graph into connected components reflects the decomposition of F s into irreducible representations (which exists because F s is integrable according to Theorem 2.5).
2.2.2. Uglov's wedge space. Denote F s the Q(q)-vector space spanned by the elementary wedges, and subject to the straightening relations defined in [START_REF] Uglov | Canonical bases of higher-level q-deformed Fock spaces and Kazhdan-Lusztig polynomials[END_REF]Proposition 3.16]. This is called the space of semi-infinite q-wedges, or simply the wedge space, and the elements of F s will be called wedges.
Using the straightening relations, a q-wedge can be expressed as a Q(q)-linear combination of ordered wedges. In fact, one can show (see [START_REF] Uglov | Canonical bases of higher-level q-deformed Fock spaces and Kazhdan-Lusztig polynomials[END_REF]Proposition 4.1]) that the set of ordered wedges B s forms a basis of F s .
Remark 2.6.
(1) The terminology "wedge space" is justified by the original construction of this vector space in the level one case by Kashiwara, Miwa and Stern [START_REF] Kashiwara | Decomposition of q-deformed Fock spaces[END_REF]. In this context, one can first construct a quantum version of the usual k-fold wedge product (or exterior power) Λ k V, where V is the natural U ′ q ( sl e )-representation (an affinisation of C e ⊗ C ℓ ). The space F s is then defined as the projective limit (taking k → ∞) of Λ k V. In the higher level case, the analogous construction was achieved by Uglov [START_REF] Uglov | Canonical bases of higher-level q-deformed Fock spaces and Kazhdan-Lusztig polynomials[END_REF].
(2) Using the bijection between ordered wedges and partitions charged by s, one can see F s as a level 1 Fock space, whence the notation. In fact, it is sometimes called the fermionic Fock space.
Theorem 2.7. The wedge space F s is an integrable U ′ q ( sl e )-module of level ℓ.
Proof. The identification B s = { u β ∈ F s | β ∈ P > (s) } = { |λ, s ∈ F s ; |s| = s } = ⊔ |s|=s B s of Section 2.1.3 yields the Q(q)-vector space decomposition F s = ⊕ |s|=s F s . By Theorem 2.5, this decomposition still holds as a decomposition of integrable U ′ q ( sl e )-modules.
3. The Heisenberg action
It turns out that the wedge space has some additional structure, namely that of a H-module, where H is the (quantum) Heisenberg algebra. This has been first observed by Uglov [START_REF] Uglov | Canonical bases of higher-level q-deformed Fock spaces and Kazhdan-Lusztig polynomials[END_REF], generalising some results of Lascoux, Leclerc and Thibon [START_REF] Lascoux | Ribbon Tableaux, Hall-Littlewood Functions, Quantum Affine Algebras and Unipotent Varieties[END_REF]. Let us recall the definition of this algebra. Definition 3.1. The (quantum) Heisenberg algebra is the unital Q(q)-algebra H with generators p m , m ∈ Z × and defining relations
[p m , p m ′ ] = δ m,-m ′ · m · (1 - q^{-2me})/(1 - q^{-2m}) × (1 - q^{2mℓ})/(1 - q^{2m}) for m, m ′ ∈ Z × .
The elements p m are called bosons.
Note that this is a q-deformation of the usual Heisenberg algebra with relations [p m , p m ′ ] = δ m,-m ′ m.
Theorem 3.2. The formula
p m (u β ) = Σ k≥1 u β 1 ∧ . . . ∧ u β k-1 ∧ u β k -eℓm ∧ u β k+1 ∧ . . . for u β ∈ F s and m ∈ Z ×
endows F s with the structure of H-module.
For a proof, we refer to [START_REF] Uglov | Canonical bases of higher-level q-deformed Fock spaces and Kazhdan-Lusztig polynomials[END_REF]Proposition 4.4 and 4.5]. This is quite technical and is done in two distinct steps, the second of which requires the notion of asymptotic wedge. This will be crucial in Section 4.
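As an illustration of the formula in Theorem 3.2, one can simply list the index sequences of the wedges occurring in p m (u β ) before any straightening is performed; truncating the semi-infinite wedge to finitely many factors is an assumption made only for this sketch.

```python
def boson_terms(beta, m, e, l, depth=8):
    """Index sequences of the wedges appearing in p_m(u_beta) (Theorem 3.2).

    `beta` is a truncated strictly decreasing sequence beta_1 > beta_2 > ...;
    the k-th term replaces beta_k by beta_k - e*l*m.  The result still has to
    be rewritten on ordered wedges via the straightening relations.
    """
    terms = []
    for k in range(min(depth, len(beta))):
        new = list(beta)
        new[k] -= e * l * m
        terms.append(tuple(new))
    return terms
```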
Corollary 3.3. The action of the bosons preserves the level ℓ Fock spaces F s for |s| = s.
One can use the explicit action of p m on F s to show that it preserves the ℓ-charges s, or rely on Uglov's argument [START_REF] Uglov | Canonical bases of higher-level q-deformed Fock spaces and Kazhdan-Lusztig polynomials[END_REF]Section 4.3]. As a consequence of this corollary, F s = ⊕ |s|=s F s also holds as an H-module decomposition.
3.2. Some notations and definitions. For two partitions λ and µ, denote λ + µ the reordering of (λ 1 , µ 1 , λ 2 , µ 2 , . . . ) and kλ the partition (λ k 1 .λ k 2 . . . . ). Extend these notations to ℓ-partitions by doing these operations coordinatewise.
Remark 3.4. This is not the usual notation for adding partitions or multiplying by an integer. One recovers the usual one by conjugating.
Example 3.5 illustrates these operations on Young diagrams.
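The two operations of Section 3.2 are simple to realise on lists of parts; the helper names below are ours.

```python
def add_partitions(lam, mu):
    """lambda + mu: reordering of (lam_1, mu_1, lam_2, mu_2, ...) into a partition."""
    return sorted(list(lam) + list(mu), reverse=True)

def times(k, lam):
    """k * lambda: the partition (lam_1^k . lam_2^k . ...), each part repeated k times."""
    return sorted([p for p in lam for _ in range(k)], reverse=True)

# conjugating recovers the usual (componentwise) sum, as noted in Remark 3.4
print(add_partitions([2, 1], times(2, [1])))   # [2, 1, 1, 1]
```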
We also define the notion of asymptotic charges, which will mostly be useful in Section 4.
Definition 3.6. A wedge u β ∈ F s is called j 0 -asymptotic if there exists j 0 ∈ {1, . . . , ℓ} such that the corresponding charged multipartition |λ, s verifies |s j - s j 0 | ≥ |λ| for all j ≠ j 0 .
3.3. The Heisenberg crystal.
A notion of crystal for the Heisenberg algebra, or H-crystal, has been indepedently introduced by Losev [START_REF] Losev | Supports of simple modules in cyclotomic Cherednik categories O[END_REF] and the author [START_REF] Gerber | Triple crystal action in Fock spaces[END_REF]. Explicit formulas for computing this crystal have been given in some particular cases: asymptotic in [START_REF] Losev | Supports of simple modules in cyclotomic Cherednik categories O[END_REF] and doubly highest weight in [START_REF] Gerber | Triple crystal action in Fock spaces[END_REF].
Recall the definition of the H-crystal according to [START_REF] Gerber | Triple crystal action in Fock spaces[END_REF]. This requires the crystal level-rank duality exposed in Appendix.
Definition 3.7. Let |λ, s ∈ B(s), which we identify with its level-rank dual charged e-partition. We call |λ, s a doubly highest weight vertex if it is a highest weight vertex simultaneously in the U ′ q ( sl e )-crystal and in the U ′ p ( sl ℓ )-crystal. Some important properties of doubly highest weight vertices are exposed in [9, Section 5.2]. In particular, an element |λ, s is a doubly highest weight vertex if and only if it has a totally e-periodic ℓ-abacus and a totally ℓ-periodic e-abacus, according to a result by Jacon and Lecouvey [START_REF] Jacon | A combinatorial decomposition of higher level Fock spaces[END_REF], see the definition therein. Moreover, every bead of a given period encodes a same part size in λ, and one can define the partition κ = κ|λ, s as (κ 1 , κ 2 , . . . ) where κ k is the part encoded by the k-th period.
Definition 3.9. We say that |λ, s ∈ B s is a highest weight vertex for H if b-σ |λ, s = 0 for all σ ∈ Π.
Note that each b±σ is well defined since (2) allows to define b±σ |λ, s even when |λ, s is not a doubly highest weight vertex. Moreover, [9, Theorem 7.6] claims that in the asymptotic case, the Heisenberg crystal operators coincide with the combinatorial maps introduced by Losev [START_REF] Losev | Supports of simple modules in cyclotomic Cherednik categories O[END_REF]. Note also that the definition of κ for doubly highest weight vertices induces (by (2) of Definition 3.8) a surjective map κ : B s → Π such that b-κ|λ,s |λ, s is a highest weight vertex for H for all λ ∈ Π ℓ .
In order to have a description of the H-crystal similar to the U ′ q ( sl e )-crystal, we introduce the maps b±1,c of Definition 3.10 (see also Remark 3.11). By definition, each b±1,c is a composition of maps b±σ . Conversely, each bσ is a composition of maps b1,c , see [9, Formula (6.17)]. We will also call the maps b1,c Heisenberg crystal operators.
Proposition 3.12.
(1) The H-module decomposition F s = ⊕ |s|=s F s induces a decomposition of C s .
(2) Each connected component of C s is isomorphic to the Young graph.
(3) A vertex |λ, s is a source in C s if and only if |λ, s is a highest weight vertex for H.
(4) The depth of an element |λ, s in C s is |κ|λ, s |.
Proof. We know from Definition 3.8 that the action of the Heisenberg crystal operators preserves the multicharge, proving (1). In particular, there is a notion of H-crystal for F s , which we denote C s . Moreover, the H-crystal is characterised by being the preimage of a U q (sl ∞ )-crystal on the set of partitions under a certain bijection, depending on κ, see [9, Remark 6.16]. The U q (sl ∞ )-crystal is exactly the Young graph on partitions, see Figure 3, where the arrows are colored by the contents of the added boxes, which proves (2). In fact, the bijection from the Young graph to a given connected component, parametrised by its source vertex |λ, s , is given by σ → bσ |λ, s , and its inverse is |λ, s → κ|λ, s . Point (3) is clear by definition, and (4) follows from the definition of κ.
4. Canonical bases and Schur functions
4.1. The boson-fermion correspondence. Denote Λ the algebra of symmetric functions, that is, the projective limit of the Q(q)-algebras of symmetric polynomials in finitely many indeterminates [START_REF] Macdonald | Symmetric functions and Hall polynomials[END_REF]Chapter 1]:
Λ = Q(q)[X 1 , X 2 , . . . ]^{S ∞} .
The space Λ has several natural linear bases, among which:
- the monomial functions {M σ ; σ ∈ Π}, where M σ = Σ π X 1 ^{π 1} X 2 ^{π 2} · · · , the sum running over all permutations π of σ,
- the complete functions {H σ ; σ ∈ Π}, where H σ = H σ 1 H σ 2 · · · and H m = Σ k 1 ≤•••≤k m X k 1 · · · X k m for m ∈ N,
- the power sums {P σ ; σ ∈ Π}, where P σ = P σ 1 P σ 2 · · · and P m = Σ k≥1 X k ^m for m ∈ N,
- the Schur functions {S σ ; σ ∈ Π}, where S σ = Σ π∈Π K σ,π M π and the K σ,π are the Kostka numbers, see [START_REF] Stanley | Enumerative Combinatorics[END_REF]Chapter 7].
The expansion of H m in the basis of the power sums is given by
H m = Σ π∈Π, |π|=m (1/z π ) P π ,
where z π = Π k>0 k^{α k} α k ! with α k = π ′ k - π ′ k+1 , for all π ∈ Π. Moreover, by duality, the Kostka numbers also appear as the coefficients of the complete functions in the basis of the Schur functions:
H σ = Σ π∈Π K π,σ S π .
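The coefficient z π only depends on the multiplicities of the parts of π, so it is easy to tabulate; a minimal sketch (assuming only the standard formula recalled above):

```python
from math import factorial
from collections import Counter

def z(pi):
    """z_pi = prod_k k^{alpha_k} * alpha_k!, with alpha_k the multiplicity of k in pi."""
    out = 1
    for k, a in Counter(pi).items():
        out *= k ** a * factorial(a)
    return out

# coefficients 1/z_pi of P_pi in H_3: partitions (3), (2,1), (1,1,1) give z = 3, 2, 6
print([z(p) for p in [(3,), (2, 1), (1, 1, 1)]])   # [3, 2, 6]
```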
By a result of Miwa, Jimbo and Date [START_REF] Miwa | Solitons: Differential Equations, Symmetries and Infinite Dimensional Algebras[END_REF], there is a vector space isomorphism
F s -∼→ Λ , |λ, s ↦-→ S λ ,
called the boson-fermion correspondence. In fact, when referring to the symmetric realisation Λ, one sometimes uses the term bosonic Fock space, as opposed to the fermionic, antisymmetric definition of F s . The following result is [24, Section 4.2 and Proposition 4.6].
Proposition 4.1.
There is a H-module isomorphism F s ≃ Λ, where, at q = 1, the action of p m on F s is mapped to the multiplication by P m on Λ.
In general, the action of p m is mapped to a q-deformation of the multiplication by P m , and p -m to a q-deformation of the derivation with respect to P m , see [START_REF] Lascoux | Ribbon Tableaux, Hall-Littlewood Functions, Quantum Affine Algebras and Unipotent Varieties[END_REF]Section 5] and [27, Section 5.1].
4.2.
Action of the Schur functions. In [START_REF] Leclerc | Littlewood-Richardson coefficients and Kazhdan-Lusztig polynomials[END_REF], Leclerc and Thibon studied the action of the Heisenberg algebra on level 1 Fock spaces in order to give an analogue of Lusztig's version [START_REF] Lusztig | Modular representations and quantum groups[END_REF] of the Steinberg tensor product theorem. Their idea was to use a different basis of Λ to compute the Heisenberg action in a simpler way, namely that of Schur functions. This result has been generalised to the level ℓ case by Iijima in a particular case [START_REF] Iijima | On a higher level extension of Leclerc-Thibon product theorem in q-deformed Fock spaces[END_REF]. Independently, the Schur functions have been used by Shan and Vasserot to categorify the Heisenberg action in the context of Cherednik algebras. More precisely, they constructed a functor on the category O for cyclotomic rational Cherednik algebras corresponding to the multiplication by a Schur function on the bosonic Fock space Λ [24, Proposition 5.13]. The aim of this section is to use Iijima's result to recover in a direct, simple way the results of [START_REF] Losev | Supports of simple modules in cyclotomic Cherednik categories O[END_REF] and [START_REF] Gerber | Triple crystal action in Fock spaces[END_REF] and, by doing so, bypassing Shan and Vasserot's categorical constructions. 4.2.1. Canonical bases of the Fock space. In the early nineties, Kashiwara and Lusztig have independently introduced the notion of canonical bases for irreducible highest weight representations of quantum groups, see e.g. [START_REF] Kashiwara | Global crystal bases of quantum groups[END_REF]. They are characterised by their invariance under a certain involution. Uglov [START_REF] Uglov | Canonical bases of higher-level q-deformed Fock spaces and Kazhdan-Lusztig polynomials[END_REF] has proved an analogous result for the Fock spaces F s , even though it is no longer irreducible. We recall the definition of the involution on F s and Uglov's theorem. For any r ∈ N and t 1 , . . . ,
t r ∈ Z r , let ι(t 1 , . . . , t r ) = ♯ { (k, k ′ ) | k < k ′ and t k = t k ′ } that is, the number of repetitions in (t 1 , . . . , t r ).
Definition 4.2. The bar involution is the Q(q)-vector space automorphism F s → F s determined by q ↦ q = q -1 and u β ↦ u β , where the image of u β = u β 1 ∧ . . . ∧ u β r ∧ u β r+1 ∧ • • • is
(-q)^{ι(y 1 ,...,y r )} q^{ι(x 1 ,...,x r )} (u β r ∧ . . . ∧ u β 1 ) ∧ u β r+1 ∧ . . .
where x k = x(β k ) and y k = y(β k ) for all k = 1, . . . , r according to the notations of Section 2.1.2.
The bar involution behaves nicely on the wedge space, in particular it preserves the level ℓ Fock spaces F s for |s| = s, and commutes with the bosons p m , this is [27, Section 4.4]. We can now state Uglov's result, that is derived from the fact that the matrix of the bar involution is unitriangular.
Theorem 4.3. Let s ∈ Z ℓ be such that |s| = s. There exist unique bases G + = {G + (λ, s) ; λ ∈ Π ℓ } and G -= {G -(λ, s) ; λ ∈ Π ℓ } of F s such that, for ♭ ∈ {+, -},
(1) G ♭ (λ, s) is invariant under the bar involution,
(2) G ♭ (λ, s) = |λ, s mod q ♭1 L ♭ , where L ♭ = ⊕ λ∈Π ℓ Q[q ♭1 ] |λ, s .
This result is compatible with Kashiwara's crystal theory. More precisely, each integrable irreducible highest weight U ′ q ( sl e )-representation is contained in F s for some s, by taking the span of the vectors |∅, s for |s| = s, and it is proved in [START_REF] Uglov | Canonical bases of higher-level q-deformed Fock spaces and Kazhdan-Lusztig polynomials[END_REF]Section 4.4] that the bases G ♭ restricted to U ′ q ( sl e )|∅, s coincide with Kashiwara's lower and upper canonical bases. Therefore, we also call G ♭ the (lower or upper if ♭ = -or + respectively) canonical basis of F s .
4.2.2. Schur functions in the asymptotic case. Define the following operators on F s :
h m = Σ |π|=m (1/z π ) p π , h σ = h σ 1 h σ 2 . . . and s σ = Σ π∈Π K -1 π,σ h π ,
where K -1 π,σ are the inverse Kostka numbers, that is, the entries of the inverse of the matrix of Kostka numbers. By Proposition 4.1, at q = 1, the action of h σ (respectively s σ ) corresponds to the multiplication by a complete function (respectively Schur function) on Λ through the bosonfermion correspondence.
Theorem 4.4.
(1) The operators s σ induce maps sσ on B s , preserving B s for |s| = s.
(2) Let |λ, s ∈ F s be j 0 -asymptotic. We have sσ |λ, s = |µ, s with µ = λ + eσ where σ j = ∅ if j ≠ j 0 and σ j 0 = σ, provided |µ, s is still j 0 -asymptotic. In particular, sσ coincides with the Heisenberg crystal operator bσ in this case. Remark 4.5. When λ = ∅, this says that sσ shifts the e rightmost beads in the j 0 -th runner of A(λ, s), and one recovers Losev's result, see for instance [START_REF] Gerber | Triple crystal action in Fock spaces[END_REF]Example 7.3].
Proof. In a general manner, we can identify a crystal map B s → B s with an operator on the wedge space F s mapping an element of G ♭ to an element of G ♭ , for ♭ ∈ {+, -}. Indeed, an element |λ, s ∈ B s can be identified with G ♭ (λ, s). Another way to see this is to look at the action on G ♭ (λ, s) and put q = 0 or q = ∞ respectively in the resulting vector. This provides the identification because of Condition (2) of Theorem 4.3. As a matter of fact, the action of s σ on a canonical basis element turns out to have the desired form. Indeed, by [START_REF] Iijima | On a higher level extension of Leclerc-Thibon product theorem in q-deformed Fock spaces[END_REF]Theorem 4.12], we know that s σ : F s → F s verifies s σ G + (λ, s) = G + (µ, s) where µ = λ + eσ (with σ as in the statement of Theorem 4.4) provided:
- λ j is e-regular for all j = 1, . . . , ℓ,
- |λ, s and |µ, s are j 0 -asymptotic.
Here, Iijima's original statement has been twisted by conjugation, because the level-rank duality used in his result is the reverse of that of the present paper. Thus, the notion of e-restricted multipartition is replaced by e-regular. Accordingly, we use the upper canonical basis instead of the lower one. This statement can be extended to an arbitrary j 0 -asymptotic element G + (λ, s) ∈ F s , that is, without the restriction that each λ j is e-regular. To do this, for all λ ∈ Π ℓ , let λ = λ + eπ where λj is e-regular for all j = 1, . . . , ℓ and π j = ∅ if j ≠ j 0 and π j 0 = π. Then, for all σ ∈ Π,
s σ G + (λ, s) = s τ G + ( λ, s)
where τ = (σ ′ + π ′ ) ′ , which we can compute by the preceding formula. Note that τ is the common addition of σ and π, see Remark 3.4. For a general j 0 -asymptotic vector |λ, s , we write again G + (µ, s) = s σ G + (λ, s). As explained in the beginning of the proof, this induces a crystal map sσ : B s → B s , by additionnaly requiring that sσ commutes with the Kashiwara crystal operators. In fact, when |λ, s is j 0 -asymptotic, the formula for sσ is precisely the formula for the Heisenberg crystal operator given in [START_REF] Losev | Supports of simple modules in cyclotomic Cherednik categories O[END_REF] (provided again that one twists by conjugating), which coincides with bσ by [START_REF] Gerber | Triple crystal action in Fock spaces[END_REF]Theorem 7.6]. This completes the proof.
As explained in the proof, with this approach, sσ |λ, s is identified with s σ G + (λ, s). The map s σ being an actual operator (on the vector space F s ), this justifies the terminology "operator" for the maps bσ : B s → B s , thereby completing the analogy with the Kashiwara crystal operators.
5. Explicit description of the Heisenberg crystal
In this section, we give the combinatorial formula for computing the Heisenberg crystal in full generality. This completes the results of [START_REF] Losev | Supports of simple modules in cyclotomic Cherednik categories O[END_REF], where the asymptotic case is treated, and of [START_REF] Gerber | Triple crystal action in Fock spaces[END_REF] where the case of doubly highest weight vertices (in the level-rank duality) is treated, see Appendix for details. 5.1. Level ℓ vertical strips. In the spirit of [START_REF] Jimbo | Combinatorics of representations of U q ( sl(n)) at q = 0[END_REF] and [START_REF] Foda | Branching functions of A (1) n-1 and Jantzen-Seitz problem for Ariki-Koike algebras[END_REF], we will express the action of the Heisenberg crystal operators in terms of adding/removing certain vertical strips.
For a given charged multipartition |λ, s , we denote
W 1 (λ, s) = { (a, b, j) ∈ Z >0 × Z >1 × {1, . . . , ℓ} | (a, b, j) ∉ λ and (a, b - 1, j) ∈ λ },
W 2 (λ, s) = { (a, 1, j) ∈ Z >0 × {1} × {1, . . . , ℓ} | (a, 1, j) ∉ λ }
and W (λ, s) = W 1 (λ, s) ⊔ W 2 (λ, s), so that W (λ, s) is the set of boxes directly to the right of λ (considering that it has infinitely many parts of size zero). The admissible vertical e-strips contained in W (λ, s) form the set V (λ, s) of Definition 5.1.
Example 5.3. Let ℓ = 3, e = 4 and |λ, s = |(4.2, 2, 2 2 .1 2 ), (1, 4, 6) , whose boxes have contents 1, 2, 3, 4 and 0, 1 (first component), 4, 5 (second component), and 6, 7, then 5, 6, then 4, then 3 (third component). Then V (λ, s) consists of
X 1 = ((1, 3, 3), (2, 3, 3), (1, 3, 2), (1, 5, 1)) with respective contents 8, 7, 6, 5,
X 2 = ((3, 2, 3), (4, 2, 3), (2, 1, 2), (2, 3, 1)) with respective contents 5, 4, 3, 2,
X 3 = ((3, 2, 3), (4, 2, 3), (2, 1, 2), (3, 1, 2)) with respective contents 5, 4, 3, 2,
X 4 = ((4, 2, 3), (2, 1, 2), (3, 1, 2), (4, 1, 2)) with respective contents 4, 3, 2, 1,
X 5 = ((2, 1, 2), (3, 1, 2), (4, 1, 2), (5, 1, 2)) with respective contents 3, 2, 1, 0,
X 6 = ((3, 1, 2), (4, 1, 2), (5, 1, 2), (3, 1, 1)) with respective contents 2, 1, 0, -1,
X 7 = ((3, 1, 2), (4, 1, 2), (5, 1, 2), (6, 1, 2)) with respective contents 2, 1, 0, -1,
X 8 = ((5, 1, 3), (4, 1, 2), (5, 1, 2), (3, 1, 1)) with respective contents 2, 1, 0, -1,
X 9 = ((5, 1, 3), (4, 1, 2), (5, 1, 2), (6, 1, 2)) with respective contents 2, 1, 0, -1,
X 10 = ((5, 1, 3), (6, 1, 3), (5, 1, 2), (3, 1, 1)) with respective contents 2, 1, 0, -1,
X 11 = ((5, 1, 3), (6, 1, 3), (5, 1, 2), (6, 1, 2)) with respective contents 2, 1, 0, -1,
X 12 = ((5, 1, 3), (6, 1, 3), (7, 1, 3), (3, 1, 1)) with respective contents 2, 1, 0, -1,
X 13 = ((5, 1, 3), (6, 1, 3), (7, 1, 3), (6, 1, 2)) with respective contents 2, 1, 0, -1,
X 14 = ((5, 1, 3), (6, 1, 3), (7, 1, 3), (8, 1, 3)) with respective contents 2, 1, 0, -1,
and so on.
5.2. Action of the Heisenberg crystal operators bσ . We will define an order on the vertical e-strips of a given charged multipartition. First, let γ = (a, b, j) and γ ′ = (a ′ , b ′ , j ′ ) be two boxes of |λ, s . Write γ > γ ′ if c(γ) > c(γ ′ ), or c(γ) = c(γ ′ ) and j < j ′ .
Remark 5.4. Note that this is the total order used to define the good boxes in a charged ℓ-partition, which characterises the Kashiwara crystals, see [START_REF] Geck | Representations of Hecke Algebras at Roots of Unity[END_REF]Chapter 6].
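The set W (λ, s) and the contents of its boxes are straightforward to enumerate; the sketch below (the truncation of the number of rows is our choice) recovers the boxes of largest content appearing in X 1 and X 2 of Example 5.3.

```python
def content(box, charge):
    a, b, j = box                       # row, column, component (1-indexed)
    return b - a + charge[j - 1]

def W(multipartition, charge, rows=8):
    """Boxes directly to the right of lambda (W = W_1 union W_2), `rows` rows per component."""
    out = []
    for j, part in enumerate(multipartition, start=1):
        for a in range(1, rows + 1):
            row_len = part[a - 1] if a <= len(part) else 0   # parts of size zero below
            out.append((a, row_len + 1, j))
    return out

# Example 5.3: lambda = ((4,2), (2,), (2,2,1,1)), s = (1, 4, 6), e = 4
lam, s = [[4, 2], [2], [2, 2, 1, 1]], [1, 4, 6]
print([(b, content(b, s)) for b in W(lam, s) if content(b, s) >= 5])
# [((1, 5, 1), 5), ((1, 3, 2), 6), ((1, 3, 3), 8), ((2, 3, 3), 7), ((3, 2, 3), 5)]
```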
By extension, let > denote the lexicographic order induced by > on e-tuples of boxes in a given charged ℓ-partition. This restricts to a total order on V (λ, s). Definition 5.5. Let |λ, s be a charged ℓ-partition. Denote simply V = V (λ, s).
-The first good vertical e-strip of |λ, s is the maximal element X 1 of V with respect to >.
-Let k ≥ 2. The k-th good vertical e-strip of |λ, s is the maximal element X k of
{ X ∈ V | X k-1 > X and X k-1 ∩ X = ∅ } with respect to >.
In other terms, the greatest (with respect to >) vertical strip of |λ, s is good, and another admissible vertical strip is good except if one of its boxes already belongs to a previous good vertical strip.
Example 5.6. We go back to Example 5.3. Then X 1 > X 2 > · · · > X 14 . Moreover, there are only four good addable vertical strips among these, namely X 1 , X 2 , X 6 and X 13 .
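Definition 5.5 is a greedy selection along the order >; feeding it the strips of Example 5.3 (hard-coded below for illustration, with disjointness tested against the previously selected strip, as in the definition) recovers Example 5.6.

```python
def good_strips(ordered_strips):
    """Greedy selection of Definition 5.5: strips are given in decreasing order
    X_1 > X_2 > ..., and each new good strip must be disjoint from the
    previously selected good strip."""
    good = []
    for strip in ordered_strips:
        if not good or not (set(good[-1]) & set(strip)):
            good.append(strip)
    return good

X = [((1,3,3),(2,3,3),(1,3,2),(1,5,1)), ((3,2,3),(4,2,3),(2,1,2),(2,3,1)),
     ((3,2,3),(4,2,3),(2,1,2),(3,1,2)), ((4,2,3),(2,1,2),(3,1,2),(4,1,2)),
     ((2,1,2),(3,1,2),(4,1,2),(5,1,2)), ((3,1,2),(4,1,2),(5,1,2),(3,1,1)),
     ((3,1,2),(4,1,2),(5,1,2),(6,1,2)), ((5,1,3),(4,1,2),(5,1,2),(3,1,1)),
     ((5,1,3),(4,1,2),(5,1,2),(6,1,2)), ((5,1,3),(6,1,3),(5,1,2),(3,1,1)),
     ((5,1,3),(6,1,3),(5,1,2),(6,1,2)), ((5,1,3),(6,1,3),(7,1,3),(3,1,1)),
     ((5,1,3),(6,1,3),(7,1,3),(6,1,2)), ((5,1,3),(6,1,3),(7,1,3),(8,1,3))]
print([X.index(g) + 1 for g in good_strips(X)])   # [1, 2, 6, 13], as in Example 5.6
```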
Definition 5.7. Let σ = (σ 1 , σ 2 , . . . ) ∈ Π. Define cσ : B s → B s by cσ |λ, s = |µ, s , where µ is obtained from λ by recursively adding σ k times the k-th good vertical e-strip, for k ≥ 1.
Lemma 5.8. The map c(1) is well defined.
Proof. Let |λ, s be a charged multipartition. We need to prove that the first good vertical strip X of |λ, s is addable. Assume it is not the case; then there exists a box (a, b, j) ∈ X such that (a - 1, b, j) ∉ X ⊔ λ and (a - 1, b - 1, j) ∈ λ. But (a - 1, b, j) ∉ λ and (a - 1, b - 1, j) ∈ λ imply (a - 1, b, j) ∈ X, whence a contradiction.
Corollary 5.9. For all σ ∈ Π, the map cσ is well defined.
Proof. If σ = ∅, then cσ = Id and so is well defined. So we assume σ ≠ ∅, say σ = (σ 1 , σ 2 , . . . ). First of all, Lemma 5.8 and the definition of cσ imply that c(n) is well defined for all n ∈ Z ≥1 (so in particular for n = σ 1 ). It remains to prove that for all λ ∈ Π ℓ and for all ℓ-charge s, the second good vertical e-strip X of c(σ 1 ) |λ, s is addable. Since σ 1 ≥ 1, it is clear that for all (a, b, j) ∈ X, (a - 1, b, j) is actually a box of λ, since if it were not, X could not be the second good vertical strip of λ, exactly as in the proof of Lemma 5.8. Iterating, for every partition τ, the (h + 1)-th vertical strip of cτ |λ, s is addable, where h is the number of non-zero parts in τ.
Example 5.10. Take again the values of Example 5.3. Then, for σ = (2.1 3 ), we have
cσ |λ, s = c(2.1 3 ) |(4.2, 2, 2 2 .1 2 ), (1, 4, 6) , the result being obtained by adding the corresponding good vertical 4-strips to λ.
Each box of σ corresponds to a vertical e-strip, the matching being given by the colors.
We will prove the following theorem.
Theorem 5.11. For all σ ∈ Π, we have cσ = bσ .
5.3.
Proof of Theorem 5.11. The strategy for proving this result consists in starting from the doubly highest weight case, in which we know an explicit formula for bσ . Then, we show that the c±σ commute with the Kashiwara crystal operators for U ′ p ( sl ℓ ), then U ′ q ( sl e ), and use the commutation of the Kashiwara crystals and the H-crystal (see Definition 3.8) to conclude. Proposition 5.12. Let |λ, s be a doubly highest weight vertex. Then, for all σ ∈ Π, we have cσ |λ, s = bσ |λ, s .
Proof. We need to translate the explicit formula for bσ , given in terms of abaci, in terms of ℓ-partitions. By the correspondence given in Section 2.1, an e-period in the ℓ-abacus A = A(λ, s) (see [START_REF] Jacon | A combinatorial decomposition of higher level Fock spaces[END_REF]Section 2.3]) corresponds to a good vertical e-strip in |λ, s . Therefore, it yields an addable admissible vertical e-strip if ( j 1 , d 1 + 1) ∉ A, where ( j 1 , d 1 ) is the first bead of the period. Thus, shifting the k-th e-period of A by σ k steps to the right amounts to adding the k-th good vertical strip of |λ, s . In other terms, bσ is the same as cσ when restricted to doubly highest weight vertices.
Proposition 5.13. Let |λ, s be a highest weight vertex for U ′ q ( sl e ). Then, for all σ ∈ Π, we have cσ |λ, s = bσ |λ, s .
Proof. Write |λ, s = Ḟj |λ, s where |λ, s is the highest weight vertex for U ′ p ( sl ℓ ) associated to |λ, s , and where Ḟj = f j r . . . f j 1 is a sequence of Kashiwara crystal operators of U ′ p ( sl ℓ ). Because of Theorem A.4, the two Kashiwara crystals commute, and thus |λ, s is a doubly highest weight vertex. We prove the result by induction on r ∈ N. If r = 0, then |λ, s is a doubly highest weight vertex and this is Proposition 5.12. Suppose that the result holds for a fixed r - 1 ≥ 0. Write |ν, t = f j r-1 . . . f j 1 |λ, s , so that |λ, s = f j r |ν, t . Because the crystal level-rank duality is realised in terms of abaci, we need to investigate one last time the action of f j r in the abacus. We know that f j r acts on an e-partition by adding its good addable j r -box (i.e. of content j r modulo ℓ). This corresponds to shifting a particular (white) bead one step up in the e-abacus of |ν, s , see Example A.3 for an illustration. Now, if j r ≠ 0, then this corresponds to moving down a (black) bead in the ℓ-abacus. Since the resulting abacus A(λ, s) is again totally e-periodic (the two Kashiwara crystals commute), this preserves the e-period containing this bead. If j r = 0, then moving this white bead up corresponds to moving a black bead in position (ℓ, d) in the ℓ-abacus (which is the first element of its e-period) down to position (1, d - e). Again, this preserves the e-period. In both cases, the reduced j r -word in the e-abacus (see [9, Section 4.2] for details) is unchanged and (∗) f j r cσ |ν, t = cσ f j r |ν, t . Therefore, we have bσ |λ, s = bσ f j r |ν, t = f j r bσ |ν, t (by Definition 3.8) = f j r cσ |ν, t (by the induction hypothesis) = cσ f j r |ν, t (by (∗)) = cσ |λ, s .
We are now ready to prove Theorem 5.11. It remains to investigate the action of the Kashiwara crystal operators of U ′ q ( sl e ). Proof of Theorem 5.11. Write |λ, s = F i |λ, s where |λ, s is the highest weight vertex for U ′ q ( sl e ) associated to |λ, s , and where F i = fi r . . . fi 1 is a sequence of Kashiwara crystal operators of U ′ q ( sl e ). We prove the result by induction on r ∈ N. If r = 0, then |λ, s is a highest weight vertex for U ′ q ( sl e ) and this is Proposition 5.13. Suppose that the result holds for a fixed r -1 ≥ 0. Write |ν, s = fi r-1 . . . fi 1 |λ, s , so that |λ, s = fi r |ν, s . Consider the reduced i r -word for |ν, s . Again, it is preserved by the action of cσ by Property (1) of Definition 5.1, and we have f j r cσ |ν, t = cσ f j r |ν, t .
The commutation of bσ with fi r together with the induction hypothesis completes the proof.
Remark 5.14. Theorem 5.11 also yields an explicit description of the operators b1,c . The operator b1,c acts on any charged ℓ-partition by adding its k-th good vertical e-strip, where c is the content of the k-th addable box of κ (with respect to the order > on boxes).
5.4. Impact of conventions and relations with other results. We end this section by mentioning an alternative realisation of the H-crystal. In fact, some of the combinatorial procedures require some conventional choices, such as the maps τ and τ yielding the level-rank duality, or the order on the boxes or on the vertical strips of a multipartition. Like in the case of Kashiwara crystals (see e.g. [2, Remark 3.17] or [9, Remark 4.9]), choosing a different convention yields a slightly different version of the Heisenberg crystal.
We have already seen in the proof of Theorem 4.4 that conventions can be adjusted to fit Losev's [START_REF] Losev | Supports of simple modules in cyclotomic Cherednik categories O[END_REF] or Iijima's [START_REF] Iijima | On a higher level extension of Leclerc-Thibon product theorem in q-deformed Fock spaces[END_REF] results about the action of a Heisenberg crystal operator or a Schur function respectively. This is done by using the conjugation of multipartitions. More precisely, one can decide to identify a charged ℓ-partition |λ, s with |λ ′ , s instead of |λ, s with the notations of Section 2.1.2. Then, the action of the Heisenberg crystal operators is expressed in terms of addable horizontal e-strips in the ℓ-partition. This is equivalent to changing the order on the vertical strips, applying a Heisenberg crystal operator, and then conjugate.
One could also decide to exchange the role of τ and τ in the level-rank duality, see Appendix. Then, applying a Heisenberg crystal operator on an ℓ-abacus would be described in terms of the corresponding e-abacus. More precisely, b1,c would then consist in shifting an ℓ-period in the e-abacus in the particular case where it is totally ℓ-periodic (see also the proof of Proposition 5.12). This makes it possible to give an interpretation of Tingley's tightening procedure on descending ℓ-abaci [26, Definition 3.8], thereby answering Question 1 of [26, Section 6].
Proposition 5.15. Let A be a descending ℓ-abacus. For all k ≥ 1, denote T k the k-th tightening operator associated to A. We have T k (A) = b-1,c (A) where c is determined by k.
Proof. Recall that for this statement, we have considered the realisation of the Heisenberg crystal with the alternative version of level-rank duality, swapping the roles of τ and τ, so that the operators b-1,c act on the e-abacus (rather than the ℓ-abacus) by removing a vertical ℓ-strip (Theorem 5.11). Remember that b-1,c = bθ b-κ where θ depends on c. Now, if an ℓ-abacus is descending [26, Definition 3.6], its corresponding e-abacus is totally ℓ-periodic. In particular, the corresponding e-partition is a highest weight vertex in the U ′ p ( sl ℓ )-crystal, and we can use Proposition 5.13. In particular, b-κ and bθ act by shifting ℓ-periods in the e-abacus, and b-1,c shifts one ℓ-period, say the k-th one, one step to the left in the e-abacus. This is precisely what T k does.
Finally, we mention that in another particular case, the Heisenberg crystal operator b-κ coincides with the canonical U ′ q ( sl e )-isomorphism ϕ (up to cyclage) of [8, Theorem 5.26] used to construct an affine Robinson-Schensted correspondence.
Proposition 5.16. Let |λ, s ∈ B s be a doubly highest weight vertex. Write κ = κ|λ, s . We have b-κ ξ h |λ, s = ϕ|λ, s , where h is the number of parts of κ and ξ is the cyclage isomorphism, see [8, Proposition 4.4].
Proof. We use the notations of [8]. First of all, because of [9, Proposition 5.7], doubly highest weight vertices are cylindric in the sense of [8, Definition 2.3], and ϕ|λ, s is therefore well defined, and simply verifies ϕ = ψ t where t is the number of pseudoperiods in |λ, s and ψ is the reduction isomorphism. In fact, by definition of κ, we have t = h, the number of (non-zero) parts of κ, and it suffices to apply the cyclage h times to |λ, s to match the formulas for b-κ and ψ t .
5.5. Examples of computations. By Remark 5.14, the Heisenberg crystal can be computed recursively from its highest weight vertices, each of which yields a unique connected component, isomorphic to the Young graph by Proposition 3.12.
The empty multipartition is obviously a highest weight vertex for H, and so is every multipartition with less than e boxes. For instance, if ℓ = 2, s = (0, 1) and e = 3, we can compute the connected components of the Heisenberg crystal of F s with highest weight vertex (-, -) and ( 0 , -). Up to rank 13, we get the following subgraph of the H-crystal.
We also wish to give an example of the asymptotic case. Take ℓ = 3, s = (0, 7, 19), λ = (1, 3.2.1, 3.1) and e = 3, so that |λ, s is a highest weight vertex for H and is 3-asymptotic. We see in the corresponding H-crystal that the elements bσ , for |σ| < 4, only act on the third component of |λ, s , but that b(1 4 ) acts already on the second component. This illustrates Theorem 4.4 (2).
Definition A.1. The (twisted) level-rank duality is the bijective map τ • (.) ′ • τ -1
Remark A.2. The map τ•τ -1 already defines a level-rank duality, this was the one studied by Uglov [START_REF] Uglov | Canonical bases of higher-level q-deformed Fock spaces and Kazhdan-Lusztig polynomials[END_REF]. However, the twisted version defined above (i.e. where the conjugation is added) is the one that is relevant when it comes to crystals, see Theorem A.4 below.
There is a convenient way to picture the crystal level-rank duality as follows. Starting from an ℓ-abacus, stack copies on top of each other by translating by e to the right. Then, extract a vertical slice of the resulting picture, and read off the corresponding e-partition by looking at the white beads (instead of black beads) in each column, starting from the leftmost one.
Example A.3. The abacus A(λ, s) with ℓ = 4, e = 3, λ = (1, ∅, 1 3 , 5) and s = (-1, -1, 1, 1) looks as follows (the origin is represented with the boldfaced vertical bar).
Stacking copies of A(λ, s) gives
Proof. The first point is essentially due to Uglov [START_REF] Uglov | Canonical bases of higher-level q-deformed Fock spaces and Kazhdan-Lusztig polynomials[END_REF]Proposition 4.6], where he uses the nontwisted version of level-rank duality, see [START_REF] Gerber | Triple crystal action in Fock spaces[END_REF]Theorem 3.9] for the justification in the twisted case. The second point is [START_REF] Gerber | Triple crystal action in Fock spaces[END_REF]Theorem 4.8].
Example 2.2. With |λ, s as in Example 2.1, we get the following corresponding abacus: A(λ, s) = { . . . , (2, -5), (2, -4), (2, -3), (2, -2), (2, -1), (2, 0), (2, 2), (2, 3), . . . , (1, -5), (1, -4), (1, -3), (1, -1), (1, 1) }.
Figure 1. Relabelling bead positions in the ℓ-abacus according to τ -1 , for ℓ = 4 and e = 3.
Example 2.3. We go back to Example 2.2. The action of τ and τ -1 are represented below.
2.1.3. Wedges. Let P(s) denote the set of sequences of integers α = (α 1 , α 2 , . . . ) such that α k = s - k + 1 for k sufficiently large, and set P > (s) = { (α 1 , α 2 , . . . ) ∈ P(s) | α k > α k+1 for all k ∈ Z ≥1 }.
Definition 2.4. An elementary wedge (respectively ordered wedge) is a formal element u α = u α 1 ∧ u α 2 ∧ . . . where α ∈ P(s) (respectively α ∈ P > (s)).
3.1. The action of the bosons. Let us start by recalling the definition of the quantum Heisenberg algebra.
Definition 3.8. Let σ ∈ Π. The Heisenberg crystal operator bσ (respectively b-σ ) is the uniquely determined map B s → B s , |λ, s → |µ, s (respectively B s → B s ⊔ {0}) such that
(1) if |λ, s is a doubly highest weight vertex, then |µ, s is obtained from |λ, s by shifting the k-th period of the ℓ-abacus of |λ, s by σ k steps to the right (respectively to the left when possible, and b-σ |λ, s = 0 otherwise),
(2) it commutes with the Kashiwara crystal operators of U ′ q ( sl e ) and of U ′ p ( sl ℓ ).
In order to define the H-crystal as a graph whose arrows have minimal length (compare with the U ′ q ( sl e )-crystal of Figure 2), we define the following maps: b1,c = bη b-κ and b-1,d = bθ b-κ , where η = κ ⊔ {γ} with γ = (a, b) the addable box of κ verifying b - a = c, and where θ = κ \ {γ} with γ = (a, b) the removable box of κ verifying b - a = d.
Definition 3.10. The H-crystal of the wedge space F s is the graph C s with
(1) vertices: all charged ℓ-partitions |λ, s with λ ∈ Π ℓ and |s| = s;
(2) arrows: |λ, s -c→ |µ, s if |µ, s = b1,c |λ, s .
Figure 3. The beginning of the Young graph.
Definition 5.1. Let |λ, s be a charged ℓ-partition.
(1) A (level ℓ) vertical e-strip is a sequence of e boxes γ 1 = (a 1 , b 1 , j 1 ), . . . , γ e = (a e , b e , j e ) such that no horizontal domino appears, i.e. there is no 1 ≤ i ≤ e - 1 such that a i+1 = a i and j i+1 = j i . Moreover, a vertical e-strip is called admissible if:
(a) the contents of γ 1 , . . . , γ e are consecutive, say c(γ i ) = c(γ i+1 ) + 1 for all 1 ≤ i ≤ e - 1;
(b) for all 1 ≤ i < i ′ ≤ e, we have j i ≥ j i ′ .
(2) The admissible vertical e-strips contained in W (λ, s) are denoted V (λ, s). The elements of V (λ, s) are called the (admissible) vertical e-strips of |λ, s .
(3) A vertical e-strip X of |λ, s is called addable if X ∩ λ = ∅ and λ ⊔ X is still an ℓ-partition.
Remark 5.2. This is a generalisation to multipartitions of the usual notion of vertical strips, see for instance [21, Chapter I].
For e = 2 and s = (3, 2, 5), the 3-partition (2, 1, 4) is clearly a highest weight vertex for H by Theorem 5.11. The corresponding connected component, up to rank 17, is the following graph.
Figure 4. Relabelling bead positions in the e-abacus according to τ -1 , for ℓ = 4 and e = 3.
Acknowledgments: Many thanks to Emily Norton for pointing out an inconsistency in the first version of this paper and for helpful conversations.
Appendix A. Crystal level-rank duality
We recall, using a slightly different presentation, the results of [START_REF] Gerber | Triple crystal action in Fock spaces[END_REF]Section 4] concerning the crystal version of level-rank duality.
There is a double affine quantum group action on the wedge space. In Section 2.2.1, we have explained how U ′ q ( sl e ) acts on F s . It turns out that U ′ p ( sl ℓ ), where p = -1/q, acts on F s in a similar way. More precisely we will:
(1) index ordered wedges by charged e-partitions, using an alternative bijection τ,
(2) make U ′ p ( sl ℓ ) act on F s via this new indexation by swapping the roles of e and ℓ and replacing q by p.
To define τ, recall that we have introduced the quantities z(n) ∈ Z, 1 ≤ y(n) ≤ ℓ and 1 ≤ x(n) ≤ e for each n ∈ Z. To each pair (1, c) ∈ {1} × Z, we associate the pair τ(1, c) = ( j, d) ∈ {1, . . . , e} × Z where j = x(-c) and d = -(y(-c) - 1) + ℓz(-c). In particular, τ sends the bead in position (1, c) into the rectangle z(-c), on the row x(-c) and column y(-c) (numbered from right to left within each rectangle). The map τ is bijective and we can see τ -1 as the following procedure:
(1) Divide the ℓ-abacus into rectangles of size e × ℓ, where the z-th rectangle (z ∈ Z) contains the positions ( j, d) for all 1 ≤ j ≤ ℓ and -e + 1 + ze ≤ d ≤ ze.
(2) Relabel each ( j, d) by the second coordinate of τ -1 ( j, d), see Figure 4 for an example.
(3) Replace the newly indexed beads on a 1-abacus according to this new labeling.
In fact, τ -1 ( j, d) = (1, -( j - 1) + ed), and one notices that τ corresponds to taking the usual e-quotient and the e-core of a partition, where the e-core is encoded in the e-charge. More precisely, the renumbering of the beads according to τ is the well-known "folding" procedure used to compute the e-quotient, see [14].
and extracting one vertical e-abacus yields an abacus which corresponds to |(1, 2.1 2 , 2.1 4 ), (-1, 1, 0) .
We have the following induced maps
Like τ, the bijection τ induces a U ′ p ( sl ℓ )-module isomorphism, and F s has a U ′ p ( sl ℓ )-crystal, given by the same rule as the U ′ q ( sl e )-crystal by swapping the roles of e and ℓ and replacing q by p. Theorem A.4. Via level-rank duality,
(1) the U ′ q ( sl e )-action and the U ′ p ( sl ℓ )-action on F s commute, and (2) the U ′ q ( sl e )-crystal and the U ′ p ( sl ℓ )-crystal of F s commute. | 55,681 | [
"1191998"
] | [
"303510"
] |
01481499 | en | [
"info"
] | 2024/03/04 23:41:48 | 2011 | https://inria.hal.science/hal-01481499/file/978-3-642-27585-2_10_Chapter.pdf | Muhammad Rizwan Asghar
email: [email protected]
Giovanni Russello
email: [email protected]
Flexible and Dynamic Consent-Capturing
Keywords: Consent Management, Consent Evaluation, Consent Automation, Consent-Capturing, Consent-Based Access Control
Data usage is of great concern for a user owning the data. Users want assurance that their personal data will be fairly used for the purposes for which they have provided their consent. Moreover, they should be able to withdraw their consent once they want. Actually, consent is captured as a matter of legal record that can be used as legal evidence. It restricts the use and dissemination of information. The separation of consent capturing from the access control enforcement mechanism may help a user to autonomously define the consent evaluation functionality, necessary for the automation of consent decision. In this paper, we present a solution that addresses how to capture, store, evaluate and withdraw consent. The proposed solution preserves integrity of consent, essential to provide a digital evidence for legal proceedings. Furthermore, it accommodates emergency situations when users cannot provide their consent.
Introduction
Data usage is of great concern for a user owning the data. Users want assurance that their personal data will be fairly used for the purposes for which they have provided their consent. Moreover, they should be able to withdraw their consent once they want. Actually, consent is captured as a matter of legal record that can be used as legal evidence. It restricts the use and dissemination of information which is controlled by the user owning the data. For the usage of personal data, organisations need to obtain user consent, strictly regulated by the legislation. This forces organisations to implement not only the organisational compliance rules but also the legislation rules to access the user data. Currently, the law enforcement agencies are forcing organisations to collect users consent every time their data is accessed where users can decide not to provide consent or they may withdraw their consent any time they want 3 . In the electronic environment, consent can be determined by the automatic process without the explicit involvement of the user owning the data, also known as a data subject. This enables enormous amount of data to be processed in a automated and faster manner where consent is captured at the runtime.
Motivation
The motivation behind consent-capturing is from the real-world, where a user intervention is reduced as much as possible. Let us consider the healthcare scenario where a patient provides his/her written consent to the hospital. Later on, the hospital staff refers to this written consent in order to provide access to patient's records. For instance, if a nurse needs to access patient's record, she needs first to make it sure that she has access and patient has provided his/her consent for a nurse. In the healthcare scenario, we can realise access controls at two different levels. At the first level, the access controls are enforced by the care/service provider while at the second level, it is enforced by the patient, where a patient provides his/her consent to the first level in order her data to be accessed. In short, consent is a mean to control the access on the personal data. There are two obvious questions which needs to be addressed when we capture the notion of consent in an automated manner. First, how to capture and store the consent. Second, how to evaluate consent from the written consent.
Unfortunately, traditional access control techniques, such as Role-Based Access Control (RBAC) [START_REF] Sandhu | Role-based access control models[END_REF], fail to capture consent. However, there are only a few access control techniques [START_REF] Anderson | A security policy model for clinical information systems[END_REF][START_REF] Kudo | Pbac: Provision-based access control model[END_REF] that can capture consent but they tightly couple the access control enforcement mechanism with the consent-capturing. The separation of consentcapturing from the enforcement mechanism may help a user to autonomously define the consent evaluation functionality, necessary for regulating the automation of consent decision. The main drawback of state-of-the-art consent-capturing schemes [START_REF] Coiera | e-Consent: The design and implementation of consumer consent mechanisms in an electronic environment[END_REF][START_REF] O'keefe | A decentralised approach to electronic consent and health information access control[END_REF][START_REF] Ruan | An authorization model for e-consent requirement in a health care application[END_REF][START_REF] Wuyts | Integrating patient consent in e-health access control[END_REF] is that consent-capturing mechanism is too rigid as they consider the consent with a predefined set of attributes and it is not possible to take the contextual information into account in order to provide consent. This contextual information may include time, location or other information about the requester, who makes an access request. In short, the existing consent-capturing mechanisms are not expressive enough to handle the realworld situations. Moreover, the existing access control techniques do not address how to ensure transparent auditing while capturing the consent. The transparent auditing could be required for providing digital evidence in the court.
Research Contributions
In this paper, we present how to capture, store and evaluate consent from the written consent. We consider the written consent as a consent-policy where a data subject indicates who is permitted to access his/her data. In the proposed solution, the consent evaluation functionality can be delegated to a third party. The advantage of this delegation is to separate the access control enforcement mechanism from the consent-capturing. This research is a step towards the automation of the consent-capturing. The automation do not only captures consent dynamically but also increases efficiency as compared to providing consent requiring data subject's intervention. Moreover, the proposed solution enables a data subject to withdraw his/her consent. Moreover, it treats emergency situations when a data subject cannot provide his/her consent. Last but not least, the integrity of consent is preserved and a log is maintained by system entities to provide a digital evidence for legal proceedings.
Organisation
The rest of this paper is organised as follows: Section 2 describes the legal requirements for capturing consent. Section 3 reviews the related work. Section 4 presents the proposed solution. Section 5 focuses on the solution details. We follow with a discussion about availability, confidentiality and increased usability in Section 6. Finally, Section 7 concludes this paper and gives directions for future work.
Legal Requirements to Capture Consent
Consent is an individual's right. In fact, consent can be regarded as one's wish to provide access to one's personal information. Legally, a data subject should be able to provide, modify or withhold statements expressing consent. The given consent should be retained as digital evidence. Generally, a paper-based consent is considered valid once signed by the data subject. In some countries, specific legislation may require a digital consent to be signed using a digital signature. In other words, an electronically signed consent can be considered equivalent to a manually signed paper-based consent.
According to article 2(h) of the EU Data Protection Directive (DPD) [5], consent is defined as: "'the data subject's consent' shall mean any freely given specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed." This definition clearly indicates three conditions, i.e., the consent must be freely given, specific and informed.
- Freely given: A consent can be considered free if it is captured/provided in the absence of any coercion. In other words, it is a voluntary decision of the data subject.
- Specific: A consent can be considered specific when it is captured/provided for a dedicated purpose. Moreover, one consent covers one action, which is specified to the data subject. Therefore, one consent cannot be used for any other purposes.
- Informed: The consent is not considered valid until the data subject has been provided with the necessary information. The data subject must be given the information needed to understand all benefits and drawbacks of giving and not giving consent. This information must be accurate and given in a transparent, clear and understandable manner.
In legal terms, consent is often a positive response. However, a data subject can express a negative response as it can be regarded as the right of a data subject to express his/her desire for not sharing a certain piece of data.
Related Work
In RBAC [START_REF] Sandhu | Role-based access control models[END_REF], each system user is assigned a role and a set of permissions is granted to that role. A user can get access to data based on his/her role. RBAC is motivated by the fact that users in the real world make decisions based on their job functions within an organisation. The major drawback of RBAC is that it does not take the user's consent into consideration when providing access.
In the British Medical Association (BMA) policy model [START_REF] Anderson | A security policy model for clinical information systems[END_REF], access privileges for each medical record are defined in the form of Access Control Lists (ACLs) that are managed and updated by a clinician. The main goal of the BMA model is to capture consent, preventing multiple people from obtaining access to large databases of identifiable records. The shortcoming is that the ACLs are not flexible and expressive enough for defining access. Moreover, the consent structure is not discussed. In Provision-Based Access Control (PBAC) [START_REF] Kudo | Pbac: Provision-based access control model[END_REF], access decisions are expressed as a sequence of provisional actions instead of simple permit or deny statements. The user's consent can be captured if it is stated within the access policy. In both techniques [START_REF] Anderson | A security policy model for clinical information systems[END_REF] and [START_REF] Kudo | Pbac: Provision-based access control model[END_REF], the consent-capturing mechanism is tightly coupled with the access control enforcement. This restricts the possibility of capturing consent in an automated manner.
Cassandra [START_REF] Becker | Cassandra: distributed access control policies with tunable expressiveness[END_REF], a role-based trust management language and system for expressing authorisation policies in healthcare systems, captures the notion of consent as a special role to inform the user that his/her data is being accessed. Cassandra has been used for enforcing the policies of the UK national Electronic Health Record (EHR) system. The requirement of capturing consent as a special role adds an extra workload because, in most situations, consent could be derived implicitly if the user has allowed the system to do so.
Usage-based access control [START_REF] Zhang | A usage-based authorization framework for collaborative computing systems[END_REF] aims to provide dynamic and fine-grained access control. In usage-based access control, a policy is defined based on attributes of the subject, target and environment. The attributes are continuously monitored and updated, and, if needed, a policy can be enforced repeatedly during a usage session. Attributes can be used for capturing information related to the consent while the access session is still active. If the information changes, for instance when the user revokes the consent, the access session is terminated. Russello et al. [START_REF] Russello | Consent-based workflows for healthcare management[END_REF] propose a framework for capturing the context in which data is being accessed. Both approaches [START_REF] Zhang | A usage-based authorization framework for collaborative computing systems[END_REF] and [START_REF] Russello | Consent-based workflows for healthcare management[END_REF] have the major limitation of tightly coupling the access control enforcement mechanism with the consent-capturing.
Ruan and Varadharajan [START_REF] Ruan | An authorization model for e-consent requirement in a health care application[END_REF] present an authorisation model for handling e-consent in healthcare applications. Their model supports consent delegation, denial and consent inheritance. However, it is not possible to capture contextual information in order to provide consent. In e-Consent [START_REF] Coiera | e-Consent: The design and implementation of consumer consent mechanisms in an electronic environment[END_REF], consent can take four different forms: general consent, general consent with specific denial, general denial with specific consent, and general denial. The authors provide some basics of consent and the associated security services and suggest storing consent in a database. However, it is not clear how they can express the consent rules and evaluate a request against those rules in order to provide consent. O'Keefe, Greenfield and Goodchild [START_REF] O'keefe | A decentralised approach to electronic consent and health information access control[END_REF] propose an e-Consent system that captures, grants and withholds consent for access to electronic health information. Unfortunately, the consent-capturing mechanism is static, restricting the possibility of expressive consent rules.
Jin et al. [START_REF] Jin | Patientcentric authorization framework for sharing electronic health records[END_REF] propose an authorisation framework for sharing EHRs, enabling a patient to control his/her medical data. They consider three types of consent, i.e., break-glass consent, patient consent and default consent. The break-glass consent has the highest priority, representing emergency situations, while the default consent has the lowest one. The patient consent is captured if no default consent is provided. Consent is stored in a database. Unfortunately, the consent-capturing mechanism is not expressive enough to define real-world consent rules.
Verhenneman [START_REF] Verhenneman | Consent, an instrument for patient empowerment[END_REF] provides a discussion about the legal theory on consent and describes the lifecycle of consent. The author provides a legal analysis of how to capture consent. However, the work is theoretical, without any concrete solution. Wuyts et al. [START_REF] Wuyts | Integrating patient consent in e-health access control[END_REF] propose an architecture to integrate patient consent in e-health access control. They capture consent as a Policy Information Point (PIP), where consent is stored in a database. The shortcoming of storing consent in the database is that it limits the data subject to a pre-defined set of attributes with very limited expressivity for defining the consent rules. In our proposed solution, we do not consider a fixed set of pre-defined attributes; instead, we dynamically capture the attributes in the form of contextual information.
The existing research on access control does not allow a user to autonomously define the consent evaluation functionality that is necessary for regulating the automated collection of consent. That is, it remains to be investigated how to capture consent independently of the access control enforcement mechanism. Moreover, it is not clear how to provide transparent auditing while giving (or obtaining) consent so that a forensic analysis can later be performed by an investigating auditor.
Figure 1 labels: Enterprise Service Bus, Consent Evaluator, Process Engine, Data Subject, Requester.
The Proposed Approach
Before presenting the details of the proposed solution, it is necessary to discuss the system model which is described as follows:
System Model
In this section, we identify the following system entities:
- Data Subject: The Data Subject is the user whose consent is being captured. A Data Subject interacts with the Consent Evaluator to manage the consent-policy corresponding to a resource. Furthermore, a Data Subject interacts with the Requester using the enterprise service bus to collect the contextual information. If required, a Requester sends contextual information to the Data Subject. This information may include, but is not limited to, the Requester's role, the Requester's name, the Requester's location, the time, etc.
- Process Engine: It is the data controller, responsible for enforcing access controls. It works as a bridge between a Requester and a Data Subject. Once required, it sends a consent request to the Consent Evaluator and receives the consent back.
- Consent Evaluator: The Consent Evaluator is the entity responsible for managing and evaluating consent-policies. When requested, it provides consent back to the Process Engine. This is an additional entity that is not present in traditional access control systems.
The main idea is to have two levels of access controls. The first level consists of access control policies, which could be defined by the Data Subject, the data controller or both, while the second level of access controls is defined by the Data Subject in order to authorise someone to obtain his/her consent in an automated manner. The first level of access controls expresses whether consent is required or not. The second level of access controls is managed by the Data Subject. In other words, each Data Subject defines consent-policies in order to indicate who is authorised to obtain his/her consent. Moreover, a Data Subject may withdraw his/her consent by deleting the corresponding consent-policy. In this paper, our main focus is on elaborating the second level of access controls.
Figure 1 shows the abstract architecture of the proposed solution. The Process Engine is responsible for handling the first level of access controls, defining who can access the resource, while the Consent Evaluator is responsible for capturing and storing the consent of a Data Subject. A Consent Evaluator is maintained for each Data Subject. Once the Process Engine identifies that consent is required, it interacts with the Consent Evaluator via the enterprise service bus.
Figure 2 illustrates how system entities interact with each other. We can distinguish between two cases for capturing consent. The first case is when consent can be stored statically while the second case is when consent is captured dynamically.
In the first case, shown in Figure 2(a), a Data Subject stores the consent-policy and gets back an acknowledgment. Once a Requester sends an access request, the Process Engine identifies if the access control policy corresponding to the requested resource requires any consent. If the Data Subject consent is required, the Process Engine generates and sends the consent request to the Consent Evaluator. Since the consent-policy is already stored, the Consent Evaluator does not need to interact with the Data Subject. However, the Consent Evaluator may collect contextual information from the Requester in order to evaluate the consent-policy. After the consent-policy has been evaluated, consent is sent from the Consent Evaluator to the Process Engine. Finally, the Process Engine evaluates the access policy and sends the access response back to the Requester.
In the second case, shown in Figure 2(b), we assume that the consent-policy is not already stored, unlike in the previous case. Once a Requester makes an access request, the Process Engine identifies whether the access control policy corresponding to the requested resource requires any consent. If the Data Subject's consent is required, the Process Engine generates and sends the consent request to the Consent Evaluator. Since the consent-policy is not stored, the Consent Evaluator needs to interact with the Data Subject. For this purpose, the Consent Evaluator forwards the consent request to the Data Subject. The Data Subject may reply with consent, a consent-policy or both. If the Data Subject's reply includes the consent-policy, the Consent Evaluator sends an acknowledgment back to the Data Subject; moreover, the Consent Evaluator may collect contextual information from the Requester in order to evaluate the consent-policy. After the consent-policy has been evaluated, consent is sent from the Consent Evaluator to the Process Engine. However, if the Data Subject's reply includes consent, the Consent Evaluator does not need to perform any evaluation. Once the Process Engine receives the consent, it evaluates the access control policy and sends the access response back to the Requester.
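The control flow of these two cases can be summarised in a few lines of code. The following Python sketch is only illustrative: the function signatures, the dictionary-based policy store and the stubbed Data Subject and Requester callbacks are assumptions made for this example and are not prescribed by the proposed architecture.

```python
# Illustrative sketch of the two consent-capturing cases of Figure 2.
# Entity names follow the text; every signature below is an assumption.

def consent_request(evaluator, resource_id, requester, data_subject):
    """Consent Evaluator side: returns "Yes" or "No"."""
    policy = evaluator["policies"].get(resource_id)
    if policy is None:                                   # case (b): dynamic capture
        reply = data_subject(resource_id, requester)     # forward the Consent Request
        if "policy" in reply:                            # Data Subject Reply with policy
            policy = reply["policy"]
            evaluator["policies"][resource_id] = policy  # Acknowledgment would follow
        if "consent" in reply:                           # direct consent, no evaluation
            return reply["consent"]
    context = requester(policy["attributes"])            # Contextual Information Req/Resp
    return "Yes" if policy["evaluate"](context) else "No"

def access_request(engine, resource_id, requester, data_subject):
    """Process Engine side: first-level access control plus consent check."""
    rule = engine["access_policies"][resource_id]
    if rule["consent_required"]:
        consent = consent_request(engine["consent_evaluator"],
                                  resource_id, requester, data_subject)
        if consent != "Yes":
            return "DENY"
    return "PERMIT"

# Minimal usage with stubbed entities (white-list on the Requester's role):
evaluator = {"policies": {}}
engine = {"consent_evaluator": evaluator,
          "access_policies": {"R2": {"consent_required": True}}}
data_subject = lambda rid, req: {"policy": {
    "attributes": ["Requester-role"],
    "evaluate": lambda ctx: ctx["Requester-role"] in {"Nurse", "Doctor"}}}
requester = lambda attrs: {"Requester-role": "Nurse"}
print(access_request(engine, "R2", requester, data_subject))   # PERMIT
```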
In the proposed solution, it is possible to provide a Data Subject with a tool for the consent management; this not only enables the administration of the consent-policy but also supports the inspection service to check who has obtained consent automatically.
In other words, the system maintains a log at the Consent Evaluator side to facilitate a Data Subject with both the administration of consent-policies and inspection of consent log. The consent log may be provided as digital evidence.
Consent-Policy Types
The consent-policy in the proposed solution refers to the written consent. When a Requester needs to verify whether he or she has consent in order to access the data, he or she refers to the written consent, which is consent-policy in the proposed solution. The purpose of the Consent Evaluator is to store the consent-policy and evaluate the consent decision, whenever requested.
The consent-policy may require a set of attributes in order to evaluate consent. These attributes include, but are not limited to, the Requester's name, the Requester's role, the time of the request and the Requester's location (see Table 1).
Open Policy
The open policy can be categorised further into two types: the black-list and the white-list. The black-list associated with an attribute denies access to a resource for Requesters holding that attribute with values in the list. For instance, in Table 1, a black-list of Requesters (i.e., on the Requester name attribute) is maintained to restrict access to resource R1. On the other hand, the white-list associated with an attribute permits access to a resource only for Requesters holding that attribute with values in the list. For instance, in Table 1, a white-list of Requesters (i.e., on the Requester role attribute) is maintained to permit access to resource R2.
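As an illustration, the black-list and white-list checks can be expressed as a single predicate over the Requester's contextual attributes. The dictionary encoding of the policy used below is an assumption made for this sketch; it mirrors rows R1 and R2 of Table 1.

```python
# Sketch of open consent-policy evaluation (black-list / white-list).

def evaluate_open_policy(policy, context):
    value = context.get(policy["attribute"])
    if policy["list_type"] == "black":
        return value not in policy["values"]      # consent unless the value is listed
    if policy["list_type"] == "white":
        return value in policy["values"]          # consent only if the value is listed
    raise ValueError("unknown list type")

r1 = {"attribute": "Requester-name", "list_type": "black",
      "values": {"Alice", "Bob", "Charlie"}}
r2 = {"attribute": "Requester-role", "list_type": "white",
      "values": {"Nurse", "Doctor"}}

print(evaluate_open_policy(r1, {"Requester-name": "Dave"}))    # True  -> consent Yes
print(evaluate_open_policy(r2, {"Requester-role": "Admin"}))   # False -> consent No
```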
Complex Policy
The complex policy is one that may involve conditional expressions in order to provide consent. Typically, a conditional expression is evaluated against the Requester's attributes. These conditional expressions are evaluated by the Consent Evaluator. If the Requester's attributes satisfy all the conditional expressions in the consent-policy, the consent is Yes, and No otherwise. For instance, in Table 1, a Requester can access resource R5 if the time of the request is between 8:00 hrs and 17:00 hrs and the Requester is located in the HR-ward.
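A complex policy can be evaluated in the same spirit by checking all conditional expressions against the collected contextual information. Representing the conditions of row R5 of Table 1 as Python predicates is, again, only an illustrative assumption.

```python
# Sketch of complex consent-policy evaluation (all conditions must hold).

from datetime import time

r5_conditions = [
    lambda ctx: time(8, 0) <= ctx["Request-time"] <= time(17, 0),
    lambda ctx: ctx["Requester-location"] == "HR-Ward",
]

def evaluate_complex_policy(conditions, context):
    return "Yes" if all(cond(context) for cond in conditions) else "No"

ctx = {"Request-time": time(10, 30), "Requester-location": "HR-Ward"}
print(evaluate_complex_policy(r5_conditions, ctx))   # Yes
ctx["Request-time"] = time(19, 0)
print(evaluate_complex_policy(r5_conditions, ctx))   # No
```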
Solution Details
For each resource requiring the consent of the Data Subject, the Consent Evaluator stores the corresponding consent-policy. Table 1 illustrates how consent-policies are stored at the Consent Evaluator. Each row in Table 1 corresponds to the consent-policy of one resource. For each resource, the Consent Evaluator stores the consent-policy type (that is, open or complex), the parameters (that is, the contextual information) required in order to evaluate the consent-policy, and a description providing further details of the consent-policy. In the case of the open type, the policy description provides information about the type of access list, either black-list or white-list, while in the case of the complex type, the policy description expresses the conditional expressions that have to be fulfilled for the consent to be Yes.
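One possible in-memory representation of Table 1 at the Consent Evaluator is sketched below. The field names are assumptions chosen to match the table's columns, and the contextual information of R4 is assumed to be the Requester's location, since its condition refers to the location.

```python
# Sketch of the consent-policy store keyed by resource ID (cf. Table 1).

consent_policy_store = {
    "R1": {"type": "open",    "attributes": ["Requester-name"],
           "description": "Black-list: {Alice, Bob, Charlie}"},
    "R2": {"type": "open",    "attributes": ["Requester-role"],
           "description": "White-list: {Nurse, Doctor}"},
    "R3": {"type": "complex", "attributes": ["Request-time"],
           "description": "Condition: 8:00 hrs <= Time <= 17:00 hrs"},
    "R4": {"type": "complex", "attributes": ["Requester-location"],
           "description": "Condition: Location = HR-Ward"},
    "R5": {"type": "complex", "attributes": ["Request-time", "Requester-location"],
           "description": "Condition: 8:00 hrs <= Time <= 17:00 hrs AND Location = HR-Ward"},
}

def required_attributes(resource_id):
    # Parameters the Consent Evaluator asks for in the Contextual Information Request.
    return consent_policy_store[resource_id]["attributes"]

print(required_attributes("R5"))   # ['Request-time', 'Requester-location']
```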
Communication Messages
For both static and dynamic consent-capturing, the details of each message shown in Figure 2 are described as follows:
Store Policy: When a Data Subject desires to store his/her consent, he/she sends the Store Policy message to the Consent Evaluator. Each Data Subject has his/her own Consent Evaluator. This message includes the resource ID, the consent-policy type, a list of attributes used in the consent-policy and the consent-policy itself. Upon receiving this message, the Consent Evaluator stores this information, as shown in Table 1.

Acknowledgment: We can distinguish between two different cases, based on how the consent-policy is stored. If the consent-policy is stored statically, the Acknowledgment message is sent back to the Data Subject right after the Store Policy message. If the consent-policy is stored dynamically, it is sent after the Data Subject Reply. In both cases, if the consent-policy is stored successfully, the Acknowledgment will include OK; otherwise, it will include an error message with the error details.

Consent Response: If the contextual information of the Requester satisfies the consent-policy corresponding to the requested resource, the Consent Response contains the consent; otherwise, it contains an error message.

Access Response: Finally, the Process Engine sends the Access Response to the Requester. For a successful access response, it is necessary that the Consent Evaluator replies with the required consent.
Access Request
Consent Withdrawal
In the proposed architecture, a Data Subject may withdraw his/her consent at any time. This can be accomplished by deleting the consent-policy stored at the Consent Evaluator. In other words, the resource entry is deleted from Table 1 by the Data Subject. This maps well to real-world situations. Consider the healthcare scenario where a patient has provided his/her consent to grant access to his/her medical data for research purposes. Every time access is made, the consent is checked. Suppose that the patient calls the care provider to withdraw his/her consent. After the withdrawal, the patient's medical data cannot be accessed anymore.
The same holds in the proposed solution. After a Data Subject has defined the consent-policy, consent can be provided by the Consent Evaluator. However, once the Data Subject wants to withdraw it, the consent-policy is deleted by the Consent Evaluator.
After the consent-policy has been deleted, no consent can be provided until the Data Subject provides new consent or a new consent-policy.
Integrity of Consent
Let us assume that a PKI is already in place, where each entity, including the Requesters and Data Subjects, has a private-public key pair. For providing integrity, the consent-policy can be signed with the signing (or private) key of the Data Subject, while the contextual information is signed with the signing key of the Requester. Once a Requester provides the contextual information, an integrity check is first performed to verify that the information has not been altered or forged by an adversary. If a dispute occurs, the logs of the entities can be inspected in order to investigate the matter. Technically, the signature is checked on the transmitted data. The digital signature not only guarantees integrity but also ensures non-repudiation and unforgeability of consent.
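The sketch below shows one possible instantiation of this signing step with Ed25519 keys from the Python cryptography package. The paper only assumes a PKI and does not mandate a particular signature algorithm, so the key type, the JSON encoding and the message contents are assumptions made for illustration.

```python
# Data Subject signs the consent-policy; Requester signs the contextual information.

import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

subject_key = Ed25519PrivateKey.generate()       # Data Subject's signing key
requester_key = Ed25519PrivateKey.generate()     # Requester's signing key

policy = json.dumps({"resource": "R2", "type": "open",
                     "white_list": ["Nurse", "Doctor"]}).encode()
context = json.dumps({"Requester-role": "Nurse",
                      "Request-time": "10:30"}).encode()

policy_sig = subject_key.sign(policy)
context_sig = requester_key.sign(context)

# The Consent Evaluator verifies both signatures before evaluating the policy.
try:
    subject_key.public_key().verify(policy_sig, policy)
    requester_key.public_key().verify(context_sig, context)
    print("integrity verified")
except InvalidSignature:
    print("data altered or forged")
```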
Emergency Situations
In case of emergency, the Emergency Response Team may provide consent on behalf of the Data Subject. Consider the healthcare scenario where a patient is in an emergency condition, such as a heart attack or a similar situation. In this situation, if a doctor needs any consent for which the patient has not defined a consent-policy, the Emergency Response Team or a legal guardian may provide consent on behalf of the patient. Moreover, just like paper-based consent, the consent-policy of minors or of those who are mentally incapable can be provided by a legal guardian.
Digital Evidence
Once consent is requested by a Process Engine, the request should be logged. Likewise, the Consent Evaluator should log the event once the consent is provided to the Process Engine. Since the contextual information is provided by a Requester to the Consent Evaluator, the Requester should also log the information requested by the Consent Evaluator. This prevents repudiation by any of the involved entities. These logs can be provided to the court as digital evidence.
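As one way to make such logs useful as evidence, each entity could keep an append-only, hash-chained log so that any later modification becomes detectable. The hash chaining, the JSON encoding and the entry fields below are assumptions made for this sketch; the text itself only requires that the entities keep logs.

```python
# Minimal tamper-evident (hash-chained) log kept by each entity.

import hashlib, json, time

def append_entry(log, event):
    prev = log[-1]["digest"] if log else "0" * 64
    record = {"time": time.time(), "event": event, "prev": prev}
    payload = json.dumps({"time": record["time"], "event": event, "prev": prev},
                         sort_keys=True)
    record["digest"] = hashlib.sha256(payload.encode()).hexdigest()
    log.append(record)

def verify_chain(log):
    prev = "0" * 64
    for record in log:
        payload = json.dumps({"time": record["time"], "event": record["event"],
                              "prev": prev}, sort_keys=True)
        if record["prev"] != prev or \
           record["digest"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = record["digest"]
    return True

log = []
append_entry(log, {"entity": "Process Engine", "msg": "Consent Request", "resource": "R2"})
append_entry(log, {"entity": "Consent Evaluator", "msg": "Consent Response", "consent": "Yes"})
print(verify_chain(log))   # True
```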
Data Subject Tool
The proposed solution may provide a Data Subject with a tool to manage the consent-policies. Furthermore, a Data Subject may be provided with an inspection tool for observing who has obtained consent in an automated manner.
Implementation Overview
For evaluating the consent-policy against the Requester's attributes, we may consider the widely accepted policy-based framework proposed by the IETF [START_REF] Yavatkar | A Framework for Policy-based Admission Control[END_REF], where the Consent Evaluator manages the Policy Enforcement Point (PEP) (only for the second level of access controls defined in the consent-policy) in order to provide the consent. The Consent Evaluator also manages the Policy Decision Point (PDP) for deciding whether the consent is Yes or No. The consent-policy store can be realised as the Policy Administration Point (PAP). In the proposed solution, the request is sent by the Process Engine, while the Requester can be treated as a PIP. For representing the consent-policy, we may consider the XACML policy language proposed by OASIS [START_REF]extensible access control markup language (xacml) version 2.0, February 2005[END_REF].
Performance Overhead
The performance overhead of the proposed solution is O(m + n), where m is the number of conditional predicates in the consent-policy and n is the number of contextual attributes required to evaluate the consent-policy in order to provide consent.
Discussion
This section provides a discussion about how the proposed solution may provide security properties including availability and confidentiality. Moreover, this section also gives a brief overview about how to increase the usability for a Data Subject.
Availability and Confidentiality
For providing availability, the Consent Evaluator can be hosted in an outsourced environment, such as a cloud service provider. If the Consent Evaluator is managed by a third-party service provider, there is a threat of information leakage about the consent-policy or the contextual information. In order to provide confidentiality for consent-capturing in outsourced environments, we may consider ESPOON (Encrypted Security Policies in Outsourced Environments) proposed in [START_REF] Muhammad Rizwan Asghar | ESPOON: Enforcing encrypted security policies in outsourced environments[END_REF].
Increased Usability
In order to increase usability for defining a consent-policy, a Data Subject can be provided with a drag-and-drop policy definition tool for a pre-defined set of parameters of the contextual information. Moreover, a full-fledged pre-defined set of consent-policies can also be provided to the Data Subject.
Conclusions and Future Work
In this paper, we have proposed an architecture to capture and manage consent. The proposed architecture enables the data controller to capture consent in an automated manner. Furthermore, consent can be withdrawn at any time a Data Subject wishes to do so. In the future, we plan to implement the proposed solution and evaluate the performance overhead it incurs. If a Data Subject associates multiple consent-policies with a resource, consent resolution strategies need to be investigated.
Fig. 1. System architecture of the proposed solution
Fig. 2. Capturing consent in the proposed solution
Table 1. Consent-policy storage at the Consent Evaluator side
Resource ID | Consent-Policy Type | Contextual Information | Policy Description
R1 | Open | Requester-name | Black-list: {Alice, Bob, Charlie}
R2 | Open | Requester-role | White-list: {Nurse, Doctor}
R3 | Complex | Request-time | Condition: 8:00 hrs ≤ Time ≤ 17:00 hrs
R4 | Complex | Requester-location | Condition: Location = HR-Ward
R5 | Complex | Request-time, Requester-location | Condition: 8:00 hrs ≤ Time ≤ 17:00 hrs AND Location = HR-Ward
... | ... | ... | ...
A Requester sends the Access Request to the Process Engine in order to request access to a resource. The Access Request includes the Requester's URI, the target resource ID, the access operation, and the date and time.

Consent Request: Whenever the Process Engine identifies in the access control policy that the Data Subject's consent is required, it sends the Consent Request to the Consent Evaluator. The Consent Request contains the Requester's URI, the target resource ID, the access operation, and the date and time. Both the Access Request and the Consent Request contain the same information, since the consent-policy is managed by the Data Subject while the access control policy does not necessarily need to be managed by the Data Subject.

Data Subject Reply: In case of dynamic consent-capturing, this message is sent from the Data Subject to the Consent Evaluator in response to the Consent Request. It may contain the consent response, the consent-policy or both. The Consent Evaluator takes action based on this message; that is, it either forwards the consent response or evaluates the consent-policy. If the reply contains both the consent response and the consent-policy, the evaluation can be skipped. If a Data Subject replies with consent, this may be accomplished synchronously or asynchronously using a handheld device such as a PDA or mobile device. Alternatively, consent can also be provided by email.

Contextual Information Request: After the consent-policy has been found for the requested resource, the Consent Evaluator needs to collect the contextual information by sending the Contextual Information Request to the Requester, who is identified by his/her URI. The Contextual Information Request contains all the parameters required by the consent-policy corresponding to the requested resource.

Contextual Information Response: The Requester replies to the Contextual Information Request with the Contextual Information Response, which contains all the parameters requested in the Contextual Information Request. The contextual information may be collected in multiple round trips of requests and responses.

Consent Response: After the Consent Evaluator has evaluated the consent-policy against the contextual information, the Consent Response is sent back to the Process Engine.
http://www.bloomberg.com/news/2011-03-16/facebook-google-must-obey-eudata-protection-law-reding-says.html
Acknowledgment
This work is supported by the EU FP7 programme, Research Grant 257063 (project Endorse). | 35,048 | [
"1003309",
"1003310"
] | [
"110155",
"304024",
"110155"
] |
01481500 | en | [
"info"
] | 2024/03/04 23:41:48 | 2011 | https://inria.hal.science/hal-01481500/file/978-3-642-27585-2_11_Chapter.pdf | Towards User Centric Data Governance and Control in the Cloud
Stephan Groß and Alexander Schill Technische Universität Dresden Fakultät Informatik D-01062 Dresden, Germany {Stephan.Gross, Alexander.Schill}@tu-dresden.de Abstract. Cloud Computing, i.e. providing on-demand access to virtualised computing resources over the Internet, is one of the current megatrends in IT. Today, there are already several providers offering cloud computing infrastructure (IaaS), platform (PaaS) and software (SaaS) services. Although the cloud computing paradigm promises both economical as well as technological advantages, many potential users still have reservations about using cloud services, as this would mean trusting a cloud provider to correctly handle their data according to previously negotiated rules. Furthermore, virtualisation makes the offered services location-independent, which could interfere with domain-specific legislative regulations. In this paper, we present an approach to putting the cloud user back in control when migrating data and services into and within the cloud. We outline our work in progress, which aims at providing a platform for developing flexible service architectures for cloud computing with special consideration of security and non-functional properties.
Motivation
The recent progress in virtualising storage and computing resources combined with service oriented architectures (SOA) and broadband Internet access has led to a renaissance of already known concepts developed in research fields like grid, utility and autonomic computing. Today, the term cloud computing describes different ways of providing on-demand and pay-per-use access to elastic virtualised computing resource pools [START_REF] Mell | The NIST Definition of Cloud Computing[END_REF]. These resources are abstracted to services so that cloud computing resources can be retrieved as infrastructure (IaaS), platform (PaaS) and software (SaaS) services respectively. The pay-per-use model of such service oriented architectures includes Service Level Agreements (SLA) negotiated between service provider and user to establish guarantees for required non-functional properties including mandatory security requirements. The (economical) advantages of this approach are fairly obvious: One saves costly investments for procuring and maintaining probably underused hardware and at the same time gains new flexibility to react to temporarily higher demands.
Nevertheless, there are reasonable reservations about the deployment of cloud computing services, e.g. concerning data security and compliance. Most of these concerns result from the fact that cloud computing describes complex socio-technical systems with a high number of different kinds of stakeholders following different and possibly contradicting objectives. From a user's perspective, one has to hand over the control over his data and services when entering the cloud, i.e. the user has to trust that the cloud provider behaves in compliance with the established SLA. However, to actually agree on a specific SLA, a user first has to assess his organizational risks related to security and resilience [START_REF] Catteddu | Security & Resilience in Governmental Clouds -Making an informed decision[END_REF].
Current solutions that restrict the provision of sensitive services to dedicated private, hybrid or so-called national clouds1 do not go far enough, as they reduce the user's flexibility when scaling in or out and still force him to trust the cloud provider. Furthermore, private clouds intensify the vendor lock-in problem. Last but not least, there is no support for deciding which services and data could be safely migrated to which cloud. Instead, we demand new methods and technical support to put the user in a position to benefit from the advantages of cloud computing without giving up the sovereignty over his data and applications. In our current work, we follow a system-oriented approach focussing on technical means to achieve this goal.
The remainder of this paper is structured as follows. We first refine our problem statement in section 2. Then, in section 3, we sketch our approach of developing a secure platform for easy and flexible cloud service architectures. Our solution is based on the idea of a personal secure cloud (Π-Cloud), i.e. the conglomerate of a user's resources and devices, that can be controlled by a specialized gateway, the so-called Π-Box. We elaborate on its basic components in section 3 and further exemplify how the Π-Box supports the controlled storage of a user's data in the cloud in section 4. We conclude with a discussion of our approach and compare it with related work in section 5. Finally, section 6 provides an outlook on future work.
Problem Statement
We identified security as a major obstacle that prevents users from transferring their resources into the cloud. In order to make sound business decisions and to maintain or obtain security certifications, cloud customers need assurance that providers are following sound security practices and behave according to agreed SLAs [START_REF] Catteddu | Cloud Computing -Benefits, risks and recommendations for information security[END_REF]. Thus, our overall goal is the development of a flexible open source cloud platform that integrates all necessary components for the development of user-controlled and -monitored secure cloud environments. This platform should provide the following functionality:
1. Mechanisms to enable a user-controlled migration of resources and data into the cloud. These mechanisms should support the (semi-)automatic configuration of cryptographic algorithms to simplify the enforcement of a user's security requirements as well as the dynamic selection of cloud providers that best fit the user's requirements and trust assumptions. Thus, we need a formalised way to acquire a user's requirements. Furthermore, we need to integrate the user's private resources and different cloud providers in our cloud platform, e.g. by using wrapper mechanisms or standardised interfaces.
2. A sound and trustworthy monitoring system for cloud services that is able to gather all relevant information to detect or even predict SLA violations without manipulations by the cloud provider under control. To support the configuration of the monitoring system, there should be a mechanism that derives relevant monitoring objectives from negotiated SLAs. Thus, we need a formalised language for machine-readable SLAs focussing on the technical details of a cloud computing environment.
3. Adaptation mechanisms optimizing the cloud utilization according to user-defined constraints like cost and energy consumption as well as reacting to SLA violations detected by the monitoring system in order to mitigate the resulting negative effects. This includes migration support to transparently transfer resources between different cloud providers as well as adaptation tools that leave the resources at the chosen provider but transform them to further meet the user's non-functional and security requirements.
In other words, we demand the implementation of an iterative quality management process to establish a secure cloud computing lifecycle that enables the user to constantly supervise and control his cloud computing services. By applying the well-established PDCA model [START_REF]Information technology -Security techniques -Information security management systems -Requirements[END_REF], the major objectives of such a cloud computing management process can be summarized as depicted in figure 1.
Introducing FlexCloud
Within the FlexCloud project we aim at developing methods and mechanisms to support the development of flexible and secure service architectures for cloud computing. Our major objective is to put the user in a position to externalize his IT infrastructure without losing control. For this, we have first refined the definition of cloud deployment models by introducing the concept of a personal secure cloud.
We define a personal secure cloud or Π-Cloud as a hybrid cloud that covers all resources, services and data under complete control of a user. The user is able to dynamically adjust the Π-Cloud's shape according to his actual demands, i.e. to securely include foreign services and resources as well as to securely share parts of his Π-Cloud with others.
Thus, we need to control the data flow as well as the service distribution and execution. The technical means to control the Π-Cloud when sharing resources or exchanging data are provided by the so-called Π-Gateway. The Π-Gateway provides all mechanisms to manage and optimize a user's policies concerning security and other non-functional properties, e.g. performance, energy efficiency or costs. Furthermore, it provides the necessary means to enforce these policies, such as adaptation and migration mechanisms for services and data.
To bridge the gap between a Π-Cloud's raw resources, i.e. a user's devices, and the actually used services, we rely on a service platform. Its primary task is to dynamically allocate a user's software services to the available infrastructure services. Figure 2 shows our vision of a secure cloud computing setup. On the left-hand side we have a user's personal devices building his Π-Cloud. It is controlled by his Π-Box (rectangle in the middle) that combines service platform and Π-Gateway. Depending on the Π-Cloud's size, the Π-Box can be realized physically (either as a separate hardware appliance or as a part of an existing device such as a router). It can also be virtualized, so that it can be migrated within the Π-Cloud or to some trustworthy cloud provider.
The following subsections give more details on the different parts of our approach.
Service Platform
A major foundation of our work is represented by our service platform Space.
Space is an open source platform for the Internet of services which provides basic tools for contract-bound adaptive service execution and acts as a hosting and brokering environment for services. It already integrates techniques for trading Internet services and for their surveillance during execution by the user as well as the service provider. Initially designed as a stand-alone server, Space is now being brought into the cloud, making it a fully distributed platform. Its latest extension already integrates Amazon EC2-compatible cloud environments as target environments for complex services delivered as virtual machines [START_REF] Spillner | Spaceflight -A versatile live demonstrator and teaching system for advanced service-oriented technologies[END_REF]. However, adding and removing new resources and infrastructure services is still a rather static process.
Π-P2P Resources
The Π-P2P Resources component aims at improving the functionality to dynamically adjust the resource and infrastructure pool available within a Π-Cloud. This includes the different devices in the Π-Cloud under the user's control as well as external cloud resources (right part of figure 2). The on-going work on this topic is twofold.
On the one hand, we are further extending Space with specific protocols to organize the physical resource pool in a peer-to-peer network. This includes adaptation algorithms to reorganize the container setup at the IaaS layer.
On the other hand, the Π-P2P Resources component is being integrated with the Π-Gateway to implement a control flow that guarantees the combination of services and resources according to a user's given policies.
For further details the interested reader is referred to [START_REF] Mosch | User-controlled data sovereignty in the Cloud[END_REF].
Π-Gateway
We illustrate the Π-Gateway's functionality by an example: confidential cloud storage. Although first cloud storage solutions providing client-side encryption already exist (e.g. [START_REF] Grolimund | Cryptree: A folder tree structure for cryptographic file systems[END_REF]), it is cumbersome to integrate these systems into a company's existing IT infrastructure. For example, they introduce new potential for human errors as they ignore available access control systems and must be manually configured, e.g. by (re)defining the encryption keys for authorised users. It is also difficult to control whether they comply with given regulations. Figure 4 sketches how the Π-Gateway incorporates different tools, such as existing access control and user management systems, face recognition or information retrieval tools, to determine the users authorised to access a specific file. For example, it could search a text file for a specific confidentiality note, analyse the people displayed in a photo, or simply check the file system's access rights in combination with the operating system's user database to determine the user identities to be granted access. By using these identities it is able to retrieve the necessary public keys from a public key infrastructure and to encrypt the data accordingly before storing it in the cloud.
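A minimal sketch of the last step, retrieving the public keys of the authorised identities and encrypting the data before it leaves the user's premises, is given below. AES-GCM for the payload and RSA-OAEP key wrapping via the Python cryptography package are assumptions for this example; the Π-Gateway is not bound to these particular algorithms, and the locally generated keys merely stand in for keys fetched from a PKI.

```python
# Encrypt a file once, wrap the file key for every authorised identity.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def encrypt_for_users(plaintext, public_keys):
    file_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(file_key).encrypt(nonce, plaintext, None)
    wrapped = {user: pk.encrypt(file_key, OAEP) for user, pk in public_keys.items()}
    return {"nonce": nonce, "ciphertext": ciphertext, "wrapped_keys": wrapped}

# Stand-ins for keys that would normally be fetched from the PKI.
users = {name: rsa.generate_private_key(public_exponent=65537, key_size=2048)
         for name in ("alice", "bob")}
blob = encrypt_for_users(b"confidential report",
                         {name: key.public_key() for name, key in users.items()})

# An authorised user unwraps the file key and decrypts locally.
file_key = users["alice"].decrypt(blob["wrapped_keys"]["alice"], OAEP)
print(AESGCM(file_key).decrypt(blob["nonce"], blob["ciphertext"], None))
```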
Thus, the Π-Gateway consolidates all functionality to control and coordinate a user's cloud. This includes the management and enforcement of user policies concerning security and other non-functional properties, as well as the aggregation and analysis of monitoring data retrieved by sensors implemented in the service platform and in the Π-P2P Resources component. In other words, whereas the Π-Gateway represents the Π-Box's brain, the service platform and the Π-P2P Resources component form its body and extremities respectively.
Π-Cockpit
Although we are aiming to include as much intelligence as possible in our Π-Box to relieve the user of cumbersome administration tasks, it would be presumptuous to claim that our Π-Cloud is able to maintain itself. As we call for user control, we need the necessary means to put the user into this position even if he is not an expert. In short, we need a Π-Cockpit, i.e. adequate user interfaces to supervise and adjust the Π-Box. These user interfaces must be able to adjust to the user's skills and preferences as well as to the capabilities of the device they are currently used on. The foundations for our work in this area are twofold:
On the one hand, we aim at following up recent research in the area of usable security and privacy technologies that proposes an interdisciplinary approach to support a user in meeting his security demands independent of his skills and expertise. For an overview on related work in this field we refer to [START_REF] Cranor | Designing Secure Systems That People Can Use[END_REF][START_REF] Fischer-Hübner | Usable Security und Privacy[END_REF][START_REF] Garfinkel | Design principles and patterns for computer systems that are simultaneously secure and usable[END_REF].
On the other hand, we are in line with recent developments in the human-computer interaction community, which has started to discuss the impact of cloud computing on the design of the user experience [START_REF] England | Designing interaction for the cloud[END_REF].
As a first evaluation scenario for our approach we have chosen the cloud storage use case already mentioned in the previous section. Thus, our pool of physical resources consists of disk storage space that is provided by different cloud storage providers as an infrastructure service.
Problems of current Cloud Storage Solutions
Current cloud storage offers suffer from several issues. First of all, off-site data storage raises several security and privacy concerns. Due to the nature of cloud computing, the user can usually not be sure about the geographical placement of his data. This leads to a possible mismatch between the legislative rules in the cloud user's and the provider's country. Recent media reports document that even geographic restrictions on the used resources and services, as assured by some cloud providers, cannot guarantee a user's security compliance [START_REF] Whittaker | Microsoft admits patriot act can access EU-based cloud data[END_REF].
As a solution to ensure confidentiality, several cloud storage providers therefore rely on cryptography. However, for the sake of simplicity and usability, most of them retain full control of the key management. Thus, the user's data stored at a trustworthy provider might be safe from intruders but can still be subject to internal attacks or governmental desires. Even more, this increases the user's dependence on a specific storage provider, leading to the point of a complete vendor lock-in and a possible loss of availability.
With respect to the functionality stated in section 2 we claim the following requirements for an ideal cloud storage solution:
User-controlled migration: The user should always be in the position to decide which data shall be migrated to which cloud storage provider. Furthermore, he should be assisted in applying cryptographic tools to enforce his security policies. To ensure best possible trustworthiness these security tools must be applied at the user's premises or at least by a fully trusted third party.
Trustworthy monitoring: After having transferred his data to the cloud, the user should be able to control the reliability and trustworthiness of the chosen cloud storage providers. This includes audit mechanisms for the preservation of evidence to support subsequent legal enforcement.
Adaptation mechanisms: Finally, the user should be supported when recovering from detected malfunctions or inadequateness, e.g. securely restoring data stored at a provider or migrating it to another more trusted one.
Proposed Solution
We have developed a first prototype of a cloud storage integrator that aims at providing the stated functionality. Our prototype, called SecCSIE (Secure Storage Integrator for Enterprises), implements a Linux-based proxy server to be placed in a company's intranet. It mediates the data flow to arbitrary cloud storage providers and provides SMB/CIFS file-based access to them for the average user. SecCSIE consists of five major components (for more details please refer to [START_REF] Seiger | SecCSIE: A Secure Cloud Storage Integrator for Enterprises[END_REF]):
Cloud Storage Protocol Adapter: To integrate and homogenize multiple cloud storage services we have implemented several protocol adapters. This includes adapters for common protocols like NFS, SMB/CIFS, WebDAV and (S)FTP to access files over an IP network. Furthermore, we provide access to Amazon S3, Dropbox and GMail storage by using existing FUSE (Filesystem in Userspace) modules.
Data Dispersion Unit: Besides the cloud storage protocol adapter, our data dispersion unit contributes to overcoming the vendor lock-in. By utilizing recent information dispersal algorithms [START_REF] Resch | AONT-RS: blending security and performance in dispersed storage systems[END_REF] it distributes the user's data over different storage providers with higher efficiency than simple redundant copies. This also increases the overall availability and performance.
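The sketch below illustrates the idea of dispersing data over several providers with some redundancy. It uses a deliberately simplified single-parity scheme (any one missing chunk can be rebuilt); the actual unit relies on the more powerful information dispersal coding cited above, so this is a stand-in rather than a faithful re-implementation.

```python
# Simplified dispersion: stripe the data over k providers plus one XOR parity chunk.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def disperse(data, k):
    size = -(-len(data) // k)                        # ceiling division
    padded = data.ljust(k * size, b"\0")
    chunks = [padded[i * size:(i + 1) * size] for i in range(k)]
    parity = chunks[0]
    for c in chunks[1:]:
        parity = xor_bytes(parity, c)
    return chunks + [parity], len(data)

def reconstruct(chunks, original_len, missing_index):
    # Rebuild one missing chunk from the surviving chunks and the parity.
    survivors = [c for i, c in enumerate(chunks) if i != missing_index]
    rebuilt = survivors[0]
    for c in survivors[1:]:
        rebuilt = xor_bytes(rebuilt, c)
    data_chunks = list(chunks[:-1])
    if missing_index < len(data_chunks):
        data_chunks[missing_index] = rebuilt
    return b"".join(data_chunks)[:original_len]

chunks, n = disperse(b"quarterly financial report", k=3)
chunks[1] = None                                     # provider 2 is unreachable
print(reconstruct(chunks, n, missing_index=1))       # b'quarterly financial report'
```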
Data Encryption Unit: The encryption unit encapsulates different encryption algorithms to ensure confidentiality of the stored data. It also takes measures to preserve the stored data's integrity, e.g. by using AES-CMAC. The necessary key management can be handled by SecCSIE itself or delegated to an existing public key infrastructure.
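A possible encrypt-then-MAC step of this unit is sketched below: AES in CTR mode for confidentiality and AES-CMAC over the nonce and ciphertext for integrity, using the Python cryptography package. Treating this as two independent keys and this particular mode combination is an implementation assumption, not something fixed by SecCSIE.

```python
# Encrypt a chunk with AES-CTR and protect it with an AES-CMAC tag.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.cmac import CMAC

enc_key, mac_key = os.urandom(32), os.urandom(16)

def protect(chunk):
    nonce = os.urandom(16)
    encryptor = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
    ciphertext = encryptor.update(chunk) + encryptor.finalize()
    mac = CMAC(algorithms.AES(mac_key))
    mac.update(nonce + ciphertext)
    return nonce, ciphertext, mac.finalize()

def recover(nonce, ciphertext, tag):
    mac = CMAC(algorithms.AES(mac_key))
    mac.update(nonce + ciphertext)
    mac.verify(tag)                                  # raises InvalidSignature if tampered
    decryptor = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).decryptor()
    return decryptor.update(ciphertext) + decryptor.finalize()

nonce, ct, tag = protect(b"chunk destined for a storage provider")
print(recover(nonce, ct, tag))
```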
Metadata Database: The metadata database collects all relevant information needed to reconstruct and access the data stored in the cloud. Overall, this includes the configuration parameters of the data dispersion and encryption units. Thus, the metadata database is absolutely irreplaceable for the correct functioning of our storage integrator.
Management Console:
The management console implements a very straightforward web-based user interface to control SecCSIE's main functions. It provides rudimentary methods with which an enterprise's system administrator can check and restore the vitality of his storage cloud. Figure 5 gives an overview of the management console. Specific monitoring or configuration tasks can be accessed via the menu bar at the top or by clicking the respective component in the architecture overview.
The cloud storage protocol adapter, together with the data dispersion and encryption units, contributes to our overall objective of user-controlled migration. The trustworthy monitoring is accomplished by the storage proxy itself, using the integrity checks provided by the data encryption unit as well as frequent checks of the network accessibility of each storage provider. The management console provides an easy-to-use interface for this process so that one can estimate the reliability of the configured storage providers. Adaptation can be manually triggered in the management console. Furthermore, if an integrity check fails when accessing a file, it can be automatically restored by switching to another storage provider and data chunk. The number of tolerable faults depends on the configuration of the dispersion unit and can thus be adjusted to the user's preferences.
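The retrieval-side behaviour described here, checking integrity and falling back to another provider or chunk when a check fails, can be outlined as follows. The provider interface and names are purely illustrative assumptions.

```python
# Try providers in turn; skip unreachable ones and chunks that fail verification.

def fetch_with_fallback(providers, chunk_id, verify):
    for name, fetch in providers.items():
        try:
            chunk = fetch(chunk_id)
            verify(chunk)                  # e.g. the CMAC check of the encryption unit
            return chunk
        except Exception as err:           # unreachable provider or failed integrity check
            print(f"{name}: {err}; switching provider")
    raise RuntimeError("no provider delivered a valid chunk; trigger reconstruction")

def unreachable(chunk_id):
    raise IOError("timeout")

providers = {"provider-a": unreachable,
             "provider-b": lambda chunk_id: b"valid chunk"}
print(fetch_with_fallback(providers, "chunk-7", verify=lambda chunk: None))
```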
Discussion and Related Work
In contrast to a common hybrid cloud, a Π-Cloud provides the following advantages:
- The user of a Π-Cloud retains full control over his data and services respectively.
- The user gains improved scalability as the Π-Gateway provides dedicated mechanisms to securely externalize data and services according to his security policies.
- The user no longer suffers from a vendor lock-in as the Π-Gateway integrates arbitrary service providers into a homogeneous view.
Thus, we achieve our goal of user-controlled migration into the cloud. The Π-Gateway together with the Π-Cockpit also provides a framework for implementing a sound monitoring system. Together with the Π-P2P Resources framework it provides broad support for adaptation and optimization scenarios.
Being more or less a topic only for industry in the beginning, cloud computing has seen more and more interest from academia over the last three years. Thus, there exist several works with approaches similar to FlexCloud. The recently initiated Cloud@Home project [START_REF] Aversa | The Cloud@Home Project: Towards a New Enhanced Computing Paradigm[END_REF], for example, also aims at clients sharing their resources with the cloud. Although the project also addresses SLAs and QoS, it only sets a minor focus on security. The same applies to Intel's recent cloud initiative [START_REF]Benefits of a Client-aware Cloud[END_REF].
A major part of current research work on cloud computing is about cloud storage. Most theoretical publications in this area, like [START_REF] He | Study on Cloud Storage System Based on Distributed Storage Systems[END_REF][START_REF] Wang | Ensuring data storage security in Cloud Computing[END_REF], apply existing algorithms from cryptography, peer-to-peer networking and coding theory to improve the integrity and availability of cloud storage. More sophisticated approaches like [START_REF] Kamara | Cryptographic Cloud Storage[END_REF] argue for a virtual private storage service that provides confidentiality, integrity and non-repudiation while retaining the main benefits of public cloud storage, i.e. availability, reliability, efficient retrieval and data sharing. However, although promising, these approaches are still impractical to use. Other work tries to predict the required storage space to optimize the resource allocation [START_REF] Bonvin | A self-organized, fault-tolerant and scalable replication scheme for cloud storage[END_REF] or aims at a better integration with existing IT infrastructures [START_REF] Xu | Enabling Cloud Storage to Support Traditional Applications[END_REF]. However, to the best of our knowledge none of these works has presented a usable prototype implementation. On the practical side of the spectrum there are works like [START_REF] Abu-Libdeh | RACS: a case for cloud storage diversity[END_REF] or [START_REF] Schnjakin | Plattform zur Bereitstellung sicherer und hochverfügbarer Speicherressourcen in der Cloud[END_REF]. Both provide an approach similar to ours but only offer web-service-based access to the storage gateway, which complicates integration with existing environments.
Conclusion
We have presented the overall objectives and first results of the FlexCloud project. In general, we are trying to keep the cloud user in control when using cloud services. We aim at providing a platform, the Π-Box, that provides all functionality to span a so-called Π-Cloud for flexible and secure cloud computing applications. As a first evaluation scenario we have chosen the use case of enterprise cloud storage for which we have implemented an initial prototype of the Π-Box called SecCSIE.
Concerning future work, our short-term objectives aim at consolidating the results achieved with SecCSIE. This includes further testing and optimization, especially with respect to performance evaluation. We also plan to improve our monitoring and optimize our data dispersion mechanisms with respect to the user's requirements. Long-term objectives include the generalization of the storage scenario to other service types. We are especially interested in dynamic resource allocation, e.g. by means of peer-to-peer mechanisms. Furthermore, we plan to investigate the implementation and evaluation of user interfaces with respect to the user's role and skills to improve the surveillance and management of the Π-Cloud.
Fig. 1. Support for major cloud computing quality management objectives
Fig. 2. Controlling the cloud with the Π-Box
Fig. 3. Functionality of the Space service platform
Figure 3 summarizes the functionality implemented by Space. In general, Space provides a platform for building marketplaces for contract-bound adaptive service execution. The service marketplace and contract manager components comprise mechanisms for trading services, i.e. offering, searching, configuring, using and rating them. The service hosting environment binds heterogeneous implementation technologies to a unified interface for service deployment, execution and monitoring. Overall, Space provides all necessary functions for the deployment of arbitrary software services on virtualized physical resources provided as IaaS containers.
Fig. 4. Automized confidentiality for cloud storage
Fig. 5. Preliminary management console showing system architecture and data flow
The mentioned cloud types define different deployment models of cloud computing systems. In contrast to public clouds that make services available to the general public, private clouds are operated solely for an organization although the resources used might be outsourced to some service company. Hybrid clouds describe a mixture of public and private cloud, i.e. when users complement their internal IT resources with public ones. The term national clouds describes a scenario, where the location of the cloud resource pool is restricted to one country or legislative eco-system like the EU.
Acknowledgement
The authors would like to express their gratitude to all members of the FlexCloud research group, especially its former member Gerald Hübsch, for many fruitful discussions that contributed to the development of the ideas presented in this paper.
This work has received funding under project number 080949277 by means of the European Regional Development Fund (ERDF), the European Social Fund (ESF) and the German Free State of Saxony. The information in this document is provided as is, and no guarantee or warranty is given that the information is for any particular purpose. | 27,798 | [
"1003311",
"1003312"
] | [
"96520",
"96520"
] |
01481501 | en | [
"info"
] | 2024/03/04 23:41:48 | 2011 | https://inria.hal.science/hal-01481501/file/978-3-642-27585-2_12_Chapter.pdf | Muhammad Rizwan Asghar
email: [email protected]
Mihaela Ion
email: [email protected]
Giovanni Russello
email: [email protected]
Bruno Crispo
email: [email protected]
Securing Data Provenance in the Cloud
Keywords: Secure Data Provenance, Encrypted Cloud Storage, Security, Privacy
Introduction
Motivation
The provenance of sensitive data may reveal some private information. For instance, in the above scenario, we can notice that even if the medical report is protected from unauthorised access, the data provenance still may reveal some information about a patient's sensitive data. That is, an adversary may deduce from the data provenance that the patient might have heart problems considering the fact that a cardiologist has processed patient's medical report. Therefore, in addition to provide protection to the sensitive data, it is vital to make the data provenance secure.
Another motivation for securing data provenance is to provide unforgeability and non-repudiation. For instance, in the aforementioned healthcare scenario, assume that carelessness in reporting has resulted in a mis-diagnosis. In order to escape the investigation of the mis-diagnosis, the person under investigation might try to either forge the medical report with fake data provenance or repudiate his/her involvement in generating the medical report. Moreover, the query to the data provenance and the response should be encrypted; otherwise, that person could eavesdrop on the communication channel to check whether his/her case is being investigated and threaten the auditor. The data, such as the medical report, may be critical; therefore, it should be available at any time from anywhere.
A significant amount of research has been conducted on securing data provenance. For instance, the secure data provenance schemes proposed in [START_REF] Hasan | The case of the fake picasso: preventing history forgery with secure provenance[END_REF][START_REF] Hasan | Preventing history forgery with secure provenance[END_REF] ensure confidentiality by employing state-of-the-art encryption techniques. However, these schemes do not address how an authorised auditor can perform search on data provenance. The scheme proposed in [START_REF] Lu | Secure provenance: the essential of bread and butter of data forensics in cloud computing[END_REF] provides anonymous access in the cloud environment for sharing data among multiple users. The scheme can track the real user if any dispute occurs. However, there is no detail about how the scheme manages data provenance in the cloud. Both [START_REF] Im | Provenance security guarantee from origin up to now in the e-science environment[END_REF] and [START_REF] Davidson | On provenance and privacy[END_REF] assume a trusted infrastructure, restricting the possibility of managing data provenance in cloud environments. Unfortunately, the existing research falls short of securing data provenance while offering search over data provenance stored in the cloud.
Research Contribution
This paper investigates the problem of securing data provenance in the cloud and proposes a scheme that supports encrypted search while protecting the confidentiality of data provenance stored in the cloud. One of the main advantages of the proposed approach is that neither an adversary nor a cloud service provider learns about the data provenance or the query. Summarising, the research contributions of our approach are threefold. First of all, the proposed scheme ensures secure data provenance by providing confidentiality, integrity, non-repudiation, unforgeability and availability in the cloud environment. Second, the proposed solution is capable of handling complex queries involving non-monotonic boolean expressions and range queries. Third, the system entities do not share any keys, and the system is still able to operate without requiring re-encryption even if a compromised user (or auditor) is revoked.
Organisation
The rest of this paper is organised as follows: Section 2 lists the security properties that a secure data provenance scheme should guarantee. Section 3 provides a discussion of existing data provenance schemes based on the security properties listed in Section 2. Section 4 describes the threat model. The proposed approach is described in Section 5. Section 6 focuses on the solution details. Section 7 provides a discussion about how to optimise the performance overheads incurred by the proposed scheme. Finally, Section 8 concludes this paper and gives directions for future work.
Security Properties of a Data Provenance Scheme
A data provenance scheme must fulfil the general data security properties in order to guarantee the trustworthiness. In the context of data provenance, the security properties are described as follows:
-Confidentiality: Data provenance of a sensitive piece of data (that is, the source data) may reveal some private information. Therefore, it is necessary to encrypt not only the source data but also the data provenance. Moreover, a query to and/or a response from the data provenance store may reveal some sensitive information. Thus, both the query and its response must be encrypted in order to guarantee confidentiality on the communication channel. Last but not least, if data provenance is stored in an outsourced environment such as the cloud, then the data provenance scheme must guarantee that neither the stored information nor the query and response mechanism reveals any sensitive information while storing data provenance or performing search operations.
-Integrity: The data provenance is immutable. Therefore, integrity must be ensured by preventing any kind of unauthorised modification in order to obtain trustworthy data provenance. Integrity guarantees that data provenance cannot be modified during transmission or on the storage server without being detected.
-Unforgeability: An adversary may forge the data provenance of existing source data with fake data. Unforgeability means that the source data is tightly coupled with its data provenance. In other words, an adversary cannot pair fake data with existing data provenance (or vice versa) without being detected.
-Non-Repudiation: Once a user takes an action, the corresponding data provenance is generated. A user must not be able to deny an action once its data provenance has been recorded. Non-repudiation ensures that the user cannot deny having taken any actions.
-Availability: The data provenance and its corresponding source data might be critical; therefore, they must be available at any time from anywhere. For instance, the life-critical data of a patient is subject to high availability, considering emergency situations that can occur at any time. The availability of the data can be ensured by a public storage service such as one provided by a cloud service provider.
Related Work
In the following subsections, we describe state-of-the-art data provenance schemes, which can be categorised as general data provenance schemes and secure data provenance schemes. The schemes in the former category are designed without taking the security properties into consideration, while the schemes in the latter category explicitly aim at providing certain security properties.
General Data Provenance Schemes
Several systems have been proposed for managing data provenance. Provenance-Aware Storage Systems (PASS) [START_REF] Muniswamy-Reddy | Provenance-aware storage systems[END_REF] is one of the first storage systems aimed at the automatic collection and maintenance of data provenance. PASS collects information flow and workflow details at the operating system level by intercepting system calls. However, PASS does not focus on the security of data provenance. The Open Provenance Model (OPM) [START_REF] Moreau | Provenance and annotation of data and processes[END_REF] is a model that has been designed as a standard. In OPM version 1.1 [START_REF] Moreau | The open provenance model core specification (v1.1)[END_REF], data provenance can be exchanged between systems. Moreover, it defines how to represent data provenance at a very abstract level. The focus of OPM is standardisation; however, it does not take into account the security and privacy issues related to data provenance. Muniswamy-Reddy et al. [START_REF] Muniswamy-Reddy | Provenance for the cloud[END_REF][START_REF] Muniswamy-Reddy | Provenance as first class cloud data[END_REF] explain how to introduce data provenance to a cloud storage server. They define a protocol to prevent forgeability between the data provenance and the source data. However, they leave data provenance security as an open issue. Sar and Cao [START_REF] Sar | Lineage file system[END_REF] propose the Lineage File System, which keeps a record of the data provenance of each file at the file system level. Unfortunately, they do not address the security and privacy aspects of the file system. Buneman et al. [START_REF] Buneman | Why and Where: A Characterization of Data Provenance[END_REF] use the term data provenance to refer to the process of tracing and recording the origin of data and its movements between databases. Data provenance, as defined by [START_REF] Buneman | Data provenance: Some basic issues[END_REF], broadly refers to a description of the origins of a piece of data and the process by which it arrived in the database. They explain why-provenance, i.e. who contributed to or why a tuple is in the output, and where-provenance, i.e. where a piece of data comes from. Unfortunately, they do not focus on the security of data provenance.
Zhou et al. [START_REF] Zhou | Unified declarative platform for secure netwoked information systems[END_REF] use the notion of data provenance to explain the existence of a network state. However, they do not address the security of data provenance. In EXtenSible Provenance Aware Networked systems (ExSPAN) [START_REF] Zhou | Efficient querying and maintenance of network provenance at internet-scale[END_REF], Zhou et al. extend [START_REF] Zhou | Unified declarative platform for secure netwoked information systems[END_REF] and propose ExSPAN which provides the support for queries and maintenance of the network provenance in a distributed environment. However, they leave the issue of protecting the confidentiality and authenticity of provenance information as open.
Secure Data Provenance Schemes
The Secure Provenance (SPROV) scheme [START_REF] Hasan | The case of the fake picasso: preventing history forgery with secure provenance[END_REF][START_REF] Hasan | Preventing history forgery with secure provenance[END_REF] automatically collects data provenance at the application layer. It provides security assurances of confidentiality and integrity of the data provenance. In this scheme, confidentiality is ensured by employing state-of-the-art encryption techniques, while integrity is preserved using the digital signature of the user who takes any actions. Each record in the data provenance includes the signed checksum of the previous record in the chain. For speeding up the auditing, they have introduced the spiral chain, where the auditors can skip verification of records written by users they already trust. However, the SPROV scheme has some limitations. First, it does not provide confidentiality to the source data whose data provenance is being recorded. Second, it does not provide any mechanisms to query data provenance. Third, it assumes that secret keys are never revoked or compromised. Last but not least, it cannot be employed in the cloud as it assumes a trusted infrastructure in order to store data provenance.
Jung and Yeom [START_REF] Im | Provenance security guarantee from origin up to now in the e-science environment[END_REF] propose the Provenance Security from Origin up to Now (PSecOn) scheme for e-science, a cyber laboratory to collaborate and share scientific resources. In an e-science grid, researchers can ensure integrity of the scientific results and corresponding data provenance through the PSecOn scheme. The PSecOn scheme ensures e-science grid availability from anywhere at any time. When an object is created, updated or transferred from one grid to another then the corresponding data provenance is prepared automatically. Each e-science grid has its own public history pool that manages the signature on data provenance, signed with the private key of an e-science grid. The public history pool prevents repudiation of both the data sender and the data receiver. The PSecOn scheme encrypts the source data. It revokes the secret key of a user who is compromised. However, it does not provide any query-response mechanisms to search data provenance. The main drawback of PSecOn is its strong assumption of relying on a trusted infrastructure, restricting the possibility of managing data provenance in the cloud.
Lu et al. [START_REF] Lu | Secure provenance: the essential of bread and butter of data forensics in cloud computing[END_REF] introduce a scheme to manage data provenance in the cloud where data is shared among multiple users. Their scheme provides users with access to the online data. To guarantee confidentiality and integrity, a user encrypts and signs the data while a cloud service provider receives and verifies the signature before storing that data. If the data is in dispute, a cloud service provider can provide the anonymous access information to a trusted authority who uses the master secret key of the system to trace the real user. The shortcoming of this approach is that it only traces the user and does not provide any details about how the data provenance is managed by the cloud service provider.
Aldeco-Perez and Moreau [START_REF] Aldeco | Securing provenance-based audits[END_REF] ensure integrity of data provenance by providing concrete cryptographic constructs. They describe the information flow of an auditable provenance system which consists of four stages including recording provenance, storing provenance, querying provenance and analysing provenance graph in order to answer questions regarding execution of the entities within the system. They ensure integrity at two levels. The first level is when data provenance is recorded and stored while the second level is at the analysis stage. Unfortunately, they do not provide any details about how to provide confidentiality to data provenance.
Braun et al. [START_REF] Braun | Securing provenance[END_REF] focus on a security model of data provenance at an abstract level. They consider data provenance as a causality graph with annotations. They argue that the security of data provenance is different from that of the source data it describes. Therefore, each of these needs different access controls. However, they do not address how to define and enforce these access controls. Tan et al. [START_REF] Tan | Security Issues in a SOA-based Provenance System[END_REF] discuss security issues related to a Service Oriented Architecture (SOA) based provenance system. They address the problem of accessing data provenance for auditors with different access privileges. As a possible solution, they suggest restraining auditors by limiting access to the results of a query using cryptographic techniques. However, there is no concrete solution. Davidson et al. [START_REF] Davidson | On provenance and privacy[END_REF] consider the privacy issue while accessing and searching data provenance. In [START_REF] Davidson | Privacy issues in scientific workflow provenance[END_REF], Davidson et al. formalise the notion of privacy and focus on a mathematical model for producing a privacy-preserving view as the result of an auditor's query. However, their approach is theoretical and there is no concrete construction for addressing security.
Table 1 summarises the existing data provenance schemes based on the security properties listed in Section 2. Currently, no single data provenance scheme guarantees all the security properties listed in Section 2.
Threat Model
This section describes the system entities involved, potential adversaries and possible attacks. The proposed system may include the following entities:
-User: A User is an individual who takes actions on the source data and generates data provenance. Users are managed in the trusted environment. In the healthcare scenario, medical staff members, such as doctors and lab assistants, are Users. -Auditor: An Auditor is the one who audits actions taken by a User. An Auditor also verifies data provenance up to its origin and identifies who took what action on the source data. An Auditor may be an investigator or a regular quality assurance checker who checks processes within an organisation. Auditors are managed in the trusted environment. -Cloud Service Provider (CSP): A CSP is responsible for managing the source data and its corresponding data provenance in the cloud environment. It is assumed that a CSP is honest-but-curious, meaning that it honestly follows the protocol for performing the required actions but is curious to deduce the stored or exchanged data provenance and source data. The CSP guarantees the availability of the provenance store from anywhere at any time.
-Trusted Key Management Authority (KMA):
The KMA is fully trusted and responsible for generating and revoking the cryptographic keys involved. For each authorised entity described above, the KMA generates and transmits the keys securely. The KMA requires few resources and little management effort. Since only a very limited amount of data needs to be protected, securing the KMA is much easier and it can be kept offline most of the time.
The proposed scheme assumes that a CSP will not mount active attacks such as modifying the exchanged messages, the message flow or the stored data without being detected. The main goals of an adversary are to gain information from the data provenance records about the actions performed and the provenance chain, and to modify existing data provenance entries.
The Proposed Approach
The proposed scheme provides the support for storing and searching data provenance in the cloud environment. The proposed scheme aims at providing the security properties listed in Section 2. In the proposed scheme, a CSP manages a Provenance Store to store data provenance. Moreover, the proposed scheme provides the support for storing the source data corresponding to the data provenance. The source data is stored in the Data Store which is also managed by a CSP. The CSP is in the untrusted environment while both the User and the Auditor are in the trusted environment. Figure 1 shows an abstract architecture of the proposed scheme. In the proposed scheme, after a User has taken an action on the source data, he/she (i) sends the corresponding data provenance to the Provenance Store. An Auditor may (1) send a query to the Provenance Store and as a result he/she (2) obtains the Response.
Structure of a Data Provenance Entry
This subsection describes what data provenance may look like. Typically, a provenance record may include, but is not limited to, the following fields:
-Revision: indicates the version number.
-Date and time: indicating when the action was taken.
-User ID: who took the action.
-Action: provides the details of the action taken on the source data. It is divided into four parts: Name, Reason, Description and Location. Name describes what action was taken. Reason states why the action was taken. Description gives additional information that may include how the action was taken. Location indicates where the action was taken. -Previous Revision: indicates the version number of the previous action taken on the same source data. -Hash: the hash of the current source data after the action has been taken. This guarantees unforgeability. -Signature: obtained by signing the hash of all the above fields with the private key of the User who took the action. This ensures integrity and non-repudiation.
Once a User takes an action, the corresponding data provenance entry is sent to the Provenance Store. Table 2 illustrates what a typical data provenance entry looks like. The first entry in Table 2 has revision 1 with date 01-01-08 and time 14:40:30 hrs, where the action was taken by Alice, who created a medical report when a patient visited her clinic located in Trento. The previous revision of this data provenance is 0 since it is the first entry. Bob adds the details of the blood test after that patient has visited his lab in Rovereto on 02-01-08 at 09:30:00 hrs. The previous revision corresponds to 1 as Bob is appending to the existing medical report. Each entry includes the hash of the corresponding source data and the signature of Alice and Bob on entries 1 and 2, respectively.
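To make the record layout concrete, the following sketch shows one possible way to represent such an entry as a plain data structure. It is an illustration only and not part of the proposed scheme; the class and field names are our own and simply mirror the fields listed above.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Action:
    name: str          # what action was taken, e.g. "Create"
    reason: str        # why it was taken, e.g. "Clinic Visit"
    description: str   # additional information, e.g. "Medical Report"
    location: str      # where it was taken, e.g. "Trento"

@dataclass
class ProvenanceEntry:
    revision: int            # version number of this entry
    timestamp: datetime      # when the action was taken
    user_id: str             # who took the action
    action: Action           # details of the action
    previous_revision: int   # revision of the previous action on the same source data
    data_hash: bytes         # hash of the source data after the action (unforgeability)
    signature: bytes         # user's signature over all fields above (integrity, non-repudiation)
```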
In order to support the search, for an Auditor, on the encrypted data provenance stored in the Provenance Store, each field of the data provenance entry is transformed into string or numerical attributes. One string attribute represents a single element while a numerical attribute of size n bits represents n elements. In the proposed scheme, we consider that the maximum revision number possible is represented by a numerical attribute of size m. For ease of understanding, let us assume that the value of m is 4. The first entry in Table 2 contains the revision with value 1, which is 0001 in a 4-bit representation. This can be transformed into 4 elements, i.e., 0***, *0**, **0* and ***1. The date can be considered as 3 numerical attributes, the first numerical attribute to represent day in 5 bits, the second numerical attribute to represent month in 4 bits and the third numerical attribute to represent year in 7 bits. Similarly, the time can be considered as 3 numerical attributes, the first numerical attribute to represent hour in 5 bits, the second numerical attribute to represent minute in 6 bits and the third numerical attribute to represent second in 6 bits. The user ID is a string attribute. Each sub-field of action can be treated as a string attribute. The previous revision is again a numerical attribute of size m. In the proposed scheme, we omit the search support for the hash and the signature fields as we assume that an Auditor cannot query based on these fields as these are just large numbers of size X and Y bits, respectively. Typically, one can have both the User and the Auditor roles simultaneously.
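The bag-of-bits transformation of a numerical attribute can be sketched as follows. This is only an illustrative encoding in Python; the function name and the wildcard string representation of the elements are our own choices.

```python
def numeric_to_elements(value, n_bits):
    """Encode an n-bit numerical attribute as n bag-of-bits elements.

    Each element fixes exactly one bit position and wildcards the rest,
    e.g. value 1 with n_bits=4 -> ['0***', '*0**', '**0*', '***1'].
    """
    assert 0 <= value < 2 ** n_bits
    bits = format(value, f"0{n_bits}b")
    return [
        "*" * i + bits[i] + "*" * (n_bits - i - 1)
        for i in range(n_bits)
    ]

# Example: the first provenance entry from Table 2 (revision = 1, m = 4 bits)
print(numeric_to_elements(1, 4))   # ['0***', '*0**', '**0*', '***1']
```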
The source data is stored in the Data Store managed by the CSP. For each revision in the Provenance Store, there is a corresponding data item in the Data Store. In other words, the Data Store maintains a table containing two columns: one column to keep the revision while the other to store the source data item after the action has been taken.
Query Representation
This section provides an informal description of the query representation used in the proposed scheme. To represent the query, we use a tree structure similar to the one used in [START_REF] Bethencourt | Ciphertext-policy attribute-based encryption[END_REF]. The tree structure of the query allows an Auditor to express conjunctions and disjunctions of equalities and inequalities. Internal nodes of the tree represent AND and OR gates while leaf nodes represent the values of conditional predicates. The tree employs a bag-of-bits representation in order to support comparisons between numerical values. Let us consider that an Auditor sends the following query: search all actions taken by Bob in Trento with previous revision (PR) between 1 and 4. Alternatively, this query can be written as follows: UserID = Bob AND Action.Location = Trento AND PR ≥ 1 AND PR ≤ 4. The query is illustrated in Figure 2.
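One possible in-memory representation of such a threshold-gate query tree is sketched below. This is an assumption-laden illustration: the exact decomposition of the range predicates PR ≥ 1 and PR ≤ 4 into per-bit leaf comparisons may differ from the scheme's actual encoding, and the class names are ours.

```python
class Node:
    """Threshold gate: satisfied when at least `k` of its children are satisfied."""
    def __init__(self, k, children):
        self.k = k
        self.children = children

class Leaf:
    """A single string comparison or one bit of a numerical comparison."""
    def __init__(self, element):
        self.element = element   # e.g. 'UserID=Bob' or 'PR:*0**'

def AND(*children): return Node(len(children), list(children))   # k = number of children
def OR(*children):  return Node(1, list(children))                # k = 1

# PR >= 1 on a 4-bit attribute: at least one of the four bits is 1.
pr_ge_1 = OR(Leaf("PR:1***"), Leaf("PR:*1**"), Leaf("PR:**1*"), Leaf("PR:***1"))
# PR <= 4: either the two high bits are 0 (values 0..3), or the value is exactly 0100 (= 4).
pr_le_4 = OR(AND(Leaf("PR:0***"), Leaf("PR:*0**")),
             AND(Leaf("PR:0***"), Leaf("PR:*1**"), Leaf("PR:**0*"), Leaf("PR:***0")))

query = AND(Leaf("UserID=Bob"), Leaf("Location=Trento"), pr_ge_1, pr_le_4)
```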
Solution Details
The main idea is to perform encryption for providing confidentiality to the data provenance both on the communication channel and in the cloud. In order to search the data provenance, an Auditor sends a query that is also encrypted. In fact, the search is performed in an encrypted manner, which is based on the Searchable Data Encryption (SDE) scheme proposed by Dong et al. [START_REF] Dong | Shared and searchable encrypted data for untrusted servers[END_REF]. The SDE scheme allows an untrusted server to perform search on the encrypted data without revealing information about the data provenance or the query. The advantage of this scheme is the multi-user support without requiring any key sharing between Auditors/Users. In other words, each Auditor or User has a unique set of keys. The data provenance encrypted by a User can be searched and decrypted by an authorised Auditor. However, the SDE scheme in [START_REF] Dong | Shared and searchable encrypted data for untrusted servers[END_REF] only allows an Auditor to perform queries containing comparisons based on equalities. For supporting complex queries, we extend the SDE scheme to handle complex boolean expressions such as non-conjunctive and range queries in multi-user settings.
In addition to providing support for search on the encrypted data provenance, each entry of the data provenance is encrypted using the Proxy Encryption (PE) scheme proposed in [START_REF] Dong | Shared and searchable encrypted data for untrusted servers[END_REF]. In other words, an Auditor performs search on the encrypted data provenance using the extended version of the SDE scheme, while the searched data corresponding to the query is accessed via the PE scheme. Furthermore, the source data corresponding to the data provenance is also encrypted using the PE scheme. The proposed solution guarantees all the security properties listed in Table 1, which the existing research on data provenance lacks.
In general, there are three main phases in the data provenance life cycle: the first phase is storing the data provenance in the Provenance Store; the second phase is searching the data provenance when an Auditor sends a query; and the third phase is accessing the data provenance. In the following, we provide the details of the algorithms involved in each phase, where both the SDE and the PE schemes are used.
Initialisation
In this phase, the proposed scheme is initialised by generating the required keying material for all entities involved in the system.
-Init(1^k): The Trusted KMA takes as input the security parameter 1^k and outputs two prime numbers p, q such that q divides p-1, and a cyclic group G with a generator g such that G is the unique order-q subgroup of Z_p^*. It chooses x ←R Z_q^* and computes h = g^x. Next, it chooses a collision-resistant hash function H, a pseudo-random function f and a random key s for f. Finally, it publicises the public parameters Params = (G, g, q, h, H, f) and keeps the master secret key MSK = (x, s) securely.
-KeyGen(MSK, i): For each User (or Auditor) i, the Trusted KMA chooses x_i1 ←R Z_q^* and computes x_i2 = x - x_i1. It securely transmits K^u_i = (x_i1, s) to the User (or Auditor) i and K^s_i = (i, x_i2) to the CSP, which inserts K^s_i into the Key Store, that is, KS = KS ∪ {K^s_i}.
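The key-splitting idea behind Init and KeyGen can be illustrated with the following sketch. It uses deliberately tiny, insecure toy parameters and HMAC as a stand-in for the pseudo-random function f; it is not a faithful implementation of the scheme, only an illustration of how the master secret x is shared between a user-side and a CSP-side part.

```python
import secrets
import hmac
from hashlib import sha256

# --- Init(1^k): toy group parameters, illustration only (far too small to be secure) ---
p, q, g = 23, 11, 4                # q divides p-1; g generates the order-q subgroup of Z_p*
x = secrets.randbelow(q - 1) + 1   # master secret x in Z_q*
h = pow(g, x, p)                   # public value h = g^x
s = secrets.token_bytes(32)        # key for the pseudo-random function f

def f(key, m):
    """Pseudo-random function f_s mapping an element into Z_q* (HMAC-based stand-in)."""
    return int.from_bytes(hmac.new(key, m.encode(), sha256).digest(), "big") % (q - 1) + 1

def H(value):
    """Collision-resistant hash on group elements (SHA-256 stand-in)."""
    return sha256(str(value).encode()).digest()

# --- KeyGen(MSK, i): split x into a user-side and a server-side share ---
def keygen():
    x_i1 = secrets.randbelow(q - 1) + 1
    x_i2 = (x - x_i1) % q           # x = x_i1 + x_i2 (mod q)
    return (x_i1, s), x_i2          # K_u_i = (x_i1, s) to the user, K_s_i = (i, x_i2) to the CSP
```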
Storing Data Provenance
During this phase, a User takes an action and creates a data provenance entry and the source data, which are encrypted using the SDE and PE schemes. For both the SDE and PE schemes, the first round of encryption is performed by the User while the second round of encryption is performed by the CSP. After this phase, the data provenance entry is stored in the Provenance Store while the source data is stored in the Data Store.
-Hash(D): The User calculates a hash over the source data D and populates the hash field of the data provenance entry with the calculated value.
-Signature(e, K^u_i): The User i calculates a hash H(e) over all fields (except the signature) in a data provenance entry e. Then, the User populates the signature field of the data provenance entry with the value g^(-x_i1) · H(e).

This paper has investigated the problem of securing provenance and presented a scheme that supports encrypted search while protecting the confidentiality of data provenance stored in the cloud, given the assumption that the CSP is honest-but-curious. The main advantage of our proposed scheme is that neither an adversary nor a cloud service provider learns about the data provenance or the query. The proposed solution is capable of handling complex queries involving non-monotonic boolean expressions and range queries. Finally, the system entities do not share any keys, and even if a compromised User (or Auditor) is revoked, the system is still able to perform its operations without requiring re-encryption.
As future research directions, the proposed solution will be formalised in more rigorous terms to prove its security features. Moreover, a prototype will be developed for estimating the overhead incurred by the cryptographic operations of the proposed scheme. Other long-term research goals are 1) how to apply the scheme in distributed settings and 2) to investigate how to make such an architecture more efficient in terms of query-response time without compromising the security properties.
Fig. 1. An abstract architecture of the proposed scheme
Fig. 2. Query representation
-User-SDE(m, K^u_i): The User encrypts each element m of the fields (except the hash and the signature) of the data provenance entry in order to support encrypted search. The User chooses r ←R Z_q^* and computes c*_i(m) = (ĉ1, ĉ2, ĉ3), where ĉ1 = g^(r+σ) with σ = f_s(m), ĉ2 = ĉ1^(x_i1), and ĉ3 = H(h^r). The User transmits c*_i(m) to the CSP.
-User-PE(m, D, K^u_i): The User encrypts each element m of the fields (except the hash and the signature) of the data provenance entry and the source data D. The User chooses r ←R Z_q^* and outputs the ciphertexts PE*_i(m) = (g^r, g^(r·x_i1) · m) and PE*_i(D) = (g^r, g^(r·x_i1) · D), which are sent to the CSP.
-Server-SDE(i, c*_i(m), K^s_i): The CSP retrieves the key K^s_i corresponding to the User i from the Key Store. Each User-encrypted element c*_i(m) is re-encrypted to c(m) = (c1, c2), where c1 = (ĉ1)^(x_i2) · ĉ2 = ĉ1^(x_i1+x_i2) = (g^(r+σ))^x = h^(r+σ) and c2 = ĉ3 = H(h^r). The re-encrypted data provenance entry c(e) (where each c(m) ∈ c(e)) is stored in the Provenance Store.
-Server-PE(i, PE*_i(m), PE*_i(D), K^s_i): The CSP retrieves the key K^s_i corresponding to the User i from the Key Store. Each User-encrypted element PE*_i(m) is re-encrypted to PE(m) = (p1, p2), where p1 = g^r and p2 = (g^r)^(x_i2) · g^(r·x_i1) · m = g^(r(x_i1+x_i2)) · m = g^(r·x) · m. Similarly, PE*_i(D) is re-encrypted to PE(D). Finally, the ciphertexts PE(e) (where PE(m) ∈ PE(e)) and PE(D) are sent to and stored in the Provenance Store and the Data Store, respectively.
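A minimal sketch of these two encryption rounds is given below, again with insecure toy parameters (the same caveats as in the Init/KeyGen sketch apply); the encoding of a message as a group element is glossed over, and m_elem is simply assumed to already be a member of the subgroup.

```python
import secrets
import hmac
from hashlib import sha256

# Toy parameters in the spirit of the Init/KeyGen sketch (insecure sizes, illustration only).
p, q, g = 23, 11, 4
x_i1, x_i2 = 3, 4                  # the user's share and the CSP's share, x = x_i1 + x_i2
x = x_i1 + x_i2
h = pow(g, x, p)
s = b"prf-key"

def f(m): return int.from_bytes(hmac.new(s, m.encode(), sha256).digest(), "big") % (q - 1) + 1
def H(v): return sha256(str(v).encode()).digest()

def user_sde(m):
    """User-SDE: first encryption round of a searchable element m."""
    r = secrets.randbelow(q - 1) + 1
    sigma = f(m)
    c1_hat = pow(g, r + sigma, p)
    return c1_hat, pow(c1_hat, x_i1, p), H(pow(h, r, p))

def server_sde(c1_hat, c2_hat, c3_hat):
    """Server-SDE: the CSP completes the encryption with its share x_i2."""
    c1 = (pow(c1_hat, x_i2, p) * c2_hat) % p   # = h^(r + sigma)
    return c1, c3_hat                          # stored in the Provenance Store

def user_pe(m_elem):
    """User-PE: first round of proxy encryption of an element encoded in the group."""
    r = secrets.randbelow(q - 1) + 1
    return pow(g, r, p), (pow(g, r * x_i1, p) * m_elem) % p

def server_pe(p1, p2):
    """Server-PE: re-encryption by the CSP, yielding (g^r, h^r * m_elem)."""
    return p1, (pow(p1, x_i2, p) * p2) % p

stored = server_sde(*user_sde("Alice"))        # searchable ciphertext for the User ID field
```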
Table 1. Summary of data provenance schemes
Scheme Year Application Domain Confidentiality Repudiation No Non-Integrity Unforgeability No No No Yes -Yes -
Lu et al. [12] Cloud Computing Yes - - Yes Yes Yes Yes Yes
PSecON [11] E-Science Yes - - Yes Yes Yes Yes Yes
Davidson et al. [6, 7] 2010 - - - - - - - - - -
Table 2. Representation of data provenance
Revision | Date | Time | User ID | Action: Name | Action: Reason | Action: Description | Action: Location | Previous Revision | Hash | Signature
1 | 01-01-08 | 14:40:30 | Alice | Create | Clinic Visit | Medical Report | Trento | 0 | X bits | Y bits
2 | 02-01-08 | 09:30:00 | Bob | Append | Lab Visit | Blood Test | Rovereto | 1 | X bits | Y bits
...
Typically, X and Y may be of size 128, 512 or more.
'-' means not applicable
This is the only scheme that follows a data-oriented approach, while the rest of the schemes in this paper are based on a process-oriented approach.
The Key Store is initialised as KS = ∅.
In the CSP, each entry c(e) of the data provenance corresponds with the ciphertexts PE(e) and PE(D).
Acknowledgment
The work of the first and third authors is supported by the EU FP7 programme, Research Grant 257063 (project Endorse).
Availability Provenance Query Response Source Data
Buneman et al. [START_REF] Buneman | Why and Where: A Characterization of Data Provenance[END_REF] Database System -
Searching Data Provenance
During this phase, an Auditor encrypts the search query and then sends it to the CSP. The CSP performs the encrypted matching against the data provenance entries in the Provenance Store.
-Auditor-Query-Enc(Q, K^u_j): An Auditor transforms the query into a tree structure Q, as shown in Figure 2. The tree structure Q denotes a set of string and numerical comparisons. Each non-leaf node a in Q represents a threshold gate with threshold value k_a denoting the number of its children subtrees that must be satisfied, where a has c_a children subtrees in total, i.e. 1 ≤ k_a ≤ c_a. If k_a = 1, the threshold gate is an OR, and if k_a = c_a, the threshold gate is an AND. Each leaf node a represents either a string comparison or a subpart of a numerical comparison (because one numerical comparison of size n bits is represented by at most n leaf nodes) with a threshold value k_a = 1. For every leaf node a ∈ Q, the Auditor chooses r ←R Z_q^* and computes the trapdoor T_j(a) = (t_1, t_2), where t_1 = g^(-r) · g^σ and t_2 = h^r · g^(-x_j1·r) · g^(x_j1·σ) = g^(x_j2·r) · g^(x_j1·σ), with σ = f_s(a). The Auditor encrypts all leaf nodes in Q and sends the encrypted tree structure T*_j(Q) to the CSP.
The CSP receives the encrypted tree structure T*_j(Q). Next, it retrieves the key K^s_j corresponding to the Auditor j and the data provenance entries. For each encrypted entry c(e), the CSP runs a recursive algorithm starting from the root node of T*_j(Q). For each non-leaf node, it checks if the number of children that are satisfied is greater than or equal to the threshold value of the node. If so, the node is marked as satisfied. For each encrypted leaf node T*_j(a) ∈ T*_j(Q), there may exist a corresponding encrypted element c(m) ∈ c(e). In order to perform this check, it computes T = t_1^(x_j2) · t_2 and tests whether c_2 = H(c_1 · T^(-1)). If so, the leaf node is marked as satisfied. After running the recursive algorithm, if the root node of the encrypted tree structure T*_j(Q) is marked as satisfied, then the entry c(e) is marked as matched. This algorithm is performed for each encrypted entry c(e) in the Provenance Store, and it finds the sets of ciphertexts PE(e) and PE(D) corresponding to the matched entries.
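The leaf-level matching test can be illustrated as follows, reusing the toy parameters from the earlier sketches. Python 3.8+ is assumed for modular inverses via pow with a negative exponent, and in such a tiny group spurious matches between distinct elements are possible; realistically sized parameters avoid this.

```python
import secrets
import hmac
from hashlib import sha256

# Toy parameters consistent with the earlier sketches (insecure sizes, illustration only).
p, q, g = 23, 11, 4
x = 7
h = pow(g, x, p)
s = b"prf-key"
x_j1, x_j2 = 5, (x - 5) % q        # the auditor's share and the CSP's share for that auditor

def f(m): return int.from_bytes(hmac.new(s, m.encode(), sha256).digest(), "big") % (q - 1) + 1
def H(v): return sha256(str(v).encode()).digest()

def auditor_trapdoor(a):
    """Auditor-Query-Enc for one leaf value a: T_j(a) = (t1, t2)."""
    sigma, r = f(a), secrets.randbelow(q - 1) + 1
    t1 = (pow(g, -r, p) * pow(g, sigma, p)) % p                              # g^(sigma - r)
    t2 = (pow(h, r, p) * pow(g, -x_j1 * r, p) * pow(g, x_j1 * sigma, p)) % p # g^(x_j2*r + x_j1*sigma)
    return t1, t2

def server_leaf_match(trapdoor, stored):
    """CSP-side test of one encrypted element (c1, c2) against a leaf trapdoor."""
    t1, t2 = trapdoor
    c1, c2 = stored                                  # c1 = h^(r' + sigma'), c2 = H(h^r')
    T = (pow(t1, x_j2, p) * t2) % p                  # = h^sigma
    return c2 == H((c1 * pow(T, -1, p)) % p)         # true iff sigma == sigma'

# Demo against a ciphertext stored as in Server-SDE (element "Bob"):
r2, sigma2 = 6, f("Bob")
stored = (pow(h, r2 + sigma2, p), H(pow(h, r2, p)))
print(server_leaf_match(auditor_trapdoor("Bob"), stored))   # True
```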
Accessing Data Provenance
During this phase, the data provenance entries can be accessed and then ultimately be verified by the Auditor. First, the CSP performs one round of decryption for sets of ciphertexts found during the search. The Auditor performs the second round of decryption to access data provenance and its corresponding source data. Furthermore, an Auditor gets the verification key from the CSP in order to verify the signature on the data provenance entries.
-Server-Pre-Dec(j, PE(e), PE(D), K^s_j): The CSP retrieves the key K^s_j corresponding to the Auditor j from the Key Store. Each encrypted element PE(m) ∈ PE(e) is pre-decrypted by the CSP as PE_j(m) = (p̃1, p̃2), where p̃1 = g^r and p̃2 = g^(r·x) · m · (g^r)^(-x_j2) = g^(r(x - x_j2)) · m = g^(r·x_j1) · m. Similarly, PE(D) is pre-decrypted by the CSP as PE_j(D). Finally, the ciphertexts PE_j(e) and PE_j(D) are sent to the Auditor.
-Auditor-Dec(PE_j(e), PE_j(D), K^u_j): Finally, the Auditor decrypts each ciphertext PE_j(m) ∈ PE_j(e) as follows: p̃2 · p̃1^(-x_j1) = g^(r·x_j1) · m · g^(-r·x_j1) = m. Similarly, the source data D is retrieved from PE_j(D) (a sketch of these decryption rounds, together with the signature verification, is given after this list of algorithms).
-Get-Verification-Key(i): In the proposed solution, an Auditor may verify the signature by first obtaining the verification key of the User who took the action. This algorithm is run by the CSP. It takes as input the User ID i. For calculating the verification key, the CSP first obtains the key K^s_i = (i, x_i2) corresponding to the User i and then calculates the verification key as g^(x_i1) = h · g^(-x_i2).
-Verify-Signature-Key(e, g^(-x_i1) · H(e'), g^(x_i1)): Given the signature g^(-x_i1) · H(e') over the data provenance entry e' and the verification key g^(x_i1), an Auditor can verify the signature by first calculating g^(-x_i1) · H(e') · g^(x_i1) = H(e'). Next, the Auditor calculates the hash over the data provenance entry e as H(e). Finally, the Auditor checks whether H(e) = H(e'). If so, the signature verification is successful and this algorithm returns true, and false otherwise.
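The two decryption rounds and the signature check can be sketched together as follows. As before, the parameters are toy-sized and insecure, the hash-to-integer encoding is our own simplification, and Python 3.8+ is assumed for negative exponents in pow.

```python
from hashlib import sha256

# Toy parameters consistent with the earlier sketches (insecure sizes, illustration only).
p, q, g = 23, 11, 4
x = 7
h = pow(g, x, p)
x_i1, x_i2 = 3, (x - 3) % q   # a user's share and the CSP's share (x = x_i1 + x_i2)
x_j1, x_j2 = 5, (x - 5) % q   # an auditor's share and the CSP's share for that auditor

# --- Accessing a proxy-encrypted element in two rounds ---
def server_pre_dec(cipher):
    """Server-Pre-Dec: the CSP strips its share x_j2 from (g^r, h^r * m)."""
    p1, p2 = cipher
    return p1, (p2 * pow(p1, -x_j2, p)) % p          # (g^r, g^(r*x_j1) * m)

def auditor_dec(pre):
    """Auditor-Dec: the Auditor removes x_j1 and recovers the element m."""
    p1, p2 = pre
    return (p2 * pow(p1, -x_j1, p)) % p

# --- Signature creation and verification with the split user key ---
def H_int(entry):
    """Hash over all fields of a provenance entry, read as an integer modulo p."""
    return int.from_bytes(sha256(entry.encode()).digest(), "big") % p

def sign(entry):
    """Signature(e, K_u_i): the User blinds the entry hash with g^(-x_i1)."""
    return (pow(g, -x_i1, p) * H_int(entry)) % p

def verification_key():
    """Get-Verification-Key(i), run by the CSP: g^(x_i1) = h * g^(-x_i2)."""
    return (h * pow(g, -x_i2, p)) % p

def verify(entry, signature, vkey):
    """Verify-Signature-Key: unblind to recover H(e') and compare with H(e)."""
    return (signature * vkey) % p == H_int(entry)

# Demo: decrypt a stored element and check a signed entry.
m, r = pow(g, 9, p), 6
cipher = (pow(g, r, p), (pow(h, r, p) * m) % p)      # as stored by Server-PE
assert auditor_dec(server_pre_dec(cipher)) == m

entry = "1|01-01-08|14:40:30|Alice|Create|Clinic Visit|Medical Report|Trento|0|<hash>"
assert verify(entry, sign(entry), verification_key())
```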
Revocation
In the proposed solution, it is possible to revoke a compromised User (or Auditor). This is accomplished by the CSP.
-Revoke(i): Given the User (or Auditor) i, the CSP removes the corresponding key K^s_i from the Key Store, that is, KS = KS \ {K^s_i}. The CSP therefore needs to check whether a User or an Auditor has been revoked before invoking any actions, including storing, searching and accessing the data provenance.
Discussion
This section provides a discussion about how to optimise the storage and performance overheads incurred by the proposed scheme.
Storage Optimisation
Storage can be optimised if changes to the source data are stored as differences (as is done in version control systems such as Subversion) instead of keeping a complete source data item for each revision. In other words, the complete source data item is stored against the first revision, while for subsequent revisions only the changes are stored.
Performance Optimisation
In order to improve search performance, indexing and partitioning of the data provenance can be applied; however, this is left for future work. Moreover, the performance at the Auditor level can be improved by maintaining a list of verification keys of the Users who take actions very frequently, instead of interacting with the CSP each time.
"1003309",
"1003313",
"1003310",
"978113"
] | [
"110155",
"304024",
"110155",
"304024",
"110155",
"304024"
] |
01481502 | en | [
"info"
] | 2024/03/04 23:41:48 | 2011 | https://inria.hal.science/hal-01481502/file/978-3-642-27585-2_1_Chapter.pdf | Erik Wästlund
email: [email protected]
Julio Angulo
email: [email protected]
Simone Fischer-Hübner
email: [email protected]
Evoking Comprehensive Mental Models of Anonymous Credentials
Keywords: Credential Selection, Anonymous Credentials, Mental Models, Usability
Anonymous credentials are a fundamental technology for preserving end users' privacy by enforcing data minimization for online applications. However, the design of user-friendly interfaces that convey their privacy benefits to users is still a major challenge. Users are still unfamiliar with the new and rather complex concept of anonymous credentials, since no obvious real-world analogies exist that can help them create the correct mental models. In this paper we explore different ways in which suitable mental models of the data minimization property of anonymous credentials can be evoked in end users. To achieve this, we investigate three different approaches in the context of an e-shopping scenario: a card-based approach, an attribute-based approach and an adapted card-based approach. Results show that the adapted card-based approach is a good approach towards evoking the right mental models for anonymous credential applications. However, better design paradigms are still needed to make users understand that attributes can be used to satisfy conditions without revealing the value of the attributes themselves.
Introduction
Data minimization is a fundamental privacy principle which requires that applications and services should use only the minimal amount of personal data necessary to carry out an online transaction. A key technology for enforcing the principle of data minimization for online applications is anonymous credentials [START_REF] Brands | Rethinking Public Key Infrastructure and Digital certificates -Building in Privacy[END_REF], [START_REF] Camenisch | Efficient non-transferable anonymous multi-show credential system with optional anonymity revocation[END_REF], [START_REF] Chaum | Security without identification: Transaction systems to make big brother obsolete[END_REF]. In contrast to traditional electronic credentials, which require the disclosure of all attributes of the credential to a service provider when performing an online transaction, anonymous credentials let users reveal any possible subset of attributes of the credential, characteristics of these attributes, or prove possession of the credential without revealing the credential itself, thus providing users with the right of anonymity and the protection of their privacy.
Even though Microsoft's U-Prove and IBM's Idemix anonymous credential technologies are currently being introduced into commercial and open source systems and products, the design of easily understandable interfaces for introducing these concepts to end users is a major challenge, since end users are not yet familiar with this rather new and complex technology and no obvious real-world analogies exist. Besides, users have grown accustomed to believing that their identity cannot remain anonymous when acting online and have learned from experience or word of mouth that unwanted consequences can come from distributing their information to some service providers on the Internet.
In other words, people do not yet possess the right mental models regarding how anonymous credentials work and how anonymous credentials can be used to, for example, protect their personal information.
In order to tackle the challenge of designing interfaces that convey the principle of data minimization with the use of anonymous credentials, we have, within the scope of the EU FP7 project PrimeLife1 and the Swedish U-PrIM project 2 , investigated the way mental models of average users work with regards to anonymous credentials and have tried to evoke their correct mental models with various experiments [START_REF] Wästlund | Privacy and Identity Management for Life, chap. The Users' Mental Models' Effect on their Comprehension of Anonymous Credentials[END_REF].
In this article, we first provide background information on the concepts of anonymous credentials and mental models and then present previous related work. Then, we describe the experiments that were carried out using three different approaches, and present the analyses and interpretations of the collected data. Finally, we provide conclusions in the last section.
Background
In this section we present a description of the concept of anonymous credentials and the definition of mental models.
Anonymous Credentials
A traditional credential (also called a certificate or attribute certificate) is a set of personal identifiable attributes which is signed by a certifying trust party and is bound to its owner by cryptographic means (e.g., by requiring the owner's secret key to use the credential). With a credential system, users can obtain a credential from the certifying party and demonstrate possession of these credentials at the moment of carrying out online transactions. In terms of privacy, the use of (traditional or anonymous) credentials is better than the direct request to the certifying party, as this prevents the certifying party from profiling the user. When using traditional credentials, all of the attributes contained in the credential are disclosed to the service provider when proving certain properties during online transactions. This contradicts the privacy principle of data minimization and can also lead to unwanted user profiling by the service provider.
Anonymous credentials (also called private certificates) were first introduced by Chaum [START_REF] Chaum | Security without identification: Transaction systems to make big brother obsolete[END_REF] and later enhanced by Brands [START_REF] Brands | Rethinking Public Key Infrastructure and Digital certificates -Building in Privacy[END_REF] and Camenisch & Lysyanskaya [START_REF] Camenisch | Efficient non-transferable anonymous multi-show credential system with optional anonymity revocation[END_REF], and have stronger privacy properties than traditional credentials. Anonymous credentials implement the property of data minimization by allowing users to select a subset of the attributes of the credential or to prove the possession of a credential with specific properties without revealing the credential itself or any other additional information. For instance, a user who has a governmentally issued anonymous passport credential (with attributes that are typically stored in a passport, such as the date of birth) can prove the fact that she is older than 18 without revealing her actual age, her date of birth or any other attribute of the credential, such as her name or personal identification number. In other words, anonymous credentials allow the selective disclosure of identity information encoded into the credential. However, information about the certifier is also revealed (if the user uses, for instance, a governmentally issued credential, information about the user's government, i.e. his or her nationality, is also revealed as meta-information); illustrating the disclosure of this type of meta-information to end users poses further HCI challenges.
In addition, the Idemix anonymous credential system also has the property that multiple uses of the same credential cannot be linked to each other. If, for instance, the user later wants to rent another video which is only permitted for adults at the same online video shop, she can use the same anonymous credential as proof that she is over 18 without the video shop being able to recognize that the two proofs are based on the same credential. This means that the two rental transactions cannot be linked to the same person. The main focus of our usability studies, which we present in this paper, has so far been on the comprehension of the selective data disclosure property.
Mental Models
Mental models are people's perceptions or understandings of how a system works. A mental model provides a deep understanding of people's motivations and thought processes [START_REF] Johnson-Laird | Mental models: towards a cognitive science of language, inference, and consciousness[END_REF], [START_REF] Jonassen | Operationalizing mental models: strategies for assessing mental models to support meaningful learning and design-supportive learning environments[END_REF], [START_REF] Young | Mental Models: Aligning Design Strategy with Human Behavior[END_REF]. One of the major obstacles when introducing new technology to the general public is presenting the technology in terms that the average user will comprehend without having to resort to the advice of an expert or complicated instruction manuals. For users to adopt novel technologies, they have to comprehend their advantages, disadvantages, and the benefits that the technology can bring into their daily lives. The introduction of incremental innovations is often framed in terms of previously existing systems or objects that users are already familiar with. For example, people can generate a mental picture of how fast, functional, aesthetic, and effective the new system is in comparison with its predecessors. Then, they are able to adjust their already existing mental models accordingly, without great effort. However, when it comes to radical changes or completely new innovations, the adaptation of the mental model is not always an easy task. This is why designing interfaces that support the relatively new anonymous credential technology is such a demanding challenge for user interface (UI) designers.
In this work, we explore different user interface approaches based on three different metaphors (the card-based, attribute-based and adapted card-based approaches) that we have developed in order to get users to start thinking in the right direction when it comes to anonymous credentials and their private information on the Internet. In other words, our aim is to investigate which of these approaches works better at evoking a comprehensive mental model of anonymous credentials.
Related work
Within the scope of the PRIME project, our usability tests of PRIME prototypes revealed that users often did not trust privacy-enhancing technologies and their data minimization properties, as the possibility to use Internet services anonymously did not fit their mental model of Internet technology [START_REF] Camenisch | Trust in PRIME[END_REF], [START_REF] Pettersson | Making PRIME usable[END_REF]. Camenisch et al. [START_REF] Camenisch | Securing user inputs for the web[END_REF] discuss contextual, browser-integrated user interfaces for using anonymous credential systems. In user tests of anonymous credential selection mockups developed within the PRIME project, test subjects were asked to explain what information was actually given to a web site that demanded some proof of age when a passport was used to produce that proof (more precisely, the phrase Proof of ''age > 18'' [built on ''Swedish Passport''] was used as a menu selection choice in the mockup). The test results showed that the test participants assumed that all data normally visible in the physical item referred to (i.e., a passport) was also disclosed to the web site [START_REF] Pettersson | HCI Guidelines[END_REF]. Hence, previous HCI studies in the PRIME project already showed that designing user interfaces supporting the comprehension of anonymous credentials and their selective disclosure property is a challenging task. As far as we are aware, not many other studies have considered the usability of anonymous credentials, nor the way people perceive this relatively new technology.
More than a decade ago, Whitten & Tygar [START_REF] Whitten | Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0[END_REF] discussed the related problem that the standard model of user interface design is not sufficient to make computer security usable to people who are not already knowledgeable in that area. They conclude that a valid conceptual model of security has to be established and must be quickly and effectively communicated to the user.
Methodology
As part of the PrimeLife project, we have conducted a series of experiments based on interactive mockups for an e-shopping scenario that used anonymous credentials technology for proving that the user holds a credit card and another credential (passport or driving license) with the same name. During the experiments we used three different approaches to evoke different mental models of anonymous credentials and observed which of these would best fit the representation of an actual anonymous credentials system. The different UIs were then tested at various instances with individuals coming from different age groups, backgrounds, and genders. Many of them were employees or students from diverse disciplines at Karlstad University (KAU) and many others were recruited at various locations, such as Karlstad's train station. The methodologies, test designs, and results from the first two approaches, i.e. the card-based and attribute-based approaches, have been reported in more detail by Wästlund & Fischer-Hübner [START_REF] Wästlund | Privacy and Identity Management for Life, chap. The Users' Mental Models' Effect on their Comprehension of Anonymous Credentials[END_REF]. We present an overview of the results of those two previous approaches here. Then, we introduce the third concept of an adapted card-based approach, the description of its interface and the results from testing. The first design concept was based on the idea that people are already acquainted with the way cards work in the non-digital world. A person can usually pay for a product at a store with a credit card and use an identification card to verify their identity, such as their driving license or passport. For that reason, a card-based metaphor was used, in which test participants were introduced to the concept of electronic credentials as being images of the ordinary "cards" they are already familiar with. However, in the non-digital world, cards do not possess the property of data minimization, thus the challenge lay in how to convey the idea of selective disclosure to users through an interface.
The card-based approach
A number of mockup iterations were implemented using this metaphor in order to test the different levels of understanding of the concept of anonymous credentials. In the initial iterations, the property of data minimization was illustrated with an animation that "cut out" selected attributes from the card and transitioned them into a newly created virtual card, which was to be revealed to a service provider (Figure 1). The idea was to make users visually aware of the pieces of information that were being cut out and moved into the new virtual card, making it clearer that only the information on the virtual card was being sent to the service provider. In later iterations the attributes of the card which were not to be disclosed to the service provider were blacked out, leaving only the card attributes to be sent visible to the user (Figure 2). In total, seven design iterations were carried out with slight improvements at every iteration cycle, testing the alternatives with five test participants at a time. Results showed that using this approach, 86% of test participants (30 out of 35) believed that the anonymous credentials would work in the same fashion as the commonly used non-digital plastic credentials. In other words, they thought that more information from the source card (passport or driving license) was sent to the service provider than was really sent. Only 14% of participants understood the principle of data minimization to some extent, indicating that using a card-based metaphor is not an ideal approach to show this concept.
The attribute-based approach
The second design concept was based on what we called the attribute-based approach (Figure 3), in which test participants were told that "attributes" of information were imported from different certifying authorities. Participants could select the authorities that certified certain attributes, and thereby choose the attributes they would like to reveal to the service provider. After they had selected the attributes they were asked to confirm their decision in a second step, before the information was sent. A total of six iterations were made using this attribute-based approach with an average of 8.5 participants per iteration. This time only 33% of the test participants (16 out of 48) did not understand the data minimization property and thought that more information than needed was disclosed to the service provider. However, 67% understood the selective disclosure principle, showing an improvement over the card-based approach. Curiously enough, with the attribute-based approach, some of the test participants made the error of thinking that their personal identification number and address would be disclosed as well, even though these attributes were not part of the e-shopping scenario.
Moreover, post-test interviews also revealed that some of the participants who used the attribute-based UI with the instructions to select a verifying authority believed that their data was being sent via the verifier, who would then be able to trace all transactions they made. Hence, those participants got the wrong impression that the verifier (e.g., the police or the Swedish road authorities) could trace their online activities. An interesting finding in this approach regarded the use of the Swedish personal number. As this number is widely used in Sweden, users anticipated that this number should be present in the transaction portrayed in the scenario, despite the fact that it was neither asked for nor shown anywhere in the interface.
The adapted card-based approach
Our latest design concept is basically a hybrid version of the two previous approaches. The idea was to keep the notion of cards and card selection, since people already accept and comprehend that metaphor, while at the same time emphasizing the data minimization properties of the application. In order to accomplish this, the third approach was based on the idea of an adapted card-based metaphor, in which users were made aware of the fact that the information in their source cards would be adapted to fit the needs of the current online transaction (Figure 4). The idea was to show only the selected information inside the newly created adapted card, and to convey the notion that only the information in this card was sent to the service provider and nothing else. Test design. In order to test this approach, one more interactive mockup was created. The setup for this round of testing was made as consistent as possible with the setup for testing the two previous approaches, using the same e-shopping scenario and the same method for inputting answers to the questions (i.e., participants could freely write their beliefs about what information about them was being sent). This time we tested the users' understanding that a service provider does not need to know the exact value of an attribute in the credential, for example the exact age of the user, but only needs to know whether an attribute satisfies a certain condition, for instance, that the user is over 18 years old.
During a test session participants were first asked to read a description of the test, which was written to fit the purposes of this metaphor and to introduce participants to the notion of selective disclosure. The test description read as follows:
You are going to test an Adaptable Electronic ID System -a new way of paying on the Internet. This new way is based on the idea that you have installed this security and privacy system in your computer that only you have access to. The system lets you buy online in a secure and privacy-friendly way -no one else than you can use your information.
The system allows you to import all types of electronic IDs and use them online, such as your driving license and passport, and other personal information, such as your credit cards. The unique feature of this system is that it adapts your IDs to the current online payment situation and makes sure to send only the information that is necessary for this transaction.
During this test, you will pretend that your name is Inga Vainstein and that you use this Adaptable Electronic ID System to be able to shop safely and privately on the Internet. You will buy and download an e-book (audio-book) from Amazon.com which is only available for adults over 18 years old, and you will pay it with your new Adaptable Electronic ID System.
In order to create a realistic e-shopping experience, participants were then presented with an interactive Flash animation resembling a Firefox browser window showing the Amazon.com website. Participants were asked to carry out the task of buying an e-book using the presented animation, as instructed in the test description. At the moment of paying for the book, the Amazon.com website was dimmed in the background and the credential selection user interface popped up. Using this interface, shown in Figure 4, participants were asked to select a payment method, either Visa or American Express, and a way to verify their name and the fact that they were over 18 by choosing either their driving license or their passport. A mouse-over state was added to each of the credentials, so that if participants dragged the mouse over a credential, they could get a preview of the information they were about to select, as shown in Figure 5. Once a credential (or "card") was selected, a green frame was placed around it to indicate the selection.
When a credential was selected, the adapted information from that credential was also faded in with a smooth transition into the card in the middle with the title "Adapted card for this purchase only" (Figure 6). For example, if the participant chose a driving license as a method for identification, the attribute Name, the condition Over 18?, and the issuer of the credential appeared in the adapted card with the corresponding values.
When participants were done selecting the credentials, they pressed the "Send" button located in the bottom right corner and were asked the question "What information do you believe you have sent to Amazon.com? " (with the subheading "Write what pieces of personal information you think will be sent to Amazon.com when you pressed the 'Send' button"). In order to account for the users' understanding that the issuer of the credential is also sent to the service provider, we included the multiple-choice question "Additionally, does Amazon.com know some of the following? " with the answer options "The fact that you hold a Swedish passport", "The fact that you hold a driving license", "None of the above", and "Other".
Afterwards, participants were also asked about their beliefs regarding other third parties being able to get hold of their information from the transaction ("When you transferred your information to Amazon.com (by clicking the 'Send' button), do you think anybody else will be able to get a hold of that information? "). This question was asked since our experience with previous tests of the attribute-based approach showed that some participants believed that their information would also be sent to the issuer of the credential, which is the wrong mental model of information flow (for example, when identifying themselves with their passport credential, the police would also get their information, since the police is the issuer of the credential). In this test, we wanted to confirm that the interface did not mislead participants into creating this incorrect mental model.
Finally, participants were asked to fill in some demographic information and other short questions about their experience paying for services or products online.
Data collection and results. A total of 29 participants took part in the test, 16 males and 13 females of different ages (18 to 57 years old) and different cultural backgrounds (15 Swedish, 5 Germans, 3 Mexicans, 2 Iranians, 1 Italian, 1 Chinese, 1 Japanese, 1 Nepali). Some of them were recruited at KAU, and the majority were recruited outside the university premises. All of them had previous experience paying over the Internet.
The tests were carried out with the use of laptop computers and smart tablet computers running the prototyped Flash animation. The data was gathered using a common online survey tool and analyzed in terms of the number of extra attributes that participants mistakenly believed were sent and the concealed attributes that they mistakenly believed were not sent to a service provider during the transaction portrayed in the e-shopping scenario. Also, to examine the participants' understanding of attributes satisfying conditions, we classified the data into two categories: the answers stating that the service provider only knows the fact that they are over 18 years old, and the answers mentioning either the age, date of birth or personal identification number (which in Sweden is an identification for age).
The results showed that 65% of the test participants (19 out of 29) understood the data minimizing properties of the adapted card approach, which is approximately the same as for the attribute-based approach (66%). However, of the ten remaining test participants who overestimated the amount of data being sent, six added only the attribute "address". Presumably, these participants were thinking that their address was being sent in order to be able to receive the product by mail, and misunderstood the scenario, in which an e-book was being downloaded to their computer and no postal address was necessary. Assuming that these six participants were thinking in terms of their own experiences when buying products online and having them delivered at home, we can deduce that a total of 86% of the test participants (25 out of 29) understood that not all data from the source was being sent, but that only a subset of data was being selected and subsequently sent, thus understanding the property of data minimization.
Regarding the mental model of information flow, only 2 out of 29 participants mentioned that the issuer of the credential (i.e., the police) would be able to get a hold of their information. This is a great improvement over the attribute-based approach, in which many participants seemed to think that their information would travel via the issuing authority. Furthermore, we believe that the two participants in this test who responded that the police would be able to get a hold of their information were actually thinking of the authority the police has to access their information at a certain point in the future, not that their information was flowing through the police when being sent to the service provider.
With regard to the attributes selected to satisfy a condition (i.e., proving that the user is over 18 years old), 35% of the participants (10 out of 29) understood that they had proved only the fact that they were over 18; three participants made no reference to age at all, and the remaining sixteen stated that they had revealed their age, birth date, or personal identification number (some as part of revealing the full source credential). This low proportion leaves further challenges for the design of user interfaces that convey the notion that attributes can satisfy conditions without their actual values being sent to service providers.
Conclusions
The results of our user studies show that users often lack adequate mental models to protect their privacy online. Our work with a credential selection mechanism for anonymous credentials highlights the difficulties in using metaphors when describing this novel technology. In our first rounds of testing, the majority of users believed that anonymous credentials would work in the same fashion as the plastic credentials we compared them to, such as driving licenses or passports. However, in our latest tests we focused on the main difference between the two types of credentials (i.e., that they are adapted) and thus successfully changed the induced mental model of most test participants.
Taken together, the results from the three rounds of testing using the three different approaches clearly show how inducing adequate mental models is a key issue in the successful deployment of the novel technology of anonymous credentials. Our results also show that the adapted card-based approach is a step in the right direction towards evoking a comprehensive mental model for anonymous credential applications, and that using a traditional card-based approach (as presented in our first approach) is not recommended, since it does not seem to fit the appropriate mental models of this technology. The adapted card-based approach also seems to be very effective at making users understand that the issuer of a credential is not involved in the flow of the data during an online transaction. Moreover, the results indicate that better user interface paradigms are needed for making users understand that attributes in a credential can be used to satisfy conditions, and that service providers do not have knowledge of the actual value of an attribute when it is not requested.
As future work towards evoking correct mental models of anonymous credentials, we suggest exploring a form-filling approach, based on the idea that users are already accustomed to filling in forms when carrying out online transactions. In this approach users would be presented with a common Internet form whose boxes are already filled with values from a credential, along with some visual indication showing that these values are certified by the issuer of the credential. The data minimization property in this case can be illustrated by filling in only the text boxes required by the service provider and indicating to the user that additional data is not needed for a particular transaction.
Moreover, the increased use of smart mobile devices brings the challenge of creating user-friendly interfaces that allow users to select anonymous credentials and are able to convey the property of data minimization.
All in all, it can be noted that, when it comes to privacy, incorrect mental models lead to difficulties in using a given application or to users not being able to take adequate steps to protect their information. Even though our attempt to evoke the correct mental models of anonymous credentials has shown positive results throughout the different approaches, there is still room for improvement and future research in this area and in the usability of credential selection in general.
Fig. 1. Cutting out attributes to be revealed as part of a newly created virtual card.
Fig. 2. Card-based approach blacking out non-disclosed attributes.
Fig. 3. One example of the attribute-based approach.
Fig. 4. The adapted card-based approach.
Fig. 5. Examples of mouse-over states when selecting one of the credentials.
Fig. 6. Example of the adapted card containing the selected information.
EU FP7 integrated project PrimeLife (Privacy and Identity Management for Life), http://www.primelife.eu/
U-PrIM (Usable Privacy-enhancing Identity Management for smart applications) is funded by the Swedish Knowledge Foundation, KK-stiftelsen, http://www.kau.se/en/computer-science/research/research-projects/u-prim
PRIME (Privacy and Identity Management for Europe) https://www.prime-project.eu/
Acknowledgments
Parts of the research leading to these results have received funding from the Swedish Knowledge Foundation (KK-stiftelsen) for the U-PrIM project and from the EU 7 th Framework programme (FP7/2007-2013) for the project PrimeLife. The information in this document is provided "as is", and no guarantee or warranty is given that the information is fit for any particular purpose. The PrimeLife consortium members shall have no liability for damages of any kind including without limitation direct, special, indirect, or consequential damages that may result from the use of these materials subject to any liability which is mandatory due to applicable law. | 32,394 | [
"1003314",
"997970",
"997967"
] | [
"301187",
"301187",
"301187"
] |
01481503 | en | [
"info"
] | 2024/03/04 23:41:48 | 2011 | https://inria.hal.science/hal-01481503/file/978-3-642-27585-2_2_Chapter.pdf | Marcel Heupel
email: [email protected]@uni-siegen.de
Dogan Kesdogan
Towards usable interfaces for proof based access rights on mobile devices
Access rights management is in the middle of many collaboration forms such as group formation or sharing of information in different kinds of scenarios. There are some strong mechanisms to achieve this, like anonymous credential systems. However in general their usage is not very intuitive for lay users. In this paper we show the potential of using proof-based credential systems like Idemix to enhance the usability of privacy-respecting social interaction in different collaborative settings. For instance transparently performing authorization without any user intervention at the level of the user interface becomes possible. In order to improve the usability, we complement this by introducing a mental model for intuitive management of digital identities. The approach should also empower users to define their own access restrictions when sharing data, by building custom proof specifications on the fly. We show this exemplary with a developed prototype application for supporting collaborative scenarios on a mobile device. We also present first evaluation results of an early prototype and address current as well as future work.
Introduction
For quite some time, a major trend in our information society is the increasing use and disclosure of personal information in private and in business life. The recent massive propagation of mobile devices and mobile applications gains strength from leveraging efficient, secure and privacy-respecting interaction as well as communication patterns between individuals and communities that are seamlessly supported with mobile devices in term of enjoyable user experience [START_REF] Bourimi | Enhancing privacy in mobile collaborative applications by enabling end-user tailoring of the distributed architecture[END_REF].
On the one hand security and privacy are one of the most-cited criticism for pervasive and ubiquitous computing [START_REF] Hong | An architecture for privacy-sensitive ubiquitous computing[END_REF]. On the other hand usability is a prerequisite for security and privacy. Therefore, it is part of a major effort to balance and improve security and privacy design of mobile applications by considering usability aspects especially due to the limitations and capabilities of mobile devices (e.g. screen size, limited memory, computation capabilities and ease of localization). One of the most disregarded and critical topics of computer security has been and still is, the understanding of the interplay between usability and security [START_REF] Cranor | Security and Usability[END_REF]. In social and collaborative interaction settings, advantages such as enhancing social contacts, personalizing services and products compromise with notable security and privacy risks arising from the user's loss of control over their personal data and digital footprints [START_REF] Di | me project. di.me -integrated digital[END_REF]. From the usability perspective, large amounts of scattered personal data lead to information overload, disorientation and loss of efficiency. This often results in not using security options offered by the application.
One of the means for enhancing privacy in communication to individuals and services is to allow for the usage of partial identities or digital faces, i.e. user data selected to be disclosed for a particular purpose and context. Privacy-enhancing technical systems and applications supporting collaborative users activities have to allow for user-controlled identity management (IdM). Furthermore, such an IdM system has to be deployable on mobile platforms by providing good performance in terms of response time as a quality of service factor for usability [START_REF] Shneiderman | Designing the User Interface: Strategies for Effective Human-Computer Interaction[END_REF] and also as part of the security protection goal availability [START_REF] Bourimi | Towards Usable Client-Centric Privacy Advisory for Mobile Collaborative Applications Based on BDDs[END_REF]. Poor response times lead to end-user frustration and negatively affect the usage of the applications especially when no adequate help or feedback is provided. With respect to the different capabilities and restrictions of modern mobile devices (e.g. smartphones and tablet PCs), addressing security and usability aspects becomes crucial. Experts from various research communities believe that there are inherent trade-offs between security and usability to be considered [START_REF] Boyle | Privacy factors in video-based media spaces[END_REF][START_REF] Cranor | Security and Usability[END_REF][START_REF] Shneiderman | Designing the User Interface: Strategies for Effective Human-Computer Interaction[END_REF]. These general requirements are based on the objectives of the EU FP7 project di.me [START_REF] Di | me project. di.me -integrated digital[END_REF].
One of the most powerful and future promising IdM systems is IBM's "Identity Mixer" (Idemix) [START_REF] Camenisch | An efficient system for non-transferable anonymous credentials with optional anonymity revocation[END_REF], which is also able to run on smart cards [START_REF] Bichsel | Anonymous credentials on a standard java card[END_REF]. Due to the strong cryptographic algorithms, proving of powerful and complex statements (like e.g. inequality of attributes) needs quite some computation time [START_REF] Camenisch | Design and implementation of the idemix anonymous credential system[END_REF][START_REF] Verslype | Petanon: A privacy-preserving e-petition system based on idemix[END_REF] and thereby influences the performance of the whole application. This is especially true if only devices with relatively weak computation power, like mobile phones, are used. However, since the newest generation of smartphones and tablets come with really strong processors a new evaluation of the capability of those devices seems reasonable.
In a user controlled IdM the user needs to have the capability to define the access rights by himself. Therefore a good and usable interface is essential. Lab tests with some early prototypes showed, that many users had problems with defining complex proof based access rights. Therefore we aim to implement and evaluate a mental model for the representation of partial identities in the user interface (UI), which is strong oriented on real world observation, where the identity of the user stays the same and only the view of participating third parties can vary.
In this paper, we present our current work to enhance the usability of end-user controlled access rights management in privacy-respecting mobile collaborative settings.
The reminder of this paper is organized as follows. An overview on the stateof-the-art is given in Section 2. In section 3 we present the derived requirements primarily based on Di.Me. Next, section 4 presents our approach. Finally, we conclude with a presentation and discussion of our evaluation results in Section 5 and present our conclusions and outline ongoing and future work in Section 6.
State of the art
Access control means, in general, controlling access to resources, which are, e.g., made available through applications. It entails making a decision on whether a user is allowed to access a given resource or not. This is mostly done by techniques such as comparing the resource's access attributes with the user's granted authorities. For access control, authentication is the process of verifying a principal's3 identity, whereas authorization is the process of granting authorities to an authenticated user so that this user is allowed to access particular resources. Therefore, the authorization process is mostly performed after the authentication process. Often, IdM systems are responsible for authentication. There is a lot of work about IdM and access control in the literature. A good overview of the field of user-centric identity management is given by Jøsang and Pope [START_REF] Jøsang | User centric identity management[END_REF] and also by El Maliki and Seigneur [START_REF] Maliki | A survey of user-centric identity management technologies[END_REF].
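As a minimal illustration of this distinction, the following sketch performs an authorization check only after a (stubbed) authentication step and compares a resource's required authority with the authorities granted to the principal. It is a generic example, not tied to any particular IdM system, and all names and data structures are invented for this illustration.

```python
# Generic illustration of authentication followed by authorization
# (invented example data, not tied to any specific IdM system).
USERS = {"alice": {"password": "secret", "authorities": {"read:papers", "write:profile"}}}
RESOURCES = {"papers": {"required_authority": "read:papers"}}

def authenticate(name, password):
    """Verify the principal's identity (stubbed password check)."""
    user = USERS.get(name)
    return user if user and user["password"] == password else None

def authorize(user, resource_id):
    """Grant access only if the required authority was granted to the user."""
    required = RESOURCES[resource_id]["required_authority"]
    return required in user["authorities"]

user = authenticate("alice", "secret")
if user and authorize(user, "papers"):
    print("access granted")
```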
With respect to the different capabilities and restrictions of modern mobile devices (e.g., smartphones and tablet PCs), addressing authentication, authorization and usability aspects becomes crucial. Often, the complexity of authentication and authorization is reflected in the UI, which is critical for applications deployed on mobile devices with limited screen space. A contribution from the usability field to enhance authentication is, e.g., the usage of graphical passwords. An example is the usage of pass-faces for graphical authentication on Android smartphones to unlock the main screen. However, these approaches have also been proven to be not secure enough, e.g., due to the smudge traces that can remain on the screen surface. A recent publication showed that it is quite easy to guess the right pattern and break such an authentication system [START_REF] Aviv | Smudge attacks on smartphone touch screens[END_REF]. Biometrics also allow for enhancing authentication but are still "classified as unreliable because human beings are, by their very nature, variable" [START_REF] Cranor | Security and Usability[END_REF][START_REF] Kryszczuk | Credence estimation and error prediction in biometric identity verification[END_REF]. Related to authorization, most systems need the interaction of the end-users at least in the form of confirmations. The challenges increase if (lay) users are asked to set access rights for others, delegate rights, or manage their own security and privacy preferences. In the context of this work, the EU Project PICOS (Privacy and Identity Management for Community Services) represents a good and current example. The 2010 First Community Prototype Lab and Field Test Report D7.2a [START_REF] Picos Team | PICOS Public Deliverables Site[END_REF] reports that users had problems using the PICOS privacy manager on mobile devices (Nokia MusicExpress 5800). Notifications and (automatic) advisories might lead to actions that the user finds intrusive or annoying in some cases (such as the well-known Windows pop-ups or MS Word's paper clip). Especially in collaborative applications as socio-technical systems, this affects the psychological acceptance of the application, which leads to users not using security and privacy mechanisms. This mostly results in expensive change requests affecting the technical realization of mobile applications [START_REF] Cranor | Security and Usability[END_REF][START_REF] Lee | Mobile Applications: Architecture, Design, and Development[END_REF]. Indeed, people's involvement varies, and usage can range from occasional to frequent according to the given setting and circumstances.
For both authentication and authorization, cryptography is an established mechanism for increasing the confidentiality and integrity of exchanged data. However, total security or privacy provision is an illusion [START_REF] Hong | An architecture for privacy-sensitive ubiquitous computing[END_REF], because current approaches cannot prevent all threats and attacks, e.g., those emerging from losing devices or based on physical access to them [START_REF] Dwivedi | Mobile Application Security[END_REF]. Approaches mostly focus only on hindering such attacks or making them more difficult. Trade-offs between security and other (non-)functional requirements such as usability and cognitive mental models supporting interaction design are well described in a tremendous amount of classical literature in the corresponding research communities, e.g., Computer-Supported Cooperative Work (CSCW), Human-Computer Interaction (HCI), and the psychological and sociological sciences. Nevertheless, the current state of the art leaves considerable room for improvement in how such systems can support a usable and secure user experience. Security and usability research for developing usable (psychologically acceptable) security mechanisms and mental models is a young research field which depends on the context in which those mechanisms have to be used [START_REF] Cranor | Security and Usability[END_REF]. Researchers, especially from the CSCW and HCI research fields, generally agree that security and privacy issues arise due to the way systems are designed, implemented, and deployed for a specific usage scenario [START_REF] Boyle | Privacy factors in video-based media spaces[END_REF][START_REF] Cranor | Security and Usability[END_REF][START_REF] Shneiderman | Designing the User Interface: Strategies for Effective Human-Computer Interaction[END_REF]. Because of this and the many facts cited above, we argue that security and privacy design considering usability is specific to the project context. Thus, in this paper we analyze requirements related to user-controlled access rights management based on concrete di.me requirements, considering security and usability along with performance in our initial design and architecture. Furthermore, much related work focuses on improving collaborative interaction related to access rights in general. For instance, the recently opened social networking platform Google plus [START_REF]The google+ project[END_REF] proposed a similar approach in some respects. They also emphasized a focus on real-world behaviors and introduced a promising approach with their circles concept, which is oriented towards the real-world circle of friends. However, there is still room for improvement, which we intend to address with the work presented in this paper.
Requirements
Our approach is based on the usage of the proof-based anonymous credential system Idemix. The usability-related requirements we gathered result from using Idemix in our first prototype of a mobile application for Android devices to support complex mobile collaborative scenarios. Further requirements derived from the scenario are based on our work in the EU FP7 project digital.me. In contrast to related work, we used the latest reference implementation of the Idemix specification, released in June 2011, and provide a first performance evaluation of non-atomic Idemix operations.
Requirements derived from the scenario
In our scenario, Alice is attending a business conference. Therefore, she activates her business profile on her mobile device, selects some of the attributes she would like to reveal (like her last name or her occupation) and broadcasts them. Other conference participants can now find her by browsing the broadcasted contact information and can send her a contact request. Alice also adds some additional information about selected ongoing projects. This information can only be obtained by invitees who are also sharing their profile and are working as engineers in the automotive sector. In our use case, Bob, another participant of the conference, is browsing the profile information. When he comes across Alice's profile, he would like to read the additional project papers. When he requests the information, Bob's device receives a challenge stating that he has to prove that he fulfills certain conditions (e.g., that he is working as an engineer in the automotive sector). This is automatically carried out by the devices in the background. Once Alice's device is convinced that Bob is actually working as an engineer, it grants him access to the requested documents. The whole protocol is carried out automatically by the devices in the background without disturbing Alice. She can review at a later time who requested her documents. If she likes, she could activate notifications about requests for her information and also manually grant or deny access. Figure 2 illustrates the main scenario as an interaction diagram.
Publishing data with individual access rights: The attendees of the conference should be enabled to create individual profiles and publish them on the conference platform. Other attendees can log in to the conference server and browse the list of profiles. There are different kinds of profile attributes, e.g., affiliation, interests, or real name, and users probably do not want to publish all of them to everyone. Users might want to publish selected attributes, for example project papers or their real name, only to selected individuals. Such finer-grained access rights are even more important for dynamic attributes like location, current activity or reachability.
Creation of partial identities
In order to publish different sets of information, possibly also using different pseudonyms, we need multiple partial identities. Together with the definition of fine-grained access rights, this also gives the possibility to define multiple values for profile attributes. This is very useful concerning availability or reachability: when in a meeting, a user might still want to be reachable by members of his family in case of emergency, while being unavailable for everyone else.
From the described scenario we could derive the following important requirements:
R1: The user can use partial identities.
R2: The user can browse data published by others.
R3: Fine-grained access rights for published data can be defined.
R4: Capability to prove attribute values (Idemix + Certificate Authority (CA)).
Non-functional requirements
Based on our experiences from the early evaluation of our first prototype, and derived from requirements of the di.me project, we defined several major non-functional requirements, which are explained in the following.
Minimization of user interaction:
The main goal is to balance and also improve the security and privacy in our scenario with the usability of the prototype. Therefore, an important step, especially on mobile devices with limited screen size and interaction capabilities, is the minimization of user interaction in general. Besides this main non-functional requirement, we have further functional requirements derived from the scenario described above.
New concept for partial identities: A central point of our approach is to make the UI as intuitive as possible. Therefore, we are trying to implement a new mental model for (partial) digital identities which is strongly oriented towards real-world observations. The point we are addressing is the fact that it is not intuitive for human beings to have multiple identities, as is common practice in the digital world. Most people definitely act differently when interacting with different people, but this happens almost unconsciously. People will not actively switch identities as if embodying a different person; they will stay the same person. What we try to implement in our approach is a new concept where there are no names or avatars for digital faces, profiles, identities, etc. in the UI. Instead, the different partial identities of the user are represented by a picture of the contacts the identity is used for. The identity of the user stays the same; only the view of others can vary. The mental model for the UI visually represents the faces with pictures of the people a face is used to interact with.
Approach
To verify our prototype and evaluate the UI concepts in a running application, we extended the Android-based prototype used in previous lab trials as well as the shared (collaborative) conference server. Figure 3 illustrates the implemented architecture. While the first prototype used XML-RPC, the mobile client application is now also able to perform the main Idemix protocols (e.g., Get Credential or Show Proof) via XMPP. This section gives a short overview of the implementation details and also presents the developed interfaces.
Implementation of the scenario (R2, R4)
We provided various user interfaces (UIs) for creating digital faces and credentials as well as their attributes, formulating proofs, selecting attributes to be disclosed in a given context and certifying them with the help of an Idemix CA.
In contrast to our first prototype, whose purpose was mainly to test the feasibility and performance of Idemix on a modern smartphone [START_REF] Heupel | Porting and evaluating the performance of idemix and tor anonymity on modern smartphones[END_REF], we did not integrate an additional Tor client in our approach. Omitting Tor significantly reduces the overall response times and constitutes only a small trade-off concerning privacy. Since we are using an XMPP server for communication, the IP addresses are not that easily traceable; moreover, our scenario takes place in a more or less closed environment, the conference, where most people will be in the same network anyway. However, if users still want to hide their IPs from the XMPP server, they can easily install a separate Tor client like Orbot [START_REF]The Tor Project[END_REF] on their smartphone and obfuscate the network traffic. In order to publish data, we built a server where people can upload their profiles and also request a list of profiles previously published by other users.
Automated proof generation in the background (NFR1): Besides the possibility to create customized proof statements, our approach also supports automatic proof generation in the background, without the need for user interaction. As stated in the scenario, users have the option to publish information or documents with custom restrictions. When a user tries to access such restricted data, the data provider sends a challenge stating that certain predicates need to be proven in order to gain access to the data. For instance, the user can be asked to prove that he is working in the automotive industry. If the necessary credential is available, a proof containing the required predicate is computed automatically in the background and sent back to the data provider.
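To make the background authorization flow more tangible, the following Python sketch mirrors the steps described above. It is an illustration only: the predicate format and the naive attribute comparison are stand-ins for the zero-knowledge proof protocols that Idemix performs in the real prototype, and all names and data structures are invented for this example rather than taken from the Idemix API.

```python
# Hypothetical sketch of the background authorization flow; the "proof" below is a
# naive stand-in for the Idemix zero-knowledge protocols (no attribute values would
# actually be revealed in the real system).

REQUIRED_PREDICATES = [                      # restrictions Alice attaches to her papers
    {"attribute": "occupation", "value": "engineer"},
    {"attribute": "sector", "value": "automotive"},
]

def build_challenge(resource_id):
    """Provider side: answer an access request with a proof challenge."""
    return {"resource_id": resource_id, "predicates": REQUIRED_PREDICATES}

def build_proof(challenge, credentials):
    """Requester side: runs in the background, no user interaction required."""
    for cred in credentials:
        if all(cred.get(p["attribute"]) == p["value"] for p in challenge["predicates"]):
            # Stand-in for an Idemix proof derived from the matching credential.
            return {"resource_id": challenge["resource_id"], "satisfied": True}
    return None                              # no suitable credential available

def grant_access(resource, proof):
    """Provider side: release the data only if a valid proof was presented."""
    if proof is not None and proof.get("satisfied"):
        return resource["content"]
    return None

# Example: Bob's device answers Alice's challenge automatically.
bob_credentials = [{"occupation": "engineer", "sector": "automotive", "issuer": "conference CA"}]
challenge = build_challenge("project-papers")
proof = build_proof(challenge, bob_credentials)
print(grant_access({"id": "project-papers", "content": "..."}, proof))
```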
The Interface (NFR1)
In order to ease the interaction, we try on the one hand to make it very intuitive and on the other hand to minimize the amount of data the user has to enter. Once the user has registered for the conference and received the initial credential from the conference CA, an initial root profile is created automatically. Profiles are called digital faces, or just faces, in our context. The default digital face contains the attributes that were certified in the registration process. If the user wishes to create a new digital face, he/she can use this default face as a starting point and add or remove attributes.
Context dependent identity management (R1, NFR2, NFR3)
During the creation of a digital face it is possible to define the context in which this face is to be used. The context is defined mainly by the people the user is interacting with, but can be further refined by also taking the current activity or location into account. As an example, if Alice creates a face with her name, affiliation, and current activity and decides to use it in a work context, she would specify that this face is automatically used when interacting with persons from the group colleagues. She could also define exceptions in which another face (e.g., private) is used, for example by setting an individual rule for her colleague Bob, or by making the rule dependent on her current location (e.g., only show her phone number while in the office). Figure 4 shows an abstract illustration of the concept: for each context, a different subset of all attributes is disclosed.
Definition of fine-grained access rights with intuitive user interfaces (R3): We built a UI that eases the complex process of proof generation with Idemix, even for lay users. In addition to the context-dependent disclosure of digital faces, the user can define individual conditions that another person has to fulfill in order to access selected information or files. To do this, the user simply clicks on the attribute of interest and a dialog shows up in which a statement like affiliation = xyz or age > 30 can be defined with a few clicks, similar to a building-block concept.
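The context-dependent face selection described above follows a simple precedence: individual exceptions first, then group (and location) rules, then a default face. The following sketch illustrates this logic; the data structures and rule sets are invented for the example and do not reflect the prototype's actual implementation.

```python
# Illustrative only: invented data structures, not the prototype's actual code.
FACES = {
    "work":    {"name": "Alice Smith", "affiliation": "ACME", "activity": "in a meeting"},
    "private": {"name": "Alice", "phone": "+46 ...", "reachable": True},
    "default": {"name": "A. Smith"},
}

GROUP_RULES = {"colleagues": "work", "family": "private"}    # group -> face
INDIVIDUAL_RULES = {"Bob": "private"}                         # exceptions per contact
LOCATION_RULES = {("colleagues", "office"): "work"}           # optional location refinement

def select_face(contact, groups, location=None):
    """Pick the digital face to disclose, most specific rule first."""
    if contact in INDIVIDUAL_RULES:                           # 1. individual exception
        return FACES[INDIVIDUAL_RULES[contact]]
    for group in groups:
        if location and (group, location) in LOCATION_RULES:  # 2. group + location rule
            return FACES[LOCATION_RULES[(group, location)]]
        if group in GROUP_RULES:                              # 3. plain group rule
            return FACES[GROUP_RULES[group]]
    return FACES["default"]                                   # 4. fallback

print(select_face("Bob", ["colleagues"]))                     # individual exception wins
print(select_face("Carol", ["colleagues"], "office"))
```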
Experiences and Discussion
According to our first experiences, based on empirical evaluations of our prototype, end-users are able to use our approach. In the following, we describe how we carried out the first lab tests and observed the users while they used our prototype.
To organize the development and evaluation process of the prototype, we followed the AFFINE methodology [START_REF] Bourimi | Affine for enforcing earlier consideration of nfrs and human factors when building sociotechnical systems following agile methodologies[END_REF], an agile framework that enforces the consideration of non-functional requirements such as usability and security. The lab tests were carried out periodically, considering the feedback provided in each test iteration. For this, we followed, as mentioned before, an agile framework for integrating non-functional requirements earlier in the development process while considering end-users' as well as developers' needs. Since the adopted AFFINE framework described in [START_REF] Bourimi | Affine for enforcing earlier consideration of nfrs and human factors when building sociotechnical systems following agile methodologies[END_REF] is Scrum based, we provided continuously running prototypes, thus granting fast feedback loops. We split the evaluation of the new user interface into two phases. In the first phase, we conducted functional unit tests. For this, we extended the unit tests provided in the original Java Idemix implementation to check new functionality related to R4. These functional tests concentrated on validating the intended interaction possibilities. In the second phase, we evaluated the developed UIs of our system in different lab tests, carrying out different tasks within the implemented "Conference Scenario". Members from various departments of our university were invited to use the prototype in a simulated conference situation. The persons who participated in our lab tests had different backgrounds in using collaborative systems and social software and had no knowledge about proof-based credential systems. Thus, we provided an introduction to the essentials of Idemix from the usage perspective, as described in the corresponding section above. We observed that the positive resonance with respect to the Idemix functionality generated a certain curiosity which motivated the testers. They were very interested in knowing how they could generate attributes, especially those with vague assessments (e.g., "I am older than 18"). However, this was a first indicator that the performance of the developed system has to fit the worst cases of a real usage scenario. The main issue here is that Idemix bases its computation on data represented in XML format, which is expensive in terms of resources, especially on mobile devices.
In general, end-users appreciated the transparent access rights management carried out in the background. However, many users wished to be able to view the access protocol and asked for new UIs in order to view detailed logs at a later time. Logging access at the level of the mobile device will surely represent a new performance challenge over time.
The visualization of digital faces through pictures of the persons a face is shown to was well accepted by the group of testers. However, some questions arose about how to decide which of the persons in a group should be the representative, or whether a merged picture would be better. Some users also suggested including symbolic graphics chosen by the user, which could also be associated with a specific context. This could be especially useful when no pictures of the person or the group are available.
Conclusion and future work
With our approach we presented a way to combine a very strong and complex concept for fine-grained access control to personal information, based on the anonymous credential system Idemix, with an unconventional mental model for the representation of partial identities, supported by context-dependent identity management. With the extended prototype we were able to perform first usability and performance tests, which gave us promising results. We built and evaluated various prototypes to check the feasibility of our approach. First evaluations confirmed this feasibility and, beyond good initial acceptance, provided valuable feedback for future improvements. Currently we are fine-tuning the prototype in preparation for widespread user tests, in order to obtain valuable data about user acceptance and to look deeper into the behavior patterns of users dealing with partial identities.
Fig. 1. Reachability management
Fig. 2. Interaction flow of the scenario
Fig. 3. Architecture of the implemented scenario
Fig. 4. Identity management becomes context management
Fig. 5. Sequence diagram for accessing restricted information
Fig. 6. Selected GUI masks of the prototype
A principal can be a user, a device, or a system, but most typically it means a user. | 29,607 | [
"1003315",
"1003316"
] | [
"145504",
"145504"
] |
01481507 | en | [
"info"
] | 2024/03/04 23:41:48 | 2011 | https://inria.hal.science/hal-01481507/file/978-3-642-27585-2_6_Chapter.pdf | Martin Szydlowski
Manuel Egele
Christopher Kruegel
Giovanni Vigna
email: [email protected]
Challenges for Dynamic Analysis of iOS Applications
Introduction
Mobile devices and especially smart phones have become ubiquitous in recent years. They evolved from simple organizers and phones to full featured entertainment devices, capable of browsing the web, storing the user's address book, and provide turn-by-turn navigation through built in GPS receivers. Furthermore, most mobile platforms offer the possibility to extend the functionality of the supported devices by means of third party applications. Google's Android system, for example, has the official Android Market [START_REF]Apps -Android Market[END_REF], and a series of unofficial descendants that provide third party applications to users. Similarly, Apple initially created the AppStore for third party applications for their iPhones. Nowadays, all devices running iOS (i.e., iPhone, iPod Touch, and iPad) can access and download applications from the AppStore. To ensure the quality and weed out potentially malicious applications, Apple scrutinizes each submitted application before it is distributed through the AppStore. This vetting process is designed to ascertain that only applications conforming to the iPhone Developer Program License Agreement [START_REF]iPhone Developer Program License Agreement[END_REF] are available on the AppStore. However, anecdotal evidence has shown that this vetting process is not always effective. More precisely, multiple incidents have become public where applications distributed through the AppStore blatantly violate the user's privacy [START_REF] Beschizza | iPhone game dev accused of stealing players' phone numbers[END_REF][START_REF] For | The Register. iphone app grabs your mobile number[END_REF], or provide functionality that was prohibited by the license agreement [START_REF]Apple Approves, Pulls Flashlight App with Hidden Tethering Mode[END_REF]. After Apple removed the offending applications from the AppStore no new victims could download these apps. However, users that installed and used these apps prior to Apple noticing the offending behavior had to assume that their privacy had been breached.
Recent research [START_REF] Egele | PiOS: Detecting Privacy Leaks in iOS Applications[END_REF][START_REF] Enck | A Study of Android Application Security[END_REF] indicates that AppStore applications regularly access and transmit privacy sensitive information to the Internet. Therefore, it is obvious that the current vetting process as employed by Apple requires improvement. With static analysis tools available to investigate the functionality of malicious applications, one has to assume that attackers become more aware of the risk of getting their malicious applications identified and rejected from the AppStore. Thus, we assume attackers will become more sophisticated in hiding malicious functionality in their applications. Therefore, we think it is necessary to complement existing static analysis techniques for iOS applications with their dynamic counterparts to keep the platform's users protected. We are convinced that the combination of static and dynamic analysis techniques makes a strong ensemble capable of identifying malicious applications. To this end, this paper makes the following contributions:
- We highlight the challenges that are imposed on dynamic analysis techniques when targeting a mobile platform such as iOS.
- We implement and evaluate a dynamic analysis approach that is suitable for the iOS platform.
- We create an automated system that exercises different aspects of the application under analysis by interacting with the application's user interface.
Dynamic Analysis
Dynamic analysis refers to a set of techniques that monitor the behavior of a program while it is executed. These techniques can monitor different aspects of program execution. For example, systems have been developed to record different classes of function calls, such as API calls to the Windows API [START_REF] Willems | Toward automated dynamic malware analysis using CWSandbox[END_REF], or system calls for Windows [START_REF] Dinaburg | Ether: malware analysis via hardware virtualization extensions[END_REF] or Linux [START_REF] Mutz | Anomalous system call detection[END_REF]. Systems performing function call monitoring can be implemented at different layers of abstraction within the operating system. For example, the JavaScript interpreter of a browser can be instrumented to record function and method calls within JavaScript code [START_REF] Hallaraker | Detecting malicious javascript code in mozilla[END_REF]. Dynamic binary rewriting [START_REF] Hunt | Detours: binary interception of Win32 functions[END_REF] can be leveraged to monitor the invocation of functions implemented by an application or dynamically linked libraries. Similarly, debugging mechanisms can be employed to gather such information [START_REF] Vasudevan | Stealth breakpoints[END_REF][START_REF] Vasudevan | Cobra: Fine-grained malware analysis using stealth localized-executions[END_REF][START_REF] Vasudevan | Spike: engineering malware analysis tools using unobtrusive binary-instrumentation[END_REF]. Furthermore, the operating system used to perform the analysis might provide a useful hooking infrastructure. Windows, for example, provides such hooks for keyboard and mouse events. The dtrace [START_REF]DTrace[END_REF] infrastructure available on Solaris, FreeBSD, and Mac OS X can also be used to monitor system calls.
An orthogonal approach to function call monitoring is information flow analysis. That is, instead of focusing on the sequence of function calls during program execution, the focus is on monitoring how the program operates on interesting input data [START_REF] Chow | Understanding data lifetime via whole system simulation[END_REF]. This data could, for example, be the packets that are received from the network, or privacy relevant information that is stored on the device. By tracking how this data is propagated through the system, information flow monitoring tools can raise an alert if such sensitive data is about to be transmitted to the network [START_REF] Egele | Dynamic spyware analysis[END_REF][START_REF] Yin | Panorama: capturing system-wide information flow for malware detection and analysis[END_REF]. In the case of incoming network packets the same technique can be applied to detect attacks that divert the control flow of an application in order to exploit a security vulnerability [START_REF] Portokalidis | Argos: an emulator for fingerprinting zero-day attacks for advertised honeypots with automatic signature generation[END_REF].
Challenges for Dynamic Analysis on the iOS Platform
State of the art. Existing dynamic analysis techniques are geared towards applications and systems that execute on commodity PCs and operating systems. Therefore, a plethora of such systems are available to analyze x86 binaries executing on Linux or Windows. While the x86 architecture is most widely deployed for desktop and server computers, the landscape of the mobile device market has a different shape. In the mobile segment, the ARM architecture is most prevalent. The rise of malicious applications [START_REF] Felt | A Survey of Mobile Malware in the Wild[END_REF] for mobile platforms demands powerful analysis techniques for these systems to fight such threats. However, existing dynamic analysis techniques available for the x86 architecture are not immediately applicable to mobile devices executing binaries compiled for the ARM architecture. For example, many dynamic analysis approaches rely on full system emulation or virtualization to perform their task. For most mobile platforms, however, no such full system emulators are available. While Apple, for example, includes an emulator with their XCode development environment, this emulator executes x86 instructions, and therefore requires that the application to be emulated is recompiled. Thus, only applications that are available in source code can be executed in this emulator. However, the AppStore only distributes binary applications, which cannot be executed in the emulator. Furthermore, the emulator's source code is not publicly available, and therefore, cannot be extended to perform additional analysis tasks. OS X contains the comprehensive dtrace1 instrumentation infrastructure. Although the iOS and OS X kernels are quite similar, iOS does not provide this functionality.
Graphical user interfaces (GUI). An additional challenge results from the very nature of iOS applications. That is, most iOS applications are making heavy use of event driven graphical user interfaces. Therefore, launching an application and executing the sample for a given amount of time might not be sufficient to collect enough information to assess whether the analyzed application poses a threat to the user or not. That is, without GUI interaction only a minimal amount of execution paths will be covered during analysis. Therefore, to cover a wide range of execution paths, any dynamic analysis system targeting iOS applications has to be able to automatically operate an applications' GUI.
Source vs. binary analysis. Combined static and dynamic analysis approaches, such as Avgerinos et al. [START_REF] Avgerinos | Aeg: Automatic exploit generation[END_REF], can derive a semantically rich representation of an application by analyzing its source code. However, applications distributed through the AppStore are available in binary form only. Therefore, any analysis system that targets iOS applications can only operate on compiled binaries.
Analyzing Objective-C. The most prevalent programming language for creating iOS applications is Objective-C. Although Objective-C is a strict superset of the C programming language, it features a powerful runtime environment that provides the functionality for the object-oriented capabilities of the language. With regard to analyzing a binary created from Objective-C, it is especially noteworthy that member functions (i.e., methods) of objects are not called directly. Instead, the runtime provides a dynamic dispatch mechanism that accepts a pointer to an object and the name of a method to call. The dispatch function is responsible for traversing the object's class hierarchy and identifying the implementation of the corresponding method.
The above-mentioned problems, combined with the constrained hardware resources of mobile devices, pose significant challenges that need to be addressed before a dynamic analysis system for the iOS platform becomes viable.
Strategies to Overcome these Challenges
To tackle the above mentioned challenges, this work makes two major contributions. First, we implement and evaluate a dynamic analysis approach that is suitable for the iOS platform and provides a trace of method calls as observed during program execution. Second, we create an automated system that exercises different functionality of the application under analysis by interacting with the application's user interface.
Dynamic analysis approaches
As mentioned above not all dynamic analysis techniques available on the x86 architecture are feasible on iOS devices executing ARM instructions. Although there are many different approaches to dynamic analysis, we think that function call traces are a viable first step in providing detailed insights into an application's behavior. Therefore, in this section we elaborate on the lessons we learned while implementing a system that allows us to monitor the invocation of function calls of iOS applications.
Objective-C is the most prevalent programming language used to create applications for the iOS platform. However, as opposed to C++, where methods (i.e., class member functions) are invoked via vtable pointers, methods in Objective-C are invoked fundamentally differently. More precisely, methods are not called directly; instead, a so-called message is sent to a receiver object. These messages are handled by the dynamic dispatch routine called objc_msgSend. This dispatch routine is responsible for identifying and invoking the implementation of the method that corresponds to a message. The first argument to this dispatch routine is always a pointer to the so-called receiver object, that is, the object on which the method should be invoked (e.g., an instance of the class NSMutableString). The second argument is a so-called selector. This selector is a string representation of the name of the method that should be invoked (e.g., appendString). All remaining arguments are of no immediate concern to the dispatch function; they get passed to the target method once it is resolved. To perform this resolution, the objc_msgSend function traverses the class hierarchy starting at the receiver and searches for a method whose name corresponds to the selector. Should no match be found in the receiver class, its superclasses are searched recursively. Once the corresponding method is identified, the dispatch routine invokes this method and passes along the necessary arguments. Due to the prevalence of Objective-C for creating iOS applications, we chose to implement a dynamic analysis approach that monitors the invocation of Objective-C methods instead of classic C functions.
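To make the resolution procedure concrete, the following Python sketch models what the dispatcher conceptually does: it walks the receiver's class hierarchy until an implementation matching the selector is found. This is an illustrative model only, not the actual objc_msgSend implementation (which is optimized native code with method caching); the class and method names below are invented for the example.

```python
# Conceptual model of Objective-C dynamic dispatch (illustration only).

class ObjCClass:
    def __init__(self, name, superclass=None, methods=None):
        self.name = name
        self.superclass = superclass
        self.methods = methods or {}          # selector string -> implementation

class ObjCObject:
    def __init__(self, isa):
        self.isa = isa                        # the object's class

def objc_msgSend(receiver, selector, *args):
    """Resolve the selector by walking the receiver's class hierarchy, then invoke."""
    cls = receiver.isa
    while cls is not None:
        impl = cls.methods.get(selector)
        if impl is not None:
            return impl(receiver, *args)      # remaining arguments are passed through
        cls = cls.superclass                  # not found here: try the superclass
    raise LookupError("unrecognized selector %r sent to instance" % selector)

# [buffer appendString:@" world"] roughly corresponds to:
#   objc_msgSend(buffer, "appendString:", " world")
NSObject = ObjCClass("NSObject", methods={"description": lambda self: self.isa.name})
NSMutableString = ObjCClass("NSMutableString", superclass=NSObject,
                            methods={"appendString:": lambda self, s: print("append", s)})
buffer = ObjCObject(NSMutableString)
objc_msgSend(buffer, "appendString:", " world")
print(objc_msgSend(buffer, "description"))    # resolved in the superclass
```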
Monitoring the dispatcher. One approach of monitoring all method invocations through the dynamic dispatch routine would be to hook the dispatch function itself. This could be achieved by following an approach similar to Detours [START_REF] Hunt | Detours: binary interception of Win32 functions[END_REF].
That is, one would copy the initial instructions of the dynamic dispatcher before replacing them in memory with an unconditional jump to divert the control flow to a dedicated hook function. This hook function could then perform the necessary analysis, such as resolving parameter values, and logging the method invocation to the analysis report. Once the hook function finished executing, control would be transferred back to the dispatch function and regular execution could continue. Of course, the backed up initial instructions that got overwritten in the dispatcher need to be executed too before control is transferred back to the dispatch function. Although such an approach seems straight forward, the comprehensive libraries available to iOS applications also make extensive use of the Objective-C runtime. Therefore, such a generic approach would collect function call traces not only on the code the application developer created but also on all code that is executed within dynamically linked libraries. Often, however, function call traces collected from libraries are repetitive. Thus, we chose to implement our approach to only trace method invocations that are performed by the code the developer wrote.
Identifying method call sites. As a first step in monitoring Objective-C method calls, we leverage our previous work PiOS [START_REF] Egele | PiOS: Detecting Privacy Leaks in iOS Applications[END_REF] to generate a list of call sites to the dynamic dispatch function. Furthermore, PiOS is often capable of determining the number and types of arguments that are passed to the invoked method. This information is recorded along with the above-mentioned call sites. Subsequently, this information is post-processed to generate gdb2 script files that log the corresponding information to the analysis report. More precisely, for each call site to the dynamic dispatch function, the script contains a breakpoint. Furthermore, for each breakpoint hit, the type (i.e., class) of the receiver as well as the name of the invoked method (i.e., the selector) are logged. Additionally, if PiOS successfully determined the number of arguments and their types, this information is also logged.
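As a sketch of what this post-processing step could look like, the following Python snippet turns a list of call-site records into a gdb script with one breakpoint per call site. The record format, the register mapping (receiver in r0, selector in r1 on 32-bit ARM) and the exact gdb commands are simplifying assumptions made for illustration; they are not the scripts generated by our prototype.

```python
# Hypothetical post-processing step: emit a gdb script that logs receiver
# class and selector at every recorded call site of the dynamic dispatcher.
# Register mapping and logging commands are assumptions made for this sketch.

call_sites = [
    # (address of the call to the dispatch function, statically inferred argument types)
    (0x34F2, ["NSString"]),
    (0x3710, []),
]

def gdb_script(sites):
    lines = ["set pagination off"]
    for addr, arg_types in sites:
        lines += [
            "break *0x%x" % addr,
            "commands",
            "silent",
            'printf "call site 0x%x\\n"' % addr,
            "x/s $r1",                                  # selector is a C string pointer
            "call (char *) object_getClassName($r0)",   # receiver class via the runtime
        ]
        for i, t in enumerate(arg_types):
            lines.append("# argument %d statically typed as %s" % (i, t))
        lines += ["continue", "end"]
    return "\n".join(lines)

print(gdb_script(call_sites))
```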
Automated GUI interaction
Most iOS applications feature a rich graphical user interface. Furthermore, most functionality within those applications gets executed in response to user interface events or interactions. This means that unless an application's user interface is exercised, most of the functionality contained in such applications lies dormant. As dynamic analysis only observes the behavior of code that is executing, large parts of functionality in such applications would be missed unless the GUI gets exercised.
Therefore, one of the challenges we address in this work is the automated interaction with graphical user interfaces. Such interaction with an application's GUI can be achieved on different levels. Desktop operating systems commonly support tools to get identifiers or handles for currently displayed GUI elements (e.g., UI explorer on Mac OS X). However, no such system is readily available for iOS.
Therefore, we turned our attention to alternative solutions for exercising an application's user interface. A straightforward approach could, for example, randomly click on the screen area. This method proved effective in detecting click-jacking attacks [START_REF] Balduzzi | A solution for the automated detection of clickjacking attacks[END_REF] on the World Wide Web. A more elaborate technique could read the contents of the device's frame buffer and try to identify interactive elements, such as buttons, check-boxes, or text fields, by applying image processing techniques. Once such elements are identified, virtual keystrokes or mouse clicks could be triggered in the system to interact with these elements. We combined these two approaches into a proof-of-concept prototype that allows us to automatically exercise graphical user interfaces of iOS applications.
To interact with the device and get access to the device's frame buffer we leverage the open source Veency3 VNC server. To communicate with the VNC server, and perform the detection and manipulation of UI elements, we have modified the python-vnc-viewer 4 , an open source VNC client implementation in Python.
The basic idea behind this approach is to sample the screen and tap (i.e., click) locations on the screen that are determined by a regular grid pattern. Additionally, to identify interactive user interface elements, we perform the following steps in a loop: We capture the contents of the screen buffer and compare it to the previous screenshot (if present). If a sufficiently large fraction of pixels has changed between the images, we assume an interactive element has been hit. To tell input fields from other interactive elements apart, the current screenshot is compared to a reference image where the on-screen keyboard is displayed. This comparison is based on a heuristic that allows slight variations in the keyboard's appearance (e.g., different language settings). If we can determine that a keyboard is displayed, we send tap events to the first four keys in the middle row (i.e., ASDF on a US layout) and the return/done key to dismiss the keyboard again. When no keyboard is detected, we advance the cursor to the next location and send a tap event. In either case, we wait a brief amount of time before repeating the procedure, to give the UI time to respond and complete animations. We empirically determined a wait time of 3 seconds to be sufficient.
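The loop just described can be summarized by the following pseudocode-level Python sketch. The capture, tap and keyboard-detection functions are placeholders standing in for the VNC-based primitives of the prototype, and the pixel-change threshold and key coordinates are assumptions made for illustration.

```python
# Pseudocode-level sketch of the GUI-exercising loop; capture_screen,
# send_tap and keyboard_visible are placeholders for the VNC-based primitives.

import time

CHANGE_THRESHOLD = 0.05              # assumed fraction of pixels that must differ
WAIT_SECONDS = 3                     # empirically determined settle time
KEYBOARD_KEYS = [(60, 400), (100, 400), (140, 400), (180, 400), (290, 450)]
# approximate positions of A, S, D, F and the return/done key (assumed layout)

def changed_fraction(prev, cur):
    if prev is None:
        return 0.0
    diff = sum(1 for a, b in zip(prev, cur) if a != b)
    return diff / float(len(cur))

def exercise_ui(capture_screen, send_tap, keyboard_visible, grid_points):
    prev = None
    for (x, y) in grid_points:                     # regular grid over the screen
        cur = capture_screen()
        if changed_fraction(prev, cur) > CHANGE_THRESHOLD and keyboard_visible(cur):
            for kx, ky in KEYBOARD_KEYS:           # type a few keys, then dismiss
                send_tap(kx, ky)
        else:
            send_tap(x, y)                         # advance to the next grid location
        prev = cur
        time.sleep(WAIT_SECONDS)                   # let the UI settle and animations finish
```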
To avoid hitting the same UI elements repeatedly, we keep a greyscale image with dimensions identical to the frame buffer in memory. We call this greyscale image a clickmap. For each tap event, we perform a fuzzy flood-fill algorithm on the screenshot, originating from the tap coordinates, to determine the extents of the element we have tapped. That approach works well for monochrome or slightly shaded elements, like the default widgets offered by Interface Builder for iOS applications. We mark these extents in the clickmap to keep track of the elements we have already accessed. That is, before a tap event is actually sent, the clickmap is consulted. If the current coordinates belong to an element we already clicked, no tap event will be sent. Therefore, we avoid hitting the same element repeatedly, especially when the element in question is the background. Whenever we have a new screenshot containing changes, we clear the changed area in the clickmap so that new UI elements that might have appeared will be exercised too.
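A minimal sketch of this clickmap bookkeeping is shown below. The fuzzy flood fill is implemented as a simple tolerance-based breadth-first search over greyscale pixel values, which only approximates the prototype's behaviour; the tolerance value and pixel representation are assumptions.

```python
# Sketch of the clickmap bookkeeping; the fuzzy flood fill is a tolerance-based
# BFS over greyscale values and only approximates the prototype's behaviour.

from collections import deque

def fuzzy_flood_fill(pixels, width, height, x0, y0, tolerance=10):
    """Return the coordinates of the (roughly) uniformly coloured element
    containing (x0, y0); pixels[(x, y)] holds a greyscale value."""
    seed = pixels[(x0, y0)]
    seen, queue = {(x0, y0)}, deque([(x0, y0)])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < width and 0 <= ny < height and (nx, ny) not in seen
                    and abs(pixels[(nx, ny)] - seed) <= tolerance):
                seen.add((nx, ny))
                queue.append((nx, ny))
    return seen

class Clickmap:
    def __init__(self):
        self.visited = set()

    def already_clicked(self, x, y):
        return (x, y) in self.visited          # consulted before sending a tap

    def mark(self, pixels, width, height, x, y):
        self.visited |= fuzzy_flood_fill(pixels, width, height, x, y)

    def clear(self, changed_coords):
        self.visited -= set(changed_coords)    # newly appeared elements become clickable again
```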
Evaluation
In this section we present the results we obtained during the evaluation of our prototype implementation. For the purpose of this evaluation we created a sample application that contains different user interface components such as buttons, text fields, and on/off switches. A screenshot of the application is depicted in Figure 1. The rationale for creating such a sample application is that by creating the application, we got intimately familiar with its functionality and operation. Furthermore, our experience with the static analysis of iOS applications allowed us to build corner cases into the application where we know static analysis can only provide limited results. For example, the test application would dynamically generate a new text field, once a specific button is clicked.
Method Call Coverage
PiOS identified a total of 52 calls to the dynamic dispatch function. Once our test application is launched, no further method calls are made unless the different user interface elements are exercised. This is common behavior for iOS applications that are heavily user interface driven. During application startup only 8 of the 52 method calls (i.e., approximately 15%) are executed. This underlines that dynamic analysis approaches that do not take the GUI of an iOS application into account can only provide limited information about the application's functionality. Moreover, the methods that can be observed during program startup are generically added by Apple's build system and are almost identical for all applications targeting the iOS platform. Therefore, the valuable insights into application behavior that can be derived solely from the program startup phase are limited at best. By executing our prototype to exercise the user interface of our test application, we could observe 36 methods being called. That corresponds to 69% of all methods being covered. Most importantly, we were able to exercise most of the functionality that is not part of the initial startup procedures for the applications. Our system did not observe the remaining 16 methods being invoked. One method would only be called if the user interface was in a very specific state.5 However, our technique to exercise the user interface did not put the application in that state. All remaining 15 calls were part of the shutdown procedures (e.g., destructors) for the application. However, these methods are only invoked if the application terminates voluntarily. If the user presses the home button on the device, the application is terminated and no cleanup code is executed. As there is no generic way to determine whether a certain user interface element will exit an application, our analysis terminates the application by tapping the home button. Thus, we did not observe these shutdown procedures being executed.
Comparison with Static Analysis
There are different possibilities on how to compare the static and dynamic analysis results. For example, static analysis covers all possible and thus also infeasible execution paths. Dynamic analysis can only observe the code paths that are executed while the program is analyzed. Therefore, we first evaluate how many and which methods get invoked during dynamic analysis. To compare the dynamic and static analysis results we first analyzed our test application with PiOS.
Static analysis results. PiOS detected 52 calls to the objc_msgSend dynamic dispatch function. In 49 cases (i.e., 94%) PiOS was able to statically determine the class of the receiver object and the value of the selector. Furthermore, PiOS validates these results by looking up the class of the receiver in the class hierarchy. A method call is successfully resolved if the class exists in the class hierarchy and this class or one of its superclasses implements a method whose name corresponds to the value of the selector.
The remaining three method calls that PiOS was unable to resolve are part of the function that dynamically creates and initializes the new text field. In our sample application this action is performed the first time the Reset button is clicked.
Comparison. We compared the receiver and selector for the 36 method calls present in the static and dynamic analysis reports. In all but 3 instances the results were identical. In two of these three instances PiOS identified the receiver type as NSString, whereas the dynamic analysis indicates that the actual type is CFConstantStringClassReference. However, according to Apple's documentation 6 these two types can be used interchangeably. In the third instance PiOS identified the receiver as NSString and dynamic analysis indicates the correct type to be NSPlaceHolderString. The difference is that for NSPlaceHolderString the initialization is not complete yet. This inconsistency is plausible as the only time this happened is in a call to initStringWithFormat to finish initialization.
Method call arguments.
For 12 calls PiOS was able to determine the types of the arguments that get passed to the invoked methods. Thus, in these cases the dynamic analysis script is also logging information pertaining to these arguments. More precisely, for arguments of type NSString or any of its related types, a string representation of the argument is printed in the log file. For all other types, the address of the corresponding argument is printed instead.
Improvements for static analysis.
As mentioned previously, static analysis is sometimes unable to compute the target method of an objc_msgSend call. More precisely, if PiOS is unable to statically determine the type of the receiver object or the value of the selector, PiOS cannot resolve the target method. This was the case for 3 method calls in our sample application. All 3 instances were part of the function that dynamically creates the additional text field. However, the dynamic analysis exercised this functionality and the analysis report contains the type of the receiver object and the value of the selector. Thus, the results from dynamic analysis can be leveraged to increase the precision of our static analysis system.
Limitations and Future Work
Our proof-of-concept implementation relies on gdb to collect information about the program during execution. Thus, it is easily detectable by applications that try to determine whether they are being analyzed. Therefore, we plan to evaluate stealthier ways of performing the necessary monitoring tasks in the future. Furthermore, our current implementation does not take system calls into account. However, Cydia's MobileSubstrate framework, for example, could be a good starting point to investigate system call monitoring on iOS devices.
Although our method of exercising the user interface resulted in high code coverage compared to exercising the program startup functionality alone, we see room for improvement in this area too. Our current approach does not handle highly non-uniformly colored (i.e., custom-designed) user interface elements correctly. More precisely, our system does not detect the boundaries of such elements reliably and therefore might tap the same element multiple times. Furthermore, we only consider tap events and omit all interactions that use swipe or multi-touch gestures. Therefore, to improve the automatic user interface interaction, one could try to extract information about UI elements from the application's memory at runtime. This would entail getting a reference to the current UIView element and finding a way to enumerate all UI elements contained in that view. We plan to investigate such techniques in future work.
Related Work
The wide range of related work in dynamic analysis mainly focuses on desktop operating systems. Due to the challenges mentioned above, these techniques are not readily applicable to mobile platforms. For brevity, we refer the reader to [START_REF] Egele | A survey on automated dynamic malware analysis techniques and tools[END_REF] for such techniques and focus this section on related work that performs analysis for mobile platforms. TaintDroid [START_REF] Enck | Taint-Droid: an information-flow tracking system for realtime privacy monitoring on smartphones[END_REF] is the first dynamic analysis system for Android applications. However, it is limited to applications that execute in the Dalvik virtual machine. Thus, by modifying the open source code of the virtual machine, the necessary analysis steps can be readily implemented. However, the authors state that "Native code is unmonitored in TaintDroid". Therefore, systems like TaintDroid are not applicable to the iOS platform, as iOS applications execute on the hardware directly. That is, there is no middle layer that can be instrumented to perform analysis tasks.
Mulliner et al. [START_REF] Mulliner | Using Labeling to Prevent Cross-Service Attacks Against Smart Phones[END_REF] use labeling of processes to prevent cross-service attacks on mobile devices. However, this approach relies on a modified Linux kernel to check and verify which application is accessing which class of devices. Such checks only reveal the communication interfaces that are used by an application. Applications on the AppStore, however, are prevented from accessing the GSM modem, and thus can only access the network or Bluetooth components. Thus, such a system is too coarse grained to effectively protect iOS users. Furthermore, the source code for iOS is not available, and thus the necessary modifications to the operating systems' kernel cannot be made easily.
In previous work we presented PiOS [START_REF] Egele | PiOS: Detecting Privacy Leaks in iOS Applications[END_REF], an approach to detect privacy leaks in iOS applications using static analysis. This work demonstrated that it is indeed common for applications available in the AppStore to transmit privacy-sensitive data to the network, usually without the user's consent or knowledge. Furthermore, Enck et al. [START_REF] Enck | Understanding android security[END_REF] presented Kirin, a system that statically assesses whether the permissions requested by an Android application collide with the user's privacy assumptions.
Conclusion
The popularity of Apple's iOS and the AppStore attracted developers with malicious intents. Recent events have shown that malicious applications available from the AppStore are capable of breaching the user's privacy by stealing privacy sensitive information, such as phone numbers, address book contents, or GPS location data from the device. Although static analysis techniques have shown that they are capable of detecting such fraudulent applications, we are convinced that attackers will employ obfuscation techniques to thwart static analysis. Therefore, this paper discusses the challenges and open problems that have to be overcome to provide comprehensive dynamic analysis tools for iOS applications. We tackled two of these challenges by providing prototype implementations of techniques that are able to generate method call traces for iOS applications, as well as exercising application user interfaces. Our evaluation highlights the necessity for taking user interfaces into account when performing dynamic analysis for iOS applications.
Fig. 1. A screenshot of the sample application. The lower text field is dynamically created upon the first click to the Reset button.
http://developers.sun.com/solaris/docs/o-s-dtrace-htg.pdf
http://www.gnu.org/s/gdb/
http://cydia.saurik.com/info/veency/
http://code.google.com/p/python-vnc-viewer/
If the switch has been switched from the default on setting to off and the Reset button is clicked afterwards, a message is sent to animate the switch back to its default on setting.
http://developer.apple.com/library/mac/#documentation/CoreFoundation/ Reference/CFStringRef/Reference/reference.html
Acknowledgements
This work was partially supported by the ONR under grant N000140911042 and by the National Science Foundation (NSF) under grants CNS-0845559, CNS-0905537, and CNS-0716095. We would also like to thank Yan Shoshitaishvili for his help with the evaluation device. | 32,557 | [
"1003325",
"1003326",
"1003327",
"1003328"
] | [
"19098",
"300693",
"300693",
"300693"
] |
01481508 | en | [
"info"
] | 2024/03/04 23:41:48 | 2011 | https://inria.hal.science/hal-01481508/file/978-3-642-27585-2_7_Chapter.pdf | Marine Minier
email: [email protected]
-W Phan
email: [email protected]
Energy-Efficient Cryptographic Engineering Paradigm
We motivate the notion of green cryptographic engineering, wherein we discuss several approaches to energy minimization or energy efficient cryptographic processes. We propose the amortization of computations paradigm in the design of cryptographic schemes; this paradigm can be used in line with existing approaches. We describe an example structure that exemplifies this paradigm and at the end of the paper we ask further research questions for this direction.
Post-proceedings printed on environmental-friendly acid-free paper to maximize durability and thus minimize printing effort and need for new paper; in line with our amortization of computations paradigm.
1 Motivation: the Future is Green
As worldwide demand for energy increases, our present world realizes that energy resources are valuable assets, and researchers simultaneously aim to develop techniques to generate energy from renewable resources and to ensure efficient usage of energy so that energy derived notably from non-renewable energy resources is not unnecessarily wasted.
The ICT sector contributed 2% of global carbon emissions in 2008 [START_REF]SMART 2020: Enabling the Low Carbon Economy in the Information Age[END_REF], i.e. 830 MtCO2e, and this is expected to increase to 1.43 GtCO2e by 2020. In addition to making devices more energy-efficient and thus reducing the carbon footprint, ICT stakeholders aim to utilize ICT to enable energy efficiency across the board in other non-ICT areas, in order to achieve energy savings of 15% of global emissions (7.8 GtCO2e) by 2020. Thus, not only will ICT substantially influence global energy consumption, ICT mechanisms and networking will also feature prominently in other non-ICT areas for better utilization of energy.
In the networking context, telecoms providers are moving to energy-efficient equipment and networks, and it is expected that by 2013 such equipment will be 46% of global network infrastructure [START_REF] Research | Green' Telecom Equipment will Represent 46% of Network Capital Expenditures by 2013[END_REF]. For many years now, networking researchers have investigated ways to design and implement energy-efficient devices, networks and mechanisms ranging from lightweight resource-constrained devices like RFIDs and wireless sensor nodes to energyaware routing algorithms. Within network security, researchers have investigated impacts on energy due to authentication and key exchange protocols [START_REF] De Meulenaer | On the Energy Cost of Communication and Cryptography in Wireless Sensor Networks[END_REF][START_REF] Delgado-Mohatar | An Energy-Efficient Symmetric Cryptography based Authentication Scheme for Wireless Sensor Networks[END_REF], notably for wireless sensor networks (WSNs) where prolonging battery life is of utmost importance. Researchers have also compared the energy consumption of different cryptographic mechanisms e.g. RSA and ElGamal implemented in WSNs [START_REF] Kayalvizhi | Energy Analysis of RSA and ELGAMAL Algorithms for Wireless Sensor Networks[END_REF], pairing based cryptography notably key exchange [START_REF] Szczechowiak | On the Application of Pairing based Cryptography to Wireless Sensor Networks[END_REF]; and shown that public-key cryptography has non-negligible impact on sensor lifetime [START_REF] Bicakci | The Impact of One-Time Energy Costs on Network Lifetime in Wireless Sensor Networks[END_REF].
Most of these works consider the energy efficiency of implementations on specific hardware, or compare the energy consumption of cryptographic primitives on certain platforms. Few have treated how to design a cryptographic scheme whose structure is well suited for energy efficiency. Indeed, cryptographic schemes are central to security protocols in different layers of the network protocol stack, e.g. SSL/TLS at the transport layer and IPsec at the network layer, so if one aims to design energy-efficient security protocols for these layers, it makes sense to ensure that the underlying cryptographic schemes used by these energy-efficient protocols are also designed with energy efficiency in mind. To the best of our knowledge, only a couple of such results exist, as discussed next.
Indeed, the solution to the energy efficiency problem should be a holistic one e.g. considering all levels of abstraction, with no obvious weak link in the green sense.
Related Work. Kaps et al. [START_REF] Kaps | Cryptography on a Speck of Dust[END_REF] made recommendations for cryptographic design suited for ultra low-power settings:
* scalability: able to efficiently scale between bit serial and high parallelism so that implementers can trade off speed for power
* regularity: only a few different primitives should be used
* multihashing and multiencryption: sequentially calling a simpler and seemingly less secure hash function or encryption multiple times to achieve higher security. This approach is similar to the iterated structure of constructing hash functions based on compression functions and ciphers based on round functions
* precomputation/offline: the bulk of the processing is performed offline, before going online to process the incoming input so there is less latency, thus less energy wasted during the wait.
[13] also contrasted different alternatives to implementing basic functions e.g. algebraic representation versus table lookup, polynomial arithmetic for hardware implementation, constant shifts/rotations vs data-dependent ones, logic functions vs arithmetic ones. These are more in terms of choosing the proper basic functions to minimize energy consumption. Meanwhile, Troutman and Rijmen [START_REF] Troutman | Green Cryptography: Cleaner Engineering through Recycling[END_REF] advocate a green approach to the design process, i.e. recycling primitives since long established ones garner more trust. They illustrated the concept with a discussion on AES' pedigree and how AES-type primitives were subsequently recycled in some manner in many SHA-3 candidates.
Rogaway and Steinberger [START_REF] Rogaway | Security/efficiency Tradeoffs for Permutation-based Hashing[END_REF][START_REF] Rogaway | Constructing Cryptographic Hash Functions from Fixed-key Blockciphers[END_REF] show how to reuse block ciphers to construct hash compression functions, essentially emphasizing on the minimalist approach while at the same time attaining assurance (given that the reused primitive has been in existence for some time and therefore seen considerable public analysis). The minimalist approach gives rise to less hardware or smaller memory footprint. More interestingly, in [START_REF] Rogaway | Constructing Cryptographic Hash Functions from Fixed-key Blockciphers[END_REF], they consider the approach of fixing the underlying block cipher's key input so that the corresponding block cipher key schedule can be computed offline instead of in online streaming mode, thus achieving better efficiency.
Essentially, the approach to recycle primitives advocated by [START_REF] Kaps | Cryptography on a Speck of Dust[END_REF][START_REF] Troutman | Green Cryptography: Cleaner Engineering through Recycling[END_REF] is aimed to optimize the efforts (and energy) of designers that had already been spent on designing a primitive, and to optimize the code size of the primitive's implementation. Indeed, recycling a primitive means less energy needs to be spent to design a new one, and less code size is taken up by having multiple different primitives since the same primitive code can be called multiple times instead of different codes for different primitives.
The approach in [START_REF] Kaps | Cryptography on a Speck of Dust[END_REF] to perform the bulk precomputation minimizes online time (i.e. time when actual input data is incoming and needs to be processed). This approach is similar to concepts in cryptography relating to remotely keyed encryption [START_REF] Weis | Remotely Keyed Encryption with Java Cards: a Secure and Efficient Method to Encrypt Multimedia Streams[END_REF] and online/offline signatures [START_REF] Shamir | Improved Online/Offline Signature Schemes[END_REF] for the application of real-time streaming. Less online time means less time spent waiting on the transmitted input, which may cause latency and therefore unnecessary usage of energy in the meantime.
Minimal Energy Consumption and Lightweightness vs Energy Efficiency. It should be noted that minimizing energy consumption or emphasizing lightweight design does not necessarily imply energy efficiency, and vice versa. By definition, energy efficiency refers to efficient use of energy, without unnecessary wastage. A lightweight cryptographic scheme is designed to use low-power computations, have small code size or require small memory space, yet it may involve multiple different low-power operations; so from the viewpoint of recycling primitives, this approach is not energy efficient. In contrast, a design that efficiently reuses primitives may have higher power requirements than a lightweight scheme if the primitive itself involves complex operations that have high power consumption.
This Paper. Here, we first discuss the notion of green cryptographic engineering, which covers approaches for energy-minimizing and/or energy-efficient cryptographic processing. We also propose an approach in which the energy consumed in performing any substantial computation is optimized by amortizing the output of the computation over different states of the scheme: we denote this as amortization of computations. We describe the relevance of this notion in relation to recent cryptographic schemes proposed in the literature. We conclude by posing further questions to be answered, some of which we are currently investigating.
2 Green Cryptographic Engineering Paradigm
Being green means being energy efficient, and we will use these terms interchangeably. Efficient refers to being fit for purpose and not being wasteful: both in the sense that no energy is drawn unnecessarily and that energy once drawn should be put to good use. Energy here is meant in the sense of any kind of resource. To appreciate this notion of resources, it is worth being explicit about what kinds of resources one may be interested in not wasting:
- human effort: within a cryptographic engineering paradigm, we can think of making efficient use of the effort (and, by implication, the time spent) of cryptographic designers, cryptanalysts, implementers and users of cryptographic schemes.
- computational effort: efficient use of this resource can be in the sense of minimizing the amount of computation required (and therefore the cost of access to such computational power), or making full use of the output of any computation.
- space: space refers to the capacity required to store or execute the scheme's implementations. This includes, e.g., ROM or RAM size, especially for embedded computing platforms.
- energy supply: computational machines require electrical power to run, so being green here could mean minimizing the amount of electrical power required by such computations.
- time: the issue of interest here is to minimize the amount of time required to perform cryptographic operations. Towards that aim, researchers could investigate parallelizable structures that reduce the time-per-output ratio.
Cryptographic engineering, i.e. the process life cycle for design, analysis, implementation and use of cryptographic schemes and cryptographic based security systems, can be approached with a green strategy, bearing in mind the aim to optimize usage of the above listed resources.
Each stage of the process can be green in the following directions:
• minimized energy consumption: there is considerable research in this regard, notably the work on designing lightweight cryptographic schemes for resource-constrained devices such as RFID and wireless sensor networks; lightweight in the sense of energy consumption, code size and/or memory requirements.
Applying this approach to cryptographic scheme design, lightweight and/or low-energy operations can be selected for use as basic building blocks within the cryptographic scheme, such as logical operations, and addition/rotation/XOR (ARX) constructions rather than multiplications.
• amortization of primitives: this approach is essentially in terms of recycling primitives that have already been designed or implemented [START_REF] Kaps | Cryptography on a Speck of Dust[END_REF]. This efficient usage of primitives leads to efficient code size, since the size remains the same irrespective of how many times the primitive is run. The regularity and multihashing/encryption suggestions of [START_REF] Kaps | Cryptography on a Speck of Dust[END_REF] are of this type of approach. This approach is implicit in typical cryptographic designs for efficiency and simplicity, e.g. iterative block cipher structures, feedback shift register based stream ciphers, Merkle-Damgård hash function structures, modular design paradigms i.e. constructions based on fundamental building blocks, and modes of operation that transform existing primitives into other primitives e.g. stream cipher, hash function's compression function or message authentication code from block cipher. This approach is also implicit in the cryptanalytic community, where the effort invested in discovering new cryptanalytic techniques is amortized via its application (at times via some adaptation or generalization) to multiple schemes of the same type or of different types, e.g. differential cryptanalysis and the notion of distinguishers initially invented for block ciphers, later applied to stream ciphers and hash functions. Towards a longer term aim, one can design fundamental operations that can form generic structures for use as building blocks within different types of cryptographic schemes, e.g. the inversion based Sbox of AES used to construct the AES round function, the LEX stream cipher and the AES-inspired SHA-3 candidates. Or design common primitives used for a multi-type cryptographic scheme, e.g. in the case of AR-MADILLO [START_REF] Badel | AR-MADILLO: A Multi-purpose Cryptographic Primitive Dedicated to Hardware[END_REF].
• input-independent bulk (pre)computation: the approach here is to partition the computation effort into input-dependent and input-independent functions, where the bulk of the computations is placed in the input-independent functions so that most computations can be done in offline mode, or so that the scheme scales well even for inputs that are large or time-consuming to receive, thus drawing less energy. This approach is exemplified in hash function designs, e.g. Fugue [START_REF] Halevi | The Hash Function "Fugue[END_REF], where the security of such designs relies on heavy-weight finalization functions such that the iterated message block-dependent compression functions can be designed to be less computationally intensive.
• amortization of computations: the approach here is to reuse the effort put into a computation, more specifically, reflected in its output, multiple times at that state of output and in subsequent states in feedforward fashion. Doing so allows subsequent states to be more directly influenced by the current state 'for free', i.e. without extra computations to produce that state;
and the net effect of this is that we then require fewer iterations overall in order to retain a similar level of security. This approach is exemplified in the structure we propose later in this paper. Our structure is a generic one that subsumes feedforward-based constructions in the literature, e.g. block cipher based hash compression functions like Davies-Meyer and Miyaguchi-Preneel are special cases when the state is only reused once, as well as more recent schemes that we will discuss in more detail in section 4.
During the conference, we hope to engage attendees in a discussion of other possible green approaches to the cryptographic engineering process, or whether there are any other instantiations of the above-listed approaches in existing cryptographic schemes.
3 A Computation-Amortizing Structure
One energy-efficient cryptographic structure that instantiates the amortization of computations approach is as follows. As with conventional design strategy, the structure is iterative, based on a round function F(s_i, s*_{i-}), where s_i denotes the conventional type of input state to the function and s*_{i-} denotes one or more previous states with i- < i. Within F(), we have the following steps in sequence:
1. s_i ← heavy(s_i)
2. s_{i+1} ← light(s_i, s*_{i-})
where heavy() is a computationally intensive operation, e.g. multiplication, while light() is a lightweight operation, e.g. logical functions. This way, from the computational viewpoint, the input vector s*_{i-} does not substantially add to the computational requirements (and thus energy) of F, and yet from a cryptographic viewpoint, it adds substantially to the mixing process within F.
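A minimal Python sketch of this structure is given below, assuming toy placeholder choices for heavy() (a multiply-based mix) and light() (XOR of fed-forward earlier states). The placeholders carry no cryptographic strength; the sketch is only meant to make the data flow of the amortization explicit.

```python
# Minimal sketch of the computation-amortizing round structure: heavy() runs
# once per round, while strictly earlier states are folded back in with a
# cheap light() step. heavy() and light() are toy placeholders.

MASK = (1 << 32) - 1

def heavy(s):
    # stand-in for a computationally intensive operation (e.g. a multiplication)
    return (s * 0x9E3779B1 + 0x7F4A7C15) & MASK

def light(s, reused_states):
    # stand-in for a cheap mixing step: XOR of fed-forward earlier states
    for r in reused_states:
        s ^= r
    return s

def amortizing_rounds(s0, rounds, lag=2):
    states = [s0]
    for _ in range(rounds):
        s = heavy(states[-1])
        reused = states[:-1][-lag:]     # previous states, reused "for free"
        states.append(light(s, reused))
    return states[-1]

print(hex(amortizing_rounds(0x1234, rounds=8)))
```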
The basic idea here is that an intermediate state s i after computation function F , is reused several times at the same state location (i.e. spatial amortization) and also fedforward in time (i.e. temporal amortization) to be reused in subsequent states s j (j > i), and aside from the first time that the state is used, subsequent uses of that state will involve non-computationally intensive functions to mix that state back in. The gist is to fully utilize (and reuse) any state including the outputs of any function, as many times as possible; essentially using the 'copy' and 'feedforward' (these are computationally non-intensive) operations.
Amortization then comes from the fact that the light inputs are actually the reusing of states from other parts of the structure (so we kind of have them for free without having to do extra computations to get them), and the way that they should influence the F function should be in a non-computationally intensive manner e.g. XOR, logical operations.
In this way, it is intuitive that the required number of rounds can be less than conventional structures of F that process only the current state s i , while maintaining the security level.
In analyzing this kind of construction, a new measure so-called points of influence (POI) can be considered. This notion measures how many other state points are immediately and directly affected by a change in a state at one point within a cryptographic scheme. Indeed, the probability of a differential distinguisher should be reduced given a higher points of influence, because the probability is a function of the number of active points affected by a difference in a state.
A preliminary step concerning diffusion, when looking at F as a round function, could be to consider that the injection of the s*_{i-} states looks like a subkey addition in the block cipher context or a message reinjection in the hash compression function context. Indeed, it turns out that block cipher key addition and hash function message injection operations are typically designed to be light, thus fitting nicely with this convention. Thus, a first insight concerning diffusion could be to look at the diffusion properties of the key into the cipher. A parallel could be made to evaluate the diffusion when considering s*_{i-}, taking into account the points of influence. Concerning hash functions, the goal would be to readapt security proofs to the case of several intermediate message dependencies.
From this notion of diffusion, which is among the most important concepts in cryptography (the other one being, of course, confusion), we could derive the probability of success of an adversary in the context of differential or linear distinguishers.
4 Computation Amortization in Practice
Our computation-amortizing structure in section 3 can in fact fit different kinds of cryptographic schemes in literature, including hash functions, message authentication codes (MACs) and stream ciphers.
For hash function structures, the feedforward strategy, i.e. to feed a state (output from some function) forward to be combined with a future state, is well established. For instance, the PGV [START_REF] Preneel | Hash Functions based on Block Ciphers: a Synthetic Approach[END_REF] structure for constructing hash compression functions from block ciphers is a popular example of this strategy in practice.
This strategy is also exemplified in recent hash function structures constructed from compression functions where the feedforward enters two (instead of just one) future states. These include the 3C and 3C+ [START_REF] Gauravaram | Constructing Secure Hash Functions by Enhancing Merkle-Damgård Construction[END_REF] constructions, where up to two copies of each state are fedforward to the final state for combination; and the ESS [START_REF] Lehmann | A Modular Design for Hash Functions: Towards Making the Mix-Compress-Mix Approach Practical[END_REF] and its predecessor [START_REF] Shrimpton | Building a Collision-Resistant Compression Function from Noncompressing Primitives[END_REF], where one of the states is forwarded for combination with two different states.
More interestingly, related compression function constructions appear in independent work due to Rogaway and Steinberger in [START_REF] Rogaway | Security/efficiency Tradeoffs for Permutation-based Hashing[END_REF][START_REF] Rogaway | Constructing Cryptographic Hash Functions from Fixed-key Blockciphers[END_REF]. For some input x, the function of [START_REF] Rogaway | Security/efficiency Tradeoffs for Permutation-based Hashing[END_REF] is defined as:
for i ← 1 to k do
    s_i ← f_i(x, s_1, . . . , s_{i-1})
    y_i ← π_i(s_i)
endfor
return g(x, s_1, . . . , s_k)
where f_i denotes some function and π_i denotes some permutation. If we let the f_i and g functions be lightweight functions (and indeed, the particular compression function construction in [START_REF] Rogaway | Constructing Cryptographic Hash Functions from Fixed-key Blockciphers[END_REF] is one where such f_i and g are linear functions), i.e.
f_i = a_0 x + Σ_{j=1}^{i-1} a_j s_j ,
where a_j (for j ∈ {0, . . . , i-1}) is some constant and we let g = f_{k+1}, then the resultant construction of [START_REF] Rogaway | Constructing Cryptographic Hash Functions from Fixed-key Blockciphers[END_REF] can be seen as an instance of our computation-amortizing structure of section 3, where our heavy() function is instantiated with a permutation π_i.
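To illustrate how the linear f_i and g leave the permutations as the only heavy component, the following Python sketch gives a toy instantiation in the spirit of the construction above: the permutation outputs are fed forward into the later linear combinations and the final g (the exact indexing convention differs slightly between the cited papers). The permutation, the constants a_j and the word size are invented for illustration and carry no cryptographic strength.

```python
# Toy instantiation of the permutation-based construction with linear f_i and
# g: the fixed permutation pi() plays the role of heavy(), while the linear
# combinations merely reuse earlier states. All constants are invented.

MASK = (1 << 32) - 1

def pi(s, i):
    # placeholder fixed permutation (rotate/multiply toy, not a real one)
    s = (s * (0x01000193 + 2 * i + 1)) & MASK
    return ((s << 7) | (s >> 25)) & MASK

def linear(x, states, coeffs):
    acc = (coeffs[0] * x) & MASK
    for a, s in zip(coeffs[1:], states):
        acc ^= (a * s) & MASK              # cheap reuse of earlier states
    return acc

def compress(x, k=3):
    coeffs = [1, 3, 5, 7]                  # assumed constants a_0 .. a_k
    states = []
    for i in range(1, k + 1):
        s_i = linear(x, states, coeffs[:i])    # f_i(x, ...): linear, hence light
        states.append(pi(s_i, i))              # permutation output, fed forward
    return linear(x, states, coeffs)           # g(x, ...): again linear

print(hex(compress(0xDEADBEEF)))
```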
In terms of MAC structures, the SS-NMAC scheme of [START_REF] Dodis | Message Authentication Codes from Unpredictable Block Ciphers[END_REF] builds on the structure of [START_REF] Shrimpton | Building a Collision-Resistant Compression Function from Noncompressing Primitives[END_REF] and retains the feedforward strategy. More specifically, for about half of its internal functions, the output state is fedforward onto two other states for combination.
For stream ciphers, which are traditionally represented by an updating function f that updates the content of an internal state s_i and a filtering function g that filters the content of s_i, the model proposed for the computation-amortizing structure leads to intrinsically modifying the stream cipher in the following way:
∇ If the updating function f takes the role of the computation-amortizing F function described in section 3, acting on s_i and on s*_{i-}, then the internal state becomes bigger. In this way, the f function could perhaps be lightened thanks to its bigger internal state. An example of a stream cipher that has a memory state is given in [START_REF] Berger | Software Oriented Stream Ciphers Based upon FCSRs in Diversified Mode[END_REF] for software cases.
∇ The case where the filtering function g takes the role of the F function must be studied carefully: at this time it is not clear that adding a light() part suffices to discard classical attacks, because the resistance of a stream cipher against traditional distinguishers mainly depends on the clever choice of g.
Another way to build stream ciphers is to use a block cipher in the so-called counter (CTR) mode of operation, where the keystream is generated by encrypting successive values of a counter, and each plaintext block is then XORed with a keystream block. As it is not possible to directly modify a block cipher using the computation-amortizing structure proposed in section 3 (due to the invertibility property required of conventional block cipher structures), we could try to modify the counter mode itself by reinjecting previous values. For example, we could derive such modes from MILENAGE, the one-block-to-many mode used in 3GPP [START_REF]Specification of the MILENAGE Algorithm Set: An Example Algorithm Set for the 3GPP Authentication and Key Generation Functions f1, f1*, f2[END_REF] which was proved to be secure in [START_REF] Gilbert | The Security of "One-Block-to-Many" Modes of Operation[END_REF].
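The following Python sketch shows one possible shape of such a modified counter mode, in which earlier keystream blocks are lightly reinjected into the block-cipher input. E() is a toy placeholder function rather than a real block cipher, and the reinjection rule, lag and word size are assumptions; the sketch is not MILENAGE and is only meant to indicate where the amortized feedforward would enter.

```python
# Sketch of a CTR-style keystream generator with reinjection of earlier
# outputs. E() is a toy placeholder, not a real block cipher; the exact
# reinjection rule is an assumption for illustration.

MASK = (1 << 64) - 1

def E(key, block):
    x = ((block ^ key) * 0x2545F4914F6CDD1D) & MASK
    return ((x << 13) | (x >> 51)) & MASK

def keystream(key, iv, nblocks, lag=1):
    out, prev = [], []
    for ctr in range(nblocks):
        feed = 0
        for p in prev[-lag:]:
            feed ^= p                          # light reuse of earlier keystream blocks
        block = E(key, ((iv + ctr) & MASK) ^ feed)
        out.append(block)
        prev.append(block)
    return out

print([hex(b) for b in keystream(key=0xA5A5, iv=0x1234, nblocks=4)])
```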
5 Open Research Questions
While cryptography, however energy-inefficient, may be unlikely to be the most inefficient component of a system, the right strategy should still be to approach this analogously to security's celebrated weakest-link property, i.e. to view a system as being only as energy-efficient as its most energy-inefficient component. Thus, as researchers work hard to design and implement network security systems and protocols that are energy-efficient, the cryptographic schemes that underlie these network security protocols should also be designed with energy efficiency as an explicit goal.
The green cryptographic engineering paradigm discussed in section 2 is a good start towards approaching this aim, so that the process of engineering cryptographic schemes (design, analysis, and/or implementation) can be performed while maintaining energy efficiency.
The motivation is clear. Security is already frequently seen as an impediment to performance, yet it needs to be there in view of constant threats of attacks. In a future where energy resources are increasingly scarce, security will become all the more undesirable if, in addition to trading off performance, it is also energy inefficient.
We conclude for now with some further research questions for this particular direction:
† Can we design cryptographic structures that are provably secure and provably energy efficient?
† Can we transform provably secure cryptographic structures into ones that are energy efficient, while still retaining provable security?
† How do we model energy efficiency in the provable sense? To this aim, we have started to formalize faulty and/or loss of energy models where the designer or the user is not malicious in the security sense but does not take any care towards the energy efficiency goal (because he could be lazy, ignorant or indifferent to energy efficiency). Then provable energy efficiency for a scheme is to show that energy loss only occurs with negligible probability.
† What provably energy efficient notions can be defined?
† What metrics can be used to measure energy efficiency of cryptographic schemes? Along the lines of the green cryptographic engineering paradigm approaches discussed in section 2, we suggest investigating metrics such as primitive multiplicity (the number of times a primitive is recycled), state multiplicity (the number of times a state is used elsewhere), points of influence (how many other state points are immediately affected when a change is made in one state).
"1084364",
"1003329"
] | [
"219748",
"303590"
] |
01481509 | en | [
"info"
] | 2024/03/04 23:41:48 | 2011 | https://inria.hal.science/hal-01481509/file/978-3-642-27585-2_8_Chapter.pdf | Inger Anne Tøndel
email: [email protected]
Åsmund Ahlmann Nyre
email: [email protected]
Towards a similarity metric for comparing machine-readable privacy policies
Current approaches to privacy policy comparison use strict evaluation criteria (e.g. user preferences) and are unable to state how close a given policy is to fulfilling these criteria. More flexible approaches to policy comparison are a prerequisite for a number of more advanced privacy services, e.g. improved privacy-enhanced search engines and automatic learning of privacy preferences. This paper describes the challenges related to policy comparison, and outlines what solutions are needed in order to meet these challenges in the context of preference-learning privacy agents.
1 Introduction
Internet users commonly encounter situations where they have to decide whether or not to share personal information with service providers. Ideally, users should make such decisions based on the content of the providers' privacy policy. In practice, however, these policies are difficult to read and understand, and are rarely used at all by users [START_REF] Jensen | Privacy practices of internet users: Self-reports versus observed behavior[END_REF]. Several technological solutions have been developed to provide privacy advice to users [START_REF] Cranor | User interfaces for privacy agents[END_REF][START_REF]Privacy Finder[END_REF][START_REF] Camenisch | Privacy and identity management for everyone[END_REF][START_REF] Levy | Improving understanding of website privacy policies with fine-grained policy anchors[END_REF]. A common approach is to have users specify their privacy preferences and compare these to the privacy policies of sites they visit. As an example, the privacy agent AT&T Privacy Bird [START_REF] Cranor | User interfaces for privacy agents[END_REF] displays icons to the user based on such a comparison, indicating whether the preferences are met or not. In general, these types of solutions provide a Yes/No answer to whether or not to accept a privacy policy. There is no information on how much the policy differs from the preferences. A policy that is able to fulfil all preferences except for a small deviation on one of the criteria will result in the same recommendation to the user as a policy that fails to meet all the user's requirements. The user is in most cases informed about the reason for the mismatch, and can judge for himself whether the mismatch is important or not. Still, there are situations where such user involvement is inefficient or impossible, and the similarity assessment must be made automatically.
Automatic comparison of privacy policies is important to be able to give situational privacy recommendations to users on the web. The Privacy Finder [START_REF]Privacy Finder[END_REF] search engine ranks search results based on their associated privacy practices. Policies are classified according to a predefined set of requirements and grouped into four categories. Thus, sites that are not able to fulfil one of the basic criteria, but offer high privacy protection on other areas will be given a low score. In order to provide more granularity and fair comparisons, a more flexible and accurate similarity metric is needed. Another application area of a similarity metric is for preference learning in user agents [START_REF] Tøndel | Learning privacy preferences[END_REF]. This is the application area that we focus on in this paper. To avoid having users manually specify their preferences, machine learning techniques can be utilised to deduce users' preferences based on previous decisions and experiences [START_REF] Berendt | Privacy in e-commerce: stated preferences vs. actual behavior[END_REF]. Thus, having accepted a similar policy before may suggest that the user is inclined to accept this one as well. Evidently, this approach requires a more precise mechanism to determine what constitutes a similar policy.
Automatic comparison of privacy policies is particularly complicated due to the subjective nature of privacy [START_REF] Bagüés | Personal privacy management for common users[END_REF]. What parts of a policy are most important is dependent on the user attitude and context, and will influence how the similarity metric is to be calculated. In this paper we investigate the difficulties of defining a similarity metric for privacy policy comparison in the context of automatic preference learning. Several privacy policy languages are available, examples being P3P [START_REF]W3C. Platform for Privacy Preferences[END_REF], PPL [START_REF] Trabelsi | Second release of the policy engine[END_REF] or XACML [START_REF]OASIS eXtensible Access Control Markup Language (XACML)[END_REF]. Throughout this paper we use P3P in the examples to illustrate challenges as well as potential solutions, but our work is not restricted to P3P. The focus lies on the high-level concepts that need to be solved rather than the particular language dependent problems. The remainder of this paper is organised as follows. Section 2 gives an introduction to Case-Based Reasoning (CBR) and how it can be used to enable user agents to learn users' privacy preferences. Section 3 provides an overview of existing similarity or distance metrics that can be used for comparing policies. Section 4 describes the challenges of policy comparison in more detail, and takes some steps towards a solution. Then Section 5 discusses the implications of our suggestions, before Section 6 concludes the paper.
2 Case-Based Reasoning for privacy
Anna visits a website she has not visited before. Anna's privacy agent tries to retrieve various information on the website, including its machine-readable privacy policy. Then the agent compares its knowledge of the website with its knowledge of Anna's previous user behaviour. In this case, the agent warns Anna that the privacy policy of the website allows wider sharing than what Anna has been known to accept in the past. Anna explains to the agent that she will accept the policy since the service offered is very important to her. The agent subsequently records the decision and explanation to be used for future reference.
Case Based Reasoning (CBR) [START_REF] Kolodner | An introduction to case-based reasoning[END_REF][START_REF] Aamodt | Case-based reasoning: Foundational issues, methodological variations, and system approaches[END_REF] resembles a form of human reasoning where previously experienced situations (cases) are used to solve new ones. The key idea is to find a stored case that closely resembles the problem at hand, and then adapt the solution of that problem. Figure 1 gives an overview of the main CBR cycle.
Fig. 1. CBR cycle [START_REF] Kolodner | An introduction to case-based reasoning[END_REF]
First, the reasoner retrieves cases that are relevant for the new situation. Then the reasoner selects one or a few cases (a ballpark solution) to use as a starting point for solving the new situation. Then this ballpark solution is either adapted so that it fits the new situation better, or is used as evidence for or against some solution. The solution or conclusion reached is then criticised before it is evaluated (i.e. tried out) in the real world. It is the feedback that can be gained in the evaluation step that allows the reasoner to learn. In the end, the new case is stored to be used as a basis for future decisions. Central to the CBR approach is the retrieval of relevant cases to use as a basis for making decisions. The prevailing retrieval algorithm is K-Nearest Neighbour (KNN) [START_REF] Mitchell | Machine Learning[END_REF], which requires a definition of what is considered the nearest case. In a privacy policy setting this translates to finding the most similar privacy policies, which is the main focus of this paper.
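As a sketch of how the retrieval step could look once a similarity metric is available, the following Python snippet performs K-Nearest-Neighbour retrieval over stored cases and derives a tentative recommendation. The case representation, the value of k and the voting rule are illustrative assumptions; the similarity function itself is exactly what the remainder of this paper is concerned with.

```python
# Sketch of KNN retrieval for a preference-learning privacy agent. The case
# structure and voting rule are assumptions; `similarity` is the policy
# similarity metric discussed in the rest of the paper.

import heapq

def retrieve(case_base, new_policy, similarity, k=5):
    """case_base: list of (policy, decision) pairs from earlier situations."""
    scored = ((similarity(new_policy, policy), decision)
              for policy, decision in case_base)
    return heapq.nlargest(k, scored, key=lambda pair: pair[0])

def suggest(case_base, new_policy, similarity, k=5):
    neighbours = retrieve(case_base, new_policy, similarity, k)
    accepts = sum(1 for _, decision in neighbours if decision == "accept")
    return "accept" if accepts > len(neighbours) / 2.0 else "ask user"
```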
3 Existing distance metrics
There are several existing metrics for computing the difference (or distance) between text strings, vectors, objects and sets and these are often referred to as distance metrics. We have so far talked about similarity metrics which are really just the inverse of the distance metrics. That is, as the distance increases, the corresponding similarity decreases. However, in order to refer to the different metrics in their original form, we will use the term distance metric throughout this section.
Before we survey existing metrics, it is important to clarify what a distance metric actually is. A distance metric is a function d on a set M such that d : M × M → R, where R is the set of real numbers. Further, the function d must satisfy the following criteria for all x, y, z ∈ M [START_REF] Bozkaya | Distance-based indexing for high-dimensional metric spaces[END_REF]:
d(x, y) = d(y, x) (symmetry) (1)
d(x, y) > 0 ⇐⇒ x ≠ y (non-negative) (2)
d(x, x) = 0 (identity) (3)
d(x, y) ≤ d(x, z) + d(z, y) (triangle inequality) (4)
The symmetry requirement seems obvious, as we normally do not consider the ordering of the objects to compare (whether x or y comes first). As a consequence of this, the second requirement states that there can be no negative distances, since this would indicate a direction and hence the order of comparison would matter. Further it seems obvious that the distance between an object and itself must be zero. The final requirement simply says that the distance between any two objects must always be less than or equal to the distance between the same objects if a detour (via object z) is added. This requirement corresponds to the statement that "the shortest path between two points is the straight line". In the following we introduce three main types of distance metrics; for comparing sets, for comparing vectors or strings, and for comparing objects that are defined through an ontology.
For comparing sets of objects, the Jaccard distance (J d ) is one alternative metric. It defines the distance between two sets s 1 and s 2 as:
J_d(s_1, s_2) = 1 - |s_1 ∩ s_2| / |s_1 ∪ s_2| (5)
The metric counts the number of occurrences in the set intersection and divides it by the number of occurrences in the set union. Then it is normalised to return a number in the range [0, 1]. All occurrences are treated equally, hence the metric does not cater for situations in which some set members are more important than others.
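A direct transcription of (5) in Python is shown below; the example values are taken from the P3P purpose vocabulary, but any sets of policy elements could be compared this way.

```python
# Jaccard distance (5) for sets of policy elements, e.g. P3P purpose values.

def jaccard_distance(s1, s2):
    union = s1 | s2
    if not union:
        return 0.0                      # treat two empty sets as identical
    return 1.0 - len(s1 & s2) / float(len(union))

print(jaccard_distance({"admin", "develop", "tailoring"},
                       {"admin", "develop", "telemarketing"}))   # 0.5
```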
Distance metrics for comparing vectors or strings are commonly used to construct error detecting and correcting codes [START_REF] Hankerson | Coding Theory and Cryptography: The Essentials[END_REF]. The predominant such metric is the Hamming distance1, which is defined as the number of positions in which a source and target vector disagree. When used on binary representations, the Hamming distance can be computed as
d_H(s, t) = wt(s ⊕ t) (6)
where the function wt(v) is defined as the number of times the digit 1 occurs in the vector v [START_REF] Hankerson | Coding Theory and Cryptography: The Essentials[END_REF]. However, when used for instance in string comparison, the textual definition above must be used. In order to compare vectors or strings of unequal size, the Levenshtein distance introduces insertion and deletion as operations to be counted in addition to the Hamming distance's discrepancy count.
More formally, it is defined as the number of operations required to transform a source vector s to a target vector t, where the allowed operations are insertion, deletion and substitution. Consequently, for equal size vectors, the Levenshtein distance and Hamming distance are identical.
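For completeness, both metrics are easy to express in Python; the Levenshtein implementation below uses the standard dynamic-programming recurrence rather than any particular library.

```python
# Hamming distance (6) for equal-length vectors and Levenshtein distance for
# sequences of unequal length (standard dynamic-programming formulation).

def hamming(s, t):
    if len(s) != len(t):
        raise ValueError("Hamming distance requires equal-length inputs")
    return sum(1 for a, b in zip(s, t) if a != b)

def levenshtein(s, t):
    prev = list(range(len(t) + 1))
    for i, a in enumerate(s, 1):
        cur = [i]
        for j, b in enumerate(t, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (a != b)))  # substitution
        prev = cur
    return prev[-1]

print(hamming("10110", "10011"))         # 2
print(levenshtein("kitten", "sitting"))  # 3
```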
Ontology distances utilise the inherent relationships among objects either explicitly or implicitly defined through an ontology [START_REF] Bernstein | How similar is it? towards personalized similarity measures in ontologies[END_REF]. The approach is used to compute the semantic similarity of objects rather than their textual representation. For example, the distance between Apple and Orange is shorter than the distance between Apple and House. In order to determine the distance from one object to another, we can simply count the number of connections in the defining ontology from a source object to a target object via their most recent common ancestor [START_REF] Bernstein | How similar is it? towards personalized similarity measures in ontologies[END_REF].
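For a tree-shaped ontology this edge-counting can be sketched as follows. The fruit/building hierarchy is a made-up fragment used only to reproduce the Apple/Orange versus Apple/House comparison above; it is not the P3P data schema.

```python
def ontology_distance(a, b, parent):
    """Edges from a to b via their most recent common ancestor.

    `parent` maps each concept to its parent in a tree; the root maps to None."""
    def path_to_root(node):
        path = []
        while node is not None:
            path.append(node)
            node = parent.get(node)
        return path

    path_a, path_b = path_to_root(a), path_to_root(b)
    common = set(path_a) & set(path_b)
    lca = next(node for node in path_a if node in common)   # most recent common ancestor
    return path_a.index(lca) + path_b.index(lca)

parent = {"thing": None, "fruit": "thing", "building": "thing",
          "apple": "fruit", "orange": "fruit", "house": "building"}
ontology_distance("apple", "orange", parent)   # -> 2
ontology_distance("apple", "house", parent)    # -> 4
```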
Towards a similarity metric for privacy policies
In this section we start by explaining the various ways in which similarity may be interpreted in relation to the different parts of privacy policies, and also give an overview of the main parts of the solution needed. Then we make suggestions and present alternatives for creating similarity metrics for comparing individual statements of policies, and also for aggregating the similarity of statements. Finally we explain how the similarity metrics can be used together with expert knowledge and user interaction in the context of a preference learning user agent.
What makes privacy policies similar?
In order to be able to automatically determine whether two privacy policies are similar, it is necessary to answer a few basic questions:
- What makes policies similar? Can policies offer roughly the same level of protection without having identical practices? And if so, how to determine the level of protection offered?
- What type of policy content is more important for privacy decisions? Which policy changes are likely to influence users' privacy decisions? And is it possible to draw conclusions in this respect without consulting the user?
There are surveys available that are able to give some insights into what aspects of privacy are more important to users. As an example, studies performed by Anton et al. in 2002 and 2008 [19] show that Internet users are most concerned about privacy issues related to information transfer, notice/awareness and information storage. However, in order to address the above questions in a satisfactory way in this context, more detailed knowledge is needed.
In this section we start our discussion of similarity by investigating possible interpretations of similarity in the context of P3P policies. Then we outline the main issues that need to be addressed in order to arrive at a solution.

Similarity in the context of P3P

P3P policies provide, among other things, information on data handling practices, including the data collected, the purpose of the data collection, the potential recipients of the data and the retention practices. Figure 2 shows the alternatives for describing purpose, recipients and retention using P3P, and also gives an overview of the data types defined in the P3P Base Data Schema [START_REF]W3C. Platform for Privacy Preferences[END_REF].
Consider the case where a user wants to compare whether two policies describe collection of similar types of data. Similar could in this context mean that they collect the same data, that they collect a similar number of data items, that they describe data practices that are at the same level in the hierarchy (as an example, clickstream and bdate are at the same level) or that they collect information that is semantically similar (e.g. part of the same subtree). It could also be possible to add sensitivity levels to data and consider policies to be similar if they collect information of similar sensitivity. For purpose, similarity can mean identical purposes, the same number of purposes or purposes with similar privacy implications. Which similarity interpretation to use is not obvious.
Comparing data handling practices based on their privacy implications is particularly complicated as there is in general no common understanding of the implications of the various practices. Thus, the similarity of for instance the purposes individual-decision and contact is a matter of opinion: one user may consider individual-decision to be far worse than contact, while a second may argue that contact is far worse, and a third may consider them to be similar. For recipient and retention it is a bit different, as the alternatives can in general be ordered based on their privacy implications; e.g. no-retention is always better than stated-purpose, which is again better than business-practices, etc. Thus, stated-purpose is considered closer to no-retention than to business-practices. The categories are however broad, and e.g. who is included in the recipient groups ours or unrelated will probably vary between policies. The practices of two policies that both share data with other-recipient may thus not be considered similar by users.
Comparing full policies further complicates matters. Some users may for instance be most concerned with the amount of information collected, while others are more concerned about the retention practices. When considering the similarity of two policies, it is thus necessary to take into account this variation of importance.
What is needed

In order to compare privacy policies and use them in a CBR system we need similarity metrics for individual parts of the policy, as well as for entire policies. Such metrics need to be able to handle missing statements, and should also support similarity weights to be able to express the criticality or importance of individual statements. Central to the success of the metrics is the ability to understand what similarity means in a given context. Expert knowledge can provide necessary input to the similarity calculations, but as the end-users are experts on their own privacy preferences, it is also important to allow them to influence the similarity calculations.
In CBR, the similarity metric and weight function are normally what is required to compute the k-Nearest Neighbours (i.e. the k most similar policies). Thus, even if there are no policies that would be considered similar, the algorithm will always return k policies. To cater for this, we require the notion of a similarity threshold, such that the algorithm will only return policies that are within the threshold value and that are thus considered similar enough for one to be used as a basis for giving advice on whether to accept the other.
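The combination of k-nearest-neighbour retrieval with such a similarity threshold could be sketched as below. The function name, the value of k and the threshold are placeholders of our own, not values prescribed by the approach.

```python
def retrieve_similar_cases(new_case, case_base, similarity, k=5, threshold=0.7):
    """Return at most k historical cases whose similarity to the new case
    reaches the threshold (higher similarity = more similar)."""
    scored = sorted(((similarity(new_case, old), old) for old in case_base),
                    key=lambda pair: pair[0], reverse=True)
    return [(sim, old) for sim, old in scored[:k] if sim >= threshold]
```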
When using policies to provide advice to users on what to accept or not, it is important to have some understanding of not only the similarity of policies, but also which policy is better or worse. Thus, in addition to a similarity metric, we need a direction vector that can provide this information. This is in part discussed in Section 5, but is considered outside the scope of this paper.
Divide and conquer
In our work, we make the assumption that end-users' preferences when it comes to the handling of their personal data are highly dependent on the type of data in question. Thus, we suggest comparing policies based on what data is collected. To illustrate how this can be done we again look to P3P. P3P policies contain one or more statements that explain the handling of particular types of data. Listing 1.1 provides an example of such a statement concerning the handling of clickstream data and http data. Due to our data-centred approach, we translate such P3P statements into cases (case descriptions) by adding one case per DATA item in a statement. As an example, Listing 1.1 contains two DATA elements and therefore results in two case descriptions, as given in Listing 1.2.
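The translation from a statement to case descriptions can be sketched as follows. The sketch parses a simplified, namespace-free version of the statement in Listing 1.1 (real P3P policies declare an XML namespace, which is ignored here), and the dictionary keys mirror Listing 1.2.

```python
import xml.etree.ElementTree as ET

STATEMENT = """<STATEMENT>
  <PURPOSE><admin/><develop/></PURPOSE>
  <RECIPIENT><ours/></RECIPIENT>
  <RETENTION><stated-purpose/></RETENTION>
  <DATA-GROUP>
    <DATA ref="#dynamic.clickstream"/>
    <DATA ref="#dynamic.http"/>
  </DATA-GROUP>
</STATEMENT>"""

def statement_to_cases(statement_xml):
    """Translate one P3P statement into one case description per DATA element."""
    root = ET.fromstring(statement_xml)
    purpose = [e.tag for e in root.find("PURPOSE")]
    recipients = [e.tag for e in root.find("RECIPIENT")]
    retention = [e.tag for e in root.find("RETENTION")]
    return [{"datatype": data.get("ref").lstrip("#"),
             "purpose": purpose,
             "recipients": recipients,
             "retention": retention}
            for data in root.find("DATA-GROUP").findall("DATA")]

statement_to_cases(STATEMENT)   # -> two cases: dynamic.clickstream and dynamic.http
```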
Local similarity: Attributes
In common CBR terminology, the term local similarity is used when considering the similarity of individual attributes. As already pointed out, similarity may be interpreted in different ways, and in the following we suggest how similarity can be calculated for the attributes data type, purpose, recipients and retention. Table 1 gives an overview of our suggestions. The suggestions have been made taking into account the possible interpretations of similarity and issues related to attribute representation. The data type field is, at least in P3P, based on a data schema that defines a relatively clear semantic relationship between the possible values. Hence it is natural to use an ontology representation and the corresponding ontology metric to compute the similarities between these objects. This implies an understanding of similarity as the closeness of concepts. The metric will take into account the distance between the objects as defined in the ontology, resulting in e.g. family name being more similar to given name than to, say, birthyear.
For purposes, retention and recipients there is no such data schema available that describes how close the alternatives are to each other. If such a schema were made, it could of course be used in similarity calculations. For retention and recipient, however, the alternatives can be said to be ordered, describing practices on a scale from low to high level of privacy-invasiveness. This ordering can be preserved by representing the values as vectors. To illustrate, if we use the ordering, from top to bottom, given in Figure 2 as our basis, we can represent the retention attribute as a five-dimensional binary vector v = (v_0, v_1, v_2, v_3, v_4) where v_i = 1 indicates that the i-th retention type is present in the attribute. Following this reasoning, the vector representation of the retention attribute given in Listing 1.2 would be v = (0, 1, 0, 0, 0), corresponding to the set representation ['stated-purpose']. The recipient attribute may have a similar representation, however then using a six-dimensional binary vector corresponding to the six possible values it may take.
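The binary-vector encoding can be sketched as follows. The ordering of the retention values is assumed to follow the P3P retention vocabulary from least to most privacy-invasive; the exact ordering used in Figure 2 may differ.

```python
RETENTION_ORDER = ["no-retention", "stated-purpose", "legal-requirement",
                   "business-practices", "indefinitely"]     # assumed ordering

def to_binary_vector(values, order):
    """Encode an attribute value set as a binary vector over a fixed ordering."""
    return [1 if v in values else 0 for v in order]

v1 = to_binary_vector(["stated-purpose"], RETENTION_ORDER)        # [0, 1, 0, 0, 0]
v2 = to_binary_vector(["business-practices"], RETENTION_ORDER)    # [0, 0, 0, 1, 0]
distance = sum(a != b for a, b in zip(v1, v2))                    # Hamming distance -> 2
```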
For purposes it is difficult, not to say impossible, to find an implicit ordering of the possible values. It is for instance difficult to say whether admin purpose is far away from development purpose, other than the fact that they are different. As a consequence, we believe that the set representation and the corresponding Jaccard distance metric are suitable. This implies looking at what purposes are identical.
All of the distance metrics introduced in Section 3 treat all instances equally, and do not take into account that one set or vector member may be more important than another, or that some of the connections in an ontology may be more costly than others. For all the attributes, it is possible to extend the original distance metrics to take into account the cost associated with a difference. This way, a high-cost purpose will be more different from a low-cost purpose than from another high-cost purpose, and a jump from e.g. the public recipient to unrelated will be rated differently than a jump from delivery to ours.
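A cost-aware variant of the Jaccard distance could, for instance, weight each member by its cost instead of counting it as 1. The cost values below are purely illustrative placeholders; in practice they would come from expert knowledge.

```python
def weighted_jaccard_distance(s1, s2, cost):
    """Jaccard-style distance in which each member contributes its cost rather than 1."""
    s1, s2 = set(s1), set(s2)
    union = sum(cost.get(v, 1.0) for v in s1 | s2)
    if union == 0:
        return 0.0
    shared = sum(cost.get(v, 1.0) for v in s1 & s2)
    return 1.0 - shared / union

# made-up costs: telemarketing weighs far more heavily than admin
purpose_cost = {"admin": 1.0, "develop": 1.0, "contact": 3.0, "telemarketing": 5.0}
weighted_jaccard_distance({"admin", "telemarketing"}, {"admin"}, purpose_cost)  # -> 0.83
```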
Global similarity: Cases
The term global similarity is used when aggregating the local similarity values from attribute comparisons to say something about the similarity of the entire case descriptions. As our case descriptions are on the level of privacy statements, the global similarity will state the degree to which two statements are similar. Usually, the global similarity is computed by a function that combines the local similarity values.
The most basic of such functions are the average or sum of attribute similarity values. However, since such metrics give equal importance to all attributes, it is more common to use some sort of weighted sum, or weighted average. That way, some attributes may be given more importance (greater relative weight) than others, and therefore will contribute a greater part to the overall similarity assessment. Further, the weight or relative importance of the attributes may be specified by the user or updated on the basis of user feedback, so that these values are also learned by the CBR system.
This may then further be combined with threshold values, such that the global similarity will only be computed if the local similarities are within a predefined threshold. This is to ensure that policies are similar enough to make comparison meaningful and also provide value to the subsequent recommendation made by the CBR system.

Fig. 3. The cases that belong to one policy may be found to be similar to a number of different historical cases belonging to different historical policies
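A weighted aggregation with per-attribute thresholds, as described above, could be sketched as follows. The attribute names, weights and threshold are illustrative and would in practice be supplied as expert knowledge or learned from user feedback.

```python
def global_similarity(case_a, case_b, local_sims, weights, local_threshold=0.0):
    """Weighted average of local similarities; returns None when any attribute
    is less similar than the threshold allows."""
    total_weight, weighted_sum = 0.0, 0.0
    for attr, sim_fn in local_sims.items():
        sim = sim_fn(case_a[attr], case_b[attr])
        if sim < local_threshold:
            return None          # cases too dissimilar for a meaningful comparison
        w = weights.get(attr, 1.0)
        total_weight += w
        weighted_sum += w * sim
    return weighted_sum / total_weight if total_weight else 0.0
```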
Similarity of policies
Calculating the similarity of cases is not the same as considering the similarity of policies, but as for cases, the similarity of policies can be computed based on a weighted sum where the weightings take into account the importance of the various data items. For preference learning user agents, however, we are not really that interested in comparing full policies. The reason for this is illustrated in Figure 3. As can be seen in this figure, for each individual case of a policy there is a search for similar historical cases, and the cases that result from these searches may originally belong to a number of different policies. Comparing all these historical policies to the current privacy policy is not necessarily useful. Instead it is important to determine, based on these similar cases, what advice to provide to the user. Thus it is more important to consider whether or not the user has accepted this kind of practice (as described by the case) in the past. For each case of a policy it should be possible to reach a conclusion about whether or not the user is likely to accept this practice (e.g. either yes, no or indeterminate). Then, to reach a conclusion about the policy, the total likelihood of acceptance could be computed based on a weighted sum taking into account the importance of each case (based on the type of data it concerns).

Figure 4 gives an overview of the type of solution we envision for policy comparison in the context of a privacy agent. When the user visits a website, the machine-readable policy of this website is used as a basis for providing the user with recommendations as to whether or not to share personal information with this site. During policy evaluation the new policy is divided into a number of cases, based on the data collected, and each of these cases is evaluated against the historical cases. The historical cases that are most similar to the current situation are used to come to a conclusion on what recommendation to give the user.
In order to be able to retrieve similar cases, the similarity metric is used together with the similarity weight function and the similarity threshold. The necessary input to the similarity calculations, such as costs and weights, is included as expert knowledge. Expert knowledge also provides information on what alternatives are better or worse in terms of privacy protection. Privacy experts will be in the best position to provide this type of information, and let users benefit from their expertise. However, what is considered to be the most important privacy concepts will likely vary between user groups, and also individuals. It can also be dependent on the legal jurisdiction [START_REF] Fischer-Hübner | Ui prototypes: Policy administration and presentation version 1[END_REF]. Expert knowledge can be specified in a way that takes into account some of these likely variations. However, it is also possible to make solutions that allow users to influence the expert knowledge that is used as a basis for making recommendations. We will come back to this shortly.
When the most similar cases have been retrieved, these are combined and adapted in order to come to a conclusion regarding the current situation. As already explained, this can be done by simply taking a weighted majority vote based on the set of cases selected. However, as the agent should be able to explain its reasoning to the user, there is also a need to build an argument that can be used to explain the agent's decision. In this process, expert knowledge also has a role to play, by e.g. explaining why something is important.
The conclusion reached is presented to the user, for example as a warning about problems with the policy, or no warning if the policy is likely to be accepted by the user. The user, in the same way, may or may not provide feedback on this decision, e.g. by stating that he disagrees with the reasoning behind the warning. Either way, the acceptance or correction of the recommendation given is important and makes the agent able to learn and thereby improve its reasoning. A correction of the agent's reasoning may trigger a re-evaluation of the policy, and result in updates to the current cases that are stored in the case repository. But the correction can also, at least in some cases, be used to improve the expert knowledge used in the policy evaluation. After all, users are experts on their own privacy preferences, and can make corrections of the type "I do not care who gets my email address" or "I will never allow telemarketing, no matter the benefits".
Discussion
In this paper we have shown how privacy policies can be divided into a number of cases that can then be compared to other cases individually. We have also proposed what type of similarity metric to use to compare attributes of cases, and shown how the results of these individual comparisons can be used to say something about the similarity of cases and policies at a higher level. In this section we discuss important parts of our suggestions, focusing mainly on areas where further research is needed.
The role of the expert
Existing distance metrics can be modified to take into account costs, but for this to work we need a way to determine these varying costs. As has already been pointed out previously in this paper, deciding what is better or worse when it comes to privacy, and how much better or worse it is, is very much a matter of opinion rather than facts. Privacy experts are the ones most capable of making such statements when it comes to cost, but further research may be needed in order to agree on useful cost values. The same goes for weights that are used for computing global similarity values, and for making recommendations to users.
Involving the end-user
In our suggestions for applying similarity metrics for preference learning user agents, we have emphasised the need to include user input and use this input to improve the similarity measures. For this to work, there is a need for good user interfaces and also a need to understand what users will be able to understand and communicate related to their preferences. A key dilemma is to find the right level of user involvement. It is important to involve users in the learning process, but if users receive a lot of requests for feedback on similarity calculations, this may be considered to be annoying interruptions and will likely result in users refraining from using the agent. It is also important to find ways to weigh the opinions of the users against the expert knowledge.
Differences between policy languages
In this paper we have used P3P as an example language, and the metrics suggested have been discussed based on the way policies are represented in P3P. The metrics selected may be different if policies are presented using other languages. As an example, the PPL specifications [START_REF] Trabelsi | Second release of the policy engine[END_REF] show examples where retention is specified using days rather than the type of practice. This will result in the use of a different type of metric, e.g. the Euclidean distance. However, how would you compare a retention of 30 days with, say, business-practices? Here, again, experts are the ones that can contribute with knowledge on how to solve this, but to gain such explicit knowledge and agree on the necessary parameters will likely require further research.
Aggregation of similarity values
Though this paper provides some suggestions as to how the similarity of cases and policies can be computed, more work is needed on this topic. The examples we use only consider parts of the P3P policies. In addition, there will in many cases be a need to take into account the direction of the difference (better/worse). This direction cannot be included directly in the similarity metric, as this would violate the symmetry criterion in the very definition of a distance metric. Still, the direction is important when making choices based on the result of a metric.
For preference learning privacy agents, the policy will only be one of several factors to consider when making recommendations to users. Additional factors include context information and community input [START_REF] Tøndel | Learning privacy preferences[END_REF]. This complicates the similarity calculations.
CBR vs. policy comparison in general
Up till now we have mainly discussed the problems related to policy comparison in a situation where historical decisions on policies are used to determine what recommendations to give to users in new situations. However, in the introduction we pointed at other types of applications where automatic policy comparison can be useful, e.g. in privacy-aware search engines. So, how do our suggestions relate to such other uses?
We have considered situations in which privacy policies are compared with policies the user has accepted or rejected previously, but this is not that different from comparing a policy to those of similar types of sites, e.g. to find out how a web shop's privacy practices compare to those of other web shops. In both cases there are no strict pre-specified matching criteria to use. Expert knowledge will be important in both cases, to assess what aspects of a policy are better or worse, and how much better or worse. But where we have been mainly concerned with identifying similar cases, other uses may be more interested in identifying policies that offer better protection, and in saying something about how much better this protection is. Case retrieval will be different in such settings, as it is important to consider all cases belonging to one policy together. There is also a need to calculate the similarity of policies, and not only cases.
Conclusion and further work
In order to develop new and improved privacy services that can compare privacy policies in a more flexible manner than today, there is a need to develop a similarity metric that can be used to calculate how much better or worse one policy is compared to another. This paper provides some steps towards such a similarity metric for privacy policies. It proposes similarity metrics for individual parts of policies, and also addresses how these local similarity metrics can be used to compute more global similarity values. Expert knowledge serves as important inputs to the metrics.
In our future work we plan on implementing measures for policy comparison in the context of preference learning user agents. The similarity metrics will be evaluated by comparing the similarity values that are automatically computed with the values stated by users, when asked. The input received will also be used to improve the expert knowledge that the calculations rely on.
Fig. 2. Alternatives when describing data, purpose, recipients and retention in P3P
Listing 1.1. Excerpt from a P3P privacy policy

<STATEMENT>
  <PURPOSE><admin/><develop/></PURPOSE>
  <RECIPIENT><ours/></RECIPIENT>
  <RETENTION><stated-purpose/></RETENTION>
  <DATA-GROUP>
    <DATA ref="#dynamic.clickstream" />
    <DATA ref="#dynamic.http" />
  </DATA-GROUP>
</STATEMENT>
Listing 1.2. Case descriptions

datatype = dynamic.clickstream
recipients = ['ours']
purpose = ['admin', 'develop']
retention = ['stated-purpose']

datatype = dynamic.http
recipients = ['ours']
purpose = ['admin', 'develop']
retention = ['stated-purpose']
Fig. 4. Applying similarity metrics in preference learning user agents
Table 1. Overview of local similarity metrics

Attribute   | Similarity interpretation | Metric        | Input
Data type   | Semantic similarity       | Ontology      | Data schema (+ costs)
Purpose     | Equality                  | Set           | (Costs)
Recipients  | Privacy implications      | Vector/string | (Costs)
Retention   | Privacy implications      | Vector/string | (Costs)
Note that the Hamming distance has also been defined for sets (H_d) [START_REF] Arasu | Efficient exact set-similarity joins[END_REF]; it can be considered a variation of the Jaccard distance, the main difference being that the Hamming distance is not normalised.
Acknowledgments
We want to thank our colleague Karin Bernsmed for useful input in the discussions leading up to this paper. | 37,869 | [
"1003330",
"1003331"
] | [
"86695",
"86695"
] |
01481775 | en | [
"phys"
] | 2024/03/04 23:41:48 | 2017 | https://hal.science/hal-01481775/file/03_DeformedSpace.pdf | Luca Roatta
email: [email protected]
Discretization of space and time: how matter deforms space and time
Assuming that space and time can only have discrete values, it is shown how space and time are deformed in the presence of matter. A conceptual model is introduced to explain the procedure followed.
Introduction
Let us assume, as a working hypothesis, the existence of both discrete space and discrete time, namely spatial and temporal intervals that are not further divisible; this assumption leads to some interesting consequences. Here we find how the presence of matter deforms space and time.
If we suppose that neither space nor time is continuous, but that instead both are discrete, then, following the terminology used in a previous document [START_REF] Roatta | Discretization of space and time in wave mechanics: the validity limit[END_REF], we call l_0 the fundamental length and t_0 the fundamental time.
Determining how matter deforms space and time
The existence of a minimum length below which it is impossible to descend necessarily implies that in a discrete space (for conceptual simplicity we consider only one dimension), in a vacuum, for any value of the length r there must be an integer number n such that r = n l_0.
General Relativity affirms that matter deforms spacetime, which is a continuum in GR.

Here we try to understand how discrete space behaves in the presence of material bodies.
However, we cannot simply start by considering a material body A located at the origin of the discrete x axis and a second material body B located at a distance r from A: if we want to understand how the two material bodies deform the discrete space, we cannot move a body from somewhere else to the desired position, because in doing so the space would already be deformed right from the beginning.
We must in some way build our system so that the deformation of space appears only when the two bodies are in the desired position. Of course this is a conceptual construction, whose legitimacy we reserve the right to evaluate based on the results obtained.
The figure below shows the initial situation, in which space is not yet deformed:
Fig. 1

Let n be the number of space cells between the two bodies; it follows that

r = n l_0    (1)

Let us now expand the body A until it reaches a linear dimension denoted by d. This results in a shortening of the cells.
To visually clarify this, we use an image that, although based on different physical concepts, can help to better understand the process.
Let us suppose that the two bodies are joined together by a spring; the expansion of the body A causes a shortening of the spring and thus a decrease of the distance between the coils of the spring, while the number of coils does not change. Now, if the body A is rigid, the shortening of the spring will coincide exactly with the real size of the body A. If the body A is soft, such as a ball of leavening dough, then a shortening of the spring will still occur, but it will no longer match the size of A, which, being soft, will expand to incorporate part of the coils.
From this dual model (rigid body and soft body) it is evident that the size of A will be greater than or equal to d, the value that corresponds to the shortening of the spring, but not smaller (it is difficult to imagine a super-rigid body, having dimension less than d, whose expansion causes a shortening equal to or even greater than d). Basically, if the body A were soft, the shortening of the spring due to its expansion would correspond to the shortening caused by the expansion of a rigid body having dimension d smaller than the real dimension of the soft body A. Another observation is that d should depend only on the properties of the body A, because no other element has been considered. Of course, the linear dimension d must also comply with the condition d ≥ l_0. So there will be no expansion, and consequently no deformation of space, if d for any reason cannot reach the value l_0.
In our case, therefore, no a priori assumption is made about the physical meaning of d, nor about its relationship with the real dimension of A. As a result of the expansion, there is no change in the value of r (it is as if the body B were fixed at the distance r), nor in the value of n, because in this process no space cell is lost. The value of l_0 instead changes, because it now depends on r (and d); let us denote this value by l_0(r): the cells shrink to make room for the expansion of the body A.
The figure below shows the situation in which the body A is not enlarged enough to incorporate the body B: this means d < r.

Fig. 2

It is evident that

d + n l_0(r) = r    (2)

from which, given that by Eq. (1) n = r/l_0, we obtain:
l_0(r) = l_0 (r - d) / r    (3)
The figure below shows the situation in which the body A is enlarged enough to incorporate the body B: this means d > r.
Fig. 3
It is evident that

d - n l_0(r) = r    (4)

from which, given that by Eq. (1) n = r/l_0, we obtain:
l_0(r) = l_0 (d - r) / r    (5)
It has already been shown [START_REF] Roatta | Discretization of space and time in wave mechanics: the validity limit[END_REF] that l_0/t_0 = c. In the presence of matter, l_0(r) must be used instead of l_0. If we want to keep c constant, t_0 must also be replaced by t_0(r), so that l_0(r)/t_0(r) = c. It follows that:

t_0(r) = l_0(r) / c    (6)
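As a purely numerical illustration of Eqs. (3), (5) and (6), the sketch below computes the deformed fundamental length and time. The paper does not fix a numerical value for l_0; the Planck length is used here only as a placeholder.

```python
C = 299_792_458.0     # speed of light in m/s
L0 = 1.616e-35        # placeholder value for the fundamental length (Planck length), m

def deformed_cell_length(r, d, l0=L0):
    """Deformed fundamental length l0(r), Eqs. (3) and (5)."""
    if d < l0:
        return l0                  # no expansion, hence no deformation
    return l0 * abs(r - d) / r     # (r - d)/r when d < r, (d - r)/r when d > r

def deformed_cell_time(r, d, l0=L0):
    """Deformed fundamental time t0(r) = l0(r) / c, Eq. (6)."""
    return deformed_cell_length(r, d, l0) / C
```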
Conclusion
The assumption that both space and time are discrete has led us, using a simple conceptual model, to find how the presence of matter deforms space and time.
"1002559"
] | [
"302889"
] |
01481878 | en | [
"shs"
] | 2024/03/04 23:41:48 | 2017 | https://minesparis-psl.hal.science/hal-01481878/file/Agogu%C3%A9%20et%20al%202016%20JSM%20open%20innovation.pdf | Marine Agogué
Elsa Berthet
Tobias Fredberg
Pascal Le Masson
Blanche Segrestin
Martin Stoetzel
Martin Wiener
Anna Yström
Explicating the role of innovation intermediaries in the 'unknown': a contingency approach
Keywords: innovation intermediaries, open innovation, collaborative innovation, degree of unknown, innovation management
Introduction
Scholars have recognised the role and the growing importance of intermediaries in innovation [START_REF] Howells | Intermediation and the Role of Intermediaries in Innovation[END_REF] as change agents for open innovation. Increasing technological complexities, maturing markets, and global competition require that knowledge and creative brainpower be sought not merely internally within a firm, but also externally in creative communities and from external experts. Innovation intermediaries can have various missions. They can support brokering for either problem solving (Hargadon & Sutton, 1997[START_REF] Gianiodis | Advancing a Typology of Open Innovation[END_REF] or technology transfer [START_REF] Bessant | Building Bridges for Innovation: The Role of Consultants in Technology Transfer[END_REF]. They can also play an active role in networking among dispersed but complementary organisations [START_REF] Klerkx | Establishment and Embedding of Innovation Brokers at Different Innovation System Levels: Insights from the Dutch agricultural sector[END_REF]. Whatever their mission is, intermediaries "connect companies to external sources or recipients of innovation and mediate their relationships with those actors" [START_REF] Nambisan | The Role of the Innovation Capitalist in Open[END_REF]. They facilitate the identification of external knowledge providers and make external knowledge accessible. In a similar manner, they come into play when transfer to the market is the only means of commercialisation because internally developed knowledge or ideas cannot be utilised for the company's proprietary products or services. Intermediaries act as agents that improve connectivity within and among innovation networks [START_REF] Stewart | Intermediaries, Users and Social Learning in Technological Innovation[END_REF], which is of high importance with regard to systemic innovations [START_REF] Van Lente | Roles of Systemic Intermediaries in Transition Processes[END_REF].
Research has improved our understanding of the managerial challenges inherent in exploratory intermediation. For instance, it is necessary to build trust among participants and to coordinate contributors when the outputs of the collaboration are uncertain, just as in other types of collaborative innovation [START_REF] Fawcett | Supply Chain Trust: The Catalyst for Collaborative Innovation[END_REF]. Similarly, there is a need to organise specific learning processes and ensure that there is sufficient consensus among partners (van [START_REF] Van Lente | Roles of Systemic Intermediaries in Transition Processes[END_REF] when the needed knowledge does not already exist. Significantly, the recent literature has stressed that the role of intermediaries can be critical to the exploration of new opportunities and the development of new ways to address shared issues, such as sustainability and environmental issues [START_REF] Michaels | Matching knowledge brokering strategies to environmental policy problems and settings[END_REF]. For instance, intermediaries can initiate change (Lynn et al. 1996; [START_REF] Callon | Is Science a Public Good?[END_REF]), build networks [START_REF] Mcevily | Bridging Ties: A Source of Firm Heterogeneity in Competitive Capabilities[END_REF] and determine "where to look in the first place" (Howells, 2006, p. 723). It is important to note that there is a significant difference between being an intermediary in cases where problems are known, actors can be recognized and there is sufficient knowledge available to solve the problems (most likely to result in more incremental innovations), and cases where the problems are ill-defined, the role of actors is not given, and where not even the type of knowledge needed is known (this is most likely the case at the outset of a process that generates more radical innovations).
Research has emphasised that intermediaries face increasing difficulties in addressing these challenges [START_REF] Birkinshaw | The Five Myths of Innovation[END_REF][START_REF] Sieg | Managerial Challenges in Open Innovation: A Study of Innovation Intermediation in the Chemical Industry[END_REF]. Notably, their activities become more diverse and more complex, which implies that their role and position within the innovation system become unclear and sometimes even problematic [START_REF] Klerkx | Establishment and Embedding of Innovation Brokers at Different Innovation System Levels: Insights from the Dutch agricultural sector[END_REF]. The higher complexity of the role and activities of intermediaries stems from the fact that they face increasingly emergent and/or ill-defined situations where learning processes are necessary [START_REF] Klerkx | Adaptive management in agricultural innovation systems: The interactions between innovation networks and their environment[END_REF]. In line with this, recent works have characterized new forms of intermediaries such as "architects of the unknown" [START_REF] Agogué | Rethinking the role of intermediaries as an architect of collective exploration and creation of knowledge in open innovation[END_REF] or "colleges of the unknown" (Le Masson et al., 2012). They suggest that specific management principles for intermediation might be needed.
To understand the increasing complexity of the activities and roles of innovation intermediaries, we propose to introduce the "degree of unknown" as a new contingency variable. This introduction leads to a new framework of analysis. For example, situations of low degree of unknown occur when actors in collaborative innovation endeavours are attracted by a clear common goal, which an intermediary can express and communicate, or when conflicting stakeholders can work together because the necessity and expectations are sufficiently high for all. Such situations are typically also connected with a low degree of information obscurity and more incremental innovations. However, what if there is no common goal or common vision and little clarity of how the innovation field is progressing?
What if the intermediary alone cannot identify a common goal or common problem? What if there is no legitimate place to which an intermediary can invite potential stakeholders to begin to work together to create a common goal? In this paper, we characterize such situations as being of high degree of unknown.
Our goal in this paper is to better define the role of innovation intermediaries when the degree of unknown is high. We address the following research questions: What are the managerial challenges met by innovation intermediaries in situations of high degree of unknown? And how can they potentially address these challenges?
The following sections are organised as follows: we first review the literature on intermediaries to highlight common core functions of the different types of intermediaries.
Then, we introduce the "degree of unknown" as a new dimension for analysing the role of intermediaries. We analyse the conditions under which the intermediaries can fulfil their core functions when the degree of unknown is very high. We present four empirical cases in which intermediaries face situations of high level of unknown and address the related managerial challenges. We present examples of solutions implemented by the studied intermediaries. We conclude by discussing the theoretical and empirical perspectives introduced by this work.
Literature review: Highlighting core functions of Innovation Intermediaries
Previous studies have distinguished different roles and functions of innovation intermediaries (e.g., [START_REF] Howells | Intermediation and the Role of Intermediaries in Innovation[END_REF]). Beyond connecting actors, this literature highlights that intermediaries fulfil a range of specific functions to foster collective innovation. Drawing on earlier contributions, each touching on different aspects of the intermediary role (e.g. [START_REF] Klerkx | Establishment and Embedding of Innovation Brokers at Different Innovation System Levels: Insights from the Dutch agricultural sector[END_REF][START_REF] Nambisan | The Role of the Innovation Capitalist in Open[END_REF][START_REF] Fawcett | Supply Chain Trust: The Catalyst for Collaborative Innovation[END_REF][START_REF] Van Lente | Roles of Systemic Intermediaries in Transition Processes[END_REF]), we have identified four core functions that appear to be fulfilled by all types of intermediaries in the context of innovation: (i) connecting actors; (ii) involving, committing, and mobilising actors;
(iii) solving, avoiding, or mitigating potential conflicts of interests; and (iv) (actively)
stimulating the innovation process and innovation outcomes.
Different types of innovation intermediaries have been analysed and described in previous studies. Three distinct types of intermediaries that occur in different settings can be distinguished: (1) intermediaries for problem solving, (2) intermediaries for technology transfer, and (3) intermediaries as coordinators of networks in innovation systems. This categorisation might not be exhaustive, but it highlights the fact that whatever the mission of innovation intermediaries, and although they may face different problems or challenges, they all fulfil the four core functions listed above.
Intermediaries for Problem Solving
The intermediary "broker for problem solving" comes into play when a company lacks knowledge or skilled resources for solving a specific problem or for developing innovative new ideas. The intermediary offers access to external knowledge by either establishing bridges to external experts (e.g., in the case of marketplaces) or contributing knowledge from their own experiences (e.g., in consulting activities).
There are many actors that play the same role as brokers for problem solving, such as the following: consultants [START_REF] Bessant | Building Bridges for Innovation: The Role of Consultants in Technology Transfer[END_REF], knowledge-intensive business services or KIBS [START_REF] Klerkx | Balancing Multiple Interests: Embedding Innovation Intermediation in the Agricultural Knowledge Infrastructure[END_REF], 2009), knowledge brokers [START_REF] Hargadon | Firms as Knowledge Brokers: Lessons in Pursuing Continuous Innovation[END_REF]Hargadon & Sutton, 1997), innovation marketplaces [START_REF] Lichtenthaler | Innovation Intermediaries: Why Internet Marketplaces for Technology Have Not Yet Met the Expectations[END_REF] and idea scouts or technology scouts [START_REF] Nambisan | A Buyer's Guide to the Innovation Bazaar[END_REF]. Previous studies on intermediation described actors such as Evergreen IP [START_REF] Nambisan | A Buyer's Guide to the Innovation Bazaar[END_REF], InnoCentive [START_REF] Sieg | Managerial Challenges in Open Innovation: A Study of Innovation Intermediation in the Chemical Industry[END_REF][START_REF] Surowiecki | The Wisdom of Crowds: Why the Many are Smarter than the Few and how Collective Wisdom shapes Business, Economies, Societies, and Nations[END_REF][START_REF] Diener | The Market for Open Innovation: Increasing the Efficiency and Effectiveness of the Innovation Process[END_REF], NineSigma, Yet2.com, and IDEO [START_REF] Hargadon | Firms as Knowledge Brokers: Lessons in Pursuing Continuous Innovation[END_REF].
In this configuration, the primary function of the intermediary is clearly to connect seeking companies with problem solvers. However, the literature also describes other important functions, which are fulfilled either primarily by the intermediaries or in coordination with the client companies:
Not only potential solvers but also problem seekers should be mobilised. Hence, there is a need to "enlist scientists" (Sieg et al., 2010, p. 285) who are not used to submitting their problems to external parties.
Knowledge transactions require both that problems be articulated to external actors and that the "problem recipients" be able to make sense of the defined problem. As [START_REF] Sieg | Managerial Challenges in Open Innovation: A Study of Innovation Intermediation in the Chemical Industry[END_REF] have shown, the client company must carefully select the right problem and thereby manage the conflict (or trade-off) between seeking the "Holy Grail" solution and offering solvable tasks to externals experts. Selecting problems at early stages in the innovation process has been found to be favourable because the solution space is still sufficiently large and internal scientists have not yet become dulled to complexity issues and technical jargon.
Finally, the intermediary will fulfil its role only if innovative solutions can be found, which often requires the stimulation of special learning processes. It has been demonstrated that the role of the intermediary is not only to scan and transfer information but also to organise the articulation, combination and manipulation of knowledge [START_REF] Bessant | Building Bridges for Innovation: The Role of Consultants in Technology Transfer[END_REF]. Thus, this type of intermediary is also concerned with building its own innovation capabilities [START_REF] Howells | Intermediation and the Role of Intermediaries in Innovation[END_REF][START_REF] Klerkx | Balancing Multiple Interests: Embedding Innovation Intermediation in the Agricultural Knowledge Infrastructure[END_REF].
The manner in which problems are decomposed and formulated is recognised as a critical success factor for innovation brokers.
The four above-described core functions of this type of intermediary are summarised in the following table:

Table 1. Core functions of an intermediary as a broker for problem solving

Core Functions              | Examples
Connect                     | Connect seeking companies with problem solvers (e.g., [START_REF] Nambisan | A Buyer's Guide to the Innovation Bazaar[END_REF])
Involve / commit / mobilise | Enlist scientists by defining common rules supported by internal "champions" [START_REF] Sieg | Managerial Challenges in Open Innovation: A Study of Innovation Intermediation in the Chemical Industry[END_REF]
Solve / avoid conflict      | Define the right problem; avoid conflict between exceedingly high expectations ("Holy Grail") and limited solution capacities [START_REF] Sieg | Managerial Challenges in Open Innovation: A Study of Innovation Intermediation in the Chemical Industry[END_REF]
Stimulate innovation        | Articulate and combine knowledge [START_REF] Bessant | Building Bridges for Innovation: The Role of Consultants in Technology Transfer[END_REF], re-engineer knowledge [START_REF] Klerkx | Balancing Multiple Interests: Embedding Innovation Intermediation in the Agricultural Knowledge Infrastructure[END_REF]
Broker for Technology Transfer
This type of intermediation is required when new technologies have been invented and developed but the inventor cannot commercialise them internally either because of a lack of resources, lack of business or market knowledge or noncompliance with the prevailing business model and/or business strategy. In such situations, intermediaries offer support in bringing the technology to the market by providing access to potential users of the technology using sufficient resources, legal and IP knowledge, or venture capital opportunities, for instance.
We find various labels in the literature for a second configuration, such as technology brokers or IP brokers, university technology transfer offices, liaison departments [START_REF] Hoppe | Intermediation in Invention[END_REF], technology-to-business centres, out-licensing agencies (Shohet & Prevezer, 1996), business incubators [START_REF] Pollard | Innovation and Technology Transfer Intermediaries: A Systemic International Study, in Innovation through Collaboration[END_REF][START_REF] Nambisan | A Buyer's Guide to the Innovation Bazaar[END_REF], and venture capitalists [START_REF] Nambisan | A Buyer's Guide to the Innovation Bazaar[END_REF]. All of these actors are recognised to facilitate the transfer of knowledge or technology across firm or sector boundaries.
Intermediaries such as Ignite IP [START_REF] Nambisan | A Buyer's Guide to the Innovation Bazaar[END_REF], Forthright Innovation and the Lanarkshire Business Incubator Centre [START_REF] Pollard | Innovation and Technology Transfer Intermediaries: A Systemic International Study, in Innovation through Collaboration[END_REF], and the Siemens Technology-to-Business Centre and Technology Accelerator units (Gassmann & Becker, 2006) have been studied with regard to the intermediary's role as broker for technology transfer.
The primary function of the intermediary is to organise new connections between distant academic- or industry-based science and industry players in search of new opportunities [START_REF] Turpin | Bricoleurs and boundary riders: managing basic research and innovation knowledge networks[END_REF]. However, the role of this intermediary is not limited to liaison services:
Technology providers and potential users must be convinced and mobilised. To function properly, the intermediary must engage in various marketing activities, and must make the technologies visible to potential investors [START_REF] Thursby | Objectives, Characteristics and Outcomes of University Licensing: A Survey of Major U.S. Universities[END_REF].
Special attention should be paid to potential conflicts of interests. The intermediary is positioned between the inventor (or research unit) and the companies interested in the new technology. Therefore, the intermediary must consider the interests of inventors, which are often not limited to financial aspects (e.g., academic publications, honour and reputation, or competition aspects), as well as the interests of financial investors who seek to gain as much knowledge of the technology and its profitability prospects as possible before the actual transaction occurs (Shohet & Prevezer, 1996).
Finally, new uses of the technology must be explored to value the technological potential existing beyond the evident and trivial applications. Here, the intermediary often becomes deeply involved from a technical perspective as well, supporting the identification of potential technology applications and providing assistance in structuring and "moving" the knowledge from the inventor to the investor (Becker & Gassmann, 2006).
Hence, the four core functions of this type of intermediary can be summarised as follows:
Table 2. Core functions of an intermediary as a broker for technology transfer

Core Functions              | Examples
Connect                     | Establish connections between academic or industry science and external players in the market [START_REF] Turpin | Bricoleurs and boundary riders: managing basic research and innovation knowledge networks[END_REF]
Involve / commit / mobilise | Perform marketing activities to attract potential investors [START_REF] Thursby | Objectives, Characteristics and Outcomes of University Licensing: A Survey of Major U.S. Universities[END_REF]
Solve / avoid conflict      | Balance heterogeneous (conflicting) stakeholder interests, particularly financial and non-financial objectives (Shohet & Prevezer, 1996)
Stimulate innovation        | Actively engage in the exploration of new technology uses and the transfer of knowledge (Becker & Gassmann, 2006)
Networker or Bridger in Innovation Ecosystems
The literature has described a third type of configuration in which intermediaries facilitate dynamic collaboration in innovation projects on a larger scale and for longer time horizons.
We speak of "innovation systems" intermediation [START_REF] Inkinen | Intermediaries in regional innovation systems: hightechnology enterprise survey from Northern Finland[END_REF] when considering innovation not from a company perspective, but rather, on a macro-economic level for geographical or industrial clusters (which may even include entire nations and their governments). Collaboration in such innovation systems is encouraged by not only technology policies but also dedicated organisations operating at the core of the innovation system. We find various occurrences of this type of intermediaries: science/technology parks [START_REF] Löfsten | Science Parks and the growth of new technology-based firms -Academic-industry links, innovation and markets[END_REF], geographical innovation clusters (McEvily & Zaheer, 1999), regional technology centres, technical committees, task forces, standards bodies [START_REF] Van Lente | Roles of Systemic Intermediaries in Transition Processes[END_REF], and "brokers in innovation networks" [START_REF] Winch | The Organization of Innovation Brokers: An International Review[END_REF].
These intermediaries support networking and bridging amongst a multitude of actors within a certain industry or within a geographical cluster. They create common visions, define common objectives, invite new participants, and provide all types of support. In this last configuration, the function of the intermediary is still to connect people and organisations.
However, the connection is all the more complicated because the relevant stakeholders are not always identified ex ante and successful intermediation requires ongoing multilateral exchange to be adopted within the network, in contrast to the singular mission-complete ("problem solved" or "technology transferred") objectives in the first two intermediary configurations. Intermediaries must initiate linkages and facilitate accessibility to resources and knowledge. This process includes building infrastructures, sustaining networks, and facilitating exchange between the actors (van [START_REF] Van Lente | Roles of Systemic Intermediaries in Transition Processes[END_REF].
Here again, other functions are equally important:
Technology providers and potential users must be convinced and mobilised. Convincing is a matter of framing a common issue that is considered a problem by potential actors in the innovation system. Sufficient exogenous incentives (e.g., market growth potential and economic factors) are required but can be complemented by resource mobilisation activities (e.g., competence and human capital, financial capital, and complementary assets) provided or organised by the innovation intermediary [START_REF] Bergek | Analyzing the functional dynamics of technological innovation systems: A scheme of analysis[END_REF].
The need for collaboration clearly implies a necessity to avoid sources of conflicts. The introduction of new technologies often implies a need for change, to which established market actors resist. The intermediary can facilitate the formation of an "advocacy coalition", which places new objectives on the agenda and creates "legitimacy for a new technological trajectory" (Hekkert et al., 2007, p. 425). For instance, in the case of environmental care, the opposing interests of different actors and resulting conflicts could not be resolved without the intervention of a legitimised intermediary.
Finally, the role of the intermediary is to stimulate innovative approaches. According to van Lente et al. (2003, p. 256), the intermediary supports the "learning processes, by enhancing feedback mechanism and by stimulating experiments and mutual adaptations". More generally, the challenge is to develop and offer favourable conditions for learning and experimenting, i.e., to create a place for collective innovation.
The four core functions of this type of intermediary can be summarised as follows:
Table 3. Core functions of an intermediary as an ecosystem bridger

Core Functions              | Examples
Connect                     | Create and maintain a network for ongoing multilateral exchange [START_REF] Van Lente | Roles of Systemic Intermediaries in Transition Processes[END_REF]
Involve / commit / mobilise | Mobilise resources: human capital, financial capital, and complementary assets [START_REF] Bergek | Analyzing the functional dynamics of technological innovation systems: A scheme of analysis[END_REF]
Solve / avoid conflict      | Create legitimacy for a new technological trajectory, create a common agenda for actors with different (opposing) interests [START_REF] Hekkert | Functions of innovation systems: A new approach for analysing technological change[END_REF]
Stimulate innovation        | Support learning processes, foster feedback, stimulate experiments and mutual adaptations [START_REF] Van Lente | Roles of Systemic Intermediaries in Transition Processes[END_REF]
Core Functions of innovation intermediation: synthesis
Drawing on the literature, we identified four core functions that appear to be fulfilled by various types of intermediaries in the context of innovation: (i) connecting actors; (ii) involving, committing, and mobilising actors; (iii) solving, avoiding, or mitigating potential conflicts of interests; and (iv) (actively) stimulating the innovation process and innovation outcomes. We show that, according to the literature, distinct types of innovation intermediaries fulfil these four core functions.
Yet, previous studies do not account for how the degree of the unknown may modify the nature of these functions for the three types of intermediaries. This makes it difficult to acknowledge the complex role and actions of intermediaries in some specific cases. For all types and functions, intermediaries come into play and offer their services when situations are rather well defined. But they can also face a high degree of the unknown, which may raise new managerial challenges and affect how they fulfil the previously discussed functions.
In the following, we investigate the nature of innovation intermediation when the degree of unknown is high.
Exploring the Challenges of the Intermediation in the Unknown
In situations of low degree of unknown, actors in collaborative innovation endeavours are attracted by a clear common goal, which an intermediary can express and communicate.
Similarly, conflicting stakeholders can work together because the necessity and expectations are sufficiently high for all, and the role of the innovation intermediary is therefore to support the collaboration. Indeed, intermediation at low levels of the unknown is characterised by different degrees of uncertainty. The coordination failure of a pure market solution creates the need for intermediaries, calling for coordinative action at different levels of uncertainty. Intermediaries handle this market failure in different ways. With brokers for problem solving, those in need of knowledge are aided in finding those who possess it. With knowledge transfer intermediaries, knowledge holders must find problems that can be solved with their knowledge. There is hence a low degree of information obscurity; problems and their solutions can be understood by the key actors. In bridging situations within an innovation ecosystem, an actor with a higher need for innovation combines different sources of knowledge to create the plan for creating solutions. In all circumstances, there exists both a type of goal, problem or vision and uncertainty regarding different possibilities to resolve the issue at hand. However, what if there is no common goal or common vision, and the field of innovation seems very blurry? What if the intermediary alone cannot identify a common goal or common problem? What if there is no legitimate place to which an intermediary can invite potential stakeholders to begin to work together to create a common goal? In some situations, the knowledge that is needed, the technologies that should be developed and the relevant stakeholders are not known in advance. Rather, they will be outputs or intermediate results of the exploration of the unknown. We characterize such situations as having a high degree of the unknown, and the role of intermediaries can actually be to participate in those innovation processes where the degree of the unknown is high.
By "unknown", it is simply meant the absence of knowledge in a situation of collective action. Many works have dealt with the notion of "unknown" in management. We acknowledge that different lines of research have chosen to use different terms. For example, in knowledge management, a common phrase used to describe the unknown is "opaque" or the degree of "opacity". In chaos research and finance, authors refer to "ambiguity". In this paper, we choose the term "unknown" to stress the importance of design in innovation processes (although it is arguably very close to "opacity" and "ambiguity").
Recent advances in design theory and management research have helped to clarify the richness of the notion of the unknown. Historically the notion was used in management to try to characterize what is beyond the limits of rational choice, and requires a design effort. In a first approach, dealing with the unknown was assimilated to (complex) problem solving [START_REF] Simon | The Sciences of the Artificial[END_REF][START_REF] Hevner | Design Science in Information Systems Research[END_REF]), relying on complexity elimination (by modularization, independence creation,…) or ignorance (by dealing with simpler and higher level aggregates), in particular in system engineering. It has then been shown, on the one hand, that dealing with the unknown could not be reduced to managing complex problems [START_REF] Rittel | On the Planning Crisis: Systems Analysis of the 'First and Second Generations[END_REF][START_REF] Schön | ) Varieties of Thinking. Essays from Harvard's Philosophy of Education Research Center[END_REF][START_REF] Von Foerster | Ethics and Second-Order Cybernetics[END_REF][START_REF] Hatchuel | Towards Design Theory and expandable rationality: the unfinished program of Herbert Simon[END_REF][START_REF] Dorst | Design Problems and Design Paradoxes[END_REF] and should be extended to deal with "figural complexity" and "details" [START_REF] Schön | ) Varieties of Thinking. Essays from Harvard's Philosophy of Education Research Center[END_REF], with wicked problems [START_REF] Rittel | On the Planning Crisis: Systems Analysis of the 'First and Second Generations[END_REF], with "desirable unknowns" [START_REF] Hatchuel | Towards Design Theory and expandable rationality: the unfinished program of Herbert Simon[END_REF], etc. On the other hand it has also been shown that dealing with the unknown should be distinguished from dealing with uncertainty [START_REF] Knight | Risk, Uncertainty, and Profit[END_REF], Loch et al. 2006). According to [START_REF] Knight | Risk, Uncertainty, and Profit[END_REF] uncertainty refers to events whose probabilities cannot be determined. For instance, the probability that it will snow in summer is very lowwe know what snowing means, but this event is unlikely to occur in summer. In contrast, forms of life existing on exoplanets are unknown in the sense that it is difficult to conceive the large variety of forms they can take, because the nature of this life it-self is unknown. One should distinguish between not knowing about the occurrence of future events (uncertainty) and not knowing about the nature of these events (unknown). More generally, the distinctions between complexity, uncertainty and unknown have been developed and grounded in design theories [START_REF] Hatchuel | CK design theory: an advanced formulation[END_REF][START_REF] Le Masson | Design theories as languages of the unknown: insights from the German roots of systematic design (1840-1960)[END_REF] Building upon this definition of "unknown", we investigate the role intermediaries can play if the objects, actors, vision/goals and legitimacy of context do not exist, are partially "unknown" and need to be designed. Table 4 lists the challenges that may be faced with reference to the core functions of intermediaries. In the following part of the paper, we analyse four empirical cases in which innovation intermediaries faced a relatively high degree of unknown, and each of them had to address some of the questions raised in Table 4. 
We analyse the solutions developed in each case to investigate how the managerial challenges were met.
Research Methodology
Case-Study Approach, Data Collection, and Data Analysis
Given the lack of prior research on the role of innovation intermediaries in situations with a high degree of the unknown, and our interest in studying such intermediaries within their organizational contexts, we chose a qualitative case-study approach (Yin 2014).
The case-study approach is particularly suitable for studying "how" research questions, and allowed us to explore the managerial challenges of the unknown in their real-life contexts as well as to conduct an in-depth investigation of how innovation intermediaries resolve these challenges. To identify a set of relevant managerial principles, it was also important to examine multiple cases since we could not expect all pre-identified challenges to be equally relevant to a single case. In other words, most likely, a single case would not have been exhaustive when analysing all four main intermediary functions as well as the challenges associated with these functions under conditions of high degree of unknown.
The rationale for selecting the case sites followed an information-oriented selection strategy [START_REF] Flyvbjerg | Five Misunderstandings About Case-Study Research[END_REF]. This strategy aims to maximize the utility of information from case studies by selecting cases "on the basis of expectations about their information content" (ibid., p. 230). On this basis, we selected the cases of four innovation intermediaries in Germany, Sweden, and France for inclusion in our study (see Table 5). In each of these four cases, the degree of unknown was high, and at least one of the above-described managerial challenges (see Table 4) was particularly pronounced. Although the specific challenges of the selected cases were rather heterogeneous (see section 4.2 below), the cases exhibited considerable similarities in terms of the main functions fulfilled by each intermediary. Ultimately, this allowed us to consolidate the individual case insights into a set of managerial principles for intermediation in the unknown. To collect data, we conducted a total of over 120 semi-structured interviews with intermediary representatives and other stakeholders. The interviews lasted between 40 minutes and 2 hours (see Table 5 above). All interviews were tape-recorded and transcribed.
Follow-up emails and phone calls were used to clarify any questions that arose during the interview transcription and data analysis. To triangulate the interview data, we also ran research workshops with participants from the involved organizations, attended internal meetings and workshops, carried out quantitative surveys, visited the case sites on a regular basis to make direct observations, and collected internal documents such as presentations, reports, and meeting minutes. The collection of data from multiple sources is consistent with established guidelines on case-study research (e.g., [START_REF] Eisenhardt | Building Theories from Case Study Research[END_REF][START_REF] Yin | Case Study Research: Design and Methods (Fourth Edition)[END_REF]. Most importantly, this data triangulation allowed us to do pattern matching across data sources, and helped us identify convergent lines of inquiry.
Before analysing the case data, we integrated all interview transcripts, field notes, and other relevant documents into a case study database [START_REF] Yin | Case Study Research: Design and Methods (Fourth Edition)[END_REF]. We then coded the collected data on a case-by-case basis with a particular focus on the four managerial challenges related to a high degree of unknown (see Table 4), and compiled short summaries for each case including preliminary case findings and interpretations. The case summaries also facilitated cross-case comparisons, which require a more macro view of each case (cf. [START_REF] Choudhury | Portfolios of Control in Outsourced Software Development Projects[END_REF]. While the analytical process was necessarily interpretive, we tried to minimize the involved subjectivity by iteratively moving back and forth between the case data and the case summaries during the analysis. Furthermore, to facilitate the analysis process, all members of the research team met several times in person to present, discuss, and compare the case findings, to identify patterns, as well as to draw conclusions about the managerial principles that innovation intermediaries apply to address the challenges of the unknown.
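To make the cross-case comparison step concrete, the minimal Python sketch below tallies coded excerpts by case and by challenge, the kind of simple matrix that supports the macro view described above. The coded pairs shown are invented placeholders for illustration only; they are not data from the actual interviews.

```python
from collections import Counter

# Illustrative placeholders only: (case, challenge) pairs standing in for coded
# interview excerpts; the real study coded transcripts, field notes and documents
# against the four challenges of Table 4.
coded_excerpts = [
    ("Siemens", "connect"), ("Siemens", "connect"), ("Siemens", "stimulate"),
    ("SAFER", "mobilise"), ("SAFER", "mobilise"), ("SAFER", "connect"),
    ("CEA-CEBC", "conflict"), ("CEA-CEBC", "conflict"), ("CEA-CEBC", "stimulate"),
    ("I-Care", "stimulate"), ("I-Care", "stimulate"), ("I-Care", "connect"),
]

challenges = ["connect", "mobilise", "conflict", "stimulate"]
counts = Counter(coded_excerpts)

# Print a simple case-by-challenge matrix to support cross-case comparison.
print("case".ljust(10) + "".join(c.ljust(11) for c in challenges))
for case in ["Siemens", "SAFER", "CEA-CEBC", "I-Care"]:
    row = "".join(str(counts[(case, c)]).ljust(11) for c in challenges)
    print(case.ljust(10) + row)
```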
Next, we explain the background, the actors, and the role of the intermediary for each case.
Thereafter, in section 5, we focus on the contingency variable (degree of unknown), highlight the major challenge of the unknown for each case, and show how the intermediaries responded to these challenges in order to enable successful collective innovation.
Presentation of the four case studies
The Siemens open innovation unit
Siemens is one of few multinational corporations that have managed to run a successful business for more than 100 years. Such consistency cannot be explained by only operational excellence; effective R&D processes are also required for the frequent development of Connecting Siemens experts around the world: The OI unit has implemented a new collaboration infrastructure, a network for experts from Siemens's various divisions and business units. This infrastructure allows internal technical experts around the world to share knowledge and ask for support. Collaboration is not limited to the primary technical area of expertise, as the head of the OI unit noted: Employees also engage in many other forums to offer and find "out-of-the-box" solutions to specific technical problems.
The OI unit can be viewed as an internal innovation intermediary for various sectors and business units. Specifically, this intermediary is an entity within a large enterprise.
The collaborative arena SAFER
Southwest Sweden is home to several major automotive companies, such as AB Volvo, Volvo Car Corporation, and Autoliv. It is in the interest of the different actors to collectively conduct research on vehicle and traffic safety to strengthen the automotive cluster. SAFER is organised to facilitate this work and provide a platform for innovation, as it acts as a host to a range of collaborative projects. The arena offers office facilities, meeting rooms, seminars, conferences, etc. to individuals at their institutional partnerscompanies such as AB Volvo, Autoliv, government agencies such as the Swedish Transport Administration, smaller technical consultancy companies, and universities such as Chalmers, the Royal Institute of Technology (KTH) and Gothenburg University. SAFER is an association consisting almost solely of its partners. The association is governed by an annual meeting of the partners and an elected board and led by a director who, with a few assistants, is responsible for maintaining daily operations. SAFER does not have judicial status in the regular sense (for practical reasons, the economic administration is managed via Chalmers). Without the partners, there would be no organisation. Approximately 170 people had access to the SAFER offices at the time of study.
SAFER hosts an array of projectsfrom pre-studies to large-scale testing projects to method development; however, they are all focused on the initial non-competitive phase of the innovation process. The collaborating partners pitch ideas on new projects to the other partners to find collaborators. On some occasions, collaborators are found outside of the boundaries of SAFER, where the extensive network of SAFER can be of use. SAFER thus provides a space for matchmaking and networking and offers a neutral and legitimate place in which those working on the projects can meet and work.
The agricultural cooperative CEA and the research centre CEBC
The domain of agriculture must cope with serious innovation challenges to attain environmental sustainability. These challenges are present particularly in cereal plains, where intensive farming practices significantly damage biodiversity, as well as water and soil resources. This case study depicts a pioneer situation in the West of France in which a small agricultural cooperative, CEA (Cooperative Entente Agricole, 400 farmer members), has established a partnership with a research centre in ecology, the CEBC (Centre d'Etudes Biologiques de Chizé), to design solutions that reconcile agriculture and environmental protection on a landscape scale. Such collaboration is crucial to the exploration of innovative solutions, but it is challenging, as stakeholders have very different interests and are often in conflict. Through this initiative, CEA and the CEBC seek to play the role of an innovation intermediary, bringing together a plurality of stakeholders to design innovative farming and land management practices. As an initial step in the project, the cooperative and the research centre organised a collective design workshop in May 2011. Most participants were cooperative farmer members and technicians, but other stakeholders such as local authorities were invited as well. Thirty people participated. Following this workshop, the cooperative and the research centre began a research-action project involving agronomy and ecology scientists, as well as farmers. This project will continue for four years and is co-funded by CEA, the CEBC and local authorities. The project aims to provide knowledge on environmentally friendly farming practices and on the governance challenges raised by agro-ecological projects that require coordination between heterogeneous stakeholders. This bottom-up initiative is thus now supported by public authorities.
The I-Care cluster
The I-Care cluster, launched in 2009, aims to encourage collaborative projects involving industry and research laboratories in the Rhône-Alpes region (France) in the field of health technologies. One area in particular has attracted investment and R&D efforts without producing significant results in terms of innovativeness: the need to improve the well-being of elderly people who face a loss of autonomy.
In France, the average age of the population is increasing; therefore, innovations using information and communication technologies (ICT) to assist people experiencing a loss of autonomy are highly sought after. However, the quality of proposed innovations has not met expectations. The I-Care cluster acted as an intermediary to explore new ideas collectively with the totality of stakeholders at several creativity workshops (60 participants). This intermediary influenced the nature of the interactions among these stakeholders by making paths of innovation that remain unexplored visible. To do so, the cluster developed a methodology based on a C-K theory framework [START_REF] Hatchuel | Teaching Innovative Design Reasoning: How C-K Theory Can Help to Overcome Fixation Effect[END_REF], which allowed for the unveiling and evaluation of paths of innovation that provided potential new ways of tackling the issue of autonomy. The methodology provided a means to objectify the distance between expectations in terms of innovation regarding a specific milieu and what the actual innovation capabilities of the sector could provide. The methodology was also a means to stimulate new concepts to be explored by the various actors in the sector.
The four actors as intermediaries
We can summarise our four cases with regard to the different functions of intermediaries: The following table indicates how the intermediaries in each case fulfilled these functions.
Results : the managerial challenges of the unknown and insights into the ways intermediaries resolve them
We have observed that the intermediaries in all four case studies were engaged in the core functions that we identified in the literature. However, we found that the intermediaries in each of our cases also faced rather unusual, or challenging, situationssituations that had not been reported in previous studies. We now focus on each of these situations, explaining the particular challenge in the context of the case at hand. We then illustrate how the intermediary in question coped with these challenges.
Connecting actors that have previously not been identified: Siemens
Siemens is one of the largest enterprises in the world, with more than 350 000 employees operating in more than 190 countries. Every day, many people in Siemens encounter challenging technical problems that must be resolved. In the past, problem solving was limited to individual specialists and local teams (engineers could ask colleagues on their teams). Perhaps in some cases, engineers personally knew experts outside their team. However, it was not possible to receive ideas and solutions from "unknown" colleagues.
One of the activities initiated by the OI unit was the development and implementation of an interactive online expert network within the Siemens intranet. This network is an infrastructure rather than a real network in a stricter sense because the nodes (Siemens employees) are not active participants on a permanent basis. Rather, the infrastructure enables employees across industrial sectors and various regions to build "ad-hoc" networks for specific problem solving challenges. Experts who operate in totally different industries can provide pieces of knowledge regarding problems that have been posted on this platform. As an example, one engineer in the Diagnostics unit was facing a problem and submitted it on the platform. Within 40 minutes, he received the first answer and, within 2 days, could collect 25 answers from experts around the world whom he neither knew personally nor had previously identified and selected as potential problem solvers for this ad-hoc network.
Mobilising joint innovation while in competition: SAFER
The overall objective of the SAFER collaboration is to increase the competitiveness of the automotive cluster in southwest Sweden and to act as an open innovation arena. The partners in SAFER have been self-selected, but recently, more effort has been devoted to the identification and attraction of new types of partners. Most of them have worked together before in different constellations. The larger organisations, such as Autoliv and Volvo, have several points of contact with SAFER: several individuals collaborate in different areas of expertise. This strategy indicates that although the partner organisations are established, the stakeholders within those organisations that are relevant for different projects are not. There is an ongoing matchmaking process in the different organisations to put the relevant people to work together. Trust is created between individuals who act as contact points between the organisations, and this often involves the sharing of information that extends beyond what is actually allowed judicially (IP). Because of the specialisations of the different partners, they complement each other in terms of competences, thereby creating a new, shared form of organisation in the space between the partner organisations. Several of the partner organisations would engage in bilateral collaboration even if SAFER did not exist. However, SAFER becomes a "safe haven", a legitimate place for collaboration in ways that otherwise would not have been possible because of competition law, market positioning, political conflict or lack of initiative. Most of the individuals involved agree that the existence of a physical space in which to meet, create trust, drive projects, and thereby collectively share knowledge and develop new ideas is absolutely central to the success of SAFER. SAFER is neither a traditional university competence centre nor a private research institute. Instead, SAFER is an arena for collaboration that is "self-regulated", as its partners work together in different forms to collectively pool resources, skills and capabilities to succeed in the safe area. If one of the other partners were to host the collaboration, it would not be regarded as equally free. However, it is only the quality of the work conducted in the name of SAFER that legitimises its existence.
Resolving conflicts without pre-existing common interests: CEA-CEBC
Farmed ecosystems' stakeholders generally have contradicting interests regarding their resources; nevertheless, the actions initiated by some actors have impacts on others. As a consequence, conflicts are common, particularly between farmers and naturalists or other citizens. The challenge of addressing potential conflicts of interest is thus essential to overcoming such situations and initiating a collective innovation process that reconciles agricultural production and environmental preservation.
In the case studied, the ecologists proposed the development of grasslands in the cereal plain.
Ecologists consider that grasslands regenerate regulation functions crucial for ecosystem functioning (water storage, insect reproduction, etc.), as these areas are more "stable" than cereal crops.
Indeed, grasslands are not ploughed every year and require fewer pesticides than cereal crops.
However, cereal farmers initially did not view grasslands as an acceptable solution despite their ecological interests, as they were not sufficiently profitable: a market for fodder hardly exists.
To overcome the grassland reintroduction conflict, the "grassland" proposition was not considered a turnkey solution with a predefined value for which a consensus had to be reached. Rather, the proposition was considered the departure point of a design process involving a large range of agro-ecosystem stakeholders [START_REF] Berthet | Analyzing the Design Process of Farming Practices Ensuring Little Bustard Conservation: Lessons for Collective Landscape Management[END_REF]. CEA and the CEBC organised a workshop to initiate a collective design process, departing from the initial proposition "designing grasslands for a sustainable agro-ecosystem". The aim was to cause the stakeholders to revise the identity of grasslands (not only an ecological habitat; not only an area for intensive fodder production) and together explore the potential values of "new kinds of grasslands".
The stakeholders first shared knowledge of grasslands and then explored possible new functions, such as the regeneration of insect populations and higher biodiversity throughout the plain. Indeed, grasslands represent biodiversity sources in a highly disturbed ecosystem.
The exploration, led by stakeholders, made visible interdependences between them: they found that providing such ecological functions required further coordination between farmers.
For instance, managing grassland location throughout the landscape makes it possible to optimise insect dispersion. The design workshop also brought to light new opportunities for creating value, such as the production of high-quality dairy products with local forage from environmental-friendly grasslands or the improvement of water quality.
Stimulating innovation by unveiling unexplored paths of innovation: I-Care
In France (as in Europe), the average age of the population is increasing. The number of French citizens over 75 years of age will become 2.5 times higher between 2000 and 2040, reaching a total of 10 million people, and it is estimated that 1.2 million people will have lost their autonomy by 2040. Innovation using ICT to aid people experiencing a loss of autonomy is highly sought after to provide means for elderly people to enhance their quality of life and stay at home longer. The mainstream path with regard to autonomy addresses the monitoring of a person in his or her home using numerous various high-tech devices (e.g., a medallion that can trigger a remote alarm if necessary). These types of projects have been on the market for over 15 years already (and there are plenty of these projects); however, none of them have had commercial success. Thus, despite a well-expressed need, the innovativeness of the field appears to be stale.
The discussion initiated by the I-Care cluster with geriatricians led to the discovery of the concept of fragility. Fragility is described as an intermediate state between robustness and dependence. For example, during this period of life, for a large proportion of seniors, the risk of falling or developing a disease is greater.
The problem of autonomy was then reformulated using this new concept. Shifting the focus from the concept of the use of ICT in increasing the autonomy of seniors to the concept of fragility revealed new interdependences among the actors, as well as new actors to involve, and facilitated understanding of the current staleness of these innovation processes.
Thus, the actions of the cluster and the proposed conceptual broadening facilitated the opening of the field to new stakeholders (e.g., in connection to fragility and the seniors' environment). Various actions performed by the cluster (e.g., a seminar emphasising the lack of knowledge, a workshop for working collaboratively on original concepts, and meetings with involved entities) led to the emergence of awareness of new possibilities and, therefore, to the appropriation of new alternative technologies by all of the ecosystem's stakeholders and engendered new modalities of interactions among these stakeholders.
Discussion
This set of cases led to original management principles for addressing each of the four forms of the unknown. These management principles are summarized in Table 7. They are derived from the variants of the core intermediary functions (connect, involve, resolve conflict, stimulate innovation) that the intermediaries in our cases deploy, which we first describe:
Connect unknown people: Expert networks are well known in the literature, and very famous cases at Siemens have already been studied in depth [START_REF] Voelpel | Five Steps to Creating a Global Knowledge-Sharing System: Siemens' ShareNet[END_REF].
However, these networks connect already identified experts. Interestingly, in the current Siemens case, we make note of the capacity to build an "ad-hoc" network with regard to an issue that can be new to the firm. Although the implementation of expert networks in the general case is often based on technical skills and scientific disciplines, the building of the ad-hoc network in our case is driven by the innovation issue itself. The temporary "organisation" is initiated on demand and disassembled when the issue is resolved. The intermediary (in this case, via the technical platform rather than through active involvement) makes this method possible. The solution that the Siemens case provides extends beyond the classical "solver-solution", in which actors are supposed to provide one solution. The solution also extends the "networker" role, in which intermediaries connect parties in innovation ecosystems: in a sense, the entire Siemens organisation can be understood as an innovation ecosystem, but because of the very large number of employees, it is impossible to "know" all of the potential actors in advance. Thus, the actors are unknown.
Finally, it should be emphasised that the building of ad-hoc networks is not based on incentives: motivation is intrinsically based on the innovation issue. Experts commit to the emerging network because of their interest in addressing the issue, which is a strong motivation, perhaps even stronger than usual economic incentives [START_REF] Pink | Drive: The Surprising Truth About What Motivates Us[END_REF][START_REF] Glucksberg | The Influence of Strength of Drive on Functional Fixedness and Perceptual Recognition[END_REF].
Mobilise, interest, involve a legitimate place: SAFER was initiated because different market players shared a common interest -vehicle and transport safety. By creating SAFER, the stakeholders also created a legitimate place for collaborative research and innovation. In the case of SAFER, the stakeholders come together not to find "the solution", but rather, because of the favourable collaborative conditions, to invent solutions. SAFER demonstrates a striking case in which the intermediaries do not raise expectations regarding the solution (so-called anticipative expectations) but raise expectations regarding the capacity to generate multiple solutions (so-called generative expectations, cf. Le Masson et al., 2012). Legitimacy is based not on the output but on the working conditions. Note that this reasoning was already the logic of the machine shop culture at the root of Edison Invention Factory [START_REF] Israel | Edison: A Life of Invention[END_REF][START_REF] Millard | Edison and the business of innovation[END_REF]. Just as in Edison's factory, working at SAFER is more innovative and more fruitful in terms of innovation output than working inside one's own parent company.
Conflicts as a resource for collective exploration: Contrary to "intermediation in the known", for which conflict avoidance or trade-offs is often the rule, the management principle illustrated by CEA-CEBC is to address conflict in a creative manner and even to address conflict to be creative. Indeed, conflicts reveal a need for innovation that would reconcile contradictory interests and, hence, might be a source of radical innovation. It is well known that innovation is also marked by power relationships [START_REF] Santos | Constructing Markets and Shaping Boundaries: Entrepreneurial Power in Nascent Fields[END_REF]; however, the works on these topics have demonstrated that this power relationship is based precisely on the definition of boundaries. Conversely, the intermediation in the unknown consists of blurring existing boundaries by reinventing their definitions (new markets, new technological variants and combinations, new constraints understanding, questioning the identity of the object of conflict…), which creates opportunities for "new boundaries" that correspond to possible common interests. In the case of CEA and CEBC, at first, the actors had very distinct understandings of the key use of grassland: the idea that grassland is "for production" (boundary 1) vs. the idea that grassland is "for bird preservation" (boundary 2). The intermediation work consisted of creating new "grasslands" designs that could combine several values (productive farming, as well as the preservation of fauna and water resources). The intermediary redesigned the identity (functions and design parameters) of grasslands and hence created the conditions for overcoming conflicts and power relations.
Sharing an agenda of open issues instead of sharing knowledge. The I-Care case illustrates a management principle for handling ill-defined problems. The absence of well-identified problems might hinder knowledge sharing. However, knowledge is not necessarily the key resource in radical innovation. It is established that creativity and the capacity to imagine can also produce innovations. Thinking out of the box is helpful in avoiding so-called fixations [START_REF] Agogué | The Impact of Examples on Creative Design: Explaining Fixation and Stimulation Effects, International Conference on Engineering Design[END_REF][START_REF] Hatchuel | Teaching Innovative Design Reasoning: How C-K Theory Can Help to Overcome Fixation Effect[END_REF][START_REF] Jansson | Design Fixation[END_REF], so today, it is a critical capacity for radical innovation, a new form of absorptive capacity (Le Masson et al., 2012a[START_REF] Le Masson | Revisiting Absorptive Capacity with a Design Perspective[END_REF]. Moreover, knowledge sharing is often critically linked to confidentiality or IP issues; sharing questions and unsolved problems is paradoxically easier. As for managerial implications, our study underlines the difficulties that innovation intermediaries as well as open innovation managers currently face. First of all, there is often ambiguity regarding the degree of unknown in innovation dynamics, and it is all the more so in open innovations. Indeed, recognizing that the issue at stake goes beyond the expertise of the stakeholders and requires a real exploration approach is not easy in it-self, as over- innovation. These studies have laid the foundation for our understanding that an intermediary is a quite complex actor with sophisticated management principles (with its specific objectives, processes, competences, performance criteria, etc.). By studying intermediation in situations of high degrees of unknown, we find that the complexity of the intermediation management principles is even higher. For example, our cases involves the introduction of new actors into the ecosystem, stimulating innovation to overcome collective fixation, organising a legitimised collaborative working place, and addressing conflicts in creative ways. The intermediary becomes the architect of the ecosystem and is in charge of renewing the language of forms and values, inviting "entrepreneurs", dividing and coordinating the entrepreneurs' exploration work, and handling conflicts between them. Hence, this new intermediation role in the unknown is consistent with what Agogué et al. (2012) propose to be "the architect of the unknown".
Firms are increasingly relying on outside input and collaboration to revitalise their innovation processes to achieve not only incremental innovations, but also more radical innovation.
There is a dilemma inherent in collective radical innovation, however. Radical innovation appears to require even more learning, well-managed collective exploration processes, long-term commitment and complex coordination, but open innovation teams can rely on neither the classical internal coordination capacities of the firm (learning, core competencies, collective ownership, common purpose, etc.) nor market mechanisms that fundamentally change existing entities. Hence, it appears that more coordination is needed and less coordination capacity is available. The "architect of the unknown" (Agogué et al. 2012) appears to resolve this dilemma in situations of high degrees of unknown. The existence of the "architect of the unknown" (ibid.) explains why open innovation can also be radical.
Our paper has described the properties of intermediation in the unknown and principles for how to manage it. Yet, we acknowledge that our study has several limitations, which in turn has implications for future research. First, our case selection was built on the heterogeneity of our cases. As such our four cases are facets of a quite new phenomenon -intermediation in the unknown. More theoretical work is required to propose an integrated model of not only the functions but also the type of intermediaries in the unknown (in terms of role, governance, and performance). Second, even if our cases were very contrasting, we focused mainly on organizations. As stated by [START_REF] Howells | Intermediation and the Role of Intermediaries in Innovation[END_REF], we are in need today for a broader understanding of intermediation that includes as well individuals, professional bodies, research councils, advisory bodies and trade unions (ibid). We therefore call for further research to identify more examples of intermediaries in the unknown and to better understand their management tools and doctrines when intermediaries are not organizations as such. Last, the introduction of the degree of unknown may not be a specific feature of innovation intermediaries, and might call into question contemporary work on radical innovation in complex and open contexts. Future research may therefore extend the notion of unknown as a contingency variable to study innovation dynamics.
marketable new products (and services). Although internal R&D has been highly important ever since the company's foundation, collaborative R&D occurring across business units and industrial sectors and used along the value chain with suppliers, customers and external communities is also part of innovation strategy. Nevertheless, some years ago, it became clear that new web-based technologies and developments in social behaviours (e.g., user cocreation, social networking, and online collaboration) called for a systematic approach to open innovation. For this purpose, a dedicated "Open Innovation" (OI) unit was installed at Siemens headquarters. The OI unit develops processes, tools, and governance mechanisms to complement other prevalent forms of (open) collaborative innovation. The OI unit supports three focus areas: Collaborative idea generation, particularly via internal and external idea contests: The OI unit supports the operative business units by defining idea contest topics and formulating challenges. The OI unit also provides access to supporting technology and maintains relationships with IT service providers. The unit also supports the process as a whole, from initiation to idea selection and subsequent follow-up activities. Collaboration with knowledge brokers (e.g., NineSigma, InnoCentive): The OI unit first internally promotes the opportunity to collaborate with knowledge and also aids in the formulation of the problem. The OI unit then fulfils a gate-keeping and quality assurance function in such collaborations with knowledge brokers.
confidence in what is currently known sometimes prevent managers from realizing how much is actually unknown. And because of the organizational complexity that open innovation fosters, one risk of radical open innovation is to engage in the process as if it is already known in advance which knowledge that is needed, which technologies should be developed and which stakeholders that are relevant. Both innovation intermediaries and open innovation managers should first conduct a diagnosis of the level of unknown prior launching the open innovation dynamics. This diagnosis may also help clarify the outputs of the process. Indeed, in high level of unknown situations, the identification of both the knowledge to acquire and the stakeholders to involve should be outputs or at least intermediate results of the open innovation process.Conclusion and Perspectives on studying Intermediation in the UnknownOur study contributes to the theory of innovation intermediaries by introducing the degree of unknown as a key contingency variable. We characterise a set of management principles for intermediaries in situations in which the degree of unknown is high. This set of principles is consistent with previously described intermediaries such as architects of the unknown and colleges of the unknown. Yet, our contribution goes beyond previous proposals by clarifying the notion of unknown in open innovation as well as the activities required for intermediation in the unknown. One of the consequences of this work is the uncovering of the paradoxical complexity of this so-called intermediation. In early studies of open innovation, intermediation was practically absent. In recent years, many authors have revealed the importance of intermediaries to open
The authors have shown that managing the unknown should integrate design capacities in collective action, i.e. the capacities to create new dimensions, new design parameters as well as new values and new design spaces for action (e.g. [START_REF] Schön | ) Varieties of Thinking. Essays from Harvard's Philosophy of Education Research Center[END_REF][START_REF] Grin | Reflexive modernization as a governance issue -or: designing and shaping Restructuration[END_REF][START_REF] Dorst | Design Problems and Design Paradoxes[END_REF][START_REF] Hatchuel | Strategy as Innovative Design: An Emerging Perspective[END_REF]). In particular, innovation processes are precisely about addressing the unknown: indeed, developing a new object means that this object was previously unknown and that the unknown was then explored.
Table 4. Challenges raised in situations of high degree of the unknown
- Connect: Can intermediaries connect parties when relevant stakeholders are not identified?
- Involve / commit / mobilise: Can intermediaries mobilise without a positive reputation or a legitimate proposition?
- Avoid / resolve conflicts: Can intermediaries overcome conflict without pre-existing common interests?
- Stimulate innovation: Can intermediaries stimulate innovation without pre-defined problems or research questions?
Table 5. Case overview and data sources
- Siemens: Germany; time of analysis January 2011 - October 2011; 6 interviews
- SAFER: Sweden; time of analysis September 2008 - December 2012; >55 interviews
- CEA-CEBC: France; time of analysis March 2010 - April 2013; 41 interviews
- I-Care: France; time of analysis September 2009 - August 2011; 21 interviews
Table 6. Summary of the four case studies: Challenges in the unknown
Connect:
- Siemens: Connect people beyond local (physical) boundaries, particularly by introducing new (web-based) collaboration platforms
- SAFER: Connect researchers and specialists in the vehicle and traffic safety field originating from partners who compete in the market
- CEA-CEBC: Connect agricultural professionals and naturalists (initially in conflict)
- I-Care: Connect companies, health organisations, research organisations, and specialists (for instance, geriatricians)
Involve / commit / mobilise:
- Siemens: Promote methods and tools across sectors & business units and allow employees to present their ideas to top management
- SAFER: Create a legitimate place for meeting and innovating (offices and lab environments); collaborative activities for idea generation and knowledge sharing
- CEA-CEBC: Organise meetings and a collective design workshop: introduce issues, create mutual understanding, and formulate a common goal
- I-Care: Support the various actors by organising joint creativity workshops and applying new creativity techniques
Solve / avoid conflict:
- Siemens: Create legitimacy ...
- SAFER: Written rules of ...
- CEA-CEBC: Collectively ...
- I-Care: Open discussion ...
Table 7. Management principles for intermediation in the unknown
Connect:
- Can intermediaries be active in the unknown? Can they connect parties when relevant stakeholders are not identified?
- Management principle: Develop a capacity to create an ad-hoc network in which the right people commit to collective innovation (not incentives)
- Illustration with the empirical cases: Siemens developed an interactive online expert network within its intranet. This infrastructure enables employees across various industrial sectors and regions to build "ad-hoc" networks for specific problem solving challenges.
Involve / commit / mobilise:
- Can intermediaries be active in the unknown? Can they mobilise joint innovation while in conflict and competition?
- Management principle: Create a legitimate place for collective innovation (not a shared vision)
- Illustration with the empirical cases: The SAFER association offers to its members a neutral and legitimate place in which those working on collective ...
01481881 | en | ["shs"] | 2024/03/04 23:41:48 | 2016 | https://minesparis-psl.hal.science/hal-01481881/file/Le%20Masson%20Hatchuel%20Weil%202015%20track%20Design%20pa1.pdf
A Hatchuel
B Weil
Innovation theory and the logic of generativity: from optimization to design, a new post-decisional paradigm in management science
In this paper we contribute to show that innovation theory can today strongly contribute to the (re)foundation of management. Innovation theory is teared apart between the historical optimal-decision-making paradigm and the new perspective on creation. Still contemporary advances in design theory provide an integrated framework that accounts for decision andcreation, their similarities and differences, and enables to introduce a relativity principle, the unknown (or the expected generativity), to account for the continuity from one activity to the other. We show how applying that general framework and that relativity principle can help extend decision-based models (decision under uncertainty, problem solving, combinatorics) and how this extension helps revisit basic managerial notions built on these models (risk management, knowledge management, coordination…). We conclude on some of the perspectives opened by this relativity principle for management.
2-to account for the new forms of collective actions that emerge precisely around these innovation phenomena, management science can't content with relying on the decision paradigm,which has long been one of its main roots. Studying these forms of generative collective action with a decision paradigm results in logical aporia and unconclusive empirical studies (see for instance (Hatchuel et al. 2010;[START_REF] Rogers | The Generative Aspect of Design Theory[END_REF]).
3-by contrast, the study of these forms of generative action with a post-decisional, generative paradigm helps to deepen the foundations of management science andopens new paths for research. Moreover, since the generative paradigm includes and extends the decision paradigm, it helps to strengthen the decision roots without destroying them.
These propositions have been established (at least partially) by relying on empirical approaches: many works have already shown how innovation cases oblige to revise classical notions in management and/or propose new notions that take into account the generative logic that is inherent to innovation -see for instance the works on "managing as designing" (Boland et Collopy 2004), design approaches in strategy (Hatchuel et al. 2010), or critical works that underline unconclusiveness of traditional organizational approaches for innovative organizations [START_REF] Damanpour | Product and Process Innovation: A Review of Organizational and Environmental Determinants[END_REF],… Still these results are often interpreted as showing the "great divide" between (radical) innovation and decision making. And this false evidence of a great divide tends to separate works on radical innovation and disciplinary approaches in HR, marketing or strategy. This "great divide" prevents to see how lessons learnt in radical innovation studies could bring insight to other works in management science.
In this paper we show that there is a continuity between decision-optimization and radical innovation -we keep the unique features of both activities, but we also show that both activities can be represented in the same framework and one can shift from one to the other by modifying one parameter of the theoretical framework, namely the generativity power (or, its dual facet: the unknowness level). We finally show how "generativity" can play the role of a "relativity" principle in management science.
In a first part we will show the issue of overcoming the great divide; in a second part we show how design theory is a unifying framework that accounts for decision and radical innovation, their similarities and differences, and enables to introduce a relativity principle, to account for the continuity from one activity to the other; in a third part we apply that general framework and that relativity principle to more specific formal models, that are underlying many works in decision paradigm: we show how relying on design theory, one can extend the model of decision making under uncertainty, the model of problem solving, and the model of combinatorics. We also show how this extension helps revisit basic managerial notions built on these models (risk management, knowledge management, coordination…). We conclude on some of the perspectives opened by this relativity principle for management.
Part 1-the great divideoptimal decision making vs creativity
We first show the two paradigms in innovation management -decision paradigm and creation paradigm -and the issue of overcoming the great divide between them.
1-Decision paradigm in innovation management: historical breakthroughs
It is interesting to underline that many breakthrough obtained in the study of innovation management in the second part of the 20 th century were based on the decisionoptimization paradigm.
Regarding R&D management models, a first stream of research addresses new product development and planning (see for instance [START_REF] Clark | Product Development Performance : Strategy, Organization and Management in the World Auto Industry[END_REF][START_REF] Thomke | The Effect of "Front Loading" Problem-Solving on Product Development Performance[END_REF]). Researchers have assimilated a new product development (NPD) project to a general project (like the project for organizational change or civil engineering, or event organization…) where the "production" activity is broadened to the production of data. "tasks" -ie steps in the planning-are not limited to production steps but are steps in the process of "data production". This way, the authors were able to transfer tools and techniques of operation research to innovation situations. The works discussed ways to manage project resources to meet product requirements in a reliable time. Budget, time and quality control relied on the measure of the drift from the optimal path in a PERT diagram. Knowing the tasks, the constraints (resource costs and availability,…), managing a project consists in finding the optimal path and be able to monitor and re-plan in case of drift (taking into account new resource constraints, taking into account an external shock,…). It relies on specific models and techniques that are derived from usual operations research techniques, in particular graphs theory and algorithm: critical path construction with PERT, multiple paths with resource and cost constraints (Gantt diagrams, PERT/Cost) and random events (Random PERT). Hence organizing the development process consists finally in mastering complex combinations. These works corresponded also to works on the organization of the development process: models like stage-gate [START_REF] Cooper | Stage-Gate Systems: A New Tool for Managing New Products[END_REF]), V-cycles or chain-linked organization(Kline et Rosenberg 1986) were based on gathering the right experts with the right knowledge to make the right decision taking into account relevant uncertainties at each development step. And in the strategy literature, researchers went as far as characterizing firms by their "combinative capabilities" -their capacity to combine pieces of knowledge into meaningful innovations and strategic actions (Kogut et Zander 1992).
Another stream of works addresses the economics evaluation of marketing and research activities. How to evaluate the value of money dedicated to research? Why should a company invest in knowledge production? Already in the end of the 19 th century, a model emerged that quickly became so dominating that we often tend to think this is the only possible model: this money is invested to reduce uncertainty. One of the first authors working on this is no less than Charles S. Peirce at the time when he was working for the US Coast Survey [START_REF] Peirce | Note on the theory of the economy of research[END_REF]) (reproduced in 1967 in Operations Research, Vol 15 n°4 pp. 643-648). This logic of risk reduction was progressively extended to other innovation skills: marketing was seen as a profession able to increase market knowledge to reduce market uncertainty. The role of research management or marketing management is modelled as the capacity to use research resources to get uncertainty reduction where it is most needed. Some researchers went as far as applying option pricing developed in finance (based on the theory of decision under uncertainty) to the pricing of so-called "real options" [START_REF] Fredberg | Real options for innovation management[END_REF][START_REF] Perlitz | Real options valuation: the new frontier in R&D project evaluation[END_REF].
A third stream of research deals with the economic evaluation of projects and project portfolio. How to evaluate the value of an NPD project with market and technical uncertainty? Assimilating an NPD project to an investment, it was possible to apply to NPD projects the tools and techniques developed for corporate investment: return on investment, net present value (NPV) and expected utility. Given a list of projects with known market and technical uncertainty, managing an NPD project portfolio hence consisted in choosing the projects ensuring the highest levels in NPV or expected utility. From a strategic point of view, many works analyse the tension between exploration projects and exploitation projects, referring to the seminal works of March [START_REF] March | Exploration and exploitation in organizational learning[END_REF]; here again: an analytical framework that is directly derived from decision making under "bounded rationality" conditions.
Hence innovation management techniques finally relied more or less implicitly on a set of techniques and models that all belong to the field of optimal choice (optimization on complex combinatorics (graphs), optimization in uncertainty situation) -hence the decision paradigm.
2-Ideation and creativity paradigm in innovation management
By contrast, many works have underlined that radical innovation requires imagination, ideation, creativity, dealing with the unknown -is forms of reasoning that are far from optimal choice. For instance, creativity is born in psychological studies precisely as a way to characterize a form of intelligence that is different from IQ [START_REF] Guilford | Creativity[END_REF][START_REF] Guilford | Traits of Creativity[END_REF]. For Guilford one can characterize IQ as the capacity to answer questions were there is one best answer whereas creativity is the capacity to answer questions where there is no one single solution -and one will appreciate fluency, flexibility and originality of the answers given [START_REF] Torrance | The Nature of Creativity as Manifest in its Testing[END_REF]. Many works in creativity management have then shown that managing collective creativity could not follow the usual rules of administrative management based on command, control and incentives to reach one specific goal [START_REF] Amabile | Affect and Creativity at Work[END_REF][START_REF] Amabile | How to kill creativity[END_REF][START_REF] Hargadon | When Innovations Meet Institutions: Edision and the Design of the Electric Light[END_REF].
Authors have regularly shown that innovation obliges us to change our view of management. Early on, in the 1960s, Burns and Stalker exhibited the "organic structure", by contrast with the "mechanical structure". The latter is characterized by clear work division, with local optimization and well-identified rules to aggregate results, and a hierarchical structure for command, control and communication. The "organic structure" (embodied in Burns & Stalker's book by only one case) actually raises a critical question: the determinants of creative behaviour within organizations are not described by the usual administrative language and they require other models. Van de Ven et al., based on the impressive "Minnesota study", showed that the "innovation journey" is not planned and controlled towards one clear and well-specified goal: modelling the innovation journey requires alternative models. They insist for instance on the fact that learning in the innovation journey cannot be modelled with the usual models of uncertainty reduction: in the usual "try and learn" models, action reduces the uncertainty of well-identified outcomes, but in the innovation journey, action reveals unexpected outcomes (Van de Ven 1986; [START_REF] Van De Ven | The Innovation Journey[END_REF]). Meta-analyses of the organizational determinants of innovation have largely confirmed that the usual descriptors (work division, span of control, …) cannot explain innovation success [START_REF] Damanpour | Product and Process Innovation: A Review of Organizational and Environmental Determinants[END_REF][START_REF] Damanpour | Organizational Complexity and Innovation: Developing and Testing Multiple Contingency Models[END_REF]. Many works have underlined the limits of "problem-driven" approaches, which cannot account for "problem formulation" issues [START_REF] Rittel | Dilemnas in a General Theory of Planning[END_REF][START_REF] Simon | Models of Discovery and Other Topics in the Meth-ods of Science[END_REF]. These works led to the view that innovation management should rely on another paradigm, where emergence, creativity, new ideas, new knowledge and new "need-solution" pairs (Hippel et Krogh 2016) are the norm. This second paradigm is often described by metaphors and analogies. In an analogy with the work of the architect Frank Gehry, Boland et al. explain that managing should be considered as "designing", a capacity to remain in a "liquid state" instead of a crystal one [START_REF] Boland | Managing as Designing: Lessons for Organization Leaders from the Design Practice of Frank O. Gehry[END_REF]. Authors use the notions of "mapping", "framing" and "guiding patterns", which appear as metaphors for a design strategy, a design rule or a brief (a classic practice in industrial design).
3-When the great divide limits our understanding of innovation -and management
In the last decade, new trends have emerged in innovation management. Interestingly enough, many of these new trends tend to keep the optimization logic, by finding ways to improve the quality of decisions. Let's take some examples among well-known streams of research:
Open innovation builds on the assumption that internal R&D costs limit the quantity of knowledge that is available to find an optimal solution to a given problem. Open innovation (Chesbrough 2003) considers that relying on external resources makes it possible to increase the quantity of ideas and knowledge at a reasonable cost, or even at a lower cost than in a "closed innovation" model. Hence open innovation helps the firm to get a better solution, at a lower cost, to solve one given problem.
Another example is the logic of platforms and modularity (Gawer 2009): this strategy builds on the fact that markets are uncertain and fast evolving, so that an innovation strategy tailored to one particular demand is risky; innovators therefore develop platforms and modules that help them resist market uncertainty. As demonstrated by [START_REF] Baldwin | Modularity in the Design of Complex Engineering Systems[END_REF], 2000), the logic of modularity is actually an option logic. It is an optimization logic under uncertainty.
The ambidexterity approach deals with the balance between evolutionary and revolutionary changes in organizations [START_REF] Tushman | Ambidextrous Organizations: Managing Evolutionary and Revolutionary Change[END_REF], and between exploitation and exploration in organizations [START_REF] March | Exploration and exploitation in organizational learning[END_REF]. Since "exploration", in the sense of March, is no more than a form of "search" strategy in which one does not choose the frequently used search routines but rarely used ones, one finally tends to interpret ambidexterity as a "search" that mixes two heuristics, in the end a refinement of optimal decision-making heuristics.
Recent studies on managing risk in "unknown unknown" (unk unk) situations (e.g. [START_REF] Loch | Diagnosing Unforseeable Uncertainty in a New Venture[END_REF]) aim at integrating unknown "states of the world" into the Savagian decision-making theory (see p. 31). But these "unknown" states are finally modelled as uncertain states, hence remaining in the Savagian framework. Consequently, the prototyping strategies in unk unk situations are modelled as uncertainty reduction strategies, based on "search" in complex probability spaces [START_REF] Sommer | Selectionism and Learning in Projects with Complexity und Unforseeable Uncertainty[END_REF][START_REF] Loch | Parallel and Sequential Testing of Design Alternatives[END_REF].
Hence these four streams of research finally rely on the decision paradigm. But the paradigm also imposes limits:
Research works have shown that the logic of "problem solving" is a critical limit to open innovation [START_REF] Sieg | Managerial challenges in open innovation: a study of innovation intermediation in the chemical industry[END_REF]. And empirical studies have shown that there are forms of open innovation that are not driven by optimal problem solving but by a more efficient exploration of the unknown [START_REF] Agogué | Explicating the role of innovation intermediaries in the "unknown": A contingency approach[END_REF].
Platforms are not only made for uncertainty reduction [START_REF] Gawer | Industry Platforms and Ecosystem Innovation[END_REF]Gawer 2009). There are also platforms that are created to improve collective exploration [START_REF] Gawer | Industry Platforms and Ecosystem Innovation[END_REF]Le Masson, Weil et Hatchuel 2009).
As recently underlined by [START_REF] Birkinshaw | Clarifying the Distinctive Contribution of Ambidexterity to the Field of Organization Studies[END_REF], ambidexterity should be understood as "an organization's capacity to address two organizationally incompatible objectives equally well" (p. 291); hence ambidexterity is more than a compromise, it raises the issue of the organizational structure that enables a renewal of collective creative capacities.
Regarding the "unknown" literature: some papers have identified cases where collective innovation management really meets Loch's assumption, i.e. it consists in changing the probability space, and not only in reducing uncertainty (Kokshagina, Le Masson et Weil 2015).
In all four cases, it appears that the optimal decision-making paradigm is regularly used, even for innovation management and even in cases where radical innovation is at stake. The optimal decision-making paradigm actually brings intellectual frameworks and methods that help to (partially) characterize certain facets of the efficiency of innovation management processes and organizations. But it also limits the full understanding and the full analysis of these four forms of collective action (open innovation, platforms, ambidexterity, managing the unknown).
4-Research question and method
Hence our research question: can we establish a continuum that extends optimal decision making to collective creation? More precisely, the requirements are as follows:
1. The new framework should account for both optimal decision making and creation. We will show that contemporary design theory has this property.
2. The new framework should contain "parameters" that could explain the continuity from one extreme (pure optimal decision making) to the other (pure creation). This parameter would play the role of a "relativity" dimension. This latter point has to be explained: in physics, the relevant theory depends on the ratio between the nominal speed and the speed of light; for low speeds, Newtonian theory is enough, while for high speeds relativity theory is required, and the former is actually a simplified case of the latter. We are looking for a similar notion in innovation management.
Our method is then as follows:
1- We rely on one very general theory of models of thought that encompasses optimal decision making. We will show that advances in design theory have led to a design theory that has this property.
2- We show how design theory makes it possible to extend three specific models of thought that are related to optimal decision making: a model of optimal decision under uncertainty, a model of optimal complex problem solving, and a model of decision based on combinatorics. More precisely, we show that in each of the three cases, the unknown (or the expected generativity) is the critical, relativity parameter. We will show that the unknown (or, conversely, generativity) plays in management the same role as speed in physics. With the unknown, management notions become relativist. When unknownness (or expected generativity) is low, the notion can be interpreted and worked in a decision paradigm; when the unknown (or expected generativity) increases, the design paradigm applies. And the former is a simplified case of the latter, in which the unknown (or expected generativity) is as negligible as the ratio between nominal speed and light speed in Newtonian physics.
Part 2: Design as a theoretical framework that extends optimal decision paradigm -generativity as relativity principle
1-Main features of the optimal decision paradigm
As underlined in [START_REF] Buchanan | A brief history of decision making[END_REF], a history of "decision making" could begin with prehistory! Still, it is rather after World War II that models of "decision making" were progressively formalized and integrated into a general framework. Recent historians' works have made it possible to understand the "rational choice" movement that unfolded at the end of World War II and during the Cold War [START_REF] Erickson | How Reason Almost Lost Its Mind -The Strange Career of Cold World Rationality[END_REF]. One of the critical issues was to find models and algorithms that could account for rational decision making, in particular to be able to support or even control human decisions in Cold War crisis situations, where human emotions and tensions threatened to lead to "irrational" choices (see [START_REF] Erickson | How Reason Almost Lost Its Mind -The Strange Career of Cold World Rationality[END_REF]). The conjunction of scientific breakthroughs, political concern about rational and controlled choice, and industrial needs for handling complex planning led to a tremendous research movement on "optimal choice", heavily funded by the US state and industrial companies, in famous institutions such as the RAND Corporation, the Office of Naval Research or the Carnegie Institute of Technology, with extraordinary interactions between many disciplines (economics, political science, management, psychology, …) and involving some of the most brilliant researchers of the time (John Nash, Herbert Simon, Charles Osgood, Thomas Schelling, Herman Kahn, Anatol Rapoport, …).
It would be far beyond the reach of this paper to write a history of the development of the decision paradigm since World War II, all the more so as it was recently done by an international group of researchers [START_REF] Erickson | How Reason Almost Lost Its Mind -The Strange Career of Cold World Rationality[END_REF]). We will just underline two movements. On the one hand, works led to a "positive" movement of creation of theoretical frameworks, algorithms and methods related to optimal choice. For instance, in the 1950s, the development of decision theory under uncertainty provided management with "the basis disciplines that underlie the field of business administration"; who said that? Bertrand Fox, the Director of Research of Harvard Business School, in his preface to the reference book "Applied Statistical Decision Theory" by Raiffa & Schlaifer [START_REF] Raïffa | Decision Analysis[END_REF]). And Raiffa and Schlaifer explain that their own work is grounded in Savage's 1954 book "The Foundations of Statistics" [START_REF] Savage | The foundations of statistics[END_REF]) and Wald's "Statistical Decision Functions" [START_REF] Wald | Statistical Decision Functions[END_REF]. The theory of statistical decision provided an integrated framework that could account for choice between known alternatives, taking into account uncertain events; moreover, the models were able to put a clear value on uncertainty reduction endeavours (leading later to option theory and then to real options). In the same positive movement, other researchers worked on situations where the set of alternatives was complex and highly combinatorial; in the stream of operations research, their works helped to determine the optimal path in complex combinatorial situations. The famous Simplex algorithm, invented by Dantzig in 1947, helped to solve complex problems by relying on linear programming.
On the other hand, some researchers addressed multiple criticisms of rational optimal choice models: the grandfathers of management insisted on the limits of rational decision theory in real situations. Because of psychological or organizational biases, managers and companies were not able to make the "optimal" decision. Economics Nobel prize winner Herbert Simon built on the fact that human decision making is ultimately a "bounded rationality" [START_REF] Simon | A Behavioral Model of Rational Choice[END_REF]), so that humans can only rely on "procedural rationality", i.e. find heuristics and algorithms that might not lead to the "best" solution but to a "satisficing" one. Hence a great consequence: the role of the executive was no longer to choose the best solution but to organize the design of decision functions that would lead to more or less "satisficing" solutions. This paved the way to analysing the logics of routines in organizations, contrasting exploitation logics (where people rely on frequently used routines) and exploration (where people try rarely used routines) [START_REF] March | Exploration and exploitation in organizational learning[END_REF]. In psychology, Kahneman & Tversky (Kahneman et Tversky 1979) (Kahneman later also an Economics Nobel prize winner) underlined the biases introduced by heuristics, and they opened new paths of research on how to inhibit biased heuristics to favour better decision-making processes. Research on collective decision making also underlined negative or positive effects of groups on decisions (the group introduces biases in the process; or the group might help to overcome individual biases…). These criticisms paved the way to reference works in organizational design, organizational learning, action learning, strategic management, … Interestingly enough, these two movements, positive and critical, remained within the same decision-making paradigm. Of course, the critics tended to add some additional constraints to the general models (psychological biases, routines in search processes, …) but they kept the paradigm: the managers (or the teams, or the organizations) are not perfect decision makers, but they are still decision makers. They cannot find the optimal point, but they are still looking for this point, taking into account their limited knowledge and their biases. And the dialectical movement of positive propositions and criticisms proved particularly fruitful for the development of theories, models and methods of optimal choice. These discoveries were often made in the field of management research or strongly contributed to its development (for instance, the journal "Management Science" was created by Herbert Simon precisely as a means for collective research on these topics).
It finally appears that the optimal decision paradigm has three facets: formal models (decision under uncertainty, problem solving in complex problem spaces, combinatorial decision making); a cognitive dimension, derived from the formal models (biases, heuristics, …); and an organizational dimension, derived from the formal models and the cognitive dimension (information, knowledge, competences, work division, performance, …).
We now focus more precisely on the formal models. Models of optimal choice share the same axiomatic structure: given one actor (or collective actor) who is able to acquire knowledge and who is confronted with a problem space, made of a set of elementary actions, constraints on these actions and a utility function associated with these actions, the actor looks for algorithms to find the most satisfying combination of actions that meets the constraints and reaches a satisfying utility level. The theory of decision under uncertainty (Wald, Savage, Raïffa) is one particular case (where the problem space is not too complex, so that the alternatives can easily be enumerated, and where the utility calculus takes uncertainty into account). General problem solving (Simon) is another particular case, where the problem space has such a complex structure that it is impossible to enumerate all solutions (typically: winning at chess). And we have powerful results:

1- According to Wald (with $D$ the set of alternatives, $\Theta$ the set of states of nature with a-priori density $\mu(\theta)$, $L(x,\theta)$ the likelihood of the sample $x$, $C(\theta,d)$ the cost of decision $d$ in state $\theta$, and $\lambda_x(d)$ the probability that the decision function $\psi$ selects $d$ after observing $x$), the expected cost of a decision function is

$$E(C) = r(\mu,\psi) = \int_{\mathbb{R}^n \times D \times \Theta} C(\theta,d)\,\lambda_x(d)\,L(x,\theta)\,\mu(\theta)\,\mathrm{d}d\,\mathrm{d}x\,\mathrm{d}\theta \qquad (1)$$

and there is always a decision function $\psi_0$ such that

$$r(\mu,\psi_0) \leq r(\mu,\psi) \quad \forall \psi.$$

This result is extraordinarily general: there is always an optimal choice function, whatever the learning capacities $L$, whatever the a-priori beliefs on the states of nature, whatever the set of alternatives, whatever the cost.

2- Following general problem solving: even for very complex cases, there are powerful algorithms, such as Branch and Bound, that make it possible to find a solution; one only needs to be able to generate solutions by progressive separation and to be able to evaluate subsets of solutions (which is more than just being able to evaluate one individual solution). Here again, this is a very powerful and very generic algorithm, relevant for complex cases where it is not possible to describe all the alternatives.

These works have also clarified the conditions under which such results are valid, and here again we have relatively generic results: the order of preferences should follow some generic logic (transitivity rules) (Nobel prize winner Maurice Allais has shown that these transitivity rules are far from self-evident, with his famous example, today included in the last versions of Savage's book [START_REF] Savage | The foundations of statistics[END_REF]); more generally, the elements have to build an integrated framework with well-defined relationships between the parameters of the decision problem (e.g. the partial order in Branch and Bound, …).
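To make result (1) concrete, the following sketch enumerates every deterministic decision function on a tiny, fully discrete version of Wald's setting and checks which one minimizes the Bayes risk r(μ, ψ). The prior, likelihood and cost values are illustrative numbers chosen for this sketch, not data taken from the text.

```python
from itertools import product

# Tiny discrete instance of Wald's setting (illustrative values only).
states = ["theta1", "theta2"]        # states of nature, Theta
decisions = ["d1", "d2"]             # set of alternatives, D
observations = ["x1", "x2"]          # possible samples

mu = {"theta1": 0.5, "theta2": 0.5}                      # a-priori density mu(theta)
L = {("x1", "theta1"): 0.8, ("x2", "theta1"): 0.2,       # likelihood L(x, theta)
     ("x1", "theta2"): 0.3, ("x2", "theta2"): 0.7}
C = {("theta1", "d1"): 0.0, ("theta1", "d2"): 10.0,      # cost C(theta, d)
     ("theta2", "d1"): 8.0, ("theta2", "d2"): 1.0}

def bayes_risk(psi):
    """Expected cost r(mu, psi) of a deterministic decision function psi: x -> d."""
    return sum(C[(th, psi[x])] * L[(x, th)] * mu[th]
               for x in observations for th in states)

# Enumerate all decision functions psi: observations -> decisions and keep the best.
rules = [dict(zip(observations, choice))
         for choice in product(decisions, repeat=len(observations))]
psi0 = min(rules, key=bayes_risk)
print(psi0, bayes_risk(psi0))   # here the 'follow the signal' rule wins
```

Because all the sets are finite, the minimum always exists; this is simply the discrete counterpart of the existence of ψ0 stated above.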
2-Design theory: a theory of generativity
Models of generativity also share common elements; (Hatchuel, Weil et Le Masson 2013) describe them as the ontology of design theory:
- there is knowledge, and within knowledge there is a dynamic frontier between invariant ontologies (e.g. universal laws) and designed ontologies (where definitions are revised, extra knowledge is added, …);
- there are "voids" in knowledge, which are not uncertainty or a "lack of knowledge about something that exists" but unknown entities whose existence requires design work;
- there is a process for the formation of new entities (a process that is described in formal language; (Hatchuel, Weil et Le Masson 2013) note that "'idea generation', 'problem finding' or 'serendipity' are only images or elements of more complex cognitive processes");
- and there is a mechanism to account for the preservation of meaning and the reordering of knowledge.
The generativity logic is at the heart of design theory: the models describe how the new (the unknown) can emerge from the known (the knowledge base), and how this newly known can be integrated into the previously known. To underline the difference: decision making begins with a set of alternatives that is already generated (or whose generation is actually a combination built on known building blocks), whereas design finishes when an alternative that was initially unknown is considered as known (or as constructible by combination).
Just as decision theory clarifies the conditions for optimal decision-making models, design theory clarifies the conditions for generativity. Note that it is not self-evident that there is any condition! Intuitively, we tend to consider that generativity is a "free space", the only limits one tends to mention being cognitive ones (fixations). However, recent works on design theory have helped clarify formal conditions for generativity. This is called the splitting condition (Le Masson, Hatchuel et Weil 2013; Hatchuel, Weil et Le Masson 2013). Without going into details, according to the splitting condition, a knowledge base allows generative processes only if the knowledge base is neither deterministic (in any situation, there are always at least two alternatives) nor modular (no building block can be added without influencing the rest of the design). This means that there must be some forms of "independence" in the knowledge base.
Here again, it is interesting to underline the deep difference between decision making and design: decision making is based on interdependences; in the integrated knowledge base, interdependences help to combine elements and help to reduce uncertainty on states of nature by sampling methods. In the world of decision making, independence has no value: if variables are independent, sampling one variable will not tell anything about the other; if components are independent, their combination has no meaning. By contrast, independence appears as the critical (unique, necessary, indispensable) resource for generativity.
As we described in the first part, decision theory was built as a reference that helped to analyse empirical decision making processes (individuals, groups; in real situations or in experimental situations,…). And the questions were: is there a bias? If yes, what is the cause? Can this bias be addressed? These questions were basically the questions that drove the research on the "good decision maker", the decision making processes, the role of routines in decision processes, the psychological biases, leadership, etc.
Design theory plays exactly the same role in analysing generativity processes. Suppose that one team (or one creative designer, or one division, or one company, …) comes up with a set of proposals to address an innovation issue. The same questions can be raised (see Figure 1): is there a bias in this set of propositions? If so, what is the cause? How can it be addressed? Note that one issue in this type of study is to measure the bias with respect to the theoretical reference. Decision theory helped to define the theoretically ideal choice; in generativity, design theory provides the reference. Many works follow this pattern, from an individual cognition perspective (see for instance [START_REF] Agogué | The impact of type of examples on originality: Explaining fixation and stimulation effects[END_REF]) to a collective one, or even to organizational issues.
3-Design theory as an extension of optimal decision -generativity as a relativity principle
As underlined above, design theory accounts for generativity processes, hence it is a relevant model for the study of radical innovation. It also encompasses the creativity paradigm that we mentioned in part 1. And its formal properties help to build cognitive and organizational approaches on it.
On the other hand, design theory also encompasses optimal decision making. This has been shown formally in other academic papers and books ([START_REF] Hatchuel | Towards Design Theory and expandable rationality: the unfinished program of Herbert Simon[END_REF][START_REF] Dorst | Design Problems and Design Paradoxes[END_REF]; Le Masson, Weil et Hatchuel 2010). They also show that design theory can be assimilated to optimal decision making when no extensions are expected, i.e. there is no expected generativity or, to put it another way, the design process will explore the unknown in an extremely limited way, in a negligible proportion. In that particular case, all ontologies are invariant, there are no holes in the knowledge base, which is now integrated (just as in the optimal choice paradigm), and the process of designing becomes the process of building one optimal solution in a problem space (that might include uncertainty).
Hence, expected generativity, or the unknown, appears as the critical parameter that makes it possible to describe within the same theory the situations that are very close to optimal decision making and the ones that are, by contrast, closer to radical innovation and creation.
Part 3: Extending classical models of optimal decision with design theory
We will now show how design theory can play the role of a paradigm in innovation management. We will test this proposition on three optimal choice models that have given birth to critical notions in innovation management. For each model we show: 1- that we can extend the optimal choice model to design situations, just by increasing the unknownness (or the generativity) while relying on a design paradigm; 2- that the critical notions related to each of these models are kept but enriched in the extension process from optimal choice to design theory.
The three models are:
1. Decision under uncertainty: this model is the root of many works on risk management, on knowledge management and on the value of information.
2. General problem solving: this model is the root of many works on the role of constraints in building solutions, on monitoring the elaboration of solutions and on strategic management.
3. Combinatorics: this family of models is the root of many works on the analysis of products and objects, on the role of expertise in organizations and on the strategic management of resources.
In each case we illustrate the shift by simple examples and we show the consequences on management notions.
1-Beyond decision under uncertainty: revisiting risks, knowledge and the value of information
Derived from Wald's and Savage's work on decision theory under uncertainty, Raïffa developed decision trees under uncertainty [START_REF] Raïffa | Decision Analysis[END_REF]). Given a set of alternatives, states of nature and beliefs about these states of nature, it is possible to compute the expected utility of each alternative and choose the best one (see the example in Figure 2 below). This is the basis for the techniques of investment evaluation and decision and for portfolio management.
Moreover, it is possible to consider additional alternatives that consist in learning about the states of nature, and to compute the value of such an alternative (see the example below), hence to reduce uncertainty. This reduction depends on the learning performance, which is defined by the conditional probability that a certain measure is obtained for a given state of nature (P(Ui/θj)). And this grounds the techniques used to evaluate research activities or marketing studies: it is possible to compute the acceptable cost for such a study, depending on its precision. Note that, self-evidently, an instrument that aims at reducing uncertainty has to be strongly correlated to the variable to be analysed (weather forecasts are all the more valuable as they predict the weather with very limited errors). These instruments are the basis for real option techniques, and more generally for risk management.
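As a concrete illustration, the sketch below reproduces the computation behind Figure 2 (described further below), using the numerical values of the raincoat/hat example (probability of rain 0.51, utilities 100/10, forecast accuracy 0.8). It recovers the expected utilities E1 ≈ 56, E2 ≈ 54 and, with the imperfect forecast, EdK ≈ 82; the gap of roughly 26 utility units is the maximum acceptable cost of the forecast.

```python
# Raincoat-vs-hat decision tree (values from Figure 2 below).
prior = {"rain": 0.51, "sun": 0.49}                        # beliefs mu(theta)
U = {("raincoat", "rain"): 100, ("raincoat", "sun"): 10,   # utilities U(d, theta)
     ("hat", "rain"): 10, ("hat", "sun"): 100}
acc = 0.8   # P(forecast 'rain' | rain) = P(forecast 'sun' | sun)

def expected_utility(decision, belief):
    return sum(U[(decision, s)] * belief[s] for s in belief)

# d1, d2: decide directly on the prior beliefs.
e_direct = {d: expected_utility(d, prior) for d in ("raincoat", "hat")}
print(e_direct)                       # raincoat ~55.9 (E1 = 56), hat ~54.1 (E2 = 54)

# d3: read the imperfect forecast first, then choose the best accessory.
def posterior(forecast):
    like = {"rain": acc if forecast == "rain" else 1 - acc,
            "sun": acc if forecast == "sun" else 1 - acc}
    evidence = sum(like[s] * prior[s] for s in prior)
    return {s: like[s] * prior[s] / evidence for s in prior}, evidence

e_learn = sum(p_f * max(expected_utility(d, post) for d in ("raincoat", "hat"))
              for post, p_f in (posterior(f) for f in ("rain", "sun")))
print(round(e_learn, 1))              # ~82.0 (EdK = 82)
print(round(e_learn - max(e_direct.values()), 1))   # ~26.1: value of the forecast
```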
Still, this apparently wide-ranging paradigm has some intrinsic limits. Let's begin by recalling an old joke. The Shadoks are creatures invented by the French cartoonist Jacques Rouxel in the 1960s-70s. When building a rocket to Mars, they rely on mottos derived from decision making under uncertainty and the logic of "try and learn": "When one tries continuously, one ends up succeeding. Thus, the more one fails, the greater the chance that it will work", and a direct application: "since the Shadoks computed that their rocket has one chance out of one million to succeed, they rush to fail the 999,999 first tries." This far-fetched approach underlines that learning and trial in innovation are not necessarily related to just identifying the working solution but precisely lead to deeply changing the artefacts so that, finally, the rocket works at each launch, i.e. so that the probability space has changed and evolved! A contrario, the Shadoks underline that an innovator might be "transforming" the probability space! This corresponds to Loch's initial intuition [START_REF] Loch | Diagnosing Unforseeable Uncertainty in a New Venture[END_REF]: decision under uncertainty depends on a probability space describing all the possible states of the world and their probability; but in the case of "unk unk", innovation might precisely lead to creating one or several new states, unheard of, unimagined before. Since the decision-making framework requires that states of nature are known, strictly speaking, the "emergence" of this new state cannot be modelled in a decision-making framework. But it can be modelled in a design framework.
Even more: consider the example in Figure 2 below, which illustrates an archetypal situation of optimal choice under uncertainty with the possibility to "learn" during the process in order to reduce uncertainty.
Figure 2: example of a decision tree under uncertainty. The root of the tree reads "(Choose) the best wearable accessory for a walk". On the right-hand side, the basic hypotheses: alternatives d1 = choose a raincoat for the walk, d2 = choose a hat; states of nature θ1 = rain during the walk, θ2 = sun during the walk; beliefs = roughly a 50% chance that it will rain and 50% that it will be sunny (μ(θ1) = 0.51 and μ(θ2) = 0.49 in the tree); utility 100 for the well-adapted accessory and 10 otherwise. On the left-hand side, far left: d1 and d2, i.e. portfolio management (choose between d1 and d2, with expected utilities E1 = 56 and E2 = 54); in the middle: d3, an alternative that reduces uncertainty before choosing d1 or d2 by taking time to read the weather forecast. The forecast is not perfect, and the precision of the instrument is given in the data on the right-hand side: the forecast announces rain with probability 0.8 when it will rain and sun with probability 0.2 when it will rain (and vice versa); following the forecast yields an expected utility EdK = 82 (with, after a rain forecast, d1/U1 = 82 and d2/U1 = 28).
In this archetypal case, generativity begins simply by wondering: "what would be an alternative that would be better than all known alternatives?" In the case of the raincoat vs. hat decision, this would be an alternative that is as good as a raincoat when it rains and as good as a hat when it is sunny (see Figure 3 below). This "alternative" is partially unknown (as such it is not an alternative like d1, d2 or d3), and still it is possible to build on it: it has a value for action! For instance, it can push to explore uses in mobility, textiles, protection against rain, … And it is even possible to compute elements of the value of this solution, not as a result but as a target: to be acceptable, the value distribution of the solution should be, for instance, 100 in each case. This very simple case shows the logic of generativity: there is an unknown, this unknown is desirable and its potential value can be expressed. Note that, coming back to Wald's theory (see equation (1) above), it is possible to systematically express an unknown "alternative" for each decision-making problem! For each decision-making problem, instead of looking for the best decision function, one can also try to design dn+1 such that:
$$\forall i = 1, \dots, n,\ \forall \mu, \quad \int_{\Theta} C(\theta, d_{n+1})\,\mu(\theta)\,\mathrm{d}\theta < \int_{\Theta} C(\theta, d_i)\,\mu(\theta)\,\mathrm{d}\theta \qquad (2)$$

What is the logic of "trial" in this situation? In the case of the raincoat-hat, the trial consists in designing an alternative that is independent of the weather (rain or sun). More generally, progress in design theory has shown that trial, in a design perspective, consists in designing alternatives that are independent of the states of nature. This is extremely counterintuitive: whereas trials for uncertainty reduction aim at being strongly correlated to the states of nature, trials in design look for independence from the states of nature, the exact contrary! Geometrically, trials for uncertainty reduction are, ideally, in the space of the states of nature; trials for design are, ideally, orthogonal to the states of nature! Beyond the paradox, this underlines a deep logic of the value of knowledge in generativity. In uncertainty (and more generally in information theory), knowledge is useful if it is correlated to other variables: through correlation, the acquired information will bring knowledge on the variable of interest. By contrast, in design, knowledge is useful if it is not correlated to any known variable, because through independence the information is different from all the information already available.
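One possible reading of condition (2): since the inequality has to hold for every prior μ, in particular for beliefs concentrated on a single state, the unknown alternative must be at least as good as the best known alternative in every state of nature taken separately. The sketch below simply computes this state-wise target profile for the raincoat/hat example, expressed here in utilities rather than costs; it returns 100 for rain and 100 for sun, i.e. the target value distribution of the raincoat-hat concept. This is a deliberately simplified reading used for illustration, not the full design reasoning.

```python
# Utilities of the known alternatives in each state of nature (raincoat/hat example).
known = {"raincoat": {"rain": 100, "sun": 10},
         "hat":      {"rain": 10,  "sun": 100}}

def design_target(utilities):
    """State-wise utility profile that a still-unknown alternative d_{n+1} must
    reach to dominate every known alternative whatever the prior mu (condition (2),
    read state by state and expressed here in utilities)."""
    states = next(iter(utilities.values())).keys()
    return {s: max(profile[s] for profile in utilities.values()) for s in states}

print(design_target(known))   # {'rain': 100, 'sun': 100} -> the 'raincoat-hat' concept
```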
Figure 3: decision tree with an unknown "alternative". The tree of Figure 2 is kept, but its root now reads "(Design) the best wearable accessory for a walk"; beside the known branches d1, d2 and d3, it contains the concept of a "raincoat-hat", with a target value of 100 whether it rains (μ = 0.51) or shines (μ = 0.49), i.e. E3 = 100, together with the associated knowledge expansions: uses in mobility? textiles? protection? …
These basic results have direct consequences for classics in management, consequences that have already been described by many research works:
- For risk management: action does not only consist in reducing risk; it also consists in designing alternatives that make it possible to be independent from known risks! This apparently paradoxical path is actually quite well known and practiced in innovation management, when designers invent techniques that are generic, i.e. techniques designed to be valid on a large number of markets, so that, even if the probability of success is very low on each market, the probability that at least one market emerges becomes very high! (Kokshagina, Le Masson et Weil 2015)
- For knowledge management and creativity: crossing the border from decision making to generativity, we did not forget what we knew! Contrary to the misleading assumption that one should begin on a blank sheet, we see how the structure of the known actually helps to formulate relevant unknowns and to reveal holes in knowledge. The unknown emerges from the known. If there is less knowledge at the beginning, then the design process will be less challenging.
- For the value of information in the generativity paradigm: we recalled that decision making under uncertainty is also a theory of the value of learning. Crossing the border between decision and design, it appears that learning is also highly relevant, but its logic is different. In decision making, one learns with instruments that are correlated to the variables whose standard deviation has to be reduced; in design, one learns on variables that are independent from all the variables of interest! In decision theory, the value of information is proportional to uncertainty reduction, so that independent information has no value; in design theory, the value of information lies precisely in being independent from what is already known.
2-Beyond problem solving: revisiting constraints, strategy and leadership
A second critical stream of work in the decision-making paradigm is problem solving, i.e. the capacity to identify the best solution (or at least a satisficing one) in a complex problem space. In such a complex problem space, it is impossible to enumerate all individual solutions; hence the idea of one paradigmatic algorithm, Branch and Bound (B&B is at the root of the programs developed by Newell and Simon in their General Problem Solver research program): in the set of all alternatives, separate subsets of solutions and, knowing the separation criteria, evaluate the whole subset directly (and not individual solutions). Then separate the most promising subset again, and so on. This algorithm tends to prune the (too large) set of possible solutions to find the optimal solution. It is efficient and very generic. Still, it works only if the users have knowledge for separation and knowledge for evaluation. This very generic algorithm is particularly useful for many industrial design issues (project planning, logistics, production, …). It is also the paradigm behind many management debates: management being assimilated to goal optimization under constraints, the problem-solving paradigm helps to raise questions: how can one separate complex problems into sub-problems? What is the evaluation function to evaluate partial solutions? More generally, what are the organizational forms that are adapted to problem solving? Research works have uncovered the difficulty of taking into account all the constraints, raising the issue of expertise, the management of expertise in companies and the access to external expertise (absorptive capacity, open innovation). Researchers have also worked on biases: organizations will favor some separation or evaluation criteria, whereas separation and evaluation criteria depend on the type of problem; hence a risk of routinization, in which organization members favor the exploitation of existing routines instead of using new routines in a more exploratory mode; this provokes biases in decision making. And the role of management becomes precisely to evaluate different separation and evaluation functions to increase overall performance. Hence works on leadership for optimal problem solving. The problem-solving model finally helped to frame many management issues.
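To make the separation/evaluation logic tangible, here is a minimal Branch and Bound sketch on a small 0/1 knapsack problem; the item values, weights and capacity are arbitrary illustrative numbers. Subsets of solutions are "separated" by deciding one item at a time, and an optimistic evaluation of a whole subset allows it to be pruned without enumerating the individual solutions it contains.

```python
# Minimal Branch and Bound on a 0/1 knapsack (illustrative data).
values = [10, 7, 4, 3]      # value of each item
weights = [5, 4, 3, 2]      # weight of each item
capacity = 9

best_value, best_choice = 0, None

def bound(i, value):
    """Optimistic evaluation of the whole subset of solutions sharing the first
    i choices: pretend every remaining item still fits."""
    return value + sum(values[i:])

def branch(i, value, room, choice):
    global best_value, best_choice
    if room < 0:                         # constraint violated: empty subset
        return
    if i == len(values):                 # a complete individual solution
        if value > best_value:
            best_value, best_choice = value, choice
        return
    if bound(i, value) <= best_value:    # prune: the whole subset is dominated
        return
    # Separation: either take item i or leave it out.
    branch(i + 1, value + values[i], room - weights[i], choice + [1])
    branch(i + 1, value, room, choice + [0])

branch(0, 0, capacity, [])
print(best_value, best_choice)           # 17 [1, 1, 0, 0]
```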
Still, here again there is a famous joke. The story says that, for an oral exam, a physics professor asked a young student (said to be Niels Bohr, which is actually not true and not important for our point) to solve the following problem: "how to measure the height of a tall building using a barometer?". The professor expected a solution based on the relationship between pressure and altitude. But the student proposed many other solutions, like: "Take the barometer to the top of the building, attach a long rope to it, lower the barometer to the street and then bring it up, measuring the length of the rope. The length of the rope is the height of the building." Or: "take the barometer to the basement and knock on the superintendent's door. When the superintendent answers, you speak to him as follows: 'Mr. Superintendent, here I have a fine barometer. If you tell me the height of this building, I will give you this barometer'". Apparently the "problem" was well framed and should have been solved in a direct way, relying on known laws and constraints. But the student invents original solutions by relying on properties of the objects that are outside the frame of the problem: the barometer is not only a system to measure pressure, it also has a mass, it has a value, … In innovation as well, the innovator will play on neglected dimensions of objects or even invent new dimensions of objects, like smartphone functions that are not limited to phone calls… Actually, in the problem-solving framework, the "barometer problem" provokes a double surprise:
1- On the one hand, the "student" addresses an "impossible" problem: the initial problem was "measure the height of a tall building with a barometer"; the "student" actually adds "… without measuring pressure". In the initial problem, the solver had to relate pressure and height using one relationship between pressure and altitude; with the refined formulation, it is not self-evident to say whether there is a solution in the solution space or whether the solution space is empty. The added "constraint", "without measuring pressure", makes an undecidable proposition emerge.
2- It also appears that some objects in the problem (barometer, building) can have "unexpected" properties. A barometer is not necessarily an object for measuring pressure; it can, for instance, have enough value to be a reward for the building's superintendent.
The example reveals a surprising "strategy" related to generativity: whereas the decision paradigm consists in finding one optimal solution in a non-empty solution space by using available action means, generativity consists in "closing" self-evident solution spaces to force the emergence of new action means! Note that the hen's egg case described above is precisely a more elaborate case following this logic: on the one hand, the problem-solving logic is applied; one generates the restrictive solutions in the following way: the "hen's egg fall" becomes "avoid that a fragile object breaks by falling", and this leads to three main types of solutions: damping the shock, protecting the egg or slowing the fall. On the other hand, these first solutions are detailed, and this reveals degrees of freedom which can be closed by adding constraints such as "without any additional device", "by using a living animal", etc. This leads to identifying new action means. Moreover, in this process, "constraints" are combined, and design theory has shown that a partial order based on constraints might appear, just as there is a partial order through the separation process in Branch and Bound. The result is that the tree of alternatives of Branch and Bound becomes a tree in the unknown; hence a paradoxical result obtained by design theory: there is a structure of the unknown. This implies, again, deep changes in action models, changes that have already been anticipated by many research works:
- Constraints in action: in the decision paradigm, "constraint" means fewer degrees of freedom; it also means progressive definition of the solution (adding constraint after constraint helps to define the final solution). In the design paradigm, a constraint also becomes generative (Arrighi, Le Masson et Weil 2015b, a; [START_REF] Hatchuel | Creativity under strong constraints: the hidden influence of design models[END_REF]): it enables one to go out of the solution space, and it obliges one to revise (expand) the set of action means.
- Monitoring projects: in the Branch and Bound algorithm, leadership consists in organizing separation and evaluation steps, and in avoiding too narrow an exploitation based on well-known routines. In a generativity perspective, leadership consists in adding constraints to "close" solution spaces, to push people to explore the unknown and identify new facets and paths.
- Strategy: just as there are strategies to explore a solution space (depth first, breadth first, simulated annealing to avoid local optima, etc.), there might be strategies to explore the unknown. Of course, the strategic logic seems different: "close self-evident solution spaces" and "stimulate the emergence of new action means" are the objectives. But this does not mean that any strategic monitoring is impossible. On the contrary, the logic of "closing solution spaces" can be tuned and iterated. Since there is a partial order in the unknown, a strategy can be built on it (again: steepest first or breadth first). Some authors have even proposed methods to extend Branch and Bound to design situations by a) adding an evaluation function that measures whether the solution space is, as in the decision paradigm, non-empty, or whether it is undecidable; and b) depending on the answer, either launching a classical optimization procedure or launching the exploration to create knowledge (Kroll, Le Masson et Weil 2014).

3-Beyond combinatorics: revisiting the dynamics of objects, of disciplines and ecosystems

The paradigm of optimal decision has also given birth to many works on combinatorics, leading to the mastery of more and more complex combinations, for instance through artificial intelligence, expert systems, neural networks or evolutionary algorithms. These models combine elements of solutions into comprehensive solutions, they evaluate each solution according to an objective function and, depending on the performance, they recombine the elements of solutions.
Just like problem solving or decision making, these models are heavily used, for instance in industrial engineering (today: image or speech recognition, contemporary CRM through targeted ads, …). Beyond these applications, the model also led to framing some management issues: following the idea that performance lies in combination, authors defined the firm as a set of "combinative capabilities" (Kogut et Zander 1992), leaders are in charge of crossing expertise by organizing cross-functional teams, innovation consists in organizing serendipity, i.e. the random, unexpected combination of skills, etc. In strategic analysis and strategic management, industrial dynamics are modelled as evolutionary processes in which firms realize more or less successful "gene" combinations.
In this model, Lego appears as the archetype of the combination logic: all blocks can be combined, and it is possible to evaluate the final solution. Lego building can be more or less efficient or even "original": the combinations are more or less sophisticated, refined, etc., inside the algebra of all possible combinations. However, the Swedish photographer Erik Johansson has revisited M. C. Escher's "impossible constructions" by using Lego.
In particular, he created a shape that is drawn with Lego blocks but is impossible to build with (physical) Lego blocks (Figure 4). This picture illustrates in a very powerful way the limit of combinatorics models for innovation: in a world of Lego, many combinations are possible, but the innovator might go beyond combinations by creating something that is made with Lego but is beyond all the (physical) combinations of Lego. Innovation can be like this: combining old pieces of knowledge so as to create an artifact that is of course made of known pieces but goes beyond all combinations of the known pieces. Hence it raises a question: how is it possible to combine basic, known elements in such a way that the final piece is different from all known and predictable combinations? How can combinatorics become generative, whereas it is often marked by a closed-world logic? Actually, the Escher Lego just illustrates contemporary models of generative combinatorics: evolutionary robotics, for instance, has developed evolutionary algorithms based no longer on convergence towards an objective but on organized divergence, so-called "novelty-driven" algorithms (a minimal sketch of this contrast is given after the two insights below). In set theory, Forcing is a technique to create new models of sets by combining features of the initial model of sets. These recent works bring two insights (Hatchuel, Weil et Le Masson 2013):
1-it is possible to use the combination techniques to go "out of the box" -and not only to identify one particular point inside the box.
2- Not all knowledge structures are adapted to generative combinatorics. As shown in the previous part (part 2.2), the structure of knowledge should be very specific. To simplify: there have to be some independences in knowledge.
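To make the contrast concrete, here is a deliberately naive sketch opposing the two selection logics on a toy population of candidate "behaviours" (simple numbers on a line): objective-driven selection keeps the candidates closest to a fixed target, whereas novelty-driven selection keeps the candidates furthest from an archive of already explored behaviours. The data, the one-dimensional behaviour space and the distance measure are illustrative assumptions; this is not an implementation of any published novelty-search algorithm.

```python
import random

random.seed(0)

def objective_selection(population, target, k):
    """Keep the k candidates that best satisfy a fixed objective (convergence)."""
    return sorted(population, key=lambda b: abs(b - target))[:k]

def novelty_selection(population, archive, k):
    """Keep the k candidates most different from what is already known (divergence)."""
    def novelty(b):
        return min(abs(b - a) for a in archive)
    return sorted(population, key=novelty, reverse=True)[:k]

population = [random.uniform(-10, 10) for _ in range(20)]
archive = [0.0]   # behaviours already explored

print("objective-driven:", objective_selection(population, target=0.0, k=3))
print("novelty-driven:  ", novelty_selection(population, archive, k=3))
# The first list converges towards the target; the second organizes divergence
# away from the archive, the 'novelty-driven' logic mentioned above.
```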
This extension of combinatorics also has implications for management:
Regarding the objects (products, services, competences, …): if combinatorics becomes generative, if algebras can be extended, then the set of objects is in constant evolution, with the regular creation of new object identities. Over time, what is called a phone evolves strongly (which is quite self-evident), but this is also true for a vacuum cleaner, a toothbrush or a bike! The world of objects might be inherently generative. In the decision paradigm, we need to impose ex ante and stable definitions of things; in the design paradigm, we can accept the changing identity of objects, obtained by an extended logic of combinations that goes from closed combinations to generative ones.
This has consequences for domain expertise: the movement of generative combination regularly changes the models; when new entities emerge, it means that the whole set of products and related knowledge (skills, competences, disciplines, …) should be reordered to take into account the new entities and, above all, the combinations of each new entity with all the "old" entities. In the design paradigm, ontologies are not stable: they are designed and redesigned. Stability is just a temporary state.
The logic of independence in knowledge structures induces surprising forms of leadership and strategy: for instance, the leader has to organize independence (and not only cross-functional logics). He has to organize the differentiation and autonomy of skills before organizing new contacts later on. The strategy consists in looking for partners that are not "complementary" but, first of all, "independent". Organizing constantly evolving independences implies a logic at the ecosystem level, and the development of completely new ways to deal with common unknowns, their exploration and the critical issue of their appropriation, or non-appropriation.
Part 4 -Conclusion and further research
In this paper, we rely on design theory to propose the "unknown" as a relativity principle to bridge the gap between the optimal decision paradigm and the logic of creation in management. We showed that this relativity principle paves many paths to revisit basic notions in innovation management. It appears to integrate many research works already done in innovation management.
This formal approach is coherent with many works based on empirical approaches. It also calls for empirical investigation: the design paradigm helps to formulate new and original hypotheses and, hence, original empirical research. Of course, this (already) supports research on creative professions, creative firms or design-oriented organizations. More generally, it calls for thorough investigations of collective action based on creation logics.
This formal approach can also nurture a renewal of education and teaching: for scholars, students and practitioners, it is increasingly necessary to understand the formal models of the design paradigm, and not only to "understand" them, but also to apply and prolong them in action! Finally, understanding and formalizing these logics of creation also makes it possible to give "creation" a full "autonomy" [START_REF] Castoriadis | L'institution imaginaire de la société[END_REF]. Hidden behind optimization and decision making, creation tended to be interpreted as a means for utility maximization. Formal models show that there can be an intrinsic logic of creation, for an individual, a group, an institution or even a society. In this perspective, relying on a design paradigm, management science is in a position to better analyze the processes, to better identify new forms of collective action, and to better educate citizens and actors of these new forms of action. Relying on a design paradigm, it is possible to avoid the trap of "heteronomy", the over-interpretation of these logics by metaphysical principles such as rational utility maximization. Relying on the design paradigm, management could better participate in the invention of an "autonomous" society.
1- Following Wald: given a set D of alternatives, given a set Θ of states of nature with a-priori density μ(θ), given a source of information on θ modelled as a sample X1…Xn of a random variable X with density function f(x, θ) and a related sample likelihood L(x1…xn, θ), and given a cost function C(θ, d) associated to each pair (θ, d), we are looking for the best choice function ψ that relates each particular sample x = (x1, …, xn) to a decision d of D (more precisely: to a probability density λx(d) defined for each d of D) and optimizes the cost expectation E(C) = r(μ, ψ) given in equation (1) above.
Figure 1: empirical research with design theory
Figure 4: Escher lego, by Erik Johansson
Table 1: extending decision making under uncertainty

| Extending… | Keep | From decision to design | From design back to decision |
| Decision making under uncertainty | Risk management | From uncertainty reduction to regenerating the space of probability | Structure a probability space with better expected utility! |
| | Knowledge management | From an integrated knowledge algebra, without holes, to identifying holes and creating knowledge | Reduce holes to create integrated knowledge structures enabling decision |
| | Value of information | From the value of interdependences to the value of independences | Design creates new interdependences that enable better predictions |
Table 2: extending problem solving

| Extending… | Keep | From decision to design | From design back to decision |
| Problem solving | Constraint | From constraint as restriction to generative constraint | Build "restrictive constraints" that enable fast convergence |
| | Leadership and project monitoring | From speeding up the optimization process to forcing "out of the box" | Generate a problem space with favorable convergence |
| | Strategy | From "satisficing" & routines to the capacity to tune generativity | Generate robust routines |
Table 3: extending combinatorics

| Extending… | Keep | From decision to design | From design back to decision |
| Combinatorics | Objects | From a fixed algebra to a moving reference (for market analyses, intelligence…) | Design algebras, e.g. generic techniques that enable combinations |
| | Assets | From assets as ontology to assets as capacity to restructure ontologies | A capacity to stabilize an ontology |
| | Strategic management of resources | From strategy based on core competences and specific assets to strategy based on re-ordering of things | Organize the regeneration of resources |
Engineering Design, ICED'11, Copenhagen, Technical University of Denmark, 2011. p 12
Hatchuel A, Pezet E, Starkey K, Lenay O (eds) (2005) Gouvernement, organisation et entreprise : l'héritage de Michel Foucault. Presses de l'Université de Laval, Saint-Nicolas (Québec)
Hatchuel A, Starkey K, Tempest S, Le Masson P (2010) Strategy as Innovative Design: An Emerging Perspective. Advances in Strategic Management 27:3-28.
Hatchuel A, Weil B, Le Masson P (2013) Towards an ontology of design: lessons from C-K Design theory and Forcing. Research in Engineering Design 24 (2):147-163.
Hippel Ev, Krogh Gv (2016) CROSSROADS-Identifying Viable "Need-Solution Pairs": Problem Solving Without Problem Formulation. Organization Science 0 (0):null.
Kahneman D, Tversky A (1979) Prospect Theory: An Analysis of Decision Under Risk. Econometrica 47 (2):263-291.
Karniel A, Reich Y (2011) Managing the dynamics of New Product Development Processes - A new Product Lifecycle Management Paradigm. Springer, London
Kazakçi AO (2013) On the imaginative constructivist nature of design: a theoretical approach. Research in Engineering Design 24 (2):127-145.
Kline SJ, Rosenberg N (1986) An Overview of Innovation. In: Landau R, Rosenberg N (eds) The Positive Sum Strategy, Harnessing Technology for Economic Growth. National Academy Press, Washington, pp. 275-305
Kogut B, Zander U (1992) Knowledge of the firm, combinative capabilities and the replication of technology. Organization Science 3 (3):383-397.
Kokshagina O, Le Masson P, Weil B (2015) Portfolio management in double unknown situations: technological platforms and the role of cross-application managers. Creativity and Innovation Management (accepted, published online 31 May 2015).
Kroll E, Le Masson P, Weil B (2014) Steepest-first exploration with learning-based path evaluation: uncovering the design strategy of parameter analysis with C-K theory. Research in Engineering Design 25:351-373.
Le Masson P, Cogez P, Felk Y, Weil B (2012a) Revisiting Absorptive Capacity with a Design Perspective. International Journal of Knowledge Management Studies 5 (1/2):10-44.
Le Masson P, Hatchuel A, Weil B (2013) Teaching at Bauhaus: improving design capacities of creative people? From modular to generic creativity in design-driven innovation. Paper presented at the 10th European Academy of Design, Gothenburg
Le Masson P, Lenfle S, Weil B (2013) Testing whether major innovation capabilities are systemic design capabilities: analyzing rule-renewal design capabilities in a case-control study of historical new business developments. Paper presented at the European Academy of Management, Istanbul
Le Masson P, Weil B (2013) Design theories as languages for the unknown: insights from the German roots of systematic design (1840-1960). Research in Engineering Design 24 (2):105-126.
Le Masson P, Weil B, Hatchuel A (2009) Platforms for the design of platforms: collaborating in the unknown. In: Gawer A (ed) Platforms, Market and Innovation. Edward Elgar, Cheltenham, UK, pp 273-305
Le Masson P, Weil B, Hatchuel A (2010) Strategic Management of Innovation and Design. Cambridge University Press, Cambridge
01481889 | en | [
"shs",
"spi"
] | 2024/03/04 23:41:48 | 2017 | https://hal.science/hal-01481889/file/Kokshagina%20et%20al%202017%20RED%20Submission%20-%20copie.pdf | Olga Kokshagina
Pascal Le Masson
Benoit Weil
Keywords: intellectual property, innovation, patent design, C-K Theory, patentability criteria, person skilled in the art
Intellectual property: from an existing asset to an asset to be designed

INTRODUCTION

The proposed paper deals with the issues of intellectual property (IP) design. IP is often seen as a source of competitive advantage, as a strategic element that can ensure revenue flows or simply demonstrate an organization's innovative capacity. Yet IP management is becoming ever more complex and resource-demanding, for several reasons. First, there is a multiplicity of patents filed: according to the NYTimes, «the number of patent applications, filed each year at the United States patent office has increased by more than 50 percent over the last decade» (NYTimes, 2012). Second, the legal risks involved make it even more challenging: the number of patent litigations has increased drastically over time and, according to a Stanford University analysis, as much as $20 billion was spent on patent litigation and purchases in the smartphone industry in 2010-2012. Third, the number of actors operating in the world of industrial property has increased. In addition to the protection by patentees of their inventions, there is a whole secondary market of non-practicing entities that provides new opportunities for buying and selling patents [START_REF] Chien | From arms race to marketplace: the complex patent ecosystem and its implications for the patent system[END_REF]. The ecosystem is more complex and harder to manage than ever before, featuring a variety of business models and actors such as patent lawyers, designers of IP, licensing institutions, IP offices, patent consultants, patent brokers, patent pools and standard-setting associations, despite the increasing help and contributions of Information and Communication Technologies. Moreover, new actors are still emerging, such as defensive patent aggregators and super-aggregators (Hagiu et al., 2013). On the one hand, IP legal frameworks tend to be complex; on the other hand, the economic logic behind them is resource-demanding for companies. IP portfolios are costly and the ways to create profitable IP portfolios are not evident. This overwhelming complexity causes speculative bubbles and leads to the creation of actors that tend to decrease the system's complexity or simply profit from it (as in the case of patent trolls). How should we deal with IP without losing its strategic potential and without getting lost in its complexity?
This research intends to deal with IP as an asset to be designed. The existing literature is mostly focused on existing intellectual assets (Teece and Pisano, 1994, Teece et al., 1997) and on the capacity to appropriate them and ensure their competitive advantage. Yet companies could proactively generate their intellectual assets to protect and strengthen business opportunities by focusing on the ex ante phases [START_REF] Lindsay | From experience: Disruptive innovation and the need for disruptive intellectual asset strategy[END_REF]. There exist methods that take IP design into account: TRIZ, design by analogy, genetic algorithms (Altshuller, 1999b[START_REF] Felk | Designing patent portfolio for disruptive innovation-a new methodology based on CK theory[END_REF][START_REF] Jeong | Creating patents on the new technology using analogy-based patent mining[END_REF][START_REF] Koza | Invention and creativity in automated design by means of genetic programming[END_REF]. However, their performance is not evident, the expected results in terms of IP are hard to quantify and comparison is not obvious. To the best of our knowledge, there is no theoretical framework that defines what patent design means. The methods define certain properties for patent design, but it is impossible to evaluate them since a general model of patent design is missing. This research tackles the following questions: what is a general framework for patent design and how can the performance of patent design methods be characterized?
To address these research questions we will rely on the recent advances in design theory.
Design theory is chosen since it allows for knowledge expandability and for a generic process of expansion, which includes the capacity to revise object identities and to work on different knowledge structures [START_REF] Dorst | Design problems and design paradoxes[END_REF][START_REF] Le Masson | Design theory: history, state of the art and advancements[END_REF]. Moreover, design theories address issues that go beyond the scope of classical models and open possibilities to invent new methods, new organizations, … [START_REF] Le Masson | Design theory: history, state of the art and advancements[END_REF]. Recent advances in design theory have led to formal models of design that are independent of the language of objects [START_REF] Hatchuel | A systematic approach of design theories using generativeness and robustness[END_REF]. As a consequence, the generic character of these models makes it possible to consider a patent or a patent portfolio as a possible design objective. Overall, IP design is not as strange as it might sound: peer-reviewed journals in the discipline of engineering design (e.g., Journal of Engineering Design or Research in Engineering Design) already tackle the conceptualization of patent information and its interpretation as a design object, which shows that there is a link between IP and engineering design [START_REF] Koh | Engineering design and intellectual property: where do they meet?[END_REF][START_REF] Koza | Invention and creativity in automated design by means of genetic programming[END_REF].
By building on the most recent design theories, namely Concept-Knowledge design theory [START_REF] Hatchuel | CK design theory: an advanced formulation[END_REF], this work introduces a general framework for patent design that allows controlling for "patentability" criteria, models a patent in a unique way using an (action, effect, knowledge) model and considers the reasoning of the person skilled in the art. Using the introduced model, the patent design methods (TRIZ, design by analogy, genetic algorithms and CK Invent) are compared and their performance is characterized. The results indicate that the quality of a patent proposal depends on the capability to extend the knowledge basis: it is not limited to the original knowledge combinations but requires expanding the initial design reasoning of the person skilled in the art and ensuring a sufficient inventive step and novelty. Moreover, patent design revealed an unexplored property of design theories, non-substitution, showing that the order of partitioning in design matters and influences the quality of the results.
The paper is organized as follows. First, we present the legal definition of patent and the existing methods of patent design. Second, we build on the design theory advances to see how the design logic takes into account patent design logic and introduce a general framework for patent design. Third, we build on this framework to compare the patent design methods and quantify their performance. We conclude with a discussion and draw directions for further research.
LITERATURE REVIEW
Intellectual property: On the definition of patent
According to the World Intellectual Property Organization (WIPO), Intellectual Property (IP) refers to «creations of the mind, such as inventions, literary and artistic works, designs and symbols, names and images used in commerce». It is protected by law through patents, copyright and trademarks. Our focus in this article is on patents, which are documents, issued upon application by a government office (such as the European Patent Office (EPO) or the United States Patent and Trademark Office), that describe an invention and create a legal situation in which the patented invention can normally only be exploited with the authorization of the owner of the patent. Patents offer the owner a legal right to prevent others from using, making or selling the protected invention in a country where the patent has been granted, for a limited period of time.
"Invention" is defined as a solution to a specific (mostly technical) problem. An invention may relate to a product or a process. Not every idea can be patentable: only the ones that incorporate nonobvious steps. To be protected by a patent, invention has to meet several criteria: 1) the invention must consist of patentable subject matter; 2) the invention must be industrially applicable (useful); 3) it must be novel; 4) it must exhibit a sufficient "inventive step" (be non-obvious for a person skilled in the art); and 5) the disclosure of the invention in the patent application must meet certain standards.
On the logic of Patent design
Which methods or practices account for "patent design practices"? We have identified several ways of designing patents and structure them into five different methods: 1) Technology-first; 2) TRIZ; 3) Genetic programming; 4) Design analogy methods; 5) CK Invent (see Table 1).
Technology-first
Patents are filed once a technology is first developed (Ernst, 2003;[START_REF] Van Zeebroeck | Filing strategies and patent value[END_REF]) or a proof of concept is obtained. We call this a Technology-first method since it assumes that the technology is first developed, or prototypes are designed, in order to file a patent application. The technology-first approach consists in pursuing first of all the exploration of a reference technology (Table 1). The goal is to mobilize the existing knowledge to find new invention proposals. This is the reference model for patent design, where patents are issued for technological inventions. In this case the technology needs to be well described and its novel character ensured. Claims are often seen as a strategic section of the patent document that influences patent quality.
Claims editing in accordance with the prior art can be seen as a design activity performed by a patent engineer. In this case, collaboration with an IP office with a higher level of expertise is crucial: it may determine the patent's future quality and condition the probability of being granted a patent.
As pointed out by [START_REF] Cavallucci | Linking contradictions and laws of engineering system evolution within the TRIZ framework[END_REF], the need to rebuild design practices in enterprises is strongly felt, both in terms of human skills and of methodological expertise. When assets are dealt with ex ante, patents appear as an objective that a group of designers and scientists has to achieve before the technology is commercialized or its exploration is started. This brings companies and research centers to seek methods that maximize the number of potential patents, support creativity in patent design and open the possibility of strategic inventing [START_REF] Nissing | Strategic inventing[END_REF].
Inventive problem solving -TRIZ
Nowadays patent design is often associated with inventive problem solving, TRIZ (a Russian acronym for the theory of inventive problem solving, "a problem-solving, analysis and forecasting tool derived from the study of patterns of invention in the global patent literature"). TRIZ is a widely accepted method of inventive concept generation that emphasizes predictive methods and evolutionary trends based on the description of contradictions and potential solutions (Table 1). TRIZ resulted in the creation of TRIZ-based methods such as contradiction theory, substance-field analysis and technology evolution patterns (Altshuller, 1999a, Altshuller, 1999b[START_REF] Altshuller | Creativity as an exact science[END_REF]. For instance, tools to automatically identify contradictions in technical systems based on textual analysis of patents [START_REF] Cascini | Computer-aided analysis of patents and search for TRIZ contradictions[END_REF] and an axiomatic conceptual design model that combines TRIZ and the functional basis work [START_REF] Zhang | A conceptual design model using axiomatic design, functional basis and TRIZ[END_REF] were proposed. In addition, TRIZ was combined with strategic inventing to place more emphasis on differentiation and patent protection [START_REF] Nissing | Would you buy a purple orange?[END_REF].
TRIZ offers an existing model for patent design associated with the logic of abstraction and analogy. Ideality in TRIZ determines how close a new solution is to the ideal system, so it presumes that the ideal system is determined; ideality aims to increase product benefits and reduce costs. Contradiction solving seeks to identify and eliminate contradictions by deploying the contradiction matrix of 40 inventive principles, which allows solving about 1500 problems based on technical contradictions. [START_REF] Liang | Patent analysis with text mining for TRIZ. Management of innovation and technology[END_REF] propose a method of mining patents based on contradictions. Noting that the contradictions and the corresponding inventive principles are too abstract and general, the authors claim to help innovators by using text mining to relate examples of published patents directly to the patent design activity within a TRIZ-related methodology. Authors also explore how to support patent search, using text mining and semantic analysis methods to automatically analyze existing IP databases, relate them to new innovative challenges and verify the risks of patent infringement [START_REF] Cascini | Computer-aided analysis of patents and search for TRIZ contradictions[END_REF][START_REF] Liang | Patent analysis with text mining for TRIZ. Management of innovation and technology[END_REF][START_REF] Bergmann | Evaluating the risk of patent infringement by means of semantic patent analysis: the case of DNA chips[END_REF]. TRIZ is also used to deal with local innovations around an existing patent, known as design-around. Design-around builds on patent infringement judgments to ensure that new techniques are substantially different from existing patents. [START_REF] Hung | An integrated process for designing around existing patents through the theory of inventive problem-solving[END_REF] demonstrated that design-around strategies, the function model and value analysis in TRIZ can determine the design problems to be solved, and that TRIZ can be applied to solve them by increasing innovativeness and avoiding incremental trial-and-error solutions. Yet, even admitting its patentability, one can question how innovative a solution obtained through a design-around method will be. Overall, TRIZ, together with problem analysis and semantic tools, is a powerful instrument for patent strategy development. However, TRIZ is often criticized for its limited inventive novelty, since it departs from already issued patents, i.e. an existing physical system. Reich et al. (2010) demonstrated that TRIZ-based methods are suitable for 'in- and near-box' designs.
Genetic programming
Genetic algorithms were applied to patent design [START_REF] Koza | Invention and creativity in automated design by means of genetic programming[END_REF]. This method deals with the automation of non-obvious, knowledge-intensive processes, namely the evolutionary and invention processes, by determining the exploration spaces and carrying out the search with genetic algorithms. Genetic programming transforms an initial population into a new generation by iteratively applying the operators of crossover, mutation, gene duplication and deletion [START_REF] Goldberg | Genetic algorithms[END_REF]. A new solution is generated within the prefixed exploration space and is always built on existing knowledge (Table 1); nevertheless, the new solutions can exhibit inventive capacity and lead to counterintuitive proposals. For instance, the authors replicated AT&T's invention of negative feedback through an automated design and invention technique patterned after the evolutionary process in nature, genetic programming. They show that the genetic algorithm can synthesize analog circuits and duplicate their functionality. In addition, they demonstrate how this technique can produce many additional inventions inherent in the evolutionary process. Independently of patents, the evolutionary computation approach was also used for concept generation, combining the evolutionary approach with TRIZ to increase the quality and diversity of design concepts.
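To make this combinatorial logic concrete, the sketch below shows a minimal genetic search over combinations of pre-defined building blocks (which could stand for known actions and effects). It is an illustrative assumption, not Koza's actual system: the bit-string encoding, the operators and the toy fitness function are all invented for the example.

```python
import random

def evolve(fitness, genome_len=16, pop_size=30, generations=50,
           crossover_rate=0.9, mutation_rate=0.02, seed=42):
    rng = random.Random(seed)
    # Initial population: random combinations of pre-defined building blocks.
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        next_pop = [ranked[0], ranked[1]]                  # elitism: keep the two best
        while len(next_pop) < pop_size:
            a, b = rng.sample(ranked[:pop_size // 2], 2)   # select among the better half
            child = a[:]
            if rng.random() < crossover_rate:              # one-point crossover
                cut = rng.randrange(1, genome_len)
                child = a[:cut] + b[cut:]
            child = [(1 - g) if rng.random() < mutation_rate else g for g in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Toy fitness: reward combinations close to a target combination of building blocks.
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
best = evolve(lambda g: sum(int(x == y) for x, y in zip(g, target)))
print(best)
```

As the paragraph above notes, such a search can only recombine the building blocks it is given; it cannot introduce an action or effect that is absent from the initial encoding.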
Design by analogy
Design-by-analogy methods applied to patent texts [START_REF] Fu | Design-by-analogy: experimental evaluation of a functional analogy search methodology for concept generation improvement[END_REF][START_REF] Jeong | Creating patents on the new technology using analogy-based patent mining[END_REF][START_REF] Murphy | Function based design-by-analogy: a functional vector approach to analogical search[END_REF] have been used for patent creation, assuming that similar problems can appear in technologies that have similar functions and properties (Table 1).
Design-by-analogy extracts functional analogies from patent databases and allows designers to find analogies with existing patents [START_REF] Fu | Design-by-analogy: experimental evaluation of a functional analogy search methodology for concept generation improvement[END_REF]. As the authors underline, this method quantifies the functional similarity between the design problem and patent descriptions, which leads to the generation of new concepts by analogy. It is based on a functional vector space model analogy search engine [START_REF] Murphy | Function based design-by-analogy: a functional vector approach to analogical search[END_REF] which, applied to patent databases, creates a vector representation of the latter based on functions. For instance, [START_REF] Jeong | Creating patents on the new technology using analogy-based patent mining[END_REF] aim to generate patents by creating an analogy between the mature wireless router and the emerging wireless charger. The authors argue that engineers or designers can use this method systematically to create patents. First, the problem-solving concept resolved by each patent has to be identified. The new technology, by building on one similar property or function of the old technology, then seeks to incorporate by analogy other properties or functions of the old technology, enabling new discoveries.
The study relied on 352 patents on wireless router technology and 227 patents on wireless charger technology. [START_REF] Murphy | Function based design-by-analogy: a functional vector approach to analogical search[END_REF] claim that a robust design-by-analogy methodology would enable designers to identify non-obvious analogous solutions.
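As a rough illustration of the functional-vector idea (the actual search engine described by Murphy et al. is considerably more elaborate), cross-domain analogies could be scored by cosine similarity between function-term vectors extracted from patent texts; the function vocabulary and the two toy "claims" below are hypothetical.

```python
from collections import Counter
from math import sqrt

# Hypothetical function vocabulary; real systems derive it from a functional basis.
FUNCTIONS = ["transmit", "store", "convert", "regulate", "sense", "shield"]

def function_vector(text):
    # Count occurrences of function terms in a patent abstract/claim (toy extraction).
    words = Counter(text.lower().split())
    return [words[f] for f in FUNCTIONS]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

router_patent = "device to transmit and regulate data packets and sense channel load"
charger_patent = "device to convert and transmit power wirelessly and regulate output"
print(cosine(function_vector(router_patent), function_vector(charger_patent)))
```

A high score would flag the pair as a candidate analogy; as noted below, the approach breaks down when the target technology involves functions or properties absent from the source domain.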
Yet it is limited once applied to novel properties or functions in case of disruptive technology where the analogy cannot be pursued.
CK Invent
A more recent design formalism, Concept-Knowledge (C-K) design theory, has also been used to account for patent design. [START_REF] Felk | Designing patent portfolio for disruptive innovation-a new methodology based on CK theory[END_REF] mobilized C-K design theory to propose a patent proposal generation method where the problem is not necessarily given in advance. The authors consider a patent as a design purpose and follow the operators introduced by C-K design theory (Hatchuel and Weil, [START_REF] Hatchuel | A new approach of innovative design: an introduction to CK theory[END_REF][START_REF] Hatchuel | CK design theory: an advanced formulation[END_REF]). This method aims to strategically position future inventions in a predefined patentable design space; the problems are not defined a priori [START_REF] Felk | Designing patent portfolio for disruptive innovation-a new methodology based on CK theory[END_REF]. It is shown that an invention proposal can be obtained by adding to an existing entity a property that does not exist in the knowledge basis. This is called an expansive partition in C-K theory (see Section 3 for more details).
This method derived from C-K design theory is called CK Invent. The authors exhibit how, by incorporating a design logic, one can increase the quality of future inventions and the number of patent proposals. But how can the quality of the partitions be controlled? How can the performance of CK Invent be quantified? (The problem spaces and generation models of the five methods are summarized in Table 1.)
The literature shows that methods for patent design exist, but these methods are difficult to compare and their performance needs to be quantified. As we observed, all these methods refer to existing knowledge sets (patents, publications, know-how…). They help to structure the knowledge available to design patents and to fabricate inventions in the original design fields. However, a general theoretical framework on how to account for patent design is absent. Hence our research question: what is a general framework for patent design and how can the performance of patent design methods be characterized?
THEORETICAL FRAMEWORK FOR PATENT DESIGN
Requirements to define a theoretical model for patent design and relevance of design theory
A theoretical model for patent design has to clarify the language of a patent and its expected performance, i.e. offer a possibility to control for the patentability criteria, and it has to be invariant to the types of knowledge structures. Why are contemporary design theories capable of doing this? Is design theory suitable to define a model for patent proposal design?
Design can be seen as the simultaneous generation of objects and knowledge. One way is to think of generation as a combination of existing objects or elements, just as language generates new texts by combining invariant signs or letters. However, Hatchuel et al. (2011) showed the limits of this combinatorial principle and demonstrated that design theories go beyond pure combinatorial strategies and take into account dynamic transformations, adaptations, hybridizations, discovery, invention and the renewal of objects. The authors demonstrate that design theories seek to increase both their generativity, i.e. their ability to produce design proposals that are different from existing solutions and design standards, and their robustness, i.e. their ability to produce designs that resist variations of context. Therefore, the generativity and robustness of design theory help ensure a sufficient inventive step and the applicability of the inventions.
The patent design process can thus be seen as a design activity in which one can control for the inventive and novel character of the proposals. How does design theory clarify the design of patents and take the patentability criteria into account?
This research builds on C-K design theory since it is independent of any particular knowledge domain [START_REF] Hatchuel | CK design theory: an advanced formulation[END_REF]. Concept-Knowledge (C-K) design theory (Hatchuel and Weil, [START_REF] Hatchuel | A new approach of innovative design: an introduction to CK theory[END_REF][START_REF] Hatchuel | CK design theory: an advanced formulation[END_REF]) defines the design process as a continuous refinement of a concept described by various properties that need to be met, based on existing knowledge and producing new knowledge. The process of design is defined as a double expansion of the concept and knowledge spaces through the application of four types of operators. Design theory is useful for patent modelling since it separates the knowledge model from the design reasoning, i.e. how to use existing knowledge to structure the unknown.
Patent model and Patentability criteria
Modeling patent as an (Action, Effect, Knowledge)
A patent can be represented as a solution to a technical problem that differs from the prior art. The solution, which comprises the object description and the interventions performed by an agent (human, fluid...) on an object, can be characterized as actions. The technical problem to be solved can be defined as a set of effects. Knowledge defines the prior art and the results obtained during the preparation of the invention. The knowledge basis comprises 1) public knowledge: patents, research and commercial papers, all the available documentation and all the knowledge generally available and evident for the person skilled in the art; and 2) the knowledge developed by the inventor during his research or design process. In this regard a patent can be seen as a combination of actions, effects and knowledge [START_REF] Couble | Une approche innovante du processus de rédaction de brevet[END_REF]. A patent is a proposition where Actions represent the interventions made on objects and their interrelations, Effects are the actions' consequences and Knowledge is the set of technical information used by the invention. Using the famous Nespresso capsules patented by Nestec, an R&D arm of Nestlé (US20100239717 / EP 2364930 A2), let us analyze Claim 1: «Capsule for the production of a beverage, more particularly coffee, in a beverage production machine comprising 1) a capsule holder with relief and recessed elements; said capsule comprising: 2) an inverted cup-shaped body forming a chamber containing beverage ingredients, preferably ground coffee, a bottom injection wall, a sidewall; a delivery wall which is sealed to the body; 3) optionally, a filtering wall placed between said chamber and the delivery wall characterized in that the delivery wall comprises a calibrated orifice or comprises perforating means to provide a calibrated orifice and in that the beverage delivery wall is not tearable against the capsule holder during extraction but provides through the restriction created by the calibrated orifice a certain back pressure which generates an elevated pressure in the capsule during extraction». In this text, "for the production of a beverage, more particularly coffee" is an effect that the new invention offers. The knowledge is embedded in the principles of elevated pressure in the capsules, and the action is the process 1-3 described using knowledge elements to achieve the desired effects. The (A, E, K) model defines the patent proposal. Each patent is a combination of new actions and/or effects with the associated knowledge that explains how a certain effect was obtained by one or several actions. In this research, the (A, E, K) model is used to characterize a patent as a design objective. Other frameworks, such as Function-Behavior-Structure [START_REF] Gero | The situated function-behaviour-structure framework[END_REF] or contradiction-effects principles [START_REF] Glaser | TRIZ for reverse inventing in market research: a case study from WITTENSTEIN AG, identifying new areas of application of a core technology[END_REF], might also be used to define a patent.
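Read as a data structure, the (A, E, K) triple can be illustrated as below; this is only an illustrative sketch added for clarity, and the field contents loosely paraphrase the claim quoted above rather than reproduce it.

```python
from dataclasses import dataclass

# Illustrative only: a patent proposal viewed as an (Actions, Effects, Knowledge) triple.
@dataclass
class PatentProposal:
    actions: list[str]      # interventions performed on objects
    effects: list[str]      # consequences expected from those actions
    knowledge: list[str]    # prior art and knowledge produced while inventing

nespresso_claim_1 = PatentProposal(
    actions=[
        "enclose ground coffee in an inverted cup-shaped body with injection and delivery walls",
        "restrict the flow through a calibrated orifice in the delivery wall",
    ],
    effects=[
        "produce a beverage, more particularly coffee",
        "generate an elevated pressure in the capsule during extraction",
    ],
    knowledge=[
        "the back pressure created by a calibrated orifice raises pressure during extraction",
    ],
)
print(nespresso_claim_1.effects)
```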
Patentability criteria
Patentability criteria and definition of person skilled in the art
In order to be issued, a patent has to meet the patentability criteria: patentable subject matter, usefulness of the invention, novelty, inventive step and disclosure of the invention (Table 2). Patentable subject matter is open to all fields of technology. Non-patentability is often due to specific exclusions: methods of treatment for living organisms, schemes and models that perform purely mental activities, and methods that exist in nature (for further details see Article 27.1 of the TRIPS agreement). The patentability of an invention can also be questioned on morality principles.
Usefulness of the invention, its applicability defines the practical purpose of the proposed invention, the possibility to actually manufacture the proposed product and implement a part of the process to serve its purpose. It is the utility of the proposed invention.
Novelty and "inventive step" are appreciated with respect to the state of the art at the date of patent application. Novelty is a critical criterion and the invention appears to be novel when it is not anticipated by the prior art. The inventive step of a proposition questions the nature of the invention on its obviousness for a person having ordinary skill in the art. The question "is there inventive step?" only arises if there is novelty. WIPO suggests analyzing inventive step in relation to the problem to be solved, the solution to that problem and the advantageous that the invention offers to the state of the art. Generally, if a person skilled in the art (PSA) is capable to pose that problem, solve it similarly to a proposed invention and predict the results -the inventive step is missing. The definition of PSA differs according to the national patent codes. For instance, in Europe patent proposal follows problem -solution scheme that starts with the "closest" state of the art where we seek to identify the distinctive characteristics of the proposed inventions and seek for the technical effects of these differences, the connections between them. Finally, the appreciation of the evidence needs to be indicated: how the invention should be used by the PSA, how it can be reproduced.
In France, the PSA is considered to be an expert of one technical domain or perhaps the closest ones. Secondary considerations, which are objective evidence of non-obviousness, are not really examined, which often results in trivial inventions. On the contrary, in the US- and Japan-based systems, secondary considerations must be evaluated. In Japan and the US, inventive activity is not based on problem-solution but on the production of unexpected, advantageous results and unexpected technical effects. In Japan, the PSA has the common general knowledge relevant to the art of the invention, is able to use ordinary technical means for R&D, to exercise ordinary creativity in selecting materials and changing designs, is able to comprehend as his knowledge all technical matters in the field of the invention and in related fields of technology, and may be thought of as a group of persons (JPO examination guidelines). These differences in examination practices influence the novelty and inventive step of patents across countries. Patent examination depends on the definition of the PSA: the better the role of the PSA is qualified and taken into account in the concept definition and the patentability criteria, the better the patent examiner knows how to treat each patent proposal and the faster the evaluation and editing procedure.
The last criterion is the disclosure of the invention, which defines whether the proposed invention is described in the application sufficiently well that the PSA can carry out the invention claimed. These are the criteria that an inventor has to fulfil to be granted patent rights.
The patent examiner should ensure that patentability criteria are met. His examination depends on the knowledge available and the reasoning that the inventor followed to invent a new solution.
Patent design model from the design theory perspective
We can now introduce a model of patent design in C-K theory. As shown in Section 2.1, the patentability criteria are defined based on the person skilled in the art (PSA), who possesses a certain knowledge basis, and on the reasoning that the inventor follows to create a patent proposal. The knowledge space includes public knowledge (K0), which comprises patents, publications, customer references and internal company expertise, i.e. the "state of the art".
The PSA is presumed to be a skilled practitioner in the relevant field, who possesses average knowledge (in A, E form) and ability and has at his disposal the normal means and capacity for routine work and experimentation. K0 should be evident for the PSA. Knowledge is then structured as sets of actions and effects together with knowledge on their existing and possible relations: K0 = {A, E} + K(R(A, E)); see Figure 1.
According to the C-K formalism, a concept is "undecidable" in the knowledge basis, meaning that its logical status is neither true nor false (Figure 1). A concept is defined as a combination of actions and effects that does not yet have a logical status (at least for the PSA), i.e. a concept in C-K is defined as (A-E). A patent can be reached through the exploration of (A-E) concepts and can be defined as a new (A, E, K) [START_REF] Couble | Une approche innovante du processus de rédaction de brevet[END_REF] that fulfils the patentability criteria.
The knowledge space has to take the form of actions, effects and their relations. Patent design then appears as designing a new (A, E, K) proposition that is not already in K0 and that PSA(K0) does not consider obvious.
Figure 1. C-K for patent design logic
If a dominant technological proposition already exists, the reference solution in the form (Ai, Ei, Ki) can be determined, and patent exploration often consists in extracting potential subsets (A, E, K) from the properties of that technology and checking their patentability (Figure 2). A patent is thus a new sentence made of A and E which meets the patentability criteria (Table 2): 1) not all Ei, Ai or their combinations are patentable (patentable subject matter); 2) the invention is considered novel only when the Actions (interventions made on objects) and Effects (consequences brought by the actions) are not part of the common knowledge, so that a new δK is created: (A, E) ⊄ K0; 3) the inventive step is assessed against all the learning that the PSA can make in the domain, PSA(K0); if there is an expansive partition of A- or E-type such that (A, E) ⊄ PSA(K0), the inventive step is ensured, i.e. novel actions and/or effects are proposed with respect to the existing state of the art. It is important to underline that not all the concepts in the C-space will result in patent proposals: restrictive partitions remain included in PSA(K0). 4) A minimal description needs to be ensured in order to disclose the invention: (A, E) ⊂ (K0 ∪ δK), where δK is the newly designed knowledge. Different types of expansions determine the inventive power of the inventions, influence the way PSA(K0) will evaluate the proposal and provide a possibility to control for patent quality.
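Restated compactly, the three conditions that the design reasoning must check read as follows (this is only a paraphrase of the notation above, added for readability):

```latex
\begin{align*}
\text{Novelty:}\qquad & (A,E) \not\subseteq K_0 \\
\text{Inventive step:}\qquad & (A,E) \not\subseteq \mathrm{PSA}(K_0)
    \quad \text{(an expansive A- or E-type partition)} \\
\text{Disclosure:}\qquad & (A,E) \subseteq K_0 \cup \delta K,
    \quad \delta K = \text{knowledge produced by the design}
\end{align*}
```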
We identify three types of expansion to control invention quality (see Figure 3):
- keeping Actions and Effects but creating new relations (new dependencies) between them;
- creating a new effect for an existing action;
- creating a new action for an existing effect.
In the first type, the exploration starts by listing the known actions and effects and their relations. The purpose is to build on identified, previously independent (A2-E2) sets and to create dependencies between them or change their relations. In this case a δK(A, E) that redefines the (A2-E2) relations is created. New actions or effects do not appear, but their relation is redefined. This process, "patent first by keeping (A-E) but changing their relations", is similar to patent design-around (Figure 3).
While pursuing breakthrough innovation, actions, effects and the associated knowledge are often missing. According to C-K theory, creative design requires an expansive partition, which expands the concept space. These expansive partitions significantly modify or propose new actions and effects that generate new "sentences", i.e. new ideas for patent proposals (Figure 3). They add new knowledge to the K-space, can be of A- or E-type, and are not evident for the PSA. These expansive partitions, obtained by creating new actions or new effects, should be examined to fulfil the criteria of novelty and inventive step: novelty implies the absence of the A-E relation in the knowledge basis, and the inventive step means that A-E is not evident for the PSA (Table 2). Moreover, disclosure of the invention is achieved once the (A, E, K) is understandable and repeatable by technical experts.
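The following toy sketch shows how the three expansion types could be enumerated against a small K0 of known action-effect relations; the knowledge base, the candidate lists and the purely set-based novelty test are invented for illustration and are far simpler than a real PSA(K0).

```python
# Illustrative only: enumerate candidate (A, E) pairs for the three expansion types
# against a toy knowledge base K0 of already-known action-effect relations.

K0 = {  # known relations: action -> set of effects it is known to produce
    "calibrated orifice": {"back pressure"},
    "electroactive polymer": {"volume rendering"},
}
known_actions = set(K0)
known_effects = {e for es in K0.values() for e in es}

def new_relations():
    # Type 1: known action and known effect, but the relation is not yet in K0.
    return [(a, e) for a in known_actions for e in known_effects if e not in K0[a]]

def new_effects(candidate_effects):
    # Type 2: known action combined with an effect that is not in K0 at all.
    return [(a, e) for a in known_actions for e in candidate_effects if e not in known_effects]

def new_actions(candidate_actions):
    # Type 3: known effect combined with an action that is not in K0 at all.
    return [(a, e) for e in known_effects for a in candidate_actions if a not in known_actions]

print(new_relations())
print(new_effects({"thermal management"}))
print(new_actions({"flexible OLED sheet"}))
```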
Patent design process
The patent design process is exhibited in Figure 4. The exploration starts by organizing the knowledge structure in order to define high-level (A-E) concepts that enclose a wide range of opportunities, using C-K design tools (see Figure 2). A conjunction is achieved once a new concept becomes true in K and a combination of (A, E) leads to the creation of new knowledge resulting in a patent proposal.
Patent design means designing a new (A, E, K) sentence that corresponds to Table 2.
Patent design starts by identifying a new innovative field on which the design process will focus (Figure 4). Once a generic concept is established, a mapping of the high-level effects and actions should take place. These high-level actions or effects can correspond to some patent classification and represent the design concept (A0-E0). Normally, some new words (new types of actions or effects) can be introduced here to enable the exploration of a new design space (see Section 3.4 for an illustration). To map the corresponding actions and effects, knowledge bases based on A, E, K are built using patent databases, scientific articles, etc.
This initial mapping allows identifying enabling patents and relations between different inventions (for instance, by integrating patent citation analysis), but also incorporating issues discussed in research papers, competitor analysis and industry trends. The goal is to identify potential knowledge gaps. The mapping of the initial knowledge basis (K0) represents a set of initial issues. We do not claim here to analyze all the patents related to the domain, but we seek a general understanding of the field's problematics; for this, the "Claims" part of each patent is used. Normally 10-20% of the patents are used to build K0. This phase is mostly conducted in the K-space, but a tree structure of the C-space helps to put together patent propositions as concepts in the C-space, where each patent is presented as a relation between A and E and the partitions are actions or effects. We want to underline that this phase is required when the research area is mature and the company aims to identify new possibilities and new offers that are still "free of IP", i.e. possible breakthroughs that are easier to attain. In addition, the person skilled in the art should be determined: the general knowledge that is considered obvious in the research area has to be characterized. Usually the definition of PSA(K0) depends on the IP legislation under which designers wish to file the patent application.
Next, workshops are organized for idea production and knowledge stimulation following a formal model of creative thinking, C-K design theory. These workshops aim to extend the initial PSA(K0), based on new actions or effects or by creating new connections between the existing A and E. Once new sentences are generated, they are checked against the patentability criteria and knowledge exploration is organized. As a result, the K basis is expanded and, to ensure conjunction in the C-space, knowledge production is required to prove the concepts, build new partnerships or simply file patent applications.
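Schematically, the whole loop can be summarized as below; the data and the purely set-based novelty and obviousness checks are hypothetical simplifications of the process just described.

```python
# Hypothetical, highly simplified walk-through of the process steps; all data is toy.

K0 = {("TSV through silicon", "electrical interconnection")}   # initial A-E relations
PSA_OBVIOUS = set(K0)                                           # what the PSA finds obvious

def is_novel(sentence):            # novelty: the (A, E) sentence is not already in K0
    return sentence not in K0

def has_inventive_step(sentence):  # inventive step: not obvious for PSA(K0)
    return sentence not in PSA_OBVIOUS

workshop_sentences = [             # candidate (A, E) sentences produced in workshops
    ("TSV through silicon", "electrical interconnection"),      # already known -> rejected
    ("via through a generic substrate", "thermal management"),  # new effect -> kept
]

proposals = [s for s in workshop_sentences if is_novel(s) and has_inventive_step(s)]
print(proposals)  # candidates to turn into proofs of concept and patent applications
```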
Patent design in practice -Illustration at STMicroelectronics
The experiments initiated by [START_REF] Felk | Designing patent portfolio for disruptive innovation-a new methodology based on CK theory[END_REF] were pursued at STMicroelectronics. In addition, conventional brainstorming and C-sketch/6-3-5 methods are used to solve technical problems resulting in patents. Each idea, once elaborated, is presented to a dedicated patent committee, which evaluates the ideas, helps to enrich them and decides whether the patent application process can be pursued. The panel of committee members includes various ST experts, IP engineers and external IP examiners.
Each experiment was conducted over a 3-6 month period with teams in charge of developing relevant technological blocks. The teams comprised engineers, researchers and doctoral students who participated in the generation of inventions. A coordinator and facilitators were in charge of deploying the method and controlling the quality of the corresponding proposals. The issued propositions were later discussed and presented to the patent committee. The experiments resulted in a number of inventions, and patent proposals were filed.
The experiment starts by defining A, E, K and introducing new words that enable the exploration of a new design space (see Figure 4). These new words help to structure the Knowledge space (in a C-K model). We illustrate here the three types of partitioning that the teams were able to come up with. The whole process of C-K exploration in the case of patents cannot be exhibited here due to confidentiality issues.
Patent first by keeping (A-E) but changing their relations
We use the design team that worked on multi-touch haptic solutions, where touch was considered as a way of interactive communication, to illustrate these strategies (Figure 5). The initial concept was formulated as "haptic touch as a way of interactive communication". The team pursued the axis of improving and finding alternatives to the already emerging solutions that consider haptic feedback. Here the work is based on new types of (A-E) relations. Already existing (A-E) sets were identified: electroactive polymers for volume rendering. Electroactive polymers (A) exhibit shape or size change in response to electrical stimulation and allow independent volume rendering (E) (Figure 6). Here A and E can be considered known, and the team was able to redefine the existing relation by bringing it into a new context, creating new knowledge δK(A, E).
Creating new action for the existing effect
The team added a new Action: flexible electronics using Organic Light-Emitting Diodes (OLED) or graphene sheets (Figure 6). The haptic multi-touch domain did not previously consider these actions. The team aimed to explore potentially disruptive solutions that account for capacitive multi-touch flexible transparent display solutions. The desired effect was to achieve rich and precise multi-touch feedback despite the screen flexibility. As a result, the design team proposed inventions that deal with innovative fabrication processes. They created new, disruptive (A, E, K) combinations, and their starting point included an Action that was completely new and not evident for the PSA. The team explored radical concepts in this case.
Creating new effect for the existing action
In this case a team was exploring the concept "3D integration that has better electrical and thermal behavior than 2D alternatives". TSV (Through Silicon Via) was already used as an existing solution to ensure electrical interconnections. Working on generative concepts such as "Design TXV that have better electrical and thermal behavior than 2D alternatives" allowed the team to consider any type of substrate and any way of interconnecting devices through this substrate (Figure 7). In this kind of concept, the first part describes A (Action), which consists in realizing a via (drilling, etching, etc.) through a generic substrate (X, which can be silicon or any other type of substrate such as AsGa,...). The second part of the concept describes E (Effects) that are expected from the device behavior (electrical, thermal, etc.). For TXV a new type of effect was considered, namely using them for better thermal management, thus adding a new effect.
ANALYSING PATENT DESIGN METHODS WITH PATENT DESIGN MODEL
Each design method described in Section 2 deals with patent design. Given the theoretical framework, do these methods consider the patent model (in the form of A, E, K) and the patentability criteria?
Comparison: how do we control for the PSA(K0)
Technology first
The technology-first approach proved to maximally reuse the existing and newly developed technology design rules. This approach is based on the existing knowledge and expertise (K0) that is already possessed by the company or is currently under development through R&D explorations. The patent model is not relevant in classical technology-driven patent deposition. Therefore, this process is mostly driven by the skills that exist in the field, PSA(K0). The resulting inventions are based on the combination of existing technological building blocks, existing knowledge and expertise (PSA(K0)) to find new patent propositions. The knowledge expansion is deeply rooted in the existing expertise and thus harder to provoke. This is the more classical way of developing patents, where the obtained proofs of concept result in patent filing [START_REF] Ernst | Patent information for strategic technology management[END_REF]. The patentability criteria are examined by the patent engineers and are not usually verified during the technology design process.
The patent examiner plays an important role in the definition of the claims and increases the importance of the patent. He 'designs' the patent proposal according to the common definition of the PSA (see Table 3).
TRIZ
In TRIZ, a problem to engage the design activity has to be defined first. Contradiction theory in TRIZ defines a problem as a contradiction between system parameters such as speed, size or weight. Novel combinations are based on contradictions between existing actions and effects. In TRIZ, a patent can be represented as (A, E, K), and the matrices give guidance on how to solve the problems and structure the knowledge basis. Moreover, TRIZ gives indications for a concept-based strategy: it shows that one should start with the sets of (A-E) that are incomplete or contradictory. TRIZ supposes mastering sophisticated knowledge since, to define a problem well, one needs to know an extensive list of actions and effects. Moreover, TRIZ codifies the portfolio of existing patents. Thus, its inventive step and novelty are highly driven by the existing expertise and knowledge. It assumes the possibility of finding a novel solution through a list of existing principles, which limits the breakthrough character of the invention. TRIZ does not directly incorporate the patentability criteria (see Table 3).
Genetic algorithms
A genetic algorithm (GA) is a combinatorial strategy in K. It offers an automated, evolutionary-search-based design process that is knowledge-intensive and non-deterministic [START_REF] Koza | Invention and creativity in automated design by means of genetic programming[END_REF]. In a GA, the exploration space is defined in advance based on the gene sequences. Thus, all the actions and effects that can be used to achieve original combinations are known in advance; it is not possible to create new actions or effects, i.e. to expand the existing K-basis. The patent design process corresponds to original combinations of the existing genes, which can be described as actions and effects. The original combinations are achieved thanks to the operators of crossover, mutation, gene duplication and deletion. A strategy in the C-space is absent (see Table 3). Novelty and inventive step are thus limited, since knowledge expansion is limited: the GA is based on a limited version of PSA(K0). As shown in [START_REF] Koza | Invention and creativity in automated design by means of genetic programming[END_REF], genetically evolved tuning rules and controllers satisfy the statutory requirements of being improved and useful. The authors claim that these features would never occur to an experienced control engineer and are unobvious to someone having 'ordinary skill in the art'. As the PSA is defined differently and has different skills in the various legal frameworks (see Section 2), the requirements to ensure novelty and inventive step vary, and thus proposals created using a GA can result in issued patents.
Design analogy
The design-analogy approach is based on the assumption that similar problems occur between different technologies with similar functions and properties. Design analogy relies on the mining of problem-solving concepts, the construction of a patent mapping and the specification of reference patents [START_REF] Jeong | Creating patents on the new technology using analogy-based patent mining[END_REF]. The authors claim that the method can be used for problem identification in the case of a new technology; still, the new technology must share properties with the existing one, otherwise the analogy cannot be built. Design analogy can describe a patent proposal in the form of (A, E, K), where actions and effects correspond to functions and properties. According to this method, the inventor should analyze the patents according to their similarities in functions (effects) and properties (actions) and decide which ones are transposable to a new domain (e.g., between router and charger in the given example). With this reasoning we cannot control for concept generation: we start by exploring and generating new K and hope to generate new concepts, but there is no indication of how to do it and the patentability criteria are not ensured (see Table 3). The authors mention that their system is not applicable to novel properties or functions. Thus, design analogy, similar to GA, is driven by the K-space.
CK Invent: inventive step to create new actions, effects
The model of patent design enriched the CK Invent methodology by 1) considering the patentability criteria; 2) incorporating the reasoning of the PSA; and 3) introducing three types of strategies (partitions) that allow controlling the type of desired invention (the strategies of all five methods are summarized in Table 3).
All the patent design methods analyzed here take the patent model into account in the form of (A, E, K). However, they lack knowledge of the patentability criteria and do not take the PSA's reasoning into account. These methods mostly deal with combinations of A, E, K, but they do not discuss how to extend the reasoning PSA(K0), i.e. how to go beyond the initial combinations by creating new actions and effects and thus expand the initial domain in which the PSA is capable of operating. There is no generation of new actions and effects and thus patentability is limited. In certain legal systems, where the experts base their evaluation on a limited number of combinations, such combinations can be considered novel; from the design theory perspective, however, novelty is missing. This research reveals that there is a difference in the resulting patent proposals depending on whether the exploration defines propositions as expansive partitions in the C-space, as in CK Invent, or explores the existing design rules (based on the K-space).
In addition, this research revealed that by starting with patent design first, an exploration team does not obtain the same results as by starting with technology design first. Changing the order of partitioning influences the results that can be achieved: different alternatives and possibilities are observed and the design reasoning is different. Thus, design theory does not always allow substitution, meaning that the order of the concepts that appear in the C-space influences the results of the future inventions. In practice, this means that the first-order concepts are important and will determine the future exploration and the directions for knowledge production. By simply changing the order of the first- and second-order partitions, the designer will rarely end up obtaining the same results. When a partition is driven by the inventive character of the patent proposals, we target the expansive partition and thus increase the probability of obtaining new, original results.
RESULTS
The main research areas on IP emerged through the economic and legal literatures, which can be too selective [START_REF] Somaya | Patent Strategy and Management An Integrative Review and Research Agenda[END_REF]. For instance, from the economic perspective patents are seen as economic indicators and address issues such as pricing, commercialization or the exchange of already established IP. Still, a major practical issue for the organizational structures that deal with IP is its complexity [START_REF] Gollin | Driving Innovation[END_REF]. There exist methods for patent design that allow for creative problem solving, and the more recently developed CK Invent method proposed an interesting way to design patents. This paper develops a theoretical framework to incorporate the patent design logic by building on recent advances in C-K design theory.
The major results of this research are: 1) a model of patent design that takes into account the reasoning of the person skilled in the art; 2) a comparison of the patent design methods and a characterization of their performance, which depends on the expansion beyond the existing knowledge combinations PSA(K0); 3) a patent design model using C-K theory that exhibits a non-substitution property, demonstrating that the order of partitions influences the future results. This work demonstrates the irreversible power of the operators in C-K design theory.
Patent design consists in designing a new (A, E, K) that corresponds to Table 2. First, patent design starts with the identification of the initial effects and actions, which are referenced as A0-E0, the construction of the knowledge basis in terms of A, E, K, and the definition of PSA(K0) according to the countries where the patent proposal will be filed. Second, the initial modeling of PSA(K0) should be extended, based on new A, new E, or new relations between A and E; this results in the structuring of the C-space. Third, the knowledge basis should be expanded, which requires knowledge production to ensure conjunction. This means that once a new (A, E, K) fulfilling the patentability criteria is defined, a knowledge production process begins to actually ensure the proof of concept and develop the technological proposition.
Thanks to the model of patent design, the existing patent design approaches such as TRIZ, genetic algorithms and patent design by analogy (Altshuller, 1999b[START_REF] Felk | Designing patent portfolio for disruptive innovation-a new methodology based on CK theory[END_REF][START_REF] Jeong | Creating patents on the new technology using analogy-based patent mining[END_REF][START_REF] Koza | Invention and creativity in automated design by means of genetic programming[END_REF] are situated within the design process and their performance is compared. The traditional technology-first patent design process [START_REF] Ernst | Patent information for strategic technology management[END_REF] often produces improvement patents that protect the differences between new products and already existing ones by adding new properties or substituting old ones. The technology-first approach is suitable in the case of a dominant design, where the goal is to propose better actions and effects for the ongoing technology development. The patent-first method often results in a range of inventions facilitating the creation of coherent patent portfolios. The use of the patentability criteria interpreted with the help of design theory actually helps to reduce the risk of non-relevance of the issued ideas. Different strategies of the patent-first approach are examined. These strategies demonstrate that patents can be conditioned differently: an intermediary patent level that only works on the relations between actions and effects, and patents that revolutionize and extend the list of known or ever-used actions and effects; the latter expands both actions and effects, enhancing the 'inventive' character of the proposal. In the patent-first approach there is a risk of non-relevance and thus the process is controlled by the patentability criteria and by company managers who can estimate the interest of the issued solutions internally. Positioning the ideas according to the extended actions and effects allows determining the design space and listing the concepts that remain to be explored.
Different strategies combined with the patentability criteria increase the quality of patent propositions: design reasoning that considers the patentability criteria actually increases the quality of the proposals. This shows the necessity of having various processes of idea exploration depending on the available knowledge, the level of maturity and the competition, and it offers the relevant patent exploration models.
The order of the concepts that appear in the C-space is not easily reversible; it conditions the success of the exploration and the corresponding results. This irreversibility leads to the non-commutative character of C-K design theory.
Overall, this paper proposes a better understanding of the legal and modeling aspects of IP and thus of the conditions for a patent design logic and of the definition of the person skilled in the art (PSA). In addition, a better understanding of the PSA's activity helps to take into account the criteria that explain the variety of legal frameworks. An inventor can be evaluated based on his capacity to create more or less important (A, E, K) combinations.
DISCUSSION AND PERSPECTIVES
This research proposes a model of patent design based on C-K design theory. The patent model can be examined and enriched with other theoretical lenses such as General Design Theory [START_REF] Yoshikawa | Design theory for CAD/CAM integration[END_REF], Axiomatic Design [START_REF] Suh | A theory of complexity, periodicity and the design axioms[END_REF], the Coupled Design Process [START_REF] Braha | Topological structures for modeling engineering design processes[END_REF] or Infused Design [START_REF] Shai | Infused design. I. Theory[END_REF].
One might argue that patent design processes will increase the number of patent proposals to manage and thus further increase IP complexity. Still, the consequence of the method on the number of IP assets is not evident, since the goal is to seek quality in patent proposals and not to augment the number of inventions.
The patent design model can be used to create a cartography of the existing state of the art and to better understand the dependencies between existing patents. The use of actions and effects to characterize the existing technologies and patents simplifies technological forecasting and allows identifying interesting alternatives to explore and free zones to patent, as well as automatically evaluating a company's position with respect to the other actors. Keyword-based patent maps allow discovering unexplored areas, shaping new discoveries and identifying new technological opportunities [START_REF] Lee | An approach to discovering new technology opportunities: Keyword-based patent map approach[END_REF].
Like other patent design methods based on TRIZ, our work depends on the strength of the group of experts participating in the experimentation. Still, the design theory formalism helps to guide reasoning and to ensure the inventive step and novelty of the emerging solutions. In [START_REF] Fu | Design-by-analogy: experimental evaluation of a functional analogy search methodology for concept generation improvement[END_REF], the authors propose a patent-based functional analogy method which deals with these aspects of subjectivity. Patent mining techniques and patent mapping, which aim to use metadata or information in the texts of patents by analyzing patent databases, can substantially enrich patent analysis and help create databases of actions and effects.
Figure 2: C-K models based on A, K, E and the inventive step
Figure 3: Strategies to control for (A, E) expansions
The workshops rely on a formal model of creative thinking, C-K design theory. They aim to extend the initial PSA (K0) with new actions, new effects, or new connections between the existing A and E. Once new sentences are generated, they are checked against the patentability criteria and a knowledge exploration is organized. As a result, the K basis is expanded and, to ensure conjunction in C-space, knowledge production is required to prove the concepts, build new partnerships or simply file patent applications.
Figure 4: Patent design process
Figure 5: Patent design: creating new relations between actions and effects
Figure 6: Patent design: creating a new action
Figure 7: Patent design: creating a new effect
Table 1: Patent design models (Method — Problem space — Model of patent generation)
- Standard, technology-first: the problem is based on a technology developed in advance — identify patentable features in the technology and fill a patent proposal.
- TRIZ (Altshuller, 1999b): problem defined in advance — patent design based on contradiction-solving principles.
- CK Invent (Felk et al., 2011).
- Patent design by analogy: assuming that similar problems appear in technologies with similar functions and properties — deducing functions from a pre-defined patent set (wireless router/charger).
A patent is a proposition where an Action represents the intervention made on objects and their interrelations, Effects are the actions' consequences, and Knowledge is the set of technical information used by the invention. Using the famous Nespresso capsules patented by Nestec (an R&D company of Nestlé) as an example (US20100239717 / EP 2364930 A2), let us analyze Claim 1: "Capsule for the production of a beverage, more particularly coffee, in a beverage production machine comprising 1) a capsule holder with relief and recessed elements; said capsule comprising: 2) an inverted cup-shaped body forming a chamber containing beverage ingredients, preferably ground coffee, a bottom injection wall, a sidewall; a delivery wall which is sealed to the body; 3) optionally, a filtering wall placed between said chamber and the delivery wall characterized in that the delivery wall comprises a calibrated orifice or comprises perforating means to provide a calibrated orifice and in that the beverage delivery wall is not tearable against the capsule holder during extraction but provides through the restriction created by the calibrated orifice a certain back pressure which generates an elevated pressure in the capsule during extraction".
Table 3: Patent design models (Method — Strategy in C-space — Strategy in K-space)
- Technology-first: no control of the concept space — K-driven strategy.
- TRIZ (Altshuller, 1999b): strategy based on A/E contradictions — A, E, K.
- Genetic programming to automatically synthesize complete designs.
For instance, it might be interesting to combine C-K with design-by-analogy search engines: first, use the power of the design-by-analogy search engine [START_REF] Fu | Design-by-analogy: experimental evaluation of a functional analogy search methodology for concept generation improvement[END_REF] to analyze the functions embedded in patent databases and, second, experiment with new strategies for generating non-obvious ideas within the C-K design framework.
In further work we plan to compare different patent design methods on the same design concept and to investigate the overall effect of these methods on the novelty and inventiveness of the proposed inventions.
The patent design model should also be tested in the case of platforms or generic technologies [START_REF] Baldwin | Modularity in the design of complex engineering systems. Book chapter in Complex Engineering systems: Science meets technology[END_REF][START_REF] Gawer | Platform dynamics and strategies: from products to services[END_REF][START_REF] Gawer | How companies become platfrom leaders[END_REF], which comprise both modules addressing market complementarities and the core element of a technological system. How should the core be protected? How should the complementary innovations and modules be patented? For instance, how can a platform core be protected while the rights are shared (or not) among platform designers? This is especially relevant for complex technologies where the property right is not exclusive but shared among the actors. In this case, companies build thickets of patents, which obliges them to share rents under cross-licenses.
In this case the structure of the "patent thicket" should be carefully defined [START_REF] Graevenitz | How to measure patent thickets-A novel approach[END_REF]. Yet models for the design of such structures are missing. Design theory might provide new perspectives on this issue, including new ways of designing interdependencies and the associated revenue models, so that relevant thickets can be deliberately designed.
Finally, IP is evolving: it can be seen both as a competitive asset and as a means to strategically create ecosystems of innovators. Applying design theory to IP management opens up new areas of research seeking new models for the strategic management of IP design for innovative ecosystems.
"5162",
"1111",
"1099"
] | [
"39111",
"39111",
"39111"
] |
01482005 | en | [
"chim"
] | 2024/03/04 23:41:48 | 2011 | https://hal.univ-lorraine.fr/hal-01482005/file/C13_hetero_relax.pdf | Daniel Canet
Sabine Bouguet-Bonnet
Sébastien Leclerc
Mehdi Yemloul
Carbon-13 Heteronuclear Longitudinal Spin Relaxation for Geometrical (and Stereochemical) Determinations in Small or Medium Size Molecules
Keywords: carbon-13 spin relaxation, T 1 measurements, nuclear Overhauser effect, rotation-diffusion tensor, HOESY experiments
T2 refers to the decay of the nuclear magnetization components perpendicular to B0, the static magnetic field. T1 and T2 are involved in the Bloch equations 1 which predict a mono-exponential evolution of the nuclear magnetization components. It turns out that, when two spins ½, A and B, are coupled by dipolar interactions, their longitudinal relaxation is no longer mono-exponential. The longitudinal components of their magnetizations (or rather their polarizations) are coupled by a cross-relaxation rate denoted by σAB which depends solely on their mutual dipolar interaction. Hence the interest brought to this parameter, from which arises the so-called nuclear Overhauser effect 2 (nOe) from
Overhauser who discovered that polarization of an electron spin could be partly transferred to a nuclear spin through a cross-relaxation rate. Note that we are interested here in mutual transfers from one nuclear spin to another nuclear spin. Of course, there is an enormous literature about proton-proton nOe, especially through the NOESY 3 (Nuclear Overhauser Effect SpectroscopY) two-dimensional experiment. The latter is routinely used for determining distance correlations in all types of molecules, noticeably in macromolecules of biological interest (proteins, nucleic acids…), these correlations being invaluable in view of determining, for instance, the tertiary structure of proteins. In the present review, we shall limit ourselves to 13 C-1 H nOe and more especially to the exploitation of cross-relaxation rates in view of geometrical (or stereochemical)
determinations. This means that we also disregard the classical use of the heteronuclear nOe factor which enables us to evaluate the amount of non dipolar contributions to the longitudinal relaxation rates 4 . For achieving such an objective, a firm theoretical background is required. It will be provided in section 2 by a brief overview of longitudinal nuclear spin relaxation and, in section 3, by the presentation of Solomon equations which constitute the basis of nOe studies. We shall also see in this section that, besides crossrelaxation rates, cross-correlation rates, which couple polarizations and longitudinal order,
Abstract Owing to an extremely abundant literature making use of spin relaxation for structural studies, this review is limited to carbon-13 spectroscopy, to small or medium size molecules, to stereochemical and preferably geometrical determinations. The parameter of choice is evidently the Nuclear Overhauser effect (nOe) because it depends exclusively on the dipolar interaction mechanism, thus on 1/r^6, where r is the distance between the two interacting spins. However, it depends also on the dynamical features of the system under investigation which must be characterized prior to any attempt for obtaining geometrical or stereochemical information. Therefore, this review is devoted not only to 1H-13C nOe but, more generally, to 13C longitudinal relaxation. After comprehensive theoretical developments, experimental methods presently available will be presented. The latter include the usual gated decoupling experiment and pulse experiments of the HOESY (Heteronuclear Overhauser Effect SpectroscopY) family. These pulse experiments, which imply carbon-13 observation, can be one-dimensional, selective one-dimensional or two-dimensional. The emphasis will be put on the interpretation which is different according to the occurrence or not of extreme narrowing conditions. Along with a literature survey, some selected examples will be presented in detail in order to illustrate the potentiality of the method.
Introduction
Spin relaxation in NMR is known to provide information about the dynamics of molecular entities and possibly about molecular geometry or electron distribution. Generally, dynamical information is obtained if the tensor of the relevant relaxation mechanism is known from independent determinations. Conversely, if parameters describing the dynamics of the considered molecule have been deduced beforehand, geometrical parameters may be derived. Only in particular situations, one can hope to access both types of parameters (dynamical and geometrical). For instance, this can occur when relaxation parameters become frequency dependent (i.e. dependent on the static magnetic field value at which measurements are performed). This method, sometimes dubbed "relaxometry", may yield, independently of the relaxation mechanism details, a so-called "spectral density mapping" which contains the major dynamical features of the considered molecule. In turn, when inserted in the theoretical expression of a given relaxation rate (the inverse of the corresponding relaxation time), it can provide some geometrical parameters.
The dipolar interaction (or, in other terms, the interaction between the magnetic moments associated with two nuclear spins) is the relaxation mechanism of choice when one attempts to access interatomic distances (and sometimes bond angles). As it will be explained later, this mechanism brings a contribution to relaxation rates proportional to 1/r^6, where r is the distance between the two interacting spins. The problem is that the two usual relaxation times depend on other relaxation mechanisms and that it is not always easy to separate the dipolar contribution. Let us recall that the two usual relaxation times include T1, the longitudinal or spin-lattice relaxation time, which refers to the recovery of the longitudinal nuclear magnetization component (the component along the static magnetic field direction), and T2, the transverse or spin-spin relaxation time. Cross-correlation rates, which couple polarizations and longitudinal order, can be accounted for in extended Solomon equations. In section 4, we shall concentrate on spectral densities which represent the dynamical part of cross-relaxation or cross-correlation rates and which must be considered carefully if geometrical or stereochemical information has to be derived from cross-relaxation or cross-correlation rates. Sections 3 and 4 represent actually a sort of manual for interpreting heteronuclear (intramolecular)
proton-carbon-13 relaxation data. Experimental procedures for measuring 13 C-1 H crossrelaxation or cross-correlation rates will be presented in section 5, while some selected examples will be detailed in section 6 along with a survey of literature on the subject of the present review.
2. Overview of longitudinal nuclear spin relaxation [START_REF] Canet | Nuclear Magnetic Resonance: Concepts and Methods[END_REF][START_REF] Kowalewski | Nuclear spin relaxation in liquids: theory, experiments and applications[END_REF] At thermal equilibrium, the nuclear magnetization M0 lies along the B0 direction, generally denoted as z. When nuclear magnetization has been taken out from its equilibrium state (with the help of a radio-frequency field), its longitudinal component Mz tends to recover toward M0. As for any physical system, this is a relaxation phenomenon which should originate from a perturbation comparable to the radio-frequency field which has been responsible for the non equilibrium situation. The latter is time dependent and perfectly coherent. Moreover, it acts at a well defined frequency.
Conversely, a given nuclear spin is subjected within the sample to randomly fluctuating magnetic fields arising from other spins or from whatever interaction that the considered spin could experience. These randomly fluctuating fields b(t) could play the same role as radio-frequency field, but in a reverse way, in order to restore the longitudinal magnetization toward its equilibrium value. These random fields are effectively time dependent with a zero mean value:
0 ) ( t b
(actually, the bar denotes an ensemble average). They should also present some degree of coherence which can be evaluated by the so-called correlation function
) 0 ( ) ( b t b
. This can be seen from the following a contrario argument: if the random field at time t is independent of the random field at time zero, one has
0 ) 0 ( ) ( ) 0 ( ) ( b t b b t b
. Thus the correlation function is the key to the efficiency of a given relaxation mechanism. It remains to determine at which frequencies this correlation function is active. This feature is deduced from the Fourier transform of the correlation function. This latter quantity is called spectral density and can be expressed as
dt e b t b J t i ) 0 ( ) ( ) ( (1)
As we shall see, all relaxation rates are expressed as linear combinations of spectral densities. We shall retain the two relaxation mechanisms which are involved in the present study: the dipolar interaction and the so-called chemical shift anisotropy (csa) which can be important for carbon-13 relaxation. We shall disregard all other mechanisms because it is very likely that they will not affect carbon-13 relaxation. Let us denote by R1 the inverse of T1. R1 governs the recovery of the longitudinal component of polarization, Iz, and, of course, the usual nuclear magnetization which is simply the nuclear polarization times the gyromagnetic constant γ. The relevant evolution equation is one of the famous Bloch equations 1, valid, in principle, for a single spin but which, in many cases, can be used as a first approximation.
$$ \frac{dI_z}{dt} = -R_1\,(I_z - I_{eq}) \qquad (2) $$
Dipolar interaction
The classical interaction energy of two magnetic dipoles oriented along the direction of the static magnetic field (figure 1) is given by
$$ E_d = \frac{\mu_A\,\mu_B}{r_{AB}^{3}}\,(3\cos^2\theta - 1) \qquad (3) $$
Figure 1. Magnetic dipoles associated with nuclear spins in the presence of B 0 .
As a consequence, the local field acting on A is of the form
$$ b(t) = \frac{\mu_B}{r_{AB}^{3}}\,[3\cos^2\theta(t) - 1] \qquad (4) $$
since the time dependence arises from the angle θ and is related to the molecular reorientation (molecular tumbling and possibly internal motions). As a matter of fact, the problem must be treated by quantum mechanics so as to introduce the proper spin operators and, doing so, we obtain the following spectral density actually involved in relaxation rates
$$ J_d(\omega) = K_d\,\frac{\tilde{J}(\omega)}{r_{AB}^{6}}, \qquad \tilde{J}(\omega) = \frac{5}{4}\int_{-\infty}^{+\infty} \overline{[3\cos^2\theta(t) - 1]\,[3\cos^2\theta(0) - 1]}\;e^{-i\omega t}\,dt \qquad (5) $$
In (5), $\tilde{J}(\omega)$ is independent of the relaxation mechanism and the constant K_d is given by
$$ K_d = \frac{1}{20}\left(\frac{\mu_0}{4\pi}\right)^{2}\gamma_A^{2}\,\gamma_B^{2}\,\hbar^{2} \qquad (6) $$
where μ0 is the vacuum permeability, γA and γB the gyromagnetic constants and ħ the Planck constant divided by 2π. Note that, in (4) and (5), r is expressed in Å.
In (5), we can first notice i) the factor 1/r^6 which makes the spectral density very sensitive to the interatomic distance, and ii) the dynamical part which is the Fourier transform of a correlation function involving the Legendre polynomial. We shall denote this Fourier transform by $\tilde{J}(\omega)$ (we shall dub this quantity "normalized spectral density").
For calculating the relevant longitudinal relaxation rate, one has to take into account the transition probabilities in the energy diagram of a two-spin system. In the expression below, the first term corresponds to the double quantum transition (DQ), the second term to single quantum transitions (1Q) and the third term to the zero quantum transition (ZQ).
$$ R_{1,d}^{A} = \frac{K_d}{r_{AB}^{6}}\,\bigl[\,6\,\tilde{J}_d(\omega_A + \omega_B) + 3\,\tilde{J}_d(\omega_A) + \tilde{J}_d(\omega_A - \omega_B)\,\bigr] \qquad (7) $$
Chemical shift anisotropy (csa)
This mechanism arises from the asymmetry of the shielding tensor. Let us recall that the chemical shift in NMR has its origin in the screening (shielding) of the static magnetic field 0 B by the electronic distribution at the level of the considered nucleus. This effect proceeds in fact from a tensorial quantity and the screening coefficient which defines the chemical shift in the liquid phase is just one third of the (Cartesian) tensor trace. Now, as far as relaxation is concerned, we have to consider the whole tensor and, more precisely, the system of its principal molecular axes (x, y, z) in which the tensor is diagonal. Let us denote by Z the direction of 0 B .
Figure 2. Principal axis system of a shielding tensor assumed to be of axial symmetry.
The static magnetic field sensed by the considered nucleus is given by $B_0(1 - \sigma_{ZZ})$, where σZZ is the shielding for a given molecular orientation (figure 2). Of course, σZZ which is defined in the laboratory frame must be expressed as a function of the shielding tensor in a molecular frame where it is diagonal (the principal axis system). Since the latter rotates with respect to the laboratory frame, it is conceivable that the shielding effect can constitute a relaxation mechanism. For simplicity we shall assume a shielding tensor of axial symmetry (figure 2). σZZ can be expressed as
$$ \sigma_{ZZ} = \sigma + \frac{\Delta\sigma}{3}\,(3\cos^2\theta - 1) \qquad (8) $$
where we have introduced the isotropic shielding coefficient (responsible for the chemical shift in liquid phase, hence the appellation "isotropic chemical shift")
$$ \sigma = \frac{\sigma_{//} + 2\,\sigma_{\perp}}{3} \qquad (9) $$
and the anisotropy of the shielding tensor (usually called "chemical shift anisotropy")
$$ \Delta\sigma = \sigma_{//} - \sigma_{\perp} \qquad (10) $$
The time dependence results evidently from the angle θ and, considering from a quantum mechanical point of view the interaction between a magnetic moment and the static magnetic field B0 (Zeeman term), we can invoke a local field of the form
$$ b(t) \propto \Delta\sigma\,B_0\,[3\cos^2\theta(t) - 1] \qquad (11) $$
Thus, the relevant spectral density, here equal to the longitudinal relaxation rate, is given by
$$ R_{1,csa} = J_{csa}(\omega) = \frac{1}{15}\,(\Delta\sigma)^{2}\,(\gamma B_0)^{2}\,\tilde{J}(\omega) \qquad (12) $$
$\tilde{J}(\omega)$ refers to the motion of the tensor symmetry axis. An immediate consequence of (12) is the proportionality of the csa contribution to the square of the static magnetic field value, meaning that it can be safely neglected, in the case of carbon-13, up to 4.7 T (proton resonance frequency of 200 MHz). Another important feature of csa relaxation is its dependence with respect to the anisotropy Δσ (expressed here in ppm), which is weak for aliphatic carbons but of the order of 150-200 ppm for ethylenic, aromatic or carbonyl carbons.
In the general case, when the shielding tensor is not of axial symmetry, in place of its diagonal elements one uses the anisotropy
$$ \Delta\sigma = \sigma_{zz} - (\sigma_{xx} + \sigma_{yy})/2 $$
and the asymmetry parameter
$$ \eta = (\sigma_{xx} - \sigma_{yy})/\sigma_{zz} = (3/2)\,(\sigma_{xx} - \sigma_{yy})/\Delta\sigma $$
The second form of η arises from the fact that, as far as relaxation is concerned, the shielding tensor can be defined with respect to any time-independent reference (which therefore will not act as a relaxation mechanism). σiso is such a reference and will be taken as zero, hence the second form of η. Now, if the molecular reorientational motion is isotropic, (12) is transformed into
$$ R_{1,csa} = \frac{1}{15}\,(\gamma B_0)^{2}\,(\Delta\sigma)^{2}\,\Bigl(1 + \frac{\eta^{2}}{3}\Bigr)\,\tilde{J}(\omega) \qquad (13) $$
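To give an order of magnitude for the field dependence just discussed, the short sketch below evaluates (13) with the isotropic-tumbling spectral density 2τc/(1 + ω²τc²) that will be derived in section 4. The anisotropy (180 ppm), the correlation time (20 ps) and the field values are illustrative assumptions, not numbers taken from this review.

```python
import math

GAMMA_C = 6.728284e7        # 13C gyromagnetic ratio (rad s-1 T-1)

def j_iso(omega, tau_c):
    """Normalized spectral density for isotropic tumbling, eq (32)."""
    return 2.0 * tau_c / (1.0 + (omega * tau_c) ** 2)

def r1_csa(B0, delta_sigma_ppm, tau_c, eta=0.0):
    """csa contribution to 13C longitudinal relaxation, eq (13)."""
    d_sigma = delta_sigma_ppm * 1e-6          # ppm -> dimensionless shielding
    omega_c = GAMMA_C * B0                    # 13C Larmor frequency (rad/s)
    return (1.0 / 15.0) * (GAMMA_C * B0) ** 2 * d_sigma ** 2 \
           * (1.0 + eta ** 2 / 3.0) * j_iso(omega_c, tau_c)

# Illustrative numbers (assumed): aromatic-like carbon in a small, rapidly tumbling molecule
for B0 in (4.7, 9.4, 18.8):                   # Tesla (200, 400, 800 MHz 1H)
    print(f"B0 = {B0:5.1f} T  ->  R1(csa) = {r1_csa(B0, 180.0, 20e-12):.2e} s-1")
```

The quadratic growth of the printed rates with B0 simply restates the B0^2 dependence noted below (12).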
Cross-relaxation (and cross-correlation). Solomon equations
Simple Solomon equations 7
Whenever the system is no longer constituted by single non-interacting spins, the simple Bloch equation (2) must be completed by additional coupling terms. Let us consider the dipolar interaction between two spins ½, A and B. This interaction is responsible for a biexponential evolution of their polarizations which is accounted for by two simultaneous differential equations called Solomon equations
$$ \frac{dI_z^A}{dt} = -R_1^A\,(I_z^A - I_{eq}^A) - \sigma_{AB}\,(I_z^B - I_{eq}^B) $$
$$ \frac{dI_z^B}{dt} = -R_1^B\,(I_z^B - I_{eq}^B) - \sigma_{AB}\,(I_z^A - I_{eq}^A) \qquad (14) $$
The coupling term, traditionally denoted by σAB (which has however nothing to do with the screening coefficient of section 2.2), is the so-called cross-relaxation rate; it is a relaxation parameter which depends exclusively on the dipolar interaction between nuclei A and B, contrary to auto-relaxation rates which are composed of several contributions.
For instance, if A is a carbon-13, the auto-relaxation rate can always be written as
$$ R_1^A = R_{1,dip}^A + R_{1,csa}^A + R_{1,others}^A \qquad (15) $$
In (15), $R_{1,others}^A$ encompasses all secondary interactions which are not included in the first two terms (for instance the interaction with an unpaired electron, the spin-rotation interaction…). By contrast, the expression of the cross-relaxation rate is simply
$$ \sigma_{AB} = \frac{K_d}{r_{AB}^{6}}\,\bigl[\,6\,\tilde{J}_d(\omega_A + \omega_B) - \tilde{J}_d(\omega_A - \omega_B)\,\bigr] \qquad (16) $$
This is the beauty of this quantity which provides specifically a direct geometrical information (through 1/r_AB^6) provided that the dynamical part of (16) can be inferred from appropriate experimental determinations. This cross-relaxation rate, first discovered by Overhauser in 1953 about proton-electron dipolar interactions 8, led to the so-called nuclear Overhauser effect (nOe) in the case of nucleus-nucleus dipolar interactions, and has found tremendous applications in NMR 2. As a matter of fact, this review is purposely limited to the determination of proton-carbon-13 cross-relaxation rates in small or medium-size molecules and to their interpretation.
The simplest way to measure the proton-carbon-13 cross-relaxation rate is to saturate the proton transitions by means of decoupling procedures, which are normally used to remove the effect of proton-carbon J couplings from carbon-13 spectra, thus leading to a single carbon peak (provided there exists no J coupling with other nuclei). Referring to the first of equations ( 14) and supposing that A is the carbon-13 nucleus, B being the proton coupled by dipolar interaction to A, one has
$$ \frac{dI_z^A}{dt} = -R_1^A\,(I_z^A - I_{eq}^A) + \sigma_{AB}\,I_{eq}^B \qquad (17) $$
Assuming further that decoupling has been turned on a long time ago before the carbon-13 measurement so that a steady state is reached, one ends up with a steady state carbon polarization, $I_{st}^A$, obtained by setting $dI_z^A/dt$ to zero:
$$ I_{st}^A = I_{eq}^A\left(1 + \frac{\gamma_B}{\gamma_A}\,\frac{\sigma_{AB}}{R_1^A}\right) \qquad (18) $$
Now, it can be seen from equation (17) that a simple relaxation measurement under decoupling conditions yields directly $R_1^A$ whereas $I_{eq}^A$ can be measured by a standard experiment described in section 5. One then arrives at the so-called nOe factor given by
$$ \eta = \frac{I_{st}^A - I_{eq}^A}{I_{eq}^A} = \frac{\gamma_B}{\gamma_A}\,\frac{\sigma_{AB}}{R_1^A} \qquad (19) $$
It can be noticed that the maximum nOe factor (2 when A is a carbon-13 and B a proton) is reached under extreme narrowing (see section 6) conditions and if $R_1^A$ arises exclusively from the A-B dipolar interaction. On the other hand, the cross-relaxation rate σAB is easily deduced from the nOe factor and from the A specific relaxation rate
$$ \sigma_{AB} = \eta\,\frac{\gamma_A}{\gamma_B}\,R_1^A \qquad (20) $$
In spite of the apparent simplicity of the method, its drawback comes from the fact that a two-spin system has been assumed. It provides merely global information spanning all protons prone to interact by dipolar coupling with the considered carbon. Selective information requires pulsed experiments stemming from the general solution of equations (14) given below.
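As a minimal numerical illustration of (19)-(20), the following sketch converts a hypothetical measured nOe factor and carbon R1 into a cross-relaxation rate and compares the nOe factor with its theoretical maximum γH/(2γC); the "measured" values are invented for the example.

```python
GAMMA_H = 2.675222e8   # 1H gyromagnetic ratio (rad s-1 T-1)
GAMMA_C = 6.728284e7   # 13C gyromagnetic ratio (rad s-1 T-1)

def sigma_from_noe(eta, R1_carbon):
    """Cross-relaxation rate from the nOe factor and the carbon R1, eq (20)."""
    return eta * (GAMMA_C / GAMMA_H) * R1_carbon

# Hypothetical measured values for a protonated carbon (not taken from the text)
eta_meas, R1_meas = 1.6, 0.45          # nOe factor (dimensionless), R1 in s-1
sigma_CH = sigma_from_noe(eta_meas, R1_meas)
print(f"sigma(CH) = {sigma_CH:.3f} s-1")
print(f"fraction of the maximum nOe (gammaH/2gammaC): {eta_meas / (GAMMA_H / (2 * GAMMA_C)):.2f}")
```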
For this purpose, let us define A by $A = I_z^A - I_{eq}^A$ and B by $B = I_z^B - I_{eq}^B$, with $A(0) = K_A$ and $B(0) = K_B$. Solomon equations can then be written as
$$ \frac{dA}{dt} = -R_1^A\,A - \sigma_{AB}\,B, \qquad \frac{dB}{dt} = -R_1^B\,B - \sigma_{AB}\,A \qquad (21) $$
Their solution is as follows
$$ A = a_+\,e^{\lambda_+ t} + a_-\,e^{\lambda_- t}, \qquad B = b_+\,e^{\lambda_+ t} + b_-\,e^{\lambda_- t} \qquad (22) $$
$\lambda_+$ and $\lambda_-$ are both negative and are expressed as
$$ \lambda_{\pm} = -\frac{1}{2}\Bigl[(R_1^A + R_1^B) \mp \sqrt{(R_1^A - R_1^B)^2 + 4\,\sigma_{AB}^2}\Bigr] \qquad (23) $$
The coefficients $a_+$, $a_-$, $b_+$, $b_-$ depend on all relaxation parameters and, of course, on the initial conditions. One has
$$ b_{\pm} = -\frac{\sigma_{AB}\,a_{\pm}}{\lambda_{\pm} + R_1^B} \qquad (24) $$
and
$$ a_{\pm} = \frac{K_A/(\lambda_{\mp} + R_1^B) + K_B/\sigma_{AB}}{1/(\lambda_{\mp} + R_1^B) - 1/(\lambda_{\pm} + R_1^B)} \qquad (25) $$
The crucial point arises from $\lambda_+$ which may become very small in magnitude, thus leading to the little known property of long lived states. For this purpose let us assume that $R_1^B$, in our case the proton longitudinal relaxation rate, is much greater than $R_1^A$ and $\sigma_{AB}$, a common situation which leads to a quantity which can be very small
$$ \lambda_+ \simeq -\Bigl(R_1^A - \frac{\sigma_{AB}^2}{R_1^B}\Bigr) \qquad (26) $$
Conversely, $|\lambda_-| \simeq R_1^B$ is large and governs the first part of the evolution. When the corresponding term in (22) has almost completely decayed to zero, it essentially remains the term involving $\lambda_+$, which generates a long lasting signal. The amplitude of the latter (and therefore the possibility of its detection) depends on initial conditions via the coefficient $a_+$.
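The following sketch implements the analytical solution (22)-(25) for assumed rate constants chosen so that R1B is much larger than R1A and σAB, and checks that the slow eigenvalue approaches the approximation (26); all numerical values are arbitrary illustrations.

```python
import numpy as np

def solomon_solution(R1A, R1B, sigma, KA, KB, t):
    """Bi-exponential solution of the Solomon equations, eqs (22)-(25)."""
    s = np.sqrt((R1A - R1B) ** 2 + 4.0 * sigma ** 2)
    lam_p = -0.5 * ((R1A + R1B) - s)          # slow (small) eigenvalue, eq (23)
    lam_m = -0.5 * ((R1A + R1B) + s)          # fast eigenvalue
    up, um = 1.0 / (lam_p + R1B), 1.0 / (lam_m + R1B)
    a_p = (KA * um + KB / sigma) / (um - up)  # eq (25)
    a_m = (KA * up + KB / sigma) / (up - um)
    b_p = -sigma * a_p / (lam_p + R1B)        # eq (24)
    b_m = -sigma * a_m / (lam_m + R1B)
    A = a_p * np.exp(lam_p * t) + a_m * np.exp(lam_m * t)
    B = b_p * np.exp(lam_p * t) + b_m * np.exp(lam_m * t)
    return A, B, lam_p, lam_m

# Assumed rates (s-1): fast-relaxing proton (B), slow carbon (A), weak cross-relaxation
R1A, R1B, sigma = 0.05, 1.0, 0.04
t = np.linspace(0.0, 60.0, 7)
A, B, lam_p, lam_m = solomon_solution(R1A, R1B, sigma, KA=-2.0, KB=0.0, t=t)
print("lambda_+ =", lam_p, " approx -(R1A - sigma**2/R1B) =", -(R1A - sigma ** 2 / R1B))
print("A(t):", np.round(A, 4))
```

The slowly decaying tail of A(t) is the long-lived component discussed above; its weight is governed by a_+.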
Longitudinal spin order. Extended Solomon equations
So far, we have not considered the so-called longitudinal two-spin order, represented by the product operator 9 $2I_z^A I_z^B$. Recognizing that the longitudinal order (denoted AB below) is zero at equilibrium, and keeping the definitions of A and B used in (21), the extended Solomon equations read
$$ \frac{dA}{dt} = -R_1^A\,A - \sigma_{AB}\,B - \delta_{(csa\,A),d}\,AB $$
$$ \frac{dB}{dt} = -R_1^B\,B - \sigma_{AB}\,A - \delta_{(csa\,B),d}\,AB $$
$$ \frac{dAB}{dt} = -R_1^{AB}\,AB - \delta_{(csa\,A),d}\,A - \delta_{(csa\,B),d}\,B \qquad (27) $$
$R_1^{AB}$ is the longitudinal order auto-relaxation rate which may depend on all relaxation mechanisms affecting the spin system. We give below the dipolar and csa contributions
$$ R_1^{AB} = \frac{K_d}{r_{AB}^{6}}\,\bigl[\,3\,\tilde{J}_d(\omega_A) + 3\,\tilde{J}_d(\omega_B)\,\bigr] + \frac{1}{15}(\Delta\sigma_A)^2(\gamma_A B_0)^2\,\tilde{J}_{csa(A)}(\omega_A) + \frac{1}{15}(\Delta\sigma_B)^2(\gamma_B B_0)^2\,\tilde{J}_{csa(B)}(\omega_B) \qquad (28) $$
Conversely, the cross-correlation rates depend solely on the csa mechanism and on the dipolar interaction which is of prime importance here. It arises in fact from correlation functions of the form
$\overline{b'(t)\,b(0)}$, where $b'(t)$ refers to the csa mechanism whereas $b(t)$ refers to the dipolar interaction. One has
$$ \delta_{(csa\,A),d} = \frac{1}{5}\,\Bigl(\frac{\mu_0}{4\pi}\Bigr)\,\frac{\gamma_A^{2}\,\gamma_B\,\hbar\,\Delta\sigma_A\,B_0}{r_{AB}^{3}}\;\tilde{J}_{(csa\,A),d}(\omega_A) \qquad (29) $$
The various symbols have the same meaning as before while the spectral density $\tilde{J}_{(csa\,A),d}(\omega_A)$ will be discussed in section 4. For the moment, let us state that these cross-correlation rates can play a role only if the csa mechanism is important (i.e. for non aliphatic carbons but certainly not for protons) and if measurements are performed at high field (due to the presence of B0 in (29)). This means that, if A is a carbon-13 and B a proton,
$\delta_{(csa\,B),d}$ can be safely neglected in (27). Nevertheless, the exploitation of (27), which involves three simultaneous differential equations, relies necessarily on numerical procedures. In order to convince the reader that extended Solomon equations are not purely theoretical, we provide in figure 3 an illustrative example showing the longitudinal order build-up.
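As an illustration of such a numerical treatment, the sketch below integrates the three coupled equations (27) with scipy for an arbitrary set of relaxation and cross-correlation rates (the csa(B)-dipolar term being set to zero for a proton, as argued above); it is only meant to show how a longitudinal-order term builds up from inverted carbon polarization.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed relaxation parameters (s-1); delta couples polarization and longitudinal order
R1A, R1B, R1AB = 0.3, 1.2, 0.9
sigma, delta_A, delta_B = 0.05, 0.08, 0.0   # csa(B)-dipolar term neglected for a proton

def extended_solomon(t, y):
    """Right-hand side of the extended Solomon equations, eq (27)."""
    A, B, AB = y
    dA = -R1A * A - sigma * B - delta_A * AB
    dB = -R1B * B - sigma * A - delta_B * AB
    dAB = -R1AB * AB - delta_A * A - delta_B * B
    return [dA, dB, dAB]

# Initial state: inverted carbon polarization, proton at equilibrium, no longitudinal order
y0 = [-2.0, 0.0, 0.0]
t_eval = np.linspace(0.0, 10.0, 6)
sol = solve_ivp(extended_solomon, (0.0, 10.0), y0, t_eval=t_eval, rtol=1e-8)
for t, A, B, AB in zip(sol.t, *sol.y):
    print(f"t = {t:5.1f} s   A = {A:8.4f}   B = {B:8.4f}   AB = {AB:8.4f}")
```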
Figure 3. Creation of the longitudinal order by cross-correlation as a function of the mixing time t m which follows the inversion of a carbon-13 doublet (due to a J coupling with a bonded proton). The read-pulse transforms the longitudinal polarization into an inphase doublet and the longitudinal order into an antiphase doublet. The superposition of these two doublets leads to the observation of an asymmetric doublet.
Finally, it can be noted that there also exist dipolar-dipolar cross-correlation rates which involve two different dipolar interactions. These quantities may play a role, for instance, in the carbon-13 longitudinal relaxation of a CH 2 grouping 11,12 . Due to the complexity of the relevant theory and to their marginal effect under proton decoupling conditions, they will be disregarded in the following.
Spectral densities
As seen from the above theoretical developments, accessing geometrical (and stereochemical) information implies at least an estimation of the dynamical part of the various relaxation parameters. The latter is represented by spectral densities which rest on the calculation of the Fourier transform of auto-or cross-correlation functions.
Conversely, inter-molecular dipolar interactions would imply complex models involving, among other things, translational diffusion processes 13 . This aspect of nuclear spin relaxation will not be considered here.
Isotropic tumbling of a rigid molecule
Small or medium size molecules can generally be assumed, to a first approximation, to reorient isotropically, the correlation function involved in the normalized spectral density then being a decaying exponential
$$ \overline{[3\cos^2\theta(t) - 1]\,[3\cos^2\theta(0) - 1]} \propto e^{-t/\tau_c} \qquad (30) $$
τc is called the correlation time and is equal to $1/(6 D_r)$ where Dr is the rotational diffusion coefficient. For a simple Brownian motion, Dr can be expressed as a function of the radius a of the molecule (assumed to be spherical) and of the solvent viscosity η. This is the well-known Stokes-Einstein equation
$$ D_r = \frac{k_B T}{8\pi\,\eta\,a^{3}} \qquad (31) $$
where kB is the Boltzmann constant and T the absolute temperature. Qualitatively, τc can be viewed as the time necessary for a reorientation by one radian. τc is very weak (10^-11 - 10^-12 s) for small size molecules in non viscous solvents. Conversely, for large molecules (such as proteins in aqueous solution), it can reach much more important values (10^-9 s or higher). All (normalized) auto-correlation spectral densities have the same expressions since, in the molecule, all directions are equivalent
$$ \tilde{J}(\omega) = \frac{2\tau_c}{1 + \omega^{2}\tau_c^{2}} \qquad (32) $$
(32) is as well valid for dipolar or csa autocorrelation spectral densities. As ω is an NMR frequency, thus smaller than 6.3 10^9 rad s^-1, $\tilde{J}(\omega)$ is generally frequency independent for small or medium size molecules ($\omega^2\tau_c^2 \ll 1$) and simply equal to 2τc. Such a situation is called extreme narrowing. However, depending on the medium or on possible molecular associations, extreme narrowing conditions may no longer prevail and τc can be easily deduced from the evolution of relaxation parameters with the measurement frequency. This method, called relaxometry, can be employed whenever τc becomes larger than 10^-10 s (see figure 4). Figure 4 displays normalized spectral densities as a function of ω/2π for different values of the correlation time (in s); they all start from 1 just to highlight their dependence with respect to the measurement frequency.
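Putting the preceding formulas together, the following sketch estimates, for a hypothetical small molecule (3 Å radius, water-like viscosity, one-bond C-H distance of 1.09 Å), the correlation time from (30)-(31), then the dipolar R1 (7), the cross-relaxation rate (16) and the nOe factor (19) at 9.4 T; under extreme narrowing the nOe factor comes out close to its maximum value of 2, as stated in section 3. All input numbers are assumptions chosen for illustration.

```python
import math

MU0_4PI = 1e-7                  # vacuum permeability / 4*pi (SI)
HBAR = 1.054571817e-34          # J s
KB = 1.380649e-23               # J K-1
GAMMA_H = 2.675222e8            # rad s-1 T-1
GAMMA_C = 6.728284e7            # rad s-1 T-1

def tau_c_stokes(radius, viscosity, T):
    """Correlation time from the Stokes-Einstein relation, eqs (30)-(31)."""
    D_r = KB * T / (8.0 * math.pi * viscosity * radius ** 3)
    return 1.0 / (6.0 * D_r)

def j_iso(omega, tau_c):
    """Normalized spectral density, isotropic tumbling, eq (32)."""
    return 2.0 * tau_c / (1.0 + (omega * tau_c) ** 2)

def dipolar_R1_and_sigma(r_CH, tau_c, B0):
    """13C dipolar R1 (eq 7) and 1H-13C cross-relaxation rate (eq 16)."""
    Kd_over_r6 = 0.05 * (MU0_4PI * GAMMA_H * GAMMA_C * HBAR / r_CH ** 3) ** 2  # Kd/r^6
    wH, wC = GAMMA_H * B0, GAMMA_C * B0
    R1 = Kd_over_r6 * (6 * j_iso(wH + wC, tau_c) + 3 * j_iso(wC, tau_c) + j_iso(wH - wC, tau_c))
    sigma = Kd_over_r6 * (6 * j_iso(wH + wC, tau_c) - j_iso(wH - wC, tau_c))
    return R1, sigma

# Assumed inputs: 3 A molecular radius, water-like viscosity, one-bond C-H distance
tau_c = tau_c_stokes(3e-10, 0.89e-3, 298.0)
R1, sigma = dipolar_R1_and_sigma(1.09e-10, tau_c, 9.4)
print(f"tau_c = {tau_c*1e12:.1f} ps,  R1(dip) = {R1:.3f} s-1,  sigma(CH) = {sigma:.3f} s-1")
print(f"nOe factor (eq 19) = {(GAMMA_H / GAMMA_C) * sigma / R1:.2f}")
```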
Concerning the cross-correlation spectral densities introduced in (29), they are of the same form as (32) with a geometrical factor depending on the angle θd,csa between the two relaxation vectors: the vector joining the two nuclei for the dipolar interaction, and the largest shielding principal axis (or the symmetry axis of the shielding tensor if it is of axial symmetry). One has
$$ \tilde{J}_{(csa\,A),d}(\omega_A) = \frac{1}{2}\,(3\cos^2\theta_{d,csa} - 1)\;\frac{2\tau_c}{1 + \omega_A^{2}\tau_c^{2}} \qquad (33) $$
Anisotropic tumbling of a rigid molecule
If the considered molecule cannot be assimilated to a sphere, one has to take into account a rotational diffusion tensor, the principal axes of which coincide, to a first approximation, with the principal axes of the molecular inertial tensor. In that case, three different rotational diffusion coefficients are needed 14. They will be denoted as DX, DY, DZ and describe the reorientation about the principal axes of the rotational diffusion tensor. They lead to unwieldy expressions even for auto-correlation spectral densities, which can be somewhat simplified if the considered interaction can be approximated by a tensor of axial symmetry, allowing us to define two polar angles θ and φ describing the orientation of the relaxation vector (the symmetry axis of the considered interaction) in the (X, Y, Z) molecular frame (see figure 5). As the tensor associated with dipolar interactions is necessarily of axial symmetry (the relaxation vector being the inter-nuclear vector) and because this review deals essentially with dipolar interactions, we shall limit ourselves to auto-correlation spectral densities in the case of an axially symmetric tensor.
From the pioneering works of Woessner 15 , Huntress 16 and Hubbard 17 , we can derive the following formulae 18
$$ \tilde{J}_{auto}(\omega) = \sum_{k=1}^{5} a_k\,\frac{2\tau_k}{1 + \omega^{2}\tau_k^{2}} \qquad (34) $$
with
$$ \tau_1^{-1} = 4D_X + D_Y + D_Z, \quad \tau_2^{-1} = D_X + 4D_Y + D_Z, \quad \tau_3^{-1} = D_X + D_Y + 4D_Z, $$
$$ \tau_{4,5}^{-1} = 6\bar{D} \pm 2\sqrt{D_X^2 + D_Y^2 + D_Z^2 - D_X D_Y - D_Y D_Z - D_Z D_X}, \qquad \bar{D} = (D_X + D_Y + D_Z)/3 \qquad (35) $$
and coefficients $a_k$ (36) which are trigonometric combinations of the polar angles θ and φ of figure 5 and, for $a_4$ and $a_5$, of the rotational diffusion coefficients themselves.
Fortunately, in the case of a rotational diffusion tensor with axial symmetry (such molecules are denoted "symmetric top"), some simplification occurs. Let us introduce new notations:
$$ D_{//} = D_Z, \qquad D_{\perp} = D_X = D_Y \qquad (37) $$
$$ \tilde{J}_{auto}(\omega) = \frac{1}{4}(3\cos^2\theta - 1)^2\,\frac{2\tau_0}{1 + \omega^2\tau_0^2} + 3\sin^2\theta\cos^2\theta\,\frac{2\tau_1}{1 + \omega^2\tau_1^2} + \frac{3}{4}\sin^4\theta\,\frac{2\tau_2}{1 + \omega^2\tau_2^2} \qquad (38) $$
with $\tau_k^{-1} = 6D_{\perp} + k^{2}(D_{//} - D_{\perp})$, k = 0, 1, 2.
It can be noticed that at least two independent relaxation parameters in the symmetric top case, and three in the case of fully anisotropic diffusion rotation are necessary for deriving the rotation diffusion coefficients, provided that the relevant structural parameters are known and that the orientation of the rotational diffusion tensor has been deduced from symmetry 20 considerations or from the inertial tensor.
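A compact implementation of the symmetric-top spectral density written as (38) above is sketched below; the rotational diffusion coefficients and the 30 degree angle are arbitrary, and the isotropic limit (equal diffusion coefficients) is used as a sanity check against 2τc.

```python
import math

def j_symtop(omega, theta, D_par, D_perp):
    """Auto-correlation spectral density for a symmetric top, eq (38).
    theta: angle between the relaxation vector and the symmetry axis (radians)."""
    c2 = math.cos(theta) ** 2
    coeffs = (0.25 * (3 * c2 - 1) ** 2,          # k = 0
              3.0 * c2 * (1 - c2),               # k = 1
              0.75 * (1 - c2) ** 2)              # k = 2
    j = 0.0
    for k, a_k in enumerate(coeffs):
        tau_k = 1.0 / (6.0 * D_perp + k ** 2 * (D_par - D_perp))
        j += a_k * 2.0 * tau_k / (1.0 + (omega * tau_k) ** 2)
    return j

# Assumed rotational diffusion coefficients (s-1) and a C-H vector at 30 degrees from the axis
D_par, D_perp = 5e10, 1e10
omega_C = 6.728284e7 * 9.4                     # 13C Larmor frequency at 9.4 T (rad/s)
print(f"J(omega_C) = {j_symtop(omega_C, math.radians(30.0), D_par, D_perp):.3e} s")
print(f"isotropic check (D_par = D_perp): "
      f"{j_symtop(omega_C, math.radians(30.0), 1e10, 1e10):.3e} s vs 2*tau_c = {2/(6*1e10):.3e} s")
```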
Local motions. The model free approach
Molecular internal motions are prone to affect relaxation parameters. A first approximation is to assume that they participate in the rotational diffusion anisotropy and to use the formulation of the preceding section. Indeed, for treating the internal rotation of a methyl grouping, one can use 19 an expression very close to (38). It can also be assumed that the overall tumbling and internal motions are independent so that it is possible to devise models which account for the superposition of these two types of motion. These models [START_REF] Woessner | Encyclopedia of Nuclear Magnetic Resonance[END_REF][START_REF] Daragan | [END_REF][22][23][24], depending on a number of parameters which may exceed the number of observables, will not be further detailed. We rather focus on the popular "model free approach", also called the Lipari-Szabo model 25 (the relevant expressions were actually derived earlier by Wennerström et al. 26) which treats pragmatically the superposition of isotropic overall tumbling and local internal motions with a reasonable number of parameters. This approach is based on the fact that internal motions, considered globally, are represented by a correlation time τf (f for fast motions assumed to lead to extreme narrowing conditions) and an order parameter S which reflects their anisotropic character or rather their orientational restriction. S has the same meaning as in liquid crystals, reflecting a partial orientation with respect to a given local director. It is an empirical parameter defined as the mean of
$\frac{1}{2}(3\cos^2\theta - 1)$, θ being the angle between the relaxation vector and the director. It ranges from -0.5 to 1, the latter value corresponding to a uniform orientation while S=0 indicates a pure random orientation (that is no preferred direction). S=-0.5 indicates that all relaxation vectors are perpendicular to the director. The overall tumbling is characterized by a correlation time τs (s for slow, this approach being really interesting when the overall tumbling is outside extreme narrowing conditions). The relevant auto-correlation spectral density, suitable for dipolar interactions (S would be the order parameter of the internuclear direction), is of the form
$$ \tilde{J}_{auto}^{dip}(\omega) = (1 - S^2)\,2\tau_f + S^2\,\frac{2\tau_s}{1 + \omega^{2}\tau_s^{2}} \qquad (39) $$
S, τf and τs have to be deduced from experimental data. It can be noted that the last term of (39) can be substituted by (34) or (38) if the overall tumbling is anisotropic. (39) would also be suitable for the csa contribution to the relaxation rate provided that the csa tensor is of axial symmetry and that one defines a specific order parameter for the relevant symmetry axis. Let β be the angle between the dipolar internuclear direction and the csa symmetry axis. If (39) is associated with the dipolar interaction, the homologous csa spectral density can be written as 27
$$ \tilde{J}_{auto}^{csa}(\omega) = \Bigl[1 - \frac{1}{4}(3\cos^2\beta - 1)^2 S^2\Bigr]\,2\tau_f + \frac{1}{4}(3\cos^2\beta - 1)^2\,S^2\,\frac{2\tau_s}{1 + \omega^{2}\tau_s^{2}} \qquad (40) $$
Similarly 27 , the dipolar-csa cross-correlation spectral density can be expressed as follows
$$ \tilde{J}_{cross}^{dip,csa}(\omega) = \frac{1}{2}(3\cos^2\beta - 1)\Bigl[(1 - S^2)\,2\tau_f + S^2\,\frac{2\tau_s}{1 + \omega^{2}\tau_s^{2}}\Bigr] \qquad (41) $$
The Lipari-Szabo approach has been essentially used in the study of large biomolecules [START_REF] Daragan | [END_REF][22][23][24]28 , less often for the medium size molecules as will be discussed in section 6.
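The model-free spectral density (39) is straightforward to code; the sketch below combines it with (16) to show how the cross-relaxation rate of a hypothetical chain carbon (assumed S^2 = 0.4, τf = 20 ps, τs = 3 ns) varies with the magnetic field outside extreme narrowing. The parameter values are illustrative only.

```python
import math

GAMMA_H, GAMMA_C = 2.675222e8, 6.728284e7
MU0_4PI, HBAR = 1e-7, 1.054571817e-34

def j_lipari_szabo(omega, S2, tau_f, tau_s):
    """Model-free spectral density as written in eq (39)."""
    return (1.0 - S2) * 2.0 * tau_f + S2 * 2.0 * tau_s / (1.0 + (omega * tau_s) ** 2)

def sigma_CH(r_CH, B0, S2, tau_f, tau_s):
    """1H-13C cross-relaxation rate, eq (16), with the Lipari-Szabo spectral density."""
    Kd_over_r6 = 0.05 * (MU0_4PI * GAMMA_H * GAMMA_C * HBAR / r_CH ** 3) ** 2
    wH, wC = GAMMA_H * B0, GAMMA_C * B0
    return Kd_over_r6 * (6 * j_lipari_szabo(wH + wC, S2, tau_f, tau_s)
                         - j_lipari_szabo(wH - wC, S2, tau_f, tau_s))

# Assumed parameters mimicking a micellized chain carbon: slow overall tumbling + fast local motion
S2, tau_f, tau_s, r_CH = 0.4, 20e-12, 3e-9, 1.09e-10
for B0 in (7.05, 14.1):                       # Tesla (300 and 600 MHz 1H)
    print(f"B0 = {B0:5.2f} T  sigma(CH) = {sigma_CH(r_CH, B0, S2, tau_f, tau_s):+.3f} s-1")
```

The field dependence printed here is precisely what motivates measurements at two magnetic fields for such systems.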
Experimental procedures
From now onward, we shall assume natural abundance for carbon isotopes, so that the signal we have to deal with arises from a single carbon-13 in the molecule under investigation.
Moreover, most of the experiments described below imply, at one stage or another, proton decoupling for removing any multiplet structure (due to J couplings with protons) in such a way that every carbon gives rise to a singlet (except for J couplings with other nuclei such as 19F, 31P…). Another consequence of proton decoupling is, in principle, the destruction of proton magnetization so that the Solomon equation concerning carbon-13 (see (14)) reduces to
$$ \frac{dI_z^A}{dt} = -R_1^A\,(I_z^A - I_{st}^A) \qquad (42) $$
where $I_{st}^A$ is given by (18). This is an important property meaning that, under continuous proton decoupling, carbon-13 longitudinal relaxation is mono-exponential. Very often (either in exploiting the nOe factor or the Solomon equations), the knowledge of the carbon-13 longitudinal relaxation time will be required. The forthcoming section is thus devoted to its measurement.
Measurement of 13 C longitudinal relaxation time
As explained above, these experiments have to be performed under continuous proton decoupling. The usual method is inversion-recovery 29 (figure 6) which would require, between consecutive experiments or consecutive scans, a waiting time of 5T 1 unless one has recourse to a variant dubbed "Fast Inversion-Recovery" 30 .
Figure 6. The inversion-recovery experiment. The dynamic range is twice the equilibrium magnetization. Saturation-recovery 31 (figure 7) is an interesting alternative in the case of very long longitudinal relaxation times or when a proper inverting pulse is not available. It makes use of a saturation pulse (i.e. capable of destroying the whole carbon-13 magnetization) in place of the π pulse of the inversion-recovery experiment. Since magnetization always starts from zero, no waiting time is needed. The only drawback is a dynamic range half the one of the inversion-recovery experiment.
Figure 7. The saturation-recovery experiment. The dynamic range is equal to the equilibrium magnetization
For both experiments and according to (42), the experimental data can be adjusted according to
$$ S(\tau) = S_{eq}\,(1 - k\,e^{-\tau/T_1}) \qquad (43) $$
1<k<2 for the inversion-recovery experiment (k would be equal to 2 for a perfect 180° pulse and for a waiting time permitting full equilibrium recovery). k is around 1 for the saturationrecovery experiment (it would be strictly equal to 1 in the case of a perfect saturation; in practice, it can be either larger or smaller).
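In practice the adjustment of (43) is a simple non-linear least-squares fit; the sketch below does it with scipy on synthetic inversion-recovery data generated with an assumed T1 of 2 s (the data and noise level are fabricated for the example, not experimental values).

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(tau, S_eq, k, T1):
    """Eq (43): S(tau) = S_eq * (1 - k * exp(-tau/T1))."""
    return S_eq * (1.0 - k * np.exp(-tau / T1))

# Synthetic inversion-recovery data (assumed T1 = 2.0 s, k = 1.9) with a little noise
rng = np.random.default_rng(0)
tau = np.array([0.05, 0.2, 0.5, 1.0, 2.0, 4.0, 8.0, 15.0])
signal = recovery(tau, 100.0, 1.9, 2.0) + rng.normal(0.0, 1.0, tau.size)

popt, pcov = curve_fit(recovery, tau, signal, p0=[signal[-1], 1.5, 1.0])
perr = np.sqrt(np.diag(pcov))
print("S_eq = %.1f +/- %.1f   k = %.2f +/- %.2f   T1 = %.2f +/- %.2f s" % tuple(
    val for pair in zip(popt, perr) for val in pair))
```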
Measurement of the nOe factor
The nOe factor provides a global information, that is the sum of cross-relaxation rates of all protons which can interact by dipolar coupling with the considered carbon. It is therefore very useful when the considered carbon is directly bound to one, two or three protons since the relevant dipolar interaction will overwhelm more remote interactions. As the C-H distance is normally known, the dynamical part of the cross-relaxation rate is, in that case, readily available (see equation (16)) and can serve as a reference for further geometrical determinations. From (19), it can be seen that $I_{st}^A$ and $I_{eq}^A$ have to be measured. In principle, the former is deduced from the usual proton-decoupled 13C spectrum with a waiting time of five times the longest T1 to restore equilibrium magnetization between scans.
However, in experiments for which decoupling is substituted by a pulse train (this is the case in reverse two-dimensional experiments, mainly used for the study of large biomolecules), some caution must be exercised regarding the ability of actually saturating the proton spin system 32 . For obtaining A eq I , it suffices to run the normal experiment but with decoupling switched on during the fid acquisition. This can be achieved 33 that way because i) decoupling is operative instantaneously, ii) nOe does not affect transverse magnetization. However, since the proton spin state is modified during the pulse sequence (by gated decoupling), proton polarization also relaxes so that the recovery toward equilibrium needs a waiting time longer than 5T 1 . Various simulations have shown that a waiting time of 10T 1 was safe in all circumstances 34 . The procedures for measuring A st I and A eq I are schematized in figure 8.
Figure 8. The two separate experiments for experimentally determining the nOe factor (see (19)). DE stands for decoupling. In the early days of carbon-13 spectroscopy, high power decoupling was used and nOe measurements had to be carried out in such a way that temperature was identical for both experiments (interleaved experiments with shift of the proton frequency). With modern spectrometers, such precautions are no longer necessary thanks to decoupling schemes which permit low power decoupling and thanks to accurate temperature control.
The HOESY (Heteronuclear Overhauser Effect Spectroscopy) experiment
Initially, this is a two-dimensional experiment which is supposed to be the heteronuclear homolog of the NOESY experiment. It thus involves 1 H and 13 C pulses as well as proton decoupling during 13 C signal acquisition. This experiment, proposed simultaneously by Rinaldi 35 on the one hand, and Yu and Levy 36,37 on the other hand, provides only cross-peaks which indicate 1 H -13 C distance correlations. Evidently, at the onset, the method provides information about directly bonded 1 H - 13 C nuclei, information which can be obtained through other more sensitive procedures. Its interest lies in other more remote correlations. Indeed, it has been mostly applied to intermolecular dipolar interactions [36][37][38][39][40][41][42][43] . As described later, we shall be rather interested here in intramolecular interactions associated with geometrical information not accessible by other correlation methods. The pulse sequences that we are using are shown in figure 9 and will be now commented. We have found it useful to slightly modify the conventional sequences 44 by inserting a saturation procedure applied to carbon-13 prior to the mixing interval. This has two advantages: i) the state of carbon polarization is perfectly defined at the beginning of the mixing period (it is actually zero), ii) because time averaging is needed, the experiment can be repeated according to the proton relaxation time, usually much shorter than the carbon relaxation time. Furthermore, the 180° phase alternation of the second proton pulse along with the alternation of the acquisition sign has the effect of cancelling any recovery of carbon polarization by carbon relaxation during the mixing time.
As a consequence, pure nOe spectra are obtained. In the 2D mode, the evolution time t 1 plays its usual role for proton chemical shift labelling.
The central pulse refocuses all proton-carbon J couplings (decoupling pulse), so that the 1 dimension corresponds to proton chemical shifts, while the 2 dimension (physically detected) corresponds to carbon chemical shifts. Without that pulse, the correlation peaks appear at the position of the 13 C satellites in the proton spectrum, that is at more than 60 Hz on either side of the proton resonance. This can be a serious advantage for detecting remote correlations 45,46 which appear precisely at the proton resonance position. Amplitude modulation occurs in t 1 so that a two-dimensional spectrum in the absorption mode can be obtained, facilitating the experimental construction of build-up curves (signal amplitude as a function of the mixing time; see figure 10). Thus an accurate measurement of the cross-
[Figure 9 diagrams: the 2D, 1D global and 1D selective (DANTE-Z) HOESY sequences, each comprising a 13C saturation block, a recycle delay of 5 T1(1H), proton pulses bracketing the mixing time τm, and 13C acquisition under proton decoupling (DEC).]
relaxation rates can be achieved. Figure 10. A typical HOESY build-up curve.
Whatever the method for extracting the cross-relaxation rates may be, a reference spectrum is needed. Actually it is furnished by the gated decoupling experiment shown at the bottom of figure 8 with the same number of scans as in the HOESY experiment. Let us denote by $I_{eq}^C$ the quantity which is measured that way. For a first estimation of the cross-relaxation rate, we can have recourse to the initial approximation which makes use of the data points corresponding to small values of the mixing time and seemingly varying linearly with the latter. Referring to equation (22), these data points can be exploited to first order as indicated below
$$ I_z^C(\tau_m) \simeq I_z^C(0) + \Bigl(\frac{dI_z^C}{dt}\Bigr)_{t=0}\tau_m = 4\,\sigma\,I_{eq}^C\,\tau_m \qquad (44) $$
For deriving the above equation, we have taken into account the phase cycling of figure (9) and the fact that $I_{eq}^H = 4\,I_{eq}^C$. The interesting result of (44) is that the cross-relaxation rate can be deduced from the initial slope of the plot of $I_z^C(\tau_m)/I_{eq}^C$ as a function of τm, with the benefit of the factor of 4 appearing in (44), a benefit which would be lost
if we would attempt to perform the experiment the other way, i.e. from carbon-13 toward proton. Anyway, when one has at hand the full buildup curve, it is recommended to fit the experimental data according to equations (22) or possibly to equations (27) if one suspects a contribution from csa-dipolar cross-correlation rates. In the former case, this will provide not only the cross-relaxation rate (the parameter of interest) with an improved accuracy (because it is always difficult to decide what is the best initial slope) but also the longitudinal relaxation rates C R 1 and H R 1 . Notice that H R 1 can be affected by proton-proton cross-relaxation rates among the proton system 46 (the so-called spin diffusion phenomenon). Anyhow, the problem of sensitivity, related to the observation of carbon-13 (the low gyromagnetic constant nucleus), may be addressed. Inverse twodimensional HOESY experiments [47][48][49][50] have been proposed. They consist in transferring nOe from the low gyromagnetic constant nucleus to the more sensitive high gyromagnetic constant nucleus which is detected. In principle this should lead to a sensitivity increase by a factor of 16 in the case of the pair 13 C-1 H. However, as the 1 H- 13 C cross-relaxation rate is 16 times smaller than the 13 C-1 H cross-relaxation rate (see above), there is no net sensitivity increase and these experiments are of little value as far as sensitivity is concerned. Moreover, the gradients used in such experiments for suppressing unwanted signals and selecting coherences of interest may entail a loss of sensitivity by a factor of two, which must be further multiplied by the increase of noise associated with detection at a higher frequency. However, in particular instances, inverse HOESY experiments can reveal correlations not visible in the direct HOESY experiments 48,50 . The true inverse HOESY experiment involves, by means of INEPT sequences, proton polarization transfer prior to the mixing time and proton detection together with the use of gradients as indicated above. This experiment has however been essentially devised for large biomolecules such as proteins 51 . As a matter of fact, because the INEPT transfer process is based on a well defined (large) J coupling, it is well suited for carbon-13 or nitrogen-15 directly bound to (a) proton(s) and leads the cross-relaxation rate associated with the CH or NH bond. The method has been widely used for the amide proton of proteins 32 but, to the best of our knowledge, it has never been applied to the measurement of proton-carbon cross-relaxation rates in the case or small or medium size molecules.
Nevertheless, it could probably be useful outside the extreme narrowing regime, along with measurements performed at different magnetic field values.
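Returning to the carbon-detected build-up analysis of (44), the initial-slope estimate amounts to a linear fit through the origin over the short mixing times; a minimal sketch, with invented build-up amplitudes expressed in units of I_eq^C, is given below.

```python
import numpy as np

# Hypothetical HOESY build-up: carbon signal amplitude (in units of I_eq^C) vs mixing time
tau_m = np.array([0.2, 0.4, 0.6, 0.8, 1.0])          # s, short mixing times only
I_C = np.array([0.031, 0.060, 0.088, 0.118, 0.144])  # I_z^C(tau_m) / I_eq^C (assumed data)

# Eq (44): I_z^C(tau_m)/I_eq^C ~ 4 * sigma * tau_m, so a zero-intercept linear fit gives sigma
slope = np.sum(tau_m * I_C) / np.sum(tau_m ** 2)     # least-squares slope through the origin
sigma_CH = slope / 4.0
print(f"initial slope = {slope:.3f} s-1  ->  sigma(CH) = {sigma_CH:.4f} s-1")
```

For longer mixing times the full bi-exponential expression (22) should of course be fitted instead, as recommended above.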
In difficult situations (weak cross-relaxation rates or strong overlap in the proton spectrum), when using carbon-13 detected HOESY experiments, one faces the unavoidable sensitivity problem. This problem can be alleviated by one-dimensional experiments, the easiest of which is shown in the middle of figure 9. It is a global experiment, in the sense that the proton chemical shift labelling has been suppressed. As a consequence, the measured quantity is the sum of the cross-relaxation rates arising from all protons prone to interact by dipolar coupling with a given carbon-13. It can be noticed that the basic phase cycling (essential for a proper interpretation of the experimental data) has been maintained in this sequence. This experiment can be useful as a prelude of a two-dimensional HOESY experiment requiring the accumulation of many transients, in order to make sure that the spectrometer occupation for a long time will not be wasted. It can also be used quantitatively for carbons experiencing a single dipolar interaction with protons. In that case, it would provide a piece of information comparable to that provided by the nOe factor. Another one-dimensional experiment is a selective one 40 as shown in the bottom of figure 9. This sequence allows one to determine the cross-relaxation rate produced by the selected proton. The DANTE-Z technique 52 is particularly well suited for achieving the selectivity process because it involves inherently the basic phase cycling of the HOESY experiment. Of course, other selective schemes can be used, combined for instance with a TOCSY sequence which can be valuable when the proton of interest (H) is not well separated from other multiplets. In that case, if H belongs to a Jcoupling network possessing an isolated proton (H'), H' is selected at the outset and its polarization is transferred to H by the TOCSY sequence just prior to the mixing interval of the HOESY experiment 53 .
Examples
Stereo and conformational studies
Early conformational studies by HOESY experiments are illustrated by the work of Batta and Köver 54 who were able to access oligosaccharide sequencing and conformational distribution around the glycosidic bond in model compounds. These determinations make use of relayed proton-proton-carbon cross-relaxation.
Another good example of what can be gained from HOESY experiments is provided by the work of Ancian et al. 53 about the preferential conformation of uridine in water. Through the TOCSY-HOESY experiment above described, these authors were able to show unambiguously that the torsion angle of the uracil group around the glycosidic bond lies in a range corresponding to the anti-form of the pyrimidine base with respect to the furanose ring.
The discrimination between E and Z isomers of the compound shown in figure 11 constitutes a further example of the interest of selective HOESY experiments 55. As a final example of this section, we present a HOESY two-dimensional experiment of aqueous micellized sodium octanoate 46. In addition to one-bond 1H-13C correlation peaks, figure 13 exhibits remote correlation peaks which indicate some spatial proximity between a given carbon and protons bound to a nearby carbon. Build-up curves (not shown) have been obtained at two different values of the magnetic field since, as expected, a slow motion corresponding to the micelle overall motion is superposed on local fast motions (segmental and rotational isomerism). The cross-relaxation rates deduced from these build-up curves can be interpreted according to the Lipari-Szabo approach which involves the correlation time τs associated with the global micelle tumbling, the correlation time τf associated with the fast local motions and an order parameter S which describes the restriction of the latter motions (see equation (39)). Attempts to interpret the corresponding build-up curves according to the Lipari-Szabo approach lead to inconsistent results (for instance, order parameters greater than unity). This indicates that these remote correlations are probably not of intramolecular origin but would rather arise from intermolecular dipolar interactions which could become significant when some contacts exist between neighboring aliphatic chains. This can only occur for parts of the chain presenting some flexibility (due to rotational isomerism or segmental motions). This flexibility may not be the same at the C3 or C4 levels and this would explain the different intensities of the H3-C4 and H4-C3 peaks. Thus, in that case, the HOESY experiment brings information about aliphatic chain mobility. Indeed, the part of the chain located near the polar head is known to be rigid and this is confirmed by the absence of H2-C3 and H3-C2 correlations.
Geometrical determinations
During the seventies and the eighties, C-H distances in various medium size molecules were determined through proton-carbon-13 nOe cross-relaxation rates in a semi-quantitative way.
These determinations followed the release of the seminal book by Noggle and Schirmer 56 and, most of the time, rested on the assumption of a single correlation time describing the overall motion of the molecule under investigation. These studies relied on global and selective nOe's. A complete bibliography can be found in the paper by Batta et al. [START_REF] Batta | [END_REF] , who, in addition, treated an AMX spin system where A and M stand for two J-coupled protons and X for the observed carbon-13.
The previous approach is valid as long as the molecular reorientation can be described by a single correlation time. This excludes molecules involving internal motions and/or molecular shapes which cannot, to a first approximation, be assimilated to a sphere. Due to its shape, the molecule shown in figure 15 cannot evidently fulfil the latter approximation and is illustrative of the potentiality of HOESY experiments as far as carbon-proton distances and the anisotropy of molecular reorientation are concerned 45,58 . The HOESY spectrum is displayed in figure 16. It has been obtained in the "J-separated" mode, i.e. without the central pulse in the t 1 interval (see figure 9, top) in such a way that direct correlations (one bond) appear at the location of carbon-13 satellites whereas possible remote correlations are visible at the proton resonance frequency (see section 5.3). A first estimate of the molecular geometry can be deduced from quantum chemical calculations.
Direct correlations (which involve one-bond C-H distances around 1.1 Å) can then be employed for deriving the three rotational diffusion coefficients which reveal a strong reorientational anisotropy: expressed in terms of correlation times, this leads to τx = 18 ± 5 ps, τy = 2.6 ± 0.2 ps and τz = 10.8 ± 1.8 ps. Evidently, it is out of the question to use the approximation of a single correlation time and equations (34)-(36) must be used. The angles appearing in these equations can be assumed from the quantum mechanical calculations and the cross-relaxation rates derived from the build-up curves pertaining to remote correlations allow one to derive the relevant distances. They are collected in table 1 and compared to those given by quantum mechanical calculations and crystallography. The agreement between the three techniques is rather good, bearing in mind that quantum mechanical calculations totally ignore molecular vibrations and that different vibrational averaging should be performed for r_NMR and r_RX.
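For a rough idea of how a distance is extracted from a remote-correlation cross-relaxation rate once the dynamics are fixed, the sketch below inverts (16) assuming, for compactness, a single effective correlation time instead of the full anisotropic treatment (34)-(36); the cross-relaxation rate, field and correlation time are hypothetical.

```python
import math

GAMMA_H, GAMMA_C = 2.675222e8, 6.728284e7
MU0_4PI, HBAR = 1e-7, 1.054571817e-34
KD_PREF = 0.05 * (MU0_4PI * GAMMA_H * GAMMA_C * HBAR) ** 2   # Kd without the 1/r^6 factor (SI)

def j_iso(omega, tau_c):
    """Normalized spectral density, eq (32), with an effective correlation time."""
    return 2.0 * tau_c / (1.0 + (omega * tau_c) ** 2)

def distance_from_sigma(sigma, B0, tau_eff):
    """Invert eq (16) for r once the dynamical part is fixed."""
    wH, wC = GAMMA_H * B0, GAMMA_C * B0
    dyn = 6.0 * j_iso(wH + wC, tau_eff) - j_iso(wH - wC, tau_eff)
    return (KD_PREF * dyn / sigma) ** (1.0 / 6.0)

# Hypothetical remote correlation: sigma = 1e-3 s-1, 9.4 T, 20 ps effective correlation time
r = distance_from_sigma(1e-3, 9.4, 20e-12)
print(f"r(C...H) = {r*1e10:.2f} Angstrom")
```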
In the case of a molecule with low symmetry and a reasonably known geometry, the inverse approach can be considered, i.e. determine not only τx, τy and τz, but also the principal axis system of the rotation-diffusion tensor. The molecule shown in figure 17 is a good example of such a study 59: direct and remote correlations of a HOESY spectrum, along with an assumed geometry, lead to the orientation of the rotation-diffusion tensor principal axis system.
Interestingly, it can be seen that this orientation has nothing to do with either the inertia tensor or the dipole moment direction. Again the reorientation is here strongly anisotropic: τx = 12.0 ± 2.5 ps, τy = 32 ± 4 ps and τz = 4.1 ± 1.0 ps. Figure 18. The molecule studied in a cryosolvent in order to determine by carbon-13 relaxation an accurate C-H distance.
Conclusion
Through this review, it can be seen that the proton-carbon nOe have not been used as it probably deserves, as far as small or medium size molecules are concerned. As a possible perspective, one could envision extensive HOESY measurements at very high field (even combined with proton detection, a possibility for improving sensitvity not yet explored in the case of small or medium size molecules) in order to improve the spectral resolution and to obtain more and more remote correlations which could be useful for refining the geometrical parameters. Still toward this objective, another valuable approach is probably the use of cyosolvents. Altogether, one could dream of a sort of spin relaxation NMR crystallography.
) 0 8 Z
08 is the vacuum permeability, A and B the gyromagnetic constants, the Planck constant divided by 2. If A stands for a carbon-13 and B for a proton, d K is equal to 1.that, in (
related to the polarization of nuclei A and B. This spin state can be created in different ways. The easiest way is probably to let the system evolve under the sole J AB coupling so as to obtain an antiphase doublet, for instance the B antiphase doublet represented by the two proton carbon-13 satellites in an antiphase configuration). Applying selectively on B a (/2) y pulse transforms the latter spin state into x, y, z refer to the rotating frame). It turns out that the longitudinal order can also be created by relaxation. This is due to a coupling term which acts in the same way as the cross-relaxation rate but, this time, between A and B polarization and the longitudinal order. This coupling term is called cross-correlation rate10 because it involves two relaxation mechanisms, namely the A-B dipolar interaction and the csa mechanism at A and B. In fact, two such cross-correlation terms are prone to interfere with the evolution of A z I and B z I . They will be denoted as somewhat equations(21) and give rise to the extended Solomon equations which can be written as (with
Figure 4 .
4 Figure 4. Normalized spectral densities as a function of 2 / for different values of the correlation time (in s). They all start from 1 just to highlight their dependence with respect to the measurement frequency.
Figure 5 .
5 Figure 5. Definition of the polar angles in the principal axis system of the diffusion-rotation tensor.
from equation (
Figure 9 .
9 Figure 9. Three possible modes of the HOESY experiment.
Figure 11 .
11 Figure 11. Two isomers with distances (in Å) obtained by quantum chemistry calculations. Solid lines: proton-carbon distances. Dashed lines: proton-proton distances.
Figure 12 .
12 Figure 12. The 13 C responses (bottom) obtained by selectively inverting H 5 (bound to a carbon-12); they lead to the cross-relaxation rates 6 5 C H
s associated with the global micelle tumbling, the correlation time f associated with the fast
an order parameter S which describes the restriction of the latter motions (see equation(39)).
Figure 13 .
13 Figure 13. The HOESY two-dimensional spectrum of micellized sodium octanoate in aqueous solution. Besides one-bond carbon-proton correlations, remote correlations are observed (marked by an arrow).
Figure 14 .
14 Figure 14. An excerpt of figure13showing peaks associated with remote correlations.
Figure 15 .
15 Figure 15.The model molecule used to demonstrate the possibilities of HOESY experiments in terms of carbon-proton distances and reorientational anisotropy. To a first approximation, the molecule is devoid of internal motions and its symmetry determines the principal axis of the rotation-diffusion tensor. Note that H 1 , 1* H 1 , H 1* are non equivalent. The arrows indicate remote correlations.
Figure 16 .
16 Figure 16. The J-separated HOESY spectrum of the molecule shown in figure 15. Direct (onebond) correlations are located at the position of 13 C satellites in the proton spectrum. Arrows indicate remote correlations.
Figure 17 .
17 Figure 17. The ,,2,6 tetrachlorotoluene. Small letters refer to the rotation-diffusion principal axis system.
-step rotational diffusion is the model universally used for characterizing the overall molecular reorientation. If the molecule is of spherical symmetry (or approximately; this is generally the case for molecules of important size), a single rotational diffusion coefficient is needed and the molecular tumbling is said isotropic. According to this model, correlation functions obey a diffusion type equation and we can write
Table 1 .
1 Distances in Å derived from the HOESY experiment (r NMR) and compared to results from crystallography (r RX ) and quantum mechanical calculations (r QM ).
vector r NMR r RX r QM
C 1 H 1 2.02 2.07 2.20
C 1 H 1* 1.96 2.21
C 2 H 3 2.32 2.24 2.16
C 4 H 3 2.14 2.07 2.19
C 4 H 5 1.81 2.04 2.19
C 6 H 6 2.19 2.24 2.18
ACKNOWLEDGEMENTS
This work is part of the ANR project MULOWA (Grant Blan08-1_325450) | 57,110 | [
"176975",
"12647",
"773269"
] | [
"129683",
"129683",
"441232",
"129683"
] |
01482049 | en | [
"spi"
] | 2024/03/04 23:41:48 | 2011 | https://hal.univ-lorraine.fr/hal-01482049/file/MRPM10_Leipzig_revision.pdf | Wassim Salameh
email: [email protected]
Sébastien Leclerc
Didier Stemmelen
Jean-Marie Escanyé
Lemta
NMR Imaging of Water Flow in Packed Beds
Keywords:
L'archive ouverte pluridisciplinaire
Introduction
Flows in porous media are everywhere in our environment (soils, oil reservoirs, biological tissues) and in industrial processes (filtration, drying, fixed-bed reactors). In order to explain further the transport mechanisms through a porous medium, we have to know the characteristics of the medium (e.g. porosity, pore size) and also the flow characteristics through its pores. Magnetic resonance imaging has been extensively used [1][2][3][4][5][6][7][8] to visualize directly fluid flows within porous media but few studies really focused on the accuracy of such measurements. Indeed, many difficulties due both to the acquisition of the NMR signal and image processing arise when accurate quantitative measurements are searched.
In this study we present measurements obtained by MRI in packed beds with beads of two sizes: large diameter (3.175 mm) and small diameter (0.5 mm). Packed beds with large beads allow to measure interstitial velocities inside the pores while packed beds with small beads can only give averaged interstitial velocities. In order to check the accuracy of the measurements, results were compared with those obtained by weighing.
Materials and Methods
The images were performed using an MRI equipment operating at 100 MHz (Bruker Biospec 24/40). A spin echo sequence was used to obtain the structure of the porous medium (signal intensity) and a PGSE sequence to measure the velocity field (signal phase) of the fluid. The experimental parameters were chosen as follows: duration of the flow encoding gradient pulses δ=2 ms, spacing between the gradient pulses Δ=12 ms, field of view FOV=4 cm, image matrix of 256 × 256 points, number of scan N=4. We used polymer beads in order to avoid the influence of paramagnetic elements and to limit magnetic susceptibility effects. They were packed into a central tube (length of 23 cm, inner diameter of 1.65 cm). This tube was inserted into a second tube of larger diameter to define an annular spacing of 6 mm (Fig. 1). The porous medium and the annular spacing were then fully saturated with water. The device was placed between two constant-head reservoirs. Thus water flowed only by gravity effect through the device. As we can see in figure 1, the mean flow directions are opposite in the porous and annular regions [1]. The annular region was used to calibrate the NMR signal of water and to check the quality of MRI flow measurements (fluid was incompressible and volume flow rates of water in the porous section and in the outer annulus section should be the same).
Results
Velocity measurements through packed bed of large beads
MRI velocimetry measurements are performed on a packed bed of polyacetal beads of large diameter (3.175 mm) with a thin slice selection (1 mm). This allows observing the interstitial flows between the grains of the bed and minimizes the partial volume effects (voxels including both a liquid and solid phase). Fig. 2 shows a cross-sectional velocity map for a mass flow rate of 0.914 g/s (Re p = 36). The velocities are represented with components along the axis of the porous cylinder. The flow in the outer gap is an annular Poiseuille flow (Re = 25). The velocity map in the porous tube shows the existence of preferential flow paths near the wall of the tube and also some regions with higher velocities towards the center of the packed bed between the grains. It is interesting to note that the local velocity can be 4 to 6 times higher than the mean velocity in the porous medium. This result will be useful to understand the results with small beads in the next section.
At Re p = 36, the inertial effects may become important considering the viscous effects causing recirculations. Nevertheless by varying the Reynolds number between 14 and 36, we did not observe by MRI negative velocities that would have proved the existence of such an effect. Flow measurements by MRI in both the outer annulus and the inner porous tube deviate by less than 3% from the imposed values (Table 1). Measurement errors that we found are very low compared to literature results [1,2,7,10].
Velocity measurements through packed bed of small beads
MRI velocimetry measurements are then performed on a packed bed of polystyrene beads (diameter 0.5 mm) with a wide slice selection (20 mm) in order to reach average values (Fig. 3). This case is different from the previous one because each voxel contains a significant solid fraction (around 60%). There is a "partial volume effect" as mentioned in several articles [1,3,9,10]. The volume fraction of water can be different from one voxel to another, in particular near the tube wall. So it is necessary to correct the image by weighting the velocity in each voxel with the spin density. This has been performed using a classical spin echo imaging procedure. A change in the velocity distribution can be observed in the vicinity of the walls due to the ordering of the beads in this area (Fig. 3) as well observed on porosity maps.
Phase aliasing may exist in this case without clear appearance on the images. Indeed, the phase in each voxel is a mean phase shift of all the spins contained in the voxel. The range of velocity being relatively wide in a single voxel (from 0 to about 6 times the mean velocity, as mentioned above), a phase aliasing can be reached for some spins in a voxel. This effect is difficult to correct and it is preferable to adjust the strength of magnetic field gradient in the velocimetry sequence in a way to avoid any phase aliasing [4,8]. Fig. 4 shows that, below a 30° mean phase shift (consistent with over-velocities 6 times higher than the mean velocity), the phase aliasing is avoided.
In light of these precautions, measurements of mass flow rates have given errors between the fixed flow rates and those measured by MRI lower than 4 % (Table 1). The result is a little worse than for packed beds of larger beads, but this is quite logical because the velocity measurements require a weighting by the spin densities. Also the magnetic susceptibility effects are greater with a bed of small beads. This becomes even more important for naturals materials (rocks, sand, wood…) Nevertheless those effects could be lowered by carefully choosing experimental parameters such as Δ [8].
Conclusions
We have shown in this article that velocity measurements can be carried out in packed beds by MRI with good precision (error inferior to 4%). To reach this accuracy we have used polymer beads in order to avoid magnetic susceptibility effects. Results would not be as good with other porous materials especially natural media. By using small beads we have shown the necessity to correlate the velocity and porosity maps to eliminate partial volume effects. These results also highlight the importance of adjusting the gradient strength to avoid any phase aliasing even if those aliasing not clearly appears on the velocity maps.
Fig. 1 :
1 Fig. 1: Experimental cell constituted by a central porous tube (packed bed) and an annular gap. The mean flows in the two regions are in opposite direction.
Fig. 2 :
2 Fig. 2: Velocity map obtained by MRI velocimetry with a 1 mm thickness for slice selection. Areas in black correspond to the absence of signal (solid). The porous part contains a packed bed of polyacetal beads (3.175 mm in diameter). The mass flow rate is equal to 0.914 g/s.
Fig. 3 :
3 Fig. 3: Velocity map obtained by MRI velocimetry with a 20 mm slice selection. The porous part contains a packed bed of polystyrene beads (0.5 mm in diameter). The flow rate is equal to 0.254 g/s.
Fig. 4 :
4 Fig. 4: Mean phase shift ( °) according to the strength of gradient (G/cm) for velocity measurements in a packed bed of polystyrene beads (0.5 mm in diameter) and with a 20 mm slice selection. The dotted lines represent the linearization of the curves near the origin. The flow rates are respectively 0.19 g/s and 0.31 g/s.
Table 1 :
1 Comparison between mass flow rates measured by MRI in annular and porous sections and that measured by weighing
Mass flow rate Mass flow rate Mass flow rate Relative error
(by weighing) (by MRI) (by MRI) MRI porous tube /
Outer annulus porous tube weighing
(g/s) (g/s) (g/s) (%)
0.644 0.646 0.656 + 1.9%
Polyacetal d = 3.175 mm 0.813 0.809 0.808 -0.6%
0.914 0.903 0.936 + 2.4%
0.254 0.259 0.248 -2.4%
Polystyrene d = 0.5 mm 0.276 0.277 0.268 -2.9%
0.333 0.334 0.320 -3.9% | 8,680 | [
"781410",
"12647",
"12785"
] | [
"441232",
"441232",
"441232",
"129683",
"300157"
] |
01482078 | en | [
"chim"
] | 2024/03/04 23:41:48 | 2010 | https://hal.univ-lorraine.fr/hal-01482078/file/Diff%20Tl.pdf | S Leclerc
L Guendouz
A Retournard
D Canet
NMR diffusion measurements under chemical exchange between sites involving a large chemical shift difference
niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.
Introduction
Self-diffusion coefficients, measured by NMR, are generally achieved under nearly onresonance conditions, meaning that the carrier frequency ( r , the NMR transmitter frequency) is not far from the NMR frequencies ( 0 ). The issue of a large frequency offset, between r and 0 , has been rarely addressed, just because it is usually a simple matter to set r in the vicinity of all resonance frequencies of interest ( 0 ). This cannot be evidently the case when fast exchange conditions prevail between two sites of drastically different chemical shifts. We encountered this problem when studying a host-guest system constituted of a thallium cation (the guest) partly complexed by a calixarene molecule (the host). By contrast, when dealing with cesium (as the guest) and the same host, no special difficulty was noticed [START_REF] Cuc | 133 Cs diffusion NMR spectroscopy: a tool for probing metal cation- interactions in water[END_REF]. In both cases, we observe a single signal which is the weighted average between two signals of frequencies A (corresponding to the complexed cations) and B (corresponding to the free cations in aqueous solution). This is schematized in Fig. 1. The only difference between cesium and thallium is the extremely broad chemical shift scale of the latter, with the consequence that B A can become very large, and thus that on-resonance conditions can never be achieved for both sites. We thus anticipate some problems either by using static field gradients ( 0 B gradients) or radio-frequency gradients ( 1 B gradients) methodologies.
In fact, the measurement of the self-diffusion coefficient of the thallium cation (TlNO 3 0.1M solution) in aqueous solution was straightforward (with both types of gradients) and we found a value close to the self-diffusion coefficient of water, in agreement with the fact that the cation is surrounded by water molecules. It was however somewhat bewildering to notice that, as soon as calixarene was present, no signal could be observed with 0 B gradients while, with 1 B gradients, a result was actually obtained leading however to unrealistic values of the self- 3 diffusion coefficient. Obviously, chemical exchange is responsible for such drawbacks and the purpose of the present paper is to decipher the causes of these unexpected observations.
Theory
It is advisable to go back to the basic principles encompassing diffusion measurements either with 0 B gradients in the stimulated echo mode (2) (rather than the original PGSE experiment
(3) which is not suitable when dealing with short transverse relaxation times, as here) or with the equivalent with 1 B gradients (4). Both these experiments involve a first gradient pulse which is used to defocus nuclear magnetization according to the location of the molecules bearing the relevant nuclear spins (sometimes, this first gradient pulse is dubbed "encode" pulse). Let us consider a gradient g applied along the X direction of the laboratory frame. It will produce ideally a precession angle ( 0 B gradients; in the x,y plane of the rotating frame) or a nutation angle ( 1 B gradients, in the y,z plane if 1 B is polarized along the x axis of the rotating frame) equal to
gX X ) ( [ 1
]
where is the thallium gyromagnetic ratio and the duration of the gradient pulse. The component along the z axis is then )] ( cos[ X
. After an interval of duration , called the diffusion interval, a second gradient pulse of the same duration (the "decode" pulse), is applied leading to the same defocusing angle ) ( X
. The various time intervals and the angle
) ( X
are defined in Fig. 2.
If the molecules bearing the spins of interest have not moved and if the gradient is timeindependent, of sufficient strength and applied for an appropriate duration so that complete defocusing occurs (that is ) ( X
spans at least 2 for all X values existing in the sample), the component along z (which ultimately leads to the observed signal) is the average of 2 cos , the value of which is 2 1 .
We introduce now diffusion during the interval , so that the angle ) ( X
has to be replaced by
where accounts for the fact that the molecule bearing the spins of interest has moved to another location due to translational motions. The average has now to be calculated as
sin 2 sin ) 2 / 1 ( cos cos sin sin cos cos cos ) cos( cos 2 2 [2]
In [START_REF] Tanner | Use of the stimulated echo in NMR diffusion studies[END_REF], it has been assumed that translational motions are independent of the initial location X and this is effectively the case for Brownian motions. If we further assume again that defocusing is complete (and consequently that 0 2 sin
), the preceding expression reduces to [START_REF] Stejskal | Spin diffusion measurements: spin echoes in the presence of a time-dependent field gradient[END_REF] We shall define the usual q variable 2 / g q [START_REF] Canet | Radiofrequency field gradient experiments[END_REF] With this notation the angle can be expressed as
cos ) 2 / 1 ( ) cos( cos
qr 2 [5]
where r is the displacement of the molecule of interest during . The calculation of the mean value of cos rests on the well-known distribution function of displacements due to selfdiffusion [START_REF] Price | Pulsed-field gradient nuclear magnetic resonance as a tool for studying translational diffusion. Part I. basic theory[END_REF], that is ) 4 exp( 4
1 2 D r D , D being the so-called diffusion coefficient ) 4 exp( ) 4 exp( ) 2 cos( 4 1 cos 2 2 2 D q dr D r qr D [6]
In addition to the decay given by [START_REF] Torrey | Bloch equations with diffusion terms[END_REF], we must account for decays due to relaxation and also to diffusion during the gradient pulses. The expression for the latter has been determined in the early days of NMR (6) (effect of translational diffusion in the presence of a constant gradient):
) 3 / 8 exp( 2 2 D q
. Altogether, one has for the observed signal ) (q
S )] 3 / 2 ( 4 exp[ ) / exp( ) / 2 exp( ) ( 2 2 1 2 0 D q T T q S eff gradient B [7] )] 3 / 2 ( 4 exp[ ) / exp( ) / 2 exp( ) ( 2 2 1 2 1 D q T T q S gradient B [8]
It should be noted that the observed magnetization arises from longitudinal magnetization during T is short and much shorter than 1 T (about a factor of two is gained, notwithstanding the fact that ).
In the presence of chemical exchange, the above theory may need to be amended. Indeed, as soon as in 1992, Moonen et al. proposed (GEXSY experiment) to measure the diffusion in exchanging spin systems (7). This was followed by studies of the influence of exchange on DOSY spectra [START_REF] Johnson | Effects of chemical exchange in diffusion-ordered 2D NMR spectra[END_REF][START_REF] Cabrita | High-resolution DOSY with spins in different chemical surroundings: influence of particle exchange[END_REF]). An interesting work concerns possible chemical shift modulations in diffusion decays [START_REF] Chen | Chemical exchange in diffusion NMR experiments[END_REF]. Several applications aimed at determining exchange rates or residence times can be found in the literature [START_REF] Liu | Determination of the relative NH proton lifetimes of the peptide analogue viomycin in aqueous solution by NMRbased diffusion measurement[END_REF][START_REF] Cabrita | HR-DOSY as a new tool for the study of chemical exchange phenomena[END_REF][START_REF] Thureau | Determining chemical exchange rates of the uracil labile protons by NMR diffusion experiments[END_REF]. All these latter works concern however slow exchange (at least with respect to chemical shift). Although a relatively recent publication [START_REF] Gottwald | Diffusion, relaxation, and chemical exchange in casein gels: a nuclear magnetic resonance study[END_REF] deals with fast exchange (the topic of the present work), we were unable to find any explanation of our experimental observations in these previous studies.
As a matter of fact, specific conditions of fast exchange apply to each parameter. This will be now discussed and, for that purpose, we have to define some quantities: p, the proportion of complexed thallium, denoted by A in the following; (1-p), the proportion of free Thallium, denoted by B in the following;
p A
, the residence time in site A with the exchange rate k
equal to / 1 ; ) 1 ( p B
, the residence time in site B. We shall consider that fast exchange conditions prevail with respect to a given parameter G if
B A obs G p pG G ) 1 ( [9]
We start with chemical shifts, or, rather, with resonance frequencies. If, between two consecutive data points, the considered spin jumps many times between sites A and B, then a weighted average has effectively to be accounted for (see fig. 1), that is
B A obs p p ) 1 ( [10]
As the sampling interval is set for possibly observing A and B , that is of the order of
B A 2 / 1
, the condition of fast exchange with respect to chemical shifts can be written as
B A k 2 [11]
Concerning relaxation rates R, let us assume (for the sake of simplicity) that they induce, during a time t, a magnetization loss equal to ) exp( Rt . This means that, at the outcome of the time interval (
B A ), magnetization has decreased by a factor ] ) 1 ( exp[ ) exp( p R p R B A
. After a first order expansion, we obtain
2 0 0 ) 1 ( ] ) ) 1 ( ( 1 [ ) ( B A B A B A R R p p M R p pR M M [12]
Clearly, the first term in the right-hand member of [START_REF] Cabrita | HR-DOSY as a new tool for the study of chemical exchange phenomena[END_REF] represents a weighted average of relaxation rates, thus the observed quantity in the case of fast exchange (relatively to relaxation rates) provided that the last term is negligible. Consequently, the condition of fast exchange with respect to relaxation rates can be stated as
B A R R k , [13]
7
It means that the exchange rate must be greater than the relaxation rates in both sites.
As far as diffusion is concerned, we shall neglect its possible effects during the application of gradient pulses and only consider , the so-called diffusion interval. Therefore, we have just to look at the phase angle (see eqs. [START_REF] Price | Pulsed-field gradient nuclear magnetic resonance as a tool for studying translational diffusion. Part I. basic theory[END_REF] and [START_REF] Torrey | Bloch equations with diffusion terms[END_REF]). Phase angles at the outcome of the intervals A and B are given by (eq. [6])
] ) 1 ( 4 exp[ cos ) 4 exp( cos 2 2 2 2 p D q p D q B B A A [14]
At the outcome of the time interval (
B A ), the total phase angle is then )] ) 1 ( ( 4 exp[ sin sin cos cos cos 2 2 B A B A B A D p pD q [15]
A sin and B sin are zero because they would be the imaginary part of the Fourier transform of a Gaussian function (eq. [START_REF] Torrey | Bloch equations with diffusion terms[END_REF] is the real part) and it turns out that this Fourier transform is real. Consequently, as can be seen from [START_REF] Cuc | Behavior of cesium and thallium cations inside a calixarene cavity. Evidendence of cation- interaction in water[END_REF], fast exchange conditions concerning diffusion are met provided that involves at least one cycle (
A , B ) or, in other words, that is greater than ( B A ).
In accord with reference (9), the condition of fast exchange with respect to diffusion coefficients can be expressed as
/ 1 k [16]
Failure of diffusion measurements under chemical exchange involving important offresonance conditions
The usual method for measuring self-diffusion coefficients consists in keeping constant the two intervals and while repeating the basic experiment with different increments of the gradient strength g so that relaxation decays remain constant. If equations [START_REF] Moonen | Gradient-Enhanced Exchange Spectroscopy[END_REF] and [START_REF] Johnson | Effects of chemical exchange in diffusion-ordered 2D NMR spectra[END_REF] are valid, the diffusion coefficient is easily deduced from at least two experiments provided that no irreversible defocusing occurs. In these equations, it is however assumed that on-resonance conditions prevail in such a way that defocusing is exclusively produced by a time-independent gradient. We show below that exchange processes may rule out the simple view represented by these equations, especially when the two sites exhibit a very important chemical shift difference. As far as our samples are concerned, a chemical shift difference of 110 ppm together with the observation of a single line at 9.4 T (15) which indicates fast exchange with respect to chemical shifts, ensures also fast exchange with respect of relaxation rates and diffusion coefficients.
Imperfections of radio-frequency pulses
If the radio-frequency (rf) field has a low amplitude ( 1 B ), off-resonance effects show up in the rotating frame as a tilt of the effective rf field with respect to the polarization axis (say x), due to a non-negligible z component. The latter is equal (in frequency units) to r 0 , 0 being the resonance frequency. This is schematized in figure 3 for the two resonances A and B
of figure 1 (the amplitude of the rf field is represented in frequency units by
2 / 1 B rf
).
The orientations of effective 1 B 's are seen to be different for the two resonances A and B .
As a consequence, nutation takes place in different planes and, if the spins move continuously between A and B because of chemical exchange, nutation becomes oscillatory. If the duration of the rf pulse encompasses several cycles of exchange, the latter will produce an irreversible defocusing even if 1 B is homogeneous. This is illustrated by figure 4 where the quality of a 180° inverting pulse is seen to drop considerably in the case of exchanging thallium.
These considerations concern not only homogeneous rf fields but also rf field gradients. This oscillatory behavior may become detrimental and makes the gradient time-dependent during the gradient pulses. 0
B gradients
In an attempt to understand the failure of diffusion measurements by 0 B gradients, we first paid heed to possible effects arising from chemical shift differences between the two sites versus the gradient strength. Let us assume that the resonance frequencies locations X in the sample (see figure 1). This is certainly true in our case, at least, for the first gradient values in experiments dealing with incremented gradients. In such a situation, one must also take into account precession in the rotating frame during the interval (10) and the
angle ) ( X is different for A and B )] ( 2 )[ 1 ( ) ( )] ( 2 [ ) ( B r r A gX p X gX p X B A [17]
Because of fast exchange, we must consider the average of the two angles in [START_REF] Kuntz | Diffusive diffraction phenomenon in a porous polymer material observed by NMR using radio-frequency field gradients[END_REF]. If =1/2 (refocusing). Moreover, bipolar gradients (two gradient pulses of opposite polarity separated by a 180° pulse ( 16)) have been used here (see figure 5 for the details of the sequence). They are known to cancel all chemical shift effects (including those due to the magnetic field inhomogeneity) unless rapid diffusion occurs through background gradients [START_REF] Kuntz | Diffusive diffraction phenomenon in a porous polymer material observed by NMR using radio-frequency field gradients[END_REF].
As explained above, each rf pulse contributes to an irreversible defocusing. Now, for a commercial spectrometer, thallium is outside the frequency range allowing proper tuning and matching of the probe. As a consequence, the rf field is reduced (thus amenable to the unwanted effects schematized in fig. 3) and owing to the five rf pulses the sequence of figure 5, the NMR signal virtually cancels. As shown in figure 6, instead of 60% canceled by relaxation, we have observed a loss of more than 90%. Of course, adding a LED subsequence [START_REF] Gibbs | A PFG NMR experiment for accurate diffusion and flow studies in the presence of eddy currents[END_REF] (often used for avoiding ill effects due to eddy currents) with its two additional rf pulses, would still make the situation worse. The higher trace of fig. 6 corresponds to the sequence of figure 5 without any gradient pulse and is therefore indicative of the intrinsic loss by offresonance effects in the course of rf pulses. Conversely if a gradient of moderate strength is applied so that attenuation by diffusion is negligible, we expect a loss by a further factor of two (refocusing by the second gradient). This is shown by the lower trace of fig. 6. Although running a diffusion experiment as in fig. 6 seems somewhat challenging, we attempted to verify that gradients work as they should do even in the presence of exchange. Unfortunately, as expected from fig. 6, these experiments failed and we were unable to observe any decay due to diffusion. This poor signal-to-noise ratio can be easily understood on the basis of irreversible defocusing due to the off-resonance effects detailed above. As a matter of fact, accounting for the large chemical shift difference experienced by thallium in the two sites (110 ppm), the tilt angle between eff 1 and the x axis (see fig. 3) is, in our case (with a 90° pulse of 40 s), of the order of 60° whereas a proper tuned probe (with a 90° pulse of 10 s)
would only reduce this angle to 30° (a value still too high). The remedies are therefore obvious: i) improve the quality of the probe so as to increase significantly the rf field strength, ii) go to lower static magnetic field so as to reduce the frequency difference
B A . Indeed,
similar experiments performed at 4.7 T with a dedicated probe (albeit without 0 B gradients, not available on this instrument) exhibit the above mentioned effects to a much lesser extent.
B gradients
The actual sequence is depicted in fig. 7 and is seen to involve only one rf pulse (which is simply a read-pulse). Contrary to the above experiments with 0 B gradients and, as shown in fig. 8, a signal is actually observed for exchanging thallium but this signal leads eventually to an unexpectedly large diffusion coefficient (thus to a decay faster than expected) when the experiment is carried out by incrementing the gradient strength. Although this fast decay occurs essentially at low gradient strength values (see figure 9), the explanation is quite different from the one invoked for 0 B gradients. As shown previously, because of exchange, one switches continuously between two effective 1 B fields of different directions. This renders the 1 B gradient oscillatory and therefore time dependent (denoted by ) (t g in the following).
As a consequence, the expression of
) ( X becomes 0 ) ( ) ( Xdt t g X [18]
It is thus equivalent to consider that the gradient is constant and that it is X which experiences these oscillations and, therefore, to consider it as a sort of translational diffusion motion.
Accordingly, it may be necessary to account for diffusion during the application of gradient pulses with an effective diffusion coefficient, eff D , presumably much longer than the intrinsic diffusion coefficient and which obviously depend i) on the exchange rate, ii) on
B A .
Anyway, for low values of q, equation ( 8) must be rewritten as
) 3 / 8 exp( ) 4 exp( ) / exp( ) / 2 exp( ) ( 2 2 2 2 1 2 1 eff gradient B D q D q T T q S (12)
Thus, the initial decay being faster than expected, the measured diffusion coefficient (from a fit of the low quality data of fig. 9) is necessarily larger than the true diffusion coefficient.
duration
As seen above, off-resonance effects constitute the main problem when dealing with fast exchange involving large chemical shift differences. This problem can be circumvented by a sufficient rf field strength or, within the 1 B gradient methodology, by a sufficient gradient strength. This evidently rules out the gradient strength increment method and suggests the use of a strong gradient throughout the experiment. Two possibilities can therefore been envisioned: either incrementing , or incrementing . The former requires to be corrected for the 1 T attenuation during and, in fact, due to the duration of this interval, we found that 1 T attenuation largely dominates the attenuation due to diffusion. This possibility was therefore left aside. The latter possibility works very well when
2
T is much larger as compared to any value of (as this is the case for TlNO 3 solution, see figure 10). Of course, for shorter 2 T , it is less straightforward but, sensitivity permitting, the experiment should be successful provided that the decay curve is corrected for
2
T relaxation effects (
2
T can be deduced from 1 T and 2 T which have to be measured beforehand). This has been verified by the experiments of fig. 10.
In spite of scattered data (due a modest signal-to-noise ratio and to a finite gradient value which may not totally prevent the off-resonance effects), the decay concerning exchanging thallium leads to a correct value for the diffusion coefficient (1.7 10 -5 cm 2 s -1 ) which should be the weighted average between free and complexed thallium, the latter being identical to the one of the host molecule (independently measured). The expected value (1.6 10 -5 cm 2 s -1 , calculated from these data and from the known proportion of complexed thallium) is sufficiently close to the experimental determination for demonstrating the validity and the potentiality of the proposed method. Experimental 0 B gradient experiments have been performed with a Bruker Avance DRX400 spectrometer equipped with a TBIZ probe normally operating, for X nuclei, in the 18-162 MHz frequency range. It was nevertheless possible to tune this probe for the Thallium-235 resonance frequency (230.8 MHz), at the expense of deteriorated performances (see above). With this instrument, the maximum gradient strength is 30 G cm -1 and, in practice (sine shaped gradient pulses), is reduced to 18 G cm -1 .
1 B gradient experiments have been performed with a home-made 4.7 T spectrometer, equipped with a probe dedicated to the observation of thallium-235 (114.8 MHz) and possessing a 1 B gradient coil system capable of delivering a maximum gradient of 11 G cm -1 .
This probe, shown in fig. 11, follows from a new design by which the uniformity and strength of rf gradients are considerably improved [START_REF] Guendouz | Single-sided radiofrequency field gradient with two unsymmetrical loops: Application to nuclear magnetic resonance[END_REF].
Conclusion
Through this study, off-resonance effects have been shown to constitute the major issue when dealing with diffusion experiments in the presence of fast chemical exchange involving a large difference in resonance frequencies between the two sites involved in this exchange process. Although thallium may seem particular due to its important chemical shift scale, the problem is in fact of general concern because such a situation may be encountered with more common nuclei at higher static magnetic field values. When using 0 B gradients, the difficulties lie in a severe sensitivity loss and, paradoxically, the remedy would be to go to lower static magnetic field values so as to reduce difference in resonance frequencies between the two sites. On the other hand, the 1 B gradient methodology using gradient pulse 14 increments seems also quite viable, again at lower field, provided that sufficiently strong gradients are available. Our new instrumental design [START_REF] Guendouz | Single-sided radiofrequency field gradient with two unsymmetrical loops: Application to nuclear magnetic resonance[END_REF] should fulfill this requirement.
B A r p p ) 1 (
, where p is the proportion of complexed thallium. The positive peak is obtained after a 90° read-pulse. The negative peak results from the application of the same read-pulse immediately after a supposed 180° pulse. Left: aqueous solution of TlNO 3 0.1M; the inversion rate is 73%. Right: exchanging thallium; the inversion rate is 55%. All data concerning these two samples can be found in ref. [START_REF] Cuc | Behavior of cesium and thallium cations inside a calixarene cavity. Evidendence of cation- interaction in water[END_REF]. p=0.2 for the sample of exchanging thallium. B gradient diffusion experiments by incrementing the gradient strength. Left: TlNO 3 0.1M solution decay ( = 300ms; =5 ms) leading to a diffusion coefficient of 1.9 10 -5 cm 2 s -1 (which is the expected value, close to water diffusion coefficient). Right: exchanging thallium decay (= 200ms; =2 ms) leading to a diffusion coefficient of 9.7 10 -5 cm 2 s -1 (a totally unrealistic value). B gradient diffusion experiments by incrementing the gradient pulse duration ( = 200ms; g=10.28 G cm -1 ). Left: TlNO 3 0.1M solution decay leading to a diffusion coefficient of 1.9 10 -5 cm 2 s -1 . Right: exchanging thallium decay leading to a diffusion coefficient of 1.7 10 -5 cm 2 s -1 .
Figure 11. The 1 B gradient assembly operating (normally in an upright position) at 114.8 MHz. The saddle shaped coil is used for normal rf pulses and signal detection. The two loops, closer to the saddle shaped coil, are the actual gradient coil. The third loop (top) serves as a power transmitter through its inductive coupling to the gradient coils.
rotating frame) are significantly non negligible with respect to gX 2 for all
to our expectations, nothing particular should occur, in spite of a very large chemical shift difference. As a consequence, if the gradient strength is sufficient, there would be complete defocusing due to the continuous evolution of X across the sample, all values of in the interval [0,2] having the same probability.
Figure 1 :
1 Figure 1: The carrier frequency ( r ) is supposed to coincide with the observed signal (solid line) which arises, by chemical exchange, from the two lines at frequencies
Figure 2 .
2 Figure 2. Definition of the time intervals and of the angle by which the nuclear magnetization at a position X is rotated by the application of a gradient pulse (in the rotating frame). Left: 0 B gradient. Right: 1 B gradient ( 1 B being assumed to lie along x).
Figure 3 .
3 Figure 3. Off-resonance effects expressed in frequency units. 1eff standing for 2 / 1 eff
Figure 4 .
4 Figure 4. 230.8 MHz 205 Tl NMR spectra (9.4 T). 128 scans. Duration of the 90° pulse: 43 s.The positive peak is obtained after a 90° read-pulse. The negative peak results from the application of the same read-pulse immediately after a supposed 180° pulse. Left: aqueous solution of TlNO 3 0.1M; the inversion rate is 73%. Right: exchanging thallium; the inversion rate is 55%. All data concerning these two samples can be found in ref.[START_REF] Cuc | Behavior of cesium and thallium cations inside a calixarene cavity. Evidendence of cation- interaction in water[END_REF]. p=0.2 for the sample of exchanging thallium.
Figure 5 .Figure 6 .
56 Figure 5. Details of the STE_BP sequence (Stimulated Echo with Bipolar Pulses)
Figure 7 .
7 Figure 7. Details of the diffusion sequence with rf gradients (or 1 B gradients).
Figure 8 .
8 Figure 8. 114.8 MHz NMR spectrum (4.7 T; duration of the 90° pulse: 11.5 s) of exchanging 205 Thallium (12765 Hz between sites) obtained through the sequence of figure 7 without gradients ( 0.5 ms, 200 ms). 512 transients.
Figure 9 .
9 Figure 9. 114.8 MHz 1B gradient diffusion experiments by incrementing the gradient strength. Left: TlNO 3 0.1M solution decay ( = 300ms; =5 ms) leading to a diffusion coefficient of 1.9 10 -5 cm 2 s -1 (which is the expected value, close to water diffusion coefficient). Right: exchanging thallium decay (= 200ms; =2 ms) leading to a diffusion coefficient of 9.7 10 -5 cm 2 s -1 (a totally unrealistic value).
Figure 10 .
10 Figure 10. 114.8 MHz 1B gradient diffusion experiments by incrementing the gradient pulse duration ( = 200ms; g=10.28 G cm -1 ). Left: TlNO 3 0.1M solution decay leading to a diffusion coefficient of 1.9 10 -5 cm 2 s -1 . Right: exchanging thallium decay leading to a diffusion coefficient of 1.7 10 -5 cm 2 s -1 .
Figure 1 | 29,059 | [
"12647"
] | [
"441232",
"24254",
"129683",
"129683"
] |
01482094 | en | [
"chim"
] | 2024/03/04 23:41:48 | 2009 | https://hal.univ-lorraine.fr/hal-01482094/file/13C%20diffusion_090330.pdf | Mehdi Yemloul
Vincent Castola
Sébastien Leclerc
Daniel Canet
email: [email protected]
Self-diffusion coefficients obtained from proton-decoupled carbon-13 spectra for analyzing a mixture of terpenes
published or not. The documents may come
Self-diffusion coefficients obtained from proton-decoupled carbon-13 spectra for analyzing a mixture of terpenes
Introduction:
During the last two decades, much effort has been directed toward the analysis of mixtures by the so-called PGSE (Pulsed Gradient Spin Echo) NMR technique 1 . This technique is based on the measurement of the translational self-diffusion coefficient which is different from one molecule to the other (according to the size and/or the molecular weight), hence the possibility of separating the different species of a mixture. The experiment rests on the application of static field gradient pulses in each half of a spin echo experiment. The present standard sequence rather employs stimulated echoes 2 and bipolar gradient pulses 3 . A series of experiments is performed with incremented gradient amplitudes and processed so as to provide a two-dimensional DOSY 4 diagram with chemical shifts along one dimension and diffusion coefficients along the other. These procedures are generally applied to proton NMR with the drawback of severe overlaps whenever the spectrum is crowded (as it is often the case for complex mixtures). The major problem is therefore the determination of the various diffusion coefficients involved in overlapping patterns. This implies Inverse Laplace Transform (ILT) which must be included in data processing. Several algorithms exist 1 but may fail in the case of strongly overlapping lines. A possible remedy is to increase the spectral resolution by using a third dimension which spreads out the various resonances so as to avoid unwanted overlaps. In this respect, various sequences belonging to the family dubbed IDOSY have been proposed [5][6][7][8] . Hadamard encoding has been invoked for shortening this type of experiment 9 and, very recently, has also been used in DOSY-TOCSY experiment for reducing significantly the measuring time 10 . Another way of improving spectral resolution and thus avoiding problems related to overlapping patterns is to rely on proton-decoupled carbon-13 spectroscopy. However, having recourse to direct observation of carbon-13 is not so common [11][12][13] and, in that case, problems associated with proton decoupling have been mentioned 14 . In fact, most of the time [15][16][17][18] , the usual PGSE procedure is run at the level of proton resonances and their polarization is transferred to carbon-13 via a refocused INEPT procedure and subsequently measured through the proton-decoupled carbon-13 spectrum. In that case, diffusion coefficients are effectively measured in a straightforward manner since, in proton-decoupled carbon-13 spectra, line overlapping is scarce. However, on the one hand, carbons non-bonded to protons are excluded and, on the other hand, polarization transfers are not the same for all carbons (because of different values of J couplings) thus rendering the procedure not quantitative. Evidently, the ideal procedure would be to measure the diffusion coefficient pertaining to all resonances in a proton-decoupled carbon-13 spectrum but, surprisingly, this methodology has not been often considered. In fact, due to the gyromagnetic ratio of carbon-13 which is four times smaller than the one of proton, gradients four times stronger are required. One other issue is that of decoupling. First, relatively strong irradiation at the proton frequency is expected to cause sample heating and therefore convection phenomena. The latter are known to obscure diffusion phenomena and, possibly, to prevent proper measurements of diffusion coefficients. 
As a matter of fact, we show in this communication that convection phenomena (if they exist) are not responsible for the encountered problems. In fact, problems related to decoupling have already been addressed 12,13 . They are due to a frequency shift during gradient applications and, as a consequence, to a much less efficient decoupling. This will be detailed later.
It can be noticed that problems posed by homonuclear J coupling are of different nature (essentially echo modulation). In that case several remedies have been proposed 19,20 .
Experimental:
All experiments reported in this paper have been performed at ambient temperature with a Bruker Avance DRX-600 spectrometer equipped with a cryogenic probe. The 13 C 90° standard rf pulse has a duration of 15 μs. The duration of each sine-shaped gradient (g) pulse, /2, was 3ms. The maximum gradient strength is 50 G/cm. The gradient was incremented in 32 steps between 2% and 95% of this maximum value, each step corresponding to a separate experiment. The diffusion interval was set at 600 ms; this unusual large value is allowed by the fact that 13 C longitudinal relaxation times (T 1 ) are generally relatively long (3-5 s). The time elapsed between two consecutive experiments is assumed to be five times the longest T 1 .
The model mixture is made of three monoterpenes: limonene (17%), carvone (33%), terpinen-4-ol (50%). These three molecules differ slightly by their molecular weight and their diffusion coefficients are close to each other (table 1). This makes this mixture particularly well suited for testing the accuracy of the method and its separation capabilities. Values given in table 1 represent a mean obtained from the results displayed in figure 2 (those in figure 5 were not measured the same day and may present a slight systematic difference and cannot be mixed with data of figure 2). Uncertainties arise from 99% confidence intervals.
Results and Discussion:
We first run the experiment of figure 1b where decoupling is just applied during data acquisition. Contrary to our expectation, the result shown in figure 2 was very satisfactory. This was briefly mentioned about 31 P-{ 1 H}-PGE experiments (which yielded "excellent result, when inverse gated 1H decoupling was used") 21 .
We found, with small experimental uncertainties, three families of diffusion coefficients. This was checked by a diffusion experiment run classically at the level of proton resonances (not shown) where the three diffusion coefficients were indeed retrieved. The separation of the three components is achieved in a straightforward way and we have, of course, verified that the diffusion results are consistent with the chemical shifts of all peaks belonging to the three compounds 22 .
The next step was to go back to the sequence of figure 1a and to verify that it did not work. Indeed, it does not work for carbons bound to protons (figure 3a), but seems to work perfectly for carbons not bearing protons, as shown in figure 3d. As a consequence, in the present case, convection phenomena have nothing to do with the failure of diffusion coefficient determination. The only difference between the experiment of figure 1a and the one of figure 1b is the period during which decoupling is applied. Of course, since convection is not concerned, the so-called diffusion interval is not involved. On the other hand, continuous decoupling could be advantageous for sensitivity matters (nuclear Overhauser effect), especially for low concentration components. An experimental procedure maintaining the nOe enhancement is therefore desirable.
As mentioned above, the consequence of a gradient application is a strong shift of resonance frequencies, depending on the spatial location. This is of course true for proton resonances (due to the large proton gyromagnetic ratio). For example, for a gradient of 5G/cm and a sample height of 3cm, the spread in resonance frequencies is 64000 Hz. As a first consequence, decoupling will no longer be efficient for a large part of the sample and this becomes worse and worse when the gradient is incremented toward lager values. As a second consequence, if they could be observed, lines would be broadened by this incomplete decoupling leading to short effective T 2 's. It is well known that the process of defocusingrefocusing is hampered by short T 2 's and the decay due to diffusion is therefore corrupted. This is illustrated by figure 4 where two different decoupling schemes have been used. The decay obtained by Waltz-16, known to be efficient over a relatively small frequency range is much faster than the one using GARP. Nevertheless none of these decays can be considered for properly determining diffusion coefficients. The remedy employed by Furó 13 consists in selecting a slice for which the modification of the B 0 field is not large, leading to a moderate defocusing, and thus to an efficient decoupling. However, in this case, one is dealing with dipolar couplings, and the use of decoupling during gradient pulses seems mandatory in order to avoid a large spread in resonance frequencies which could be incompatible with the effects of gradients. Sequence 1a could probably be used in this case but with a properly chosen decoupling scheme. However, for practical reasons, it seems sufficient to apply decoupling only during gradient pulses. As far as liquid samples are concerned, because J-couplings are much smaller, the spread in resonance frequencies is negligible and the remedy is simply to switch off the decoupling, at least, during the gradient pulses (see sequences of figures 1b and 1c).
It can be mentioned that something which looks like sequence 1c has already been proposed for application to liquid crystals but with ambiguous explanations 23 .
The efficiency of sequence 1c is nicely illustrated by the data displayed in figure 3c, where decoupling is switched off only during the application of gradients. This experiment provides the same diffusion coefficients as the one of figure 3b (sequence 1b), but, since proton irradiation is applied almost continuously, peak intensities are enhanced by the nuclear Overhauser effect (see figure 5 and figure 3d).
Conclusion:
We have compared here different procedures, employing self-diffusion coefficients for separating the carbon-13 NMR spectra of the different components belonging to a complex mixture. Two methods are available:
-One, which is straightforward and which can be used if sensitivity is sufficient, is fully quantitative, proton decoupling being applied only during data acquisition.
-The other leads to peak intensities enhanced by nOe, as proton decoupling is applied continuously except during the gradient periods (which are very short).
These methods, which can be cross-checked with analyses based on chemical shift values 22 , deserves to be developed for more complex mixtures with a DOSY type display of the results without the need to perform an Inverse Laplace Transform.
Table1:
The three compounds composing the model mixture investigated here. See figure 3 for the meaning of the star and of the arrow.
/2 /2 /2 1 H Δ (/2) +x (/2) -x (/2) +x /2 13 C +g +g -g -g 1.a /2 /2 /2 /2 Δ (/2) +x (/2) -x (/2) +x 13 C 1 H +g +g -g -g 1.b /2 /2 /2 /2 1 H Δ (/2) +x (/2) -x (/2) +x
5 Figure 1 .Figure 2 .
512 Figure captions
Figure 3 .
3 Figure 3.
Figure 4 .
4 Figure 4. Decays of the signal corresponding to the carbon-13 marked by a star in the
Figure 5 .
5 Figure 5. Same as figure 2 but with the sequence 1c). n.m.: not measured.
Figure 1
OH | 11,506 | [
"773269",
"12647"
] | [
"129683",
"843",
"441232",
"129683"
] |
00148210 | en | [
"info"
] | 2024/03/04 23:41:48 | 2008 | https://ens-lyon.hal.science/ensl-00148210/file/rr2007-19.pdf | Jean-Luc Beuchat
Takanori Miyoshi
Jean-Michel Muller
Eiji Okamoto
HORNER'S RULE-BASED MULTIPLICATION OVER F P AND F P N : A SURVEY 1 Horner's Rule-Based Multiplication over F p and F p n : A Survey
Keywords: Modular multiplication, Horner's rule, carrysave, high-radix carry-save, borrow-save, finite field, FPGA
This paper aims at surveying multipliers based on Horner's rule for finite field arithmetic. We present a generic architecture based on five processing elements and introduce a classification of several algorithms based on our model. We provide the readers with a detailed description of each scheme which should allow them to write a VHDL description or a VHDL code generator.
I. INTRODUCTION
This paper proposes a survey of Horner's rule-based multipliers over F p and GF(p m ), where p is a prime number. Multiplication over F p is a crucial operation in cryptosystems such as RSA or XTR. Multiplication over GF(p m ) is a fundamental calculation in elliptic curve cryptography, pairingbased cryptography, and implementation of error-correcting codes.
In the following, the modulus F is either an n-bit (prime) integer whose most significant bit is set to one (i.e. 2 n-1 +1 ≤ F ≤ 2 n -1) or a monic degree-n irreducible polynomial over F p . Three families of algorithms allow one to compute the product AB modulo F , where A and B are either elements of Z/F Z or F p n . In parallel-serial schemes, a single digit or coefficient of the multiplier A is processed at each step. This leads to small operands performing a multiplication in n clock cycles. Parallel multipliers compute the product AB (2n-bit integer or degree-(2n -2) polynomial) and carry out a final modular reduction. They achieve a higher throughput at the price of a larger circuit area. Song and Parhi introduced array multipliers as a trade-off between computation time and circuit area [START_REF] Song | Low energy digit-serial/parallel finite field multipliers[END_REF]. Their idea consists in processing D digits or coefficients of the multiplier at each step. The parameter D is sometimes referred to as digit size and parallel-serial schemes can be considered as a special case with D = 1. In such architectures, the multiplier A can be processed starting with the least significant element (LSE) or the most significant element (MSE). This survey is devoted to MSE operators and we refer the reader to [START_REF] Erdem | Polynomial basis multiplication over GF(2 m )[END_REF], [START_REF] Guajardo | Efficient hardware implementation of finite fields with applications to cryptography[END_REF], [START_REF] Kumar | Optimum digit serial GF(2 m ) multipliers for curve-based cryptography[END_REF] for details about parallel modular multipliers and LSE operators, which are often based on the celebrated Montgomery algorithm [START_REF] Montgomery | Modular multiplication without trial division[END_REF]. Note that Kaihara and Takagi introduced a novel representation of J.-L. Beuchat, T. Miyoshi, and E. Okamoto are with the University of Tsukuba, Tsukuba, Japan. J.-M. Muller is with the CNRS, laboratoire LIP, projet Arénaire, Lyon, France.
residues modulo F which allows the splitting of the multiplier A [START_REF] Kaihara | Bipartite modular multiplication[END_REF]: its upper and lower parts are processed independently using an MSE scheme and an LSE implementation of the Montgomery algorithm respectively. Such an approach could potentially divide the computation time of array multipliers by two.
After a brief description of the five number systems considered in this survey (Section II), we outline the architecture of a modular multiplier based on Horner's rule (Section III). We then introduce a classification of several MSE schemes according to our model, and provide the reader with all the details needed for writing a VHDL description or designing a VHDL code generator (Sections IV, V, and VI). We conclude this survey by a comparison of the most promising algorithms on a typical field-programmable gate array (FPGA) architecture (Section VII).
II. NUMBER SYSTEMS
This section describes the number systems involved in the algorithms we survey in this paper. We also outline addition algorithms and describe how to compute a number or polynomial à congruent to A modulo F.
A. Radix-2 Integers
1) Addition of Radix-2 Integers: Let A and B be two n-bit unsigned integers. A carry-ripple adder (CRA), whose basic building blocks are the full-adder (FA) and the half-adder (HA) cells, returns the (n + 1)-bit sum R = A + B (Figure 1). Since a CRA consists of a linearly connected array of FAs, its delay grows linearly with n, thus making this architecture inadvisable for an ASIC implementation of high-speed applications. Modern FPGAs being mainly designed for digital signal processing applications involving rather small operands (16 up to 32 bits), manufacturers chose to embed dedicated carry logic allowing the implementation of fast CRAs for such operand sizes. The design of modular multipliers taking advantage of such resources is therefore of interest. An application would for instance be the FPGA implementation of the Montgomery modular multiplication algorithm in a residue number system [START_REF] Bajard | A RNS Montgomery modular multiplication algorithm[END_REF].
2) Modular Reduction: Modulo F reduction can be implemented by means of comparisons and subtractions. It is sometimes easier to compute an (n + 1)-bit number à congruent to an (n + q)-bit number A modulo F. Let us define A_{k:j} = Σ_{i=j}^{k} a_i 2^{i-j}, where k ≥ j. Using this notation, A is equal to A_{n+q-1:n} 2^n + A_{n-1:0}. If q is small enough, we can store in a table all values of (A_{n+q-1:n} 2^n) mod F and compute à by means of a single CRA: à = (A_{n+q-1:n} 2^n) mod F + A_{n-1:0}.
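As a concrete illustration, the following Python sketch is a software model (not the hardware) of this table-based reduction; the function names are ours and the table is assumed to be precomputed for the q most significant bits.

```python
# Minimal sketch of the table-based partial reduction described above.
def make_reduction_table(F, n, q):
    # table[k] = (k * 2**n) mod F for every value of the q upper bits
    return [(k << n) % F for k in range(1 << q)]

def partial_reduce(A, table, n):
    upper = A >> n               # A_{n+q-1:n}
    lower = A & ((1 << n) - 1)   # A_{n-1:0}
    return table[upper] + lower  # fits in n+1 bits and is congruent to A mod F
```

Since table[upper] ≤ F - 1 < 2^n and lower < 2^n, the returned value indeed fits in n + 1 bits.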
Note that some algorithms studied in this survey also involve negative integers. We encode such numbers using the two's complement system. An n-bit number A ∈ {-2^{n-1}, ..., 2^{n-1} - 1} is represented by A = -a_{n-1} 2^{n-1} + Σ_{i=0}^{n-2} a_i 2^i.
B. Carry-Save Numbers 1) Addition of Carry-Save Numbers: Figure 1b describes a carry-save adder (CSA). This operator computes in constant time the sum of three n-bit operands by means of n FAs. It returns two n-bit numbers R (s) and R (c) containing the sum and output carry bits of the FAs respectively. We have:
R = 2R^(c) + R^(s) = r^(s)_0 + Σ_{i=1}^{n-1} (r^(s)_i + r^(c)_{i-1}) 2^i + r^(c)_{n-1} 2^n = Σ_{i=0}^{n} r_i 2^i, where r_0 = r^(s)_0, r_n = r^(c)_{n-1}, and r_i = r^(s)_i + r^(c)_{i-1} for 1 ≤ i ≤ n - 1.
Since each digit r_i belongs to {0, 1, 2}, we obtain a radix-2 redundant number system. Unfortunately, comparison and modular reduction require a carry propagation, and we would lose all benefits of this number system by introducing such operations in modular multiplication algorithms.
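The following Python fragment is a minimal software model of the bitwise behaviour of a CSA (arbitrary-precision integers stand in for the n-bit words); it only illustrates the identity a + b + c = s + 2·carry.

```python
# Bit-level sketch of a carry-save adder.
def carry_save_add(a, b, c):
    """Return (s, carry) such that a + b + c == s + 2 * carry."""
    s = a ^ b ^ c                            # per-bit sum of the three operands
    carry = (a & b) | (a & c) | (b & c)      # per-bit majority = carry word
    return s, carry
```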
2) Modular Reduction: Let A be an n-bit two's complement number whose carry-save representation is given by A = A^(s) + 2A^(c). Koç and Hung introduced a sign estimation technique which enables computing a number congruent to A modulo F by inspecting a few most significant bits of A^(s) and A^(c) [START_REF] Koc | Multi-operand modulo addition using carry save adders[END_REF], [START_REF]Carry-save adders for computing the product AB modulo N[END_REF], [START_REF]A fast algorithm for modular reduction[END_REF]. They define the truncation function Θ(A) as the operation which replaces the τ least significant bits of A with zeroes. The parameter τ controls the cost and the quality of the estimation. Let k be the two's complement sum Θ(A^(s)) + Θ(2A^(c)). The sign estimation function ES(A^(s), A^(c)) is then defined as follows [START_REF]A fast algorithm for modular reduction[END_REF]:
ES(A^(s), A^(c)) = (+) if k ≥ 0; (-) if k < -2^τ; (±) otherwise.
Koç and Hung proved that, if ES(A^(s), A^(c)) = (+) or (-), then A ≥ 0 or A < 0, respectively [START_REF]A fast algorithm for modular reduction[END_REF]. If ES(A^(s), A^(c)) = (±), then -2^τ ≤ A < 2^τ. One can therefore add -F, 0, or F to A according to the result of the sign estimation to compute a number à congruent to A modulo F.
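A possible software model of this sign estimation is sketched below; the function name and the '?' return value for (±) are our own conventions, and Python's arbitrary-precision integers play the role of the two's complement words.

```python
# Sketch of the ES function described above; a_s and a_c may be negative.
def sign_estimate(a_s, a_c, tau):
    mask = ~((1 << tau) - 1)                  # Theta(.) clears the tau low bits
    k = (a_s & mask) + ((2 * a_c) & mask)     # k = Theta(A^(s)) + Theta(2A^(c))
    if k >= 0:
        return '+'                            # A = a_s + 2*a_c is non-negative
    if k < -(1 << tau):
        return '-'                            # A is negative
    return '?'                                # (+-): sign cannot be decided
```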
3) Modular Reduction when the Modulus is a Constant: Assume now that the n-bit modulus F is known at design time and consider a carry-save number A such that A^(s) and A^(c) are n_s- and n_c-bit integers respectively (n_s and n_c are usually greater than or equal to n). Let α ≤ n. Since
A = (A^(s) div 2^α + (2A^(c)) div 2^α) · 2^α + A^(s) mod 2^α + (2A^(c)) mod 2^α = (A^(s)_{n_s-1:α} + A^(c)_{n_c-1:α-1}) · 2^α + A^(s)_{α-1:0} + 2A^(c)_{α-2:0},
we compute a number à congruent to A by means of a CSA and a table addressed by max(n_s + 1 - α, n_c + 2 - α) bits. Let k = (A^(s)_{n_s-1:α} + A^(c)_{n_c-1:α-1}) · 2^α. We have:
A ≡ k mod F + A^(s)_{α-1:0} + 2A^(c)_{α-2:0} (mod F).    (1)
We easily compute an upper bound for Ã. Since k mod F ≤ F - 1, we have:
à ≤ F - 1 + 2^α - 1 + 2(2^{α-1} - 1) = F + 2^{α+1} - 4.    (2)
C. High-Radix Carry-Save Numbers
Carry-save adders do not always take advantage of the dedicated carry logic available in modern FPGAs [START_REF] Beuchat | Automatic generation of modular multipliers for FPGA applications[END_REF]. To overcome this problem, modular multiplication can be performed in a high-radix carry-save number system, where a sum bit of the carry-save representation is replaced by a sum word. A q-digit high-radix carry-save number A is denoted by
A = (a_{q-1}, ..., a_0) = ((a^(c)_{q-1}, a^(s)_{q-1}), ..., (a^(c)_0, a^(s)_0)), where the j-th digit a_j consists of an n_j-bit sum word a^(s)_j and a carry bit a^(c)_j such that a_j = a^(s)_j + a^(c)_j 2^{n_j}. Let us define A^(s) = a^(s)_0 + a^(s)_1 2^{n_0} + ... + a^(s)_{q-1} 2^{n_0+...+n_{q-2}} and A^(c) = a^(c)_0 2^{n_0} + a^(c)_1 2^{n_0+n_1} + ... + a^(c)_{q-1} 2^{n_0+...+n_{q-1}}. With this notation, a number A is equal to A^(s) + A^(c). This number system has nice properties to deal with large numbers on FPGAs:
• Its redundancy allows one to perform addition in constant time (the critical path of a high-radix carry-save adder only depends on max 0≤j≤q-1 n j ).
• The addition of a sum word a (s) j , a carry bit a (c) j-1 , and an n j -bit unsigned binary number is performed by means of a CRA. Unfortunately, MSE first algorithms involve left-shift operations which modify the representation of an operand. Figure 2 describes a 4-digit high-radix carry-save number A = 2260 with n 0 = n 1 = 3, n 2 = 4, and n 3 = 3. By shifting A, we obtain B = 2A, whose least significant sum word is now a 4-bit number.
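As a small software model (names are ours), the value of a q-digit high-radix carry-save number can be recovered from its sum words, carry bits and digit widths as follows.

```python
# Sketch: value of a high-radix carry-save number with digit widths n_j.
def hrcs_value(sum_words, carry_bits, widths):
    value, shift = 0, 0
    for s, c, w in zip(sum_words, carry_bits, widths):
        value += (s + (c << w)) << shift   # digit j = a(s)_j + a(c)_j * 2**n_j
        shift += w                         # next digit is weighted by 2**(n_0+...+n_j)
    return value
```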
D. Borrow-Save Numbers
1) Addition of Borrow-Save Numbers: A radix-r signed-digit representation of a number A ∈ Z is given by A = Σ_{i=0}^{n} a_i r^i. The digits a_i belong to D_r = {-ρ, -ρ + 1, ..., ρ - 1, ρ}, where ρ ≤ r - 1 and 2ρ + 1 ≥ r. The second condition guarantees that every number has a representation (2ρ + 1 = r). When 2ρ + 1 > r, the number system becomes redundant and allows one to perform addition in constant time under certain conditions [START_REF] Avizienis | Signed-digit number representations for fast parallel arithmetic[END_REF].
In this survey, we will consider only radix-2 signed digits. Thus, we take advantage of the borrow-save notation introduced by Bajard et al. [START_REF] Bajard | Some operators for on-line radix-2 computations[END_REF]: each digit a_i is encoded by a positive bit a^+_i and a negative bit a^-_i such that a_i = a^+_i - a^-_i. A modified FA cell, called PPM cell, allows one to compute two bits r^+_{i+1} and r^-_i such that 2r^+_{i+1} - r^-_i = a^+_i + b^+_i - a^-_i. Note that the same cell is also able to return r^-_{i+1} and r^+_i such that 2r^-_{i+1} - r^+_i = a^-_i + b^-_i - a^+_i. In this case, it is usually referred to as MMP cell. The addition of two borrow-save numbers can be performed in constant time using the operator described by Figure 3a [START_REF] Bajard | Some operators for on-line radix-2 computations[END_REF].
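A possible software model of a single PPM cell is given below; the MMP cell is obtained by exchanging the roles of the positive and negative bits.

```python
# Sketch of one PPM cell: three input bits, two output bits.
def ppm_cell(a_plus, b_plus, a_minus):
    """Return (r_plus, r_minus) with 2*r_plus - r_minus == a_plus + b_plus - a_minus."""
    t = a_plus + b_plus - a_minus   # in {-1, 0, 1, 2}
    r_plus = (t + 1) >> 1           # 1 iff t >= 1
    r_minus = 2 * r_plus - t
    return r_plus, r_minus
```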
2) Modular Reduction: Assume that A is an (n + 2)digit borrow-save number such that -2F < A < 2F . Takagi and Yajima proposed a constant time algorithm which returns an (n + 1)-digit number à congruent to A modulo F (Figure 3b) [START_REF] Takagi | Modular multiplication hardware algorithms with a redundant representation and their application to RSA cryptosystem[END_REF]. First, we add the three most significant digits of A and get a 4-bit two's complement number k = 4a n+1 + 2a n + a n-1 . Our hypotheses guarantee that -4 ≤ k ≤ 4 and
-2F < A < 0, if k < 0, -2 n-1 < A < 2 n-1 , if k = 0, and 0 < A < 2F , if k > 0.
Thus, it suffices to add F, 0, or -F to A according to k in order to get an (n + 1)-digit number à such that -F < à < F. Since we assumed that the most significant bit of F is always set to one, we have
-F = -2^{n-1} - Σ_{i=0}^{n-2} f_i 2^i = -2^n + 2^{n-1} - Σ_{i=0}^{n-2} f_i 2^i = -2^n + Σ_{i=0}^{n-2} (1 - f_i) 2^i + 1.
Consider now the (n + 1)-digit borrow-save number U defined as follows:
U = F = Σ_{i=0}^{n-1} f_i 2^i if k < 0; U = 0 if k = 0; U = -F - 1 = -2^n + Σ_{i=0}^{n-2} (1 - f_i) 2^i if k > 0,
and note that the most significant digit u_n is the only one which can take a negative value. The (n + 1)-digit sum à = A + U can therefore be computed by a single stage of PPM cells and glue logic (Figure 3b). Since U = -F - 1 when k is greater than 0, a small table generates ã^+_0 according to the following rule:
ã^+_0 = 1 if k > 0, and ã^+_0 = 0 otherwise.
Consider now the addition of a^+_{n-1}, a^-_{n-1}, and u_{n-1} by means of a PPM cell. It generates two bits v and ã^-_{n-1} such that 2v - ã^-_{n-1} = a^+_{n-1} - a^-_{n-1} + u_{n-1}. The most significant digit ã_n is then defined as follows:
ã_n = 2a_{n+1} + a_n + v - 1 if k > 0, and ã_n = 2a_{n+1} + a_n + v otherwise.
Thus, ã_n only depends on k. Instead of explicitly computing v, we build a table addressed by a_{n+1}, a_n, and a_{n-1} (Table I).
a n+1 an a n-1 ãn 0 0 0 0 0 0 1 0 0 1 -1 0 1 -1 -1 0 0 1 0 0 1 -1 0 0 0 1 1 1 1 0 -1 1 1 -1 1 1 1 0 0 1 a n+1 an a n-1 ãn 0 0 -1 0 0 -1 1 0 -1 1 1 0 0 -1 0 0 -1 1 0 0 0 -1 1 -1 -1 0 -1 -1 -1 1 1 -1 -1 0 0 -1
E. Elements of F p n
There are several ways to encode elements of an extension field. In this paper, we will only consider the well-known polynomial representation, which is for instance often faster than normal basis in pairing-based applications [START_REF] Grabher | Hardware acceleration of the Tate Pairing in characteristic three[END_REF]. Let F (x) = x m + f m-1 x m-1 + . . . + f 1 x + f 0 be an irreducible polynomial over F p , where p is a prime. Then, GF(p n ) = GF(p)[x]/F (x), and an element a(x) ∈ GF(p n ) can be represented by a degree-(m -1) polynomial with coefficients in
F p : a(x) = a m-1 x m-1 + . . . + a 1 x + a 0 .
Note that the irreducible polynomials used in cryptographic applications are commonly binomials or trinomials, thus making modulo F operations easy to implement. F = x 97 +x 12 +2 is for instance irreducible over GF [START_REF] Guajardo | Efficient hardware implementation of finite fields with applications to cryptography[END_REF]. Assume that A is a degree-97 polynomial. It suffices to remove a 97 •F = a 97 x 97 + a 97 x 12 + 2a 97 from A to get A mod F and this operation involves only two multiplications and two subtractions over GF(3), namely a 12 -1 • a 97 and a 0 -2 • a 97 . Elements of GF(3) are usually encoded with two bits and such a modular reduction is performed by means of two 4-input tables.
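The following Python sketch models this reduction step for F = x^97 + x^12 + 2 over GF(3); the coefficient-list representation is our own choice.

```python
# Sketch: reduce a degree-97 polynomial over GF(3) modulo x^97 + x^12 + 2
# by subtracting a_97 * F, as described above.
def reduce_mod_trinomial_gf3(a):
    """a is a list of 98 coefficients in {0, 1, 2}; a[i] is the coefficient of x^i."""
    a97 = a[97]
    r = a[:97]                      # keep degrees 0 .. 96
    r[12] = (r[12] - a97) % 3       # subtract a_97 * x^12
    r[0] = (r[0] - 2 * a97) % 3     # subtract 2 * a_97
    return r
```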
III. HORNER'S RULE FOR MODULAR MULTIPLICATION
Recall that the celebrated Horner's rule suggests to compute the product of two n-bit integers or degree-(n-1) polynomials A and B as follows:
AB = (...((a_{n-1} B) ≪ 1 + a_{n-2} B) ≪ 1 + ...) ≪ 1 + a_0 B,
where ≪ 1 denotes the left-shift operation (i.e. multiplication by two for integers and multiplication by x for polynomials). This scheme can be expressed recursively as follows:
R[i] = R[i + 1] ≪ 1 + a_i B,    (3)
where the loop index i goes from n - 1 to 0, R[n] = 0, and R[0] = AB. By performing a modular addition at each step, one easily determines the product AB mod F [START_REF] Blakley | A computer algorithm for calculating the product ab modulo m[END_REF]. However, it is usually cheaper to compute a number or polynomial R[i] that is only congruent to R[i + 1] ≪ 1 + a_i B modulo F. Since
R[i + 1] ≪ 1 ≡ (R[i + 1] ≪ 1) - k_1 F ≡ (R[i + 1] - k_2 F) ≪ 1 (mod F),
where k_1 and k_2 are integers or polynomials, we consider three families of algorithms. In left-shift schemes, S[i] is equal to R[i + 1] ≪ 1.
Figure 4 depicts this generic architecture: a register stores R[i + 1]; a Modshift block computes S[i] such that S[i] ≡ R[i + 1] ≪ 1 (mod F); a Modsum block then computes R[i] such that R[i] ≡ a_i B + S[i] (mod F); after the last iteration, a Modred block delivers AB mod F.
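The following Python sketch is a value-level model of this generic architecture; it performs an exact reduction at each step, whereas the surveyed operators only compute values congruent to R[i] in order to avoid comparisons.

```python
# Value-level sketch of Horner's rule with interleaved modular reduction.
def horner_mod_mul(A, B, F, n):
    R = 0
    for i in range(n - 1, -1, -1):
        S = (2 * R) % F                    # Modshift: S[i] congruent to R[i+1] << 1
        R = (S + ((A >> i) & 1) * B) % F   # Modsum:   R[i] congruent to a_i*B + S[i]
    return R                               # R[0] = (A * B) mod F
```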
IV. FIRST ARCHITECTURE: LEFT-SHIFT OPERATION FOLLOWED BY A MODULAR REDUCTION A. Borrow-Save Algorithms
Let A and B be two (n+1)-digit borrow-save numbers with -F < A, B < F . Takagi and Yajima proposed an algorithm computing an (n+1)-digit number R[0] ∈ {-F +1, . . . , F -1} congruent to AB modulo F [14] (Figure 5). At each step, the Modshift block returns an (n + 1)-digit number S[i] ∈ {-F + 1, . . . , F -1} congruent to the (n + 2)-digit number 2R[i + 1] according to the scheme described in Section II-D. The Modsum block contains a borrow-save adder which computes the sum
T [i] ∈ {-2F + 1, . . . , 2F -1} of S[i]
and a partial product a i B. The same approach allows one to determine a number R[i] ∈ {-F + 1, . . . , F -1} congruent to T [i] modulo F . A nice property of this algorithm is that both inputs and output belong to {-F + 1, . . . , F -1}. The conversion from borrow-save to integer involves at most two additions:
R = AB mod F = R + [0] -R -[0] if R + [0] -R -[0] ≥ 0, R + [0] -R -[0] + F otherwise, where R + [0] = n-1 i=0 r + i [0]2 i and R -[0] = n-1 i=0 r - i [0]2 i .
The number of iterations can be reduced by considering a higher radix. Radix-4 modular multipliers based on signed-digits are for instance described in [START_REF] Takagi | Modular multiplication hardware algorithms with a redundant representation and their application to RSA cryptosystem[END_REF], [START_REF] Takagi | A radix-4 modular multiplication hardware algorithm for modular exponentiation[END_REF].
B. Carry-Save Algorithm
Jeong and Burleson described a carry-save implementation of the algorithm by Takagi and Yajima [START_REF] Takagi | Modular multiplication hardware algorithms with a redundant representation and their application to RSA cryptosystem[END_REF] in the case where the modulus F is known at design time [START_REF] Jeong | VLSI array algorithms and architectures for RSA modular multiplication[END_REF] (Figure 6). The intermediate result Kim and Sobelman proposed an architecture based on four fast adders (e.g. carry-select adders or parallel-prefix adders) to perform this final modular reduction and to convert the result from carry-save to integer [START_REF] Kim | Digit-serial modular multiplication using skew-tolerant domino CMOS[END_REF] (Figure 7). They first compute an
R[i] is represented by two n-bit unsigned integers R^(s)[i] and R^(c)[i]. The Modshift block implements Equation (1) and returns a carry-save number S[i] congruent to 2R[i + 1], while the Modsum block requires two CSAs to determine a number R[i] congruent to S[i] + a_i B. According to Equation (2), R[i] is smaller than or equal to F + 2^{n+1} - 4 and the Modred block has to remove up to 4F from R[0] in order to get AB mod F.
(n + 1)-bit integer U such that U = R^(s)[0] + 2R^(c)_{n-2:0}[0]. Then, a second adder and a table addressed by r^(c)_{n-1}[0] and u_n return an (n + 1)-bit integer V = U_{n-1:0} + ((r^(c)_{n-1}[0] + u_n) · 2^n) mod F. Since V ≤ 2^n + F - 2 < 3F, it suffices to compute in parallel V - 2F and V - F, and to select the result.
C. Multiplication over F p n
Shu et al. designed an array multiplier processing D coefficients of the operand A at each clock cycle [START_REF] Shu | FPGA accelerated Tate pairing based cryptosystem over binary fields[END_REF] (Figure 8a). The intermediate result R[i] is a degree-(n - 1) polynomial, thus avoiding the need for a final modular reduction. At each step, the Modshift block returns a degree-(n - 1) polynomial S[i] equal to x^D R[i + 1] mod F. A (D + 1)-operand adder computes the sum of S[i] and D partial products reduced modulo F.
Fig. 5. Architecture of the iteration stage proposed by Takagi and Yajima [START_REF] Takagi | Modular multiplication hardware algorithms with a redundant representation and their application to RSA cryptosystem[END_REF] for n = 6.
Koç and Hung take advantage of the sign estimation technique outlined in Section II-B [START_REF]A fast algorithm for modular reduction[END_REF]. They chose the parameter τ = n - 1 to control the quality of the estimation and introduced a slightly different function defined as follows:
ES'(R^(s)[i + 1], R^(c)[i + 1]) = (+) if k ≥ 2^n; (-) if k < -2^{n+1}; (±) otherwise,    (4)
where R (s) [i + 1] and R (c) [i + 1] are (n + 4)and (n + 3)bit two's complement numbers respectively. The two's complement number k is therefore computed as follows:
k = R (s) n+3:n-1 [i + 1] + R (c) n+2:n-2 [i + 1]
. Koc ¸and Hung established that all intermediate results of their algorithm belong to {-6F, -6F + 1, . . . , 7F -1, 7F }. Thus the computation of k does not generate an output carry and k is a 5-bit two's complement number. At each step, the Modsum block computes R[i] such that
R (s) [i]+2R (c) [i] = 2R (s) [i + 1] + 4R (c) [i + 1] + a i B -8F if ES'(k) = (+), 2R (s) [i + 1] + 4R (c) [i + 1] + a i B + 8F if ES'(k) = (-), 2R (s) [i + 1] + 4R (c) [i + 1] + a i B otherwise.
After n clock cycles, we get R[0] = AB + 8αF, with α ∈ Z. Koç and Hung suggested to perform three additional iterations with a_{-1} = a_{-2} = a_{-3} = 0 in order to obtain R[-3] = 8AB + 8βF ∈ {-6F, ..., 7F}, with β ∈ Z. Since R[-3] is a multiple of eight, a right-shift operation returns a number R congruent to AB modulo F, where -F < R < F. After the conversion to two's complement, the Modred module has to perform at most one addition. Figure 9 describes the iteration stage. We propose here an improved architecture which is based on the following observation: r^(s)_{n+3}[i], r^(c)_{n+2}[i], and r^(s)_{n+2}[i] only depend on k.
Note that r (c) 0 [i + 1] is always equal to zero. Thus, the adder consists of a 5-bit CRA and an (n -1)-input CSA (n -3 FAs and 2 HAs):
T (s) n+4:n [i] = R (s) n+3:n-1 [i + 1] + R (c) n+2:n-2 [i + 1] = k, T (s) n-1:1 [i] + 2T (c) n-1:1 [i] = R (s) n-2:0 [i + 1] + 2R (c) n-3:0 [i + 1] + a i B n-1:1 , t (s) 0 [i] = a i b 0 .
The sign estimation defined by Equation ( 4) is then computed as follows:
ES'(R (s) [i + 1],R (c) [i + 1]) = (+) if k4 (k 3 + k 2 + k 1 ) = 1, (-) if k 4 ( k3 + k2 + k1 k0 ) = 1, (±) otherwise.
These logic equations can be computed using Karnaugh maps. Let us define where es + = k4 (k 3 + k 2 + k 1 ) = 1 and es -= k 4 ( k3 + k2 + k1 k0 ). If the sign estimation block returns (+) (i.e. es + = 1), we have to subtract 8F from T [i]. Recall that the most significant bit of F is always set to one. Therefore, -8F -1 is encoded by an (n + 4)-bit two's complement number (10 fm-2 fm-3 . . . f1 f0
n -1 bits 111) 2 . We suggest to represent -8F as follows (Figure 9):
-8F = (10 fm-2 fm-3 . . . f1 f0
n -1 bits 000) 2 +2 2 es + +2(es + +es + ).
Finally, Table III summarizes the logic equations defining r
(c) n+2 [i], r (s) n+3 [i], and
r (s) n+2 [i].
2) Second Case: the Modulus is a Constant: If the modulus is known at design time, an architecture introduced by Kim and Sobelman [START_REF] Kim | Digit-serial modular multiplication using skew-tolerant domino CMOS[END_REF] allows one to replace the sign estimation unit with a table addressed by four bits (Figure 10). The authors suggest to compute a first carry-save number
T[i] such that T^(s)_{n-1:0}[i] + 2T^(c)_{n-1:0}[i] = a_i B + 2R^(s)_{n-2:0}[i + 1] + 4R^(c)_{n-3:0}[i + 1].
Fig. 9. Architecture of the iteration stage proposed by Koç and Hung [START_REF]A fast algorithm for modular reduction[END_REF] for n = 6.
TABLE III. Iteration stage proposed by Koç and Hung [START_REF]A fast algorithm for modular reduction[END_REF]: computation of r^(c)_{n+2}[i], r^(s)_{n+3}[i], and r^(s)_{n+2}[i].
es + es - r (c) n+2 r (s) n+3 r (s) n+2 1 0 0 t(s) n+3 t (s) n+2 0 1 t(s) n+3 + t(s) n+2 t(s) n+3 t(s) n+2 t(s) n+2 0 0 0 t (s) n+3 t (s) n+2
Thus,
2R[i + 1] + a i B = T (s) n-1:0 [i] + 2T (c) n-1:0 [i] + R (s) n-1 [i + 1]+ R (c) n-1:n-2 [i + 1] • 2 n = T (s) n-1:0 [i] + 2T (c) n-2:0 [i] + R (s) n-1 [i + 1] + R (c) n-1:n-2 [i + 1]+ t (c) n-1 [i] • 2 n Let k = R (s) n-1 [i + 1] + R (c) n-1:n-2 [i + 1] + T (c) n-1 [i].
We easily check that 0 ≤ k ≤ 5. In order to compute a carry-save number
R[i] congruent to 2R[i+1]+a i B modulo F , it suffices to store the six possible values of U [i] = (k • 2 n ) mod F in a table and to add this unsigned integer to T (s) n-1:0 [i] + 2T (c) n-2:0 [i]
by means of a second CSA. Note that the least significant bit of T (c) [i] is always equal to zero and that R[i] ≤ 2 n+1 + F -5. This operator seems more attractive than the one by Jeong and Burleson [START_REF] Jeong | VLSI array algorithms and architectures for RSA modular multiplication[END_REF]: at the price of a slightly more complex table, the iteration stage requires two CSAs instead of three. The final modular reduction remains unfortunately expensive and can be computed with the Modred block illustrated on Figure 7.
Amanor et al. showed that, if both B and F are constants known at design time, the iteration stage consists of a table and a single CSA [START_REF] Amanor | Efficient hardware architectures for modular multiplication on FPGAs[END_REF] (Figure 11). Since the original algorithm requires an even more complex final modulo F reduction, we describe here a slightly modified version which allows one to perform this operation with the Modred block depicted by Figure 7. R^(s)[i + 1] and R^(c)[i + 1] are again two n-bit integers, and the computation of a number R[i] congruent to 2R[i + 1] + a_i B is carried out according to Equation (1) with α = n - 1. The table is addressed by a_i and the 3-bit sum k = R^(c)_{n-1:n-2}[i + 1] + r^(s)_{n-2}[i + 1], and returns an n-bit integer (a_i B + k·2^n) mod F.
B. Radix-2 Algorithms
Beuchat and Muller proposed two non-redundant radix-2 versions of Kim and Sobelman's recurrence in [START_REF] Beuchat | Modulo m multiplication-addition: Algorithms and FPGA implementation[END_REF]. These algorithms are designed for the modular multiplication of operands up to 32 bits on FPGAs embedding dedicated carry logic. The first scheme carries out (AB+C) mod F according to the following iteration:
S[i] = 2R[i + 1], T [i] = S[i] + c i + a i B, R[i] = ϕ(T [i] div 2 n ) + T [i] mod 2 n , (5)
where ϕ(k) = (2 n • k) mod F (Figure 12). The main problem consists in finding the maximal values of R[i] and T [i], on which depends the size of the table implementing the ϕ(k) function. Contrary to algorithms in redundant number systems, for which one can only compute a rough estimate by now, the nonlinear recurrence relation defined by Equation ( 5) has been solved. This result allows one to establish several nice properties of the algorithm. Assume that A ∈ N, B ∈ {0, . . . , F -1}, and C ∈ N. Then T [i] is an (n+2)-bit number, ∀F ∈ {2 n-1 + 1, . . . , 2 n -1}, and the table is addressed by only two bits [START_REF] Beuchat | Modulo m multiplication-addition: Algorithms and FPGA implementation[END_REF]. Furthermore, ϕ(k) is defined recursively on N * as follows:
ϕ(k) = ϕ(k -1) -2F + 2 n if ϕ(k -1) -2F + 2 n ≥ 0, ϕ(k -1) -F + 2 n otherwise, (6)
with ϕ(0) = 0. Note that two CRAs, an array of n AND gates, and three registers implement the above equation (Figure 12). Thus, the critical path is the same as the one of the circuit implementing the iteration stage. Note that, at the price of an additional clock cycle, one can build the table on-thefly without impacting on the computation time (Figure 12). The algorithm returns a number R[0] congruent to (AB + C) modulo F and a final modular reduction is required. The architecture of the circuit responsible for this operation depends 12). And yet, this first radix-2 algorithm has a drawback in the sense that R[0] is not a valid input. Since both right-to-left and left-to-right modular exponentiation algorithms involve the computation of (R[0] 2 ) mod F , a modulo F reduction is required at the end of each multiplication. A straightforward modification of the algorithm solves this issue: it suffices to compute R
on F : if 2 n-1 + 1 ≤ F ≤ 2 n-1 + 2 n-2 -1, one shows that R[0] < 3F ; if 2 n-1 + 2 n-2 ≤ F ≤ 2 n -1, then R[0] < 2F (Figure
[i] = ψ(T[i] div 2^{n-1}) + T[i] mod 2^{n-1}, where ψ(k) = (2^{n-1} · k) mod F. Let B_max = (2^{n+2} + 11 - 4·(n mod 2))/3.
Assume that A ∈ N, B ∈ {0, ..., B_max}, and C ∈ N. Then, one can establish the following properties [START_REF] Beuchat | Modulo m multiplication-addition: Algorithms and FPGA implementation[END_REF]:
Fig. 12. Architecture of the first iteration stage proposed by Beuchat and Muller [START_REF] Beuchat | Modulo m multiplication-addition: Algorithms and FPGA implementation[END_REF].
• T[i] is an (n + 2)-bit number, ∀F ∈ {2^{n-1} + 1, ..., 2^n - 1}, and the ψ table is addressed by three bits. Furthermore, one can also build the table on-the-fly at the price of an extra clock cycle (Figure 13).
• R[0] is smaller than 2F and at most one subtraction is required to compute AB mod F from R[0].
• R[0] is smaller than B_max. Therefore modular exponentiation can be performed with R[0] instead of R.
Further optimizations are possible when the modulus F is known at design time. Figure 14a describes the implementation of the ϕ function on Xilinx FPGAs. In this example, the operator is able to perform multiplication-addition modulo F_1 or F_2 according to a Select signal. Thus, each bit of ϕ is computed by means of a 3-input table addressed by T_{n+1}[i],
T n [i], and Select. Such tables are embedded in the LUTs of the CRA returning R[i]. The ψ function is implemented the same way (Figure 14b). However, since it depends on three bits, the operator handles a single modulus F .
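A value-level Python model of the first scheme (Equation (5)) is sketched below; it assumes that C fits in n bits and computes ϕ directly from its closed form (k · 2^n) mod F rather than with the hardware recursion of Equation (6).

```python
# Value-level sketch of Equation (5): returns a value congruent to (A*B + C) mod F.
def mod_mul_add_radix2(A, B, C, F, n):
    mask = (1 << n) - 1
    R = 0
    for i in range(n - 1, -1, -1):
        T = 2 * R + ((C >> i) & 1) + ((A >> i) & 1) * B   # T[i] = S[i] + c_i + a_i*B
        R = ((T >> n) << n) % F + (T & mask)              # R[i] = phi(T div 2^n) + T mod 2^n
    return R    # congruent to (A*B + C) mod F; a final reduction may still be needed
```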
VI. THIRD ARCHITECTURE: MODULAR REDUCTION FOLLOWED BY A LEFT-SHIFT OPERATION
The third family of algorithms aims at simplifying the final modular reduction at the price of an additional iteration. This elegant approach was introduced by Peeters et al. [START_REF] Peeters | XTR implementation on reconfigurable hardware[END_REF] and can be applied to both prime fields and extension fields. Let us consider multiplication over F p to illustrate how such architectures work out the product AB mod F. The Modshift block computes a number U[i] congruent to R[i + 1] modulo F and returns an even number S[i] = 2U[i]. Recall that algorithms based on Horner's rule compute a number R[0] = AB + αF congruent to AB modulo F, where α ∈ N. Let us perform an additional iteration with a_{-1} = 0. We have R[-1] = S[-1] = 2(R[0] - βF) = 2(AB + (α - β)F). Since R[-1] is even, we can shift it to get R[-1]/2 = AB + (α - β)F, which is congruent to AB modulo F. Furthermore, the upper bound of R[-1] turns out to be smaller than the one of R[0].
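The following value-level Python sketch illustrates this idea with an exact reduction inside the Modshift step; the actual operators only reduce the upper bits through a small table, which is why they may return either AB mod F or (AB mod F) + F.

```python
# Value-level sketch of the third family: reduce R[i+1] before the left shift,
# then run one extra iteration with a_{-1} = 0 and right-shift the result.
def mod_mul_reduce_then_shift(A, B, F, n):
    R = 0
    for i in range(n - 1, -2, -1):             # i = n-1, ..., 0, -1 (extra step)
        U = R % F                              # Modshift: a value congruent to R[i+1]
        S = 2 * U                              # ... followed by the left shift
        a_i = (A >> i) & 1 if i >= 0 else 0    # a_{-1} = 0 for the extra iteration
        R = S + a_i * B                        # Modsum
    return R >> 1                              # R[-1] is even; here R[-1]/2 = AB mod F
```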
A. Carry-Save Algorithm
The first carry-save modular multiplier featuring such a Modshift block was probably proposed by Bunimov and Schimmler [START_REF] Bunimov | Area and time efficient modular multiplication of large integers[END_REF]. However, this algorithm requires an (n + 2)bit integer R (s) [i] and an (n + 1)-bit integer R (c) [i] to encode the intermediate result R [i]. Since the authors do not perform an additional iteration, the final modular reduction proves to be more complex than the one of the carry-save modular multipliers studied in Section V. Peeters et al. designed a carrysave architecture which returns either R[-1] = AB mod F or R[-1] = (AB mod F ) + F [START_REF] Peeters | XTR implementation on reconfigurable hardware[END_REF]. The carry-save intermediate result R[i] consists of an (n + 1)-bit word R (s) [i] and an nbit word R (c) [i], whose least significant bit is always equal to zero (i.e. R (c) [i] ≤ 2 n -2). Therefore, we have
R[i] = R (s) n:n-2 [i] + R (c) n-1:n-3 [i] • 2 n-2 + R (s) n-3:0 [i] + 2R (c) n-4:0 [i]. Let us define the four bit integer U [i] such that U [i] = R (s) n:n-2 [i] + R (c) n-1:n-3 [i].
The Modshift block computes an number
S[i] = 2 U [i + 1] • 2 n-2 mod F + 2R (s) n-3:0 [i + 1] + 4R (c) n-4:0 [i + 1], which is congruent to R[i + 1] modulo F . However, Peeters et al. do not compute S[i] explicitly. They suggest to evaluate (k • 2 n-2 ) mod F and 2R (s) n-3:0 [i + 1] + 4R (c) n-4:0 [i + 1] + a i B
in parallel in order to shorten the critical path (Figure 15). The modulus must be known at design time in order to build the table storing the 15 possible values of (U
[i+1]•2 n-2 ) mod F . Note that a i B ≤ F -1, (U [i + 1] • 2 n-2 ) mod F ≤ F -1, R (s) n-3:0 [i+1] ≤ 2 n-2 -1, and R (c) n-4:0 ≤ 2 n-3 -2.
The number R[0] is therefore smaller than or equal to 3F +2 n -13 and the Modred block would have to subtract up to 4F to get the final result. Let us perform an additional iteration with a -1 = 0. We obtain an even number R
[-1] ≤ 2F + 2^n - 12 which is congruent to 2AB modulo F. Therefore, we have to reduce R[-1]/2 ≤ F + 2^{n-1} - 6 < 2F. This operation requires at most one subtraction.
B. High-Radix Carry-Save Algorithm
Since carry-save addition does not take advantage of the dedicated carry-logic available in several FPGA families, Beuchat and Muller [START_REF] Beuchat | Automatic generation of modular multipliers for FPGA applications[END_REF] proposed a high-radix carry-save implementation of the algorithm by Peeters et al. [START_REF] Peeters | XTR implementation on reconfigurable hardware[END_REF] previously described. Assume that R[i + 1] and S[i] are now high-radix carry-save numbers. By shifting R[i+1], we define a new internal representation for S[i]. It is therefore necessary to perform a conversion while computing a number R[i] congruent to S[i] + a i B modulo F . Beuchat and Muller showed that the amount of hardware required for this task depends on the encoding of R[i] and the modulus F [START_REF] Beuchat | Automatic generation of modular multipliers for FPGA applications[END_REF]. They also proposed an algorithm which selects the optimal high-radix carry-save number system and generates the VHDL description of the modular multiplier. Such operators perform a multiplication in (n+1) clock cycles and return a high-radix carry save number R[-1] which is smaller than 2F . Thus, the final modulo F correction requires at most one subtraction.
C. Multiplication over F p n
The same approach allows one to design array multipliers over F p n . Song and Parhi suggested to compute at each step a degree- 8b). A degree-(n+D-1) polynomial S[i] allows one to accumulate these partial products:
(n + D -2) polynomial T [i] which is the sum of D partial products, i.e. T [i] = D-1 j=0 a Di+j x j B [1] (Figure
S[i] = T[i] + x^D (S[i + 1] mod F). After n/D iterations, S[0] is a degree-(n + D - 1) polynomial congruent to AB modulo F. Song and Parhi included specific hardware to carry out a final modular correction. However, we achieve the same result by performing an additional iteration with a_{-1} = 0 [START_REF] Beuchat | An algorithm for the η T pairing calculation in characteristic three and its hardware implementation[END_REF]. Since T[-1] is equal to zero, we obtain R[-1] = S[-1] = x^D (AB mod F) and it suffices to right-shift this polynomial to get the result.
VII. CONCLUSION
In order to compare the algorithms described in this survey, we wrote a generic VHDL library as well as automatic code generators, and performed a series of experiments involving a Spartan-3 XC3S1500 FPGA. Whereas the description of operators whose modulus is an input is rather straightforward, the computation of the tables involved in the algorithms for which the modulus is a constant known at design time proves to be tricky in VHDL. Since the language does not allow one to easily deal with big numbers, a first solution consists in writing a VHDL package for arbitrary precision arithmetic. Note that this approach slows down the synthesis of the VHDL code. Consider for instance the computation of the ϕ(k) function involved in the radix-2 algorithm (see Equation [START_REF] Kaihara | Bipartite modular multiplication[END_REF] in Section V-B). Synthesis tools have to interpret the code of the recursive function ϕ(k) in order to compute the constants (k • 2 n ) mod F . In some cases, it seems more advisable to write a program which automatically generates the VHDL description of the operator according to its modulus: the selection of a high-radix carry-save number system for the algorithm outlined in Section VI-B consists for instance in finding a shortest path in a directed acyclic graph [START_REF] Beuchat | Automatic generation of modular multipliers for FPGA applications[END_REF].
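As an illustration, the short Python sketch below (our own code, not the authors' generator) builds the ϕ table with the recursion of Equation (6) and checks it against the closed form (k · 2^n) mod F.

```python
# Sketch: compute the phi table outside the VHDL code, e.g. in a code generator.
def phi_table(F, n, size):
    table = [0]                                 # phi(0) = 0
    for _ in range(1, size):
        t = table[-1] + (1 << n) - 2 * F        # phi(k-1) - 2F + 2^n
        table.append(t if t >= 0 else t + F)    # else phi(k-1) - F + 2^n
    return table

# small usage example with F = 5 and n = 3 (2^{n-1} + 1 <= F <= 2^n - 1):
assert phi_table(5, 3, 4) == [(k << 3) % 5 for k in range(4)]
```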
Figure 16 describes a comparison between carry-save and radix-2 iteration stages when the modulus is a constant. Among carry-save algorithms, the one by Kim and Sobelman [START_REF] Kim | Digit-serial modular multiplication using skew-tolerant domino CMOS[END_REF] leads to the smallest iteration stage. However, recall that it involves a complex Modred block, and the architecture introduced by Peeters et al. [START_REF] Peeters | XTR implementation on reconfigurable hardware[END_REF] proves to be the best choice. The operator introduced by Jeong and Burleson [START_REF] Jeong | VLSI array algorithms and architectures for RSA modular multiplication[END_REF] requires a larger area and is even slower than other carry-save implementations. Radix-2 algorithms take advantage of the dedicated carry logic and embed the ϕ(k) table in the LUTs of a CRA (Figure 13). This approach allows one to roughly divide by two the area on Xilinx devices at the price of a slightly lower clock frequency. Since these results do not include the Modred block, the delay of carry-save operators is underestimated. However, these results indicate that radix-2 algorithms are efficient for moduli up to 32 bits. For larger moduli, the high-radix carry-save approach allows significant hardware savings without impacting on the computation time on Xilinx FPGAs (Figure 17). Note that borrow-save algorithms always lead to larger circuits on our target FPGA family. Experiment results indicate that the choice between the multipliers over GF(p m ) studied in this paper depends on the irreducible polynomial F (see also [START_REF] Beuchat | Multiplication over F p m on FPGA: A survey[END_REF]).
Fig. 16. Comparison between carry-save and radix-2 algorithms for several operand sizes. For each experiment, we consider, from left to right, the algorithms by Jeong and Burleson [START_REF] Jeong | VLSI array algorithms and architectures for RSA modular multiplication[END_REF] (carry-save), Kim and Sobelman [START_REF] Kim | Digit-serial modular multiplication using skew-tolerant domino CMOS[END_REF] (carry-save), Peeters et al. [START_REF] Peeters | XTR implementation on reconfigurable hardware[END_REF] (carry-save), and Beuchat and Muller [START_REF] Beuchat | Modulo m multiplication-addition: Algorithms and FPGA implementation[END_REF] (radix-2).
Fig. 1. Carry-ripple adder and carry-save adder.
Fig. 2. High-radix carry-save numbers.
Fig. 3. Arithmetic operations in the borrow-save number system.
Fig. 4. Modular multiplication based on Horner's rule.
Fig. 6. Architecture of the iteration stage proposed by Jeong and Burleson [START_REF] Jeong | VLSI array algorithms and architectures for RSA modular multiplication[END_REF] for n = 6.
Fig. 7. Architecture of the Modred block proposed by Kim and Sobelman [21].
Fig. 8. Array multipliers over GF(p^n) processing D = 2 coefficients of A at each clock cycle. (a) Architecture proposed by Shu et al. [24]. (b) Architecture introduced by Song and Parhi [1].
Fig. 10. Architecture of the iteration stage proposed by Kim and Sobelman [START_REF] Kim | Digit-serial modular multiplication using skew-tolerant domino CMOS[END_REF] for n = 6.
Fig. 11. Architecture of the iteration stage proposed by Amanor et al. [START_REF] Amanor | Efficient hardware architectures for modular multiplication on FPGAs[END_REF] for n = 6.
Fig. 13. Architecture of the second iteration stage proposed by Beuchat and Muller [START_REF] Beuchat | Modulo m multiplication-addition: Algorithms and FPGA implementation[END_REF].
Fig. 14. Optimizations of the algorithm proposed by Beuchat and Muller [START_REF] Beuchat | Modulo m multiplication-addition: Algorithms and FPGA implementation[END_REF] when the modulus is known at design time.
Fig. 15. Architecture of the iteration stage proposed by Peeters et al. [START_REF] Peeters | XTR implementation on reconfigurable hardware[END_REF] for n = 6.
Fig. 17. Comparison between the carry-save algorithm proposed by Peeters et al. [START_REF] Peeters | XTR implementation on reconfigurable hardware[END_REF] and the high-radix carry-save scheme by Beuchat and Muller [START_REF] Beuchat | Automatic generation of modular multipliers for FPGA applications[END_REF]. Ten 256-bit prime moduli were randomly generated for this experiment.
TABLE I. Computation of the most significant digit of Ã.
TABLE II. Classification of modular multipliers based on Horner's rule according to the architecture of the Modshift block.
  (columns: Left-shift and modular reduction | Left-shift | Modular reduction, left-shift)
Borrow-save: Takagi and Yajima [14], Takagi [17] | -- | --
Carry-save: Jeong and Burleson [18] | Koç and Hung [9], Koç and Hung [10], Kim and Sobelman [21], Amanor et al. [22] | Bunimov and Schimmler [19], Peeters et al. [20]
High-radix carry-save: -- | -- | Beuchat and Muller [11]
Radix 2: -- | Beuchat and Muller [23] | --
F p n: Shu et al. [24] | -- | Song and Parhi [1]
ACKNOWLEDGMENTS
The authors would like to thank Nicolas Brisebarre and Jérémie Detrey for their useful comments. The work described in this paper has been supported in part by the New Energy and Industrial Technology Development Organization (NEDO), Japan, and by the Swiss National Science Foundation through the Advanced Researchers program while Jean-Luc Beuchat was at École Normale Supérieure de Lyon (grant PA002-101386). | 47,886 | [
"4171",
"837386",
"838982"
] | [
"35418",
"35860",
"35860",
"35860"
] |